Search results
1 – 10 of 130
Deepak S. Uplaonkar, Virupakshappa and Nagabhushan Patil
Abstract
Purpose
The purpose of this study is to develop a hybrid algorithm for segmenting tumor from ultrasound images of the liver.
Design/methodology/approach
After collecting the ultrasound images, the contrast-limited adaptive histogram equalization (CLAHE) approach is applied as a preprocessing step to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, together with a local ternary pattern combined with selective level set approaches.
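As an illustrative sketch (not the authors' implementation), the contrast-limiting step at the heart of CLAHE can be expressed for a single 8-bit grayscale tile as follows; the clip-limit value and tile size are assumptions for illustration, and the per-tile interpolation of full CLAHE is omitted:

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=0.02, n_bins=256):
    """Contrast-limited histogram equalization for one 8-bit tile.

    Core idea of CLAHE: clip the histogram at a fraction of the pixel
    count, redistribute the excess uniformly across all bins, then
    equalize via the cumulative distribution function (CDF).
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    limit = max(1, int(clip_limit * tile.size))
    excess = int(np.sum(np.maximum(hist - limit, 0)))
    hist = np.minimum(hist, limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    mapping = np.round(cdf * 255).astype(np.uint8)
    return mapping[tile]

rng = np.random.default_rng(0)
tile = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)  # low-contrast tile
out = clip_limited_equalize(tile)
```

In full CLAHE this mapping is computed per tile and bilinearly interpolated between tile centres; the clipping step is what bounds noise amplification in near-uniform regions.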
Findings
The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at lower computational cost. The proposed segmentation algorithm is compared with existing algorithms and ground truth values in terms of Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms.
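For reference, the Jaccard and Dice overlap metrics named above can be computed for binary segmentation masks as in this minimal sketch (the masks here are toy data, not from the study):

```python
import numpy as np

def jaccard_dice(pred, truth):
    """Jaccard and Dice coefficients between two binary masks --
    the overlap metrics used to compare a segmentation against
    its ground truth."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = int(np.logical_and(pred, truth).sum())
    union = int(np.logical_or(pred, truth).sum())
    jaccard = inter / union if union else 1.0
    total = int(pred.sum()) + int(truth.sum())
    dice = 2 * inter / total if total else 1.0
    return jaccard, dice

# toy example: two 4x4 squares offset by one row
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 2:6] = True
j, d = jaccard_dice(pred, truth)
print(round(j, 2), round(d, 2))  # 0.6 0.75
```

Note that Dice is always at least as large as Jaccard for the same pair of masks (D = 2J/(1+J)), which is why reported Dice scores tend to look higher.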
Practical implications
From the experimental analysis, the proposed ARKFCM with enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm. In particular, the proposed algorithm showed a 3.11% improvement in Dice coefficient compared to the graph-based algorithm.
Originality/value
The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing the selective level set model and local ternary pattern in the ARKFCM algorithm. The proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details and optimality in finding the threshold value, which effectively reduces the computational cost.
Prajakta Thakare and Ravi Sankar V.
Abstract
Purpose
Agriculture is the backbone of a country, contributing a major share of the economy throughout the world. Precision agriculture is essential in evaluating the conditions of the crops with the aim of determining the proper selection of pesticides. Conventional methods of pest detection are unstable and provide limited prediction accuracy. This paper aims to propose an automatic pest detection module for the accurate detection of pests using a hybrid optimization-controlled deep learning model.
Design/methodology/approach
The paper proposes an advanced pest detection strategy based on deep learning over a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink, is partitioned into clusters. Each cluster comprises a cluster head (CH) and a number of nodes; the CH transfers data to the sink node of the WSN and is selected using the fractional ant bee colony optimization (FABC) algorithm. The routing process is executed using the protruder optimization algorithm, which helps transfer image data to the sink node through the optimal CH. The sink node acts as the data aggregator, and the collection of image data thus obtained forms the input database to be processed to find the type of pest in the agricultural field. The image data is pre-processed to remove the artifacts present in the image, and the pre-processed image is then subjected to feature extraction, through which significant local directional pattern, local binary pattern (LBP), local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are then fed to a deep convolutional neural network (CNN) to detect the type of pests in the agricultural field. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, which is developed by combining the characteristics of navigating search agents and swarming search agents.
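As a minimal sketch of one of the texture descriptors named above, a basic 8-neighbour LBP histogram can be computed as follows (the paper's exact LBP variant, radius and sampling are not specified here, so this is the textbook formulation only):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram.

    Each interior pixel is compared with its 8 neighbours; a
    neighbour that is >= the centre contributes one bit to an 8-bit
    code. The normalised 256-bin histogram of codes is the texture
    descriptor.
    """
    h, w = img.shape
    c = img[1:-1, 1:-1]
    # neighbour offsets in clockwise order starting top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (nb >= c).astype(np.int64) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
feat = lbp_histogram(img)
```

LTP extends this by using a three-valued (ternary) comparison with a tolerance band around the centre pixel, split into two binary codes.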
Findings
The analysis on the insect identification from habitus images database, based on performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis shows that the proposed classifier using the FABC + protruder optimization-based data aggregation strategy obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, which is high compared to the existing methods.
Originality/value
The proposed MFGHO optimization-based deep CNN is used for the detection of pests in crop fields to ensure better selection of proper, cost-effective pesticides and thereby increase production. The proposed MFGHO algorithm integrates the characteristic features of navigating search agents and swarming search agents to facilitate optimal tuning of the hyperparameters in the deep CNN classifier for the detection of pests in the crop fields.
Shervan Fekriershad and Farshad Tajeripour
Abstract
Purpose
The purpose of this paper is to propose a color-texture classification approach that uses color sensor information and texture features jointly. High accuracy, low noise sensitivity and low computational complexity are the specified aims of the proposed approach.
Design/methodology/approach
One of the most efficient texture analysis operators is the local binary pattern (LBP). The proposed approach includes two steps. First, a noise-resistant version of color LBP is proposed to decrease its sensitivity to noise; this step combines color sensor information using an AND operation. In the second step, a significant-points selection algorithm is proposed to select significant LBPs. This phase decreases the final computational complexity while increasing the accuracy rate.
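One plausible reading of the AND-based combination is that a neighbour sets a bit only when it exceeds the centre in every colour channel, which suppresses single-channel noise flips; this sketch is an illustration of that idea, not the exact HCLBP scheme:

```python
import numpy as np

def and_combined_color_lbp(img):
    """Colour LBP where the per-neighbour bits of the three channels
    are combined with a logical AND before forming the 8-bit code
    (an illustrative interpretation; the exact HCLBP fusion may
    differ). A neighbour sets a bit only if it is >= the centre in
    *every* channel, which damps noise in any single channel.
    """
    h, w, _ = img.shape
    c = img[1:-1, 1:-1, :]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx, :]
        agree = np.all(nb >= c, axis=2)          # AND across channels
        codes += agree.astype(np.int64) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
feat = and_combined_color_lbp(img)
```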
Findings
The proposed approach is evaluated using the Vistex, Outex and KTH-TIPS-2a data sets and compared with several state-of-the-art methods. It is experimentally demonstrated that the proposed approach achieves the highest accuracy. Two further experiments show the low noise sensitivity and low computational complexity of the proposed approach in comparison with previous versions of LBP. Rotation invariance, multi-resolution analysis and general usability are further advantages of the proposed approach.
Originality/value
In the present paper, a new version of LBP called hybrid color local binary patterns (HCLBP) is proposed. HCLBP can be used in many image processing applications to extract color/texture features jointly. A significant-point selection algorithm is also proposed, for the first time, to select key points of images.
Loris Nanni, Stefano Ghidoni and Sheryl Brahnam
Abstract
This work presents a system based on an ensemble of Convolutional Neural Networks (CNNs) and descriptors for bioimage classification that has been validated on different datasets of color images. The proposed system represents a very simple yet effective way of boosting the performance of trained CNNs by composing multiple CNNs into an ensemble and combining scores by sum rule. Several types of ensembles are considered, with different CNN topologies along with different learning parameter sets. The proposed system not only exhibits strong discriminative power but also generalizes well over multiple datasets thanks to the combination of multiple descriptors based on different feature types, both learned and handcrafted. Separate classifiers are trained for each descriptor, and the entire set of classifiers is combined by sum rule. Results show that the proposed system obtains state-of-the-art performance across four different bioimage and medical datasets. The MATLAB code of the descriptors will be available at https://github.com/LorisNanni.
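The sum-rule combination described above can be sketched in a few lines; the score matrices here are toy values standing in for softmax outputs of the trained CNNs:

```python
import numpy as np

def sum_rule_fusion(score_matrices):
    """Combine per-classifier class scores by the sum rule.

    Each element of `score_matrices` is an (n_samples, n_classes)
    array of scores (e.g. softmax outputs) from one CNN or
    descriptor-based classifier; scores are summed across
    classifiers and the arg-max class is returned per sample.
    """
    total = np.sum(score_matrices, axis=0)
    return np.argmax(total, axis=1)

# toy example: the classifiers disagree on sample 1; the more
# confident classifier wins under the sum rule
a = np.array([[0.9, 0.1], [0.4, 0.6]])
b = np.array([[0.8, 0.2], [0.9, 0.1]])
pred = sum_rule_fusion([a, b])
print(pred)  # [0 0]
```

The appeal of the sum rule is that it needs no extra training and is robust to individual-classifier estimation errors, which is consistent with its use across heterogeneous CNN topologies here.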
Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier
Abstract
Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos 20 s segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by sum rule. These results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
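The Bag-of-Features encoding step mentioned above can be sketched as nearest-centre quantisation of local descriptors; the two-dimensional descriptors and hand-picked codebook below are toy assumptions (the GOLD descriptors and codebook training of the paper are not reproduced):

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Bag-of-Features encoding: assign each local descriptor to its
    nearest codebook centre and return the normalised histogram of
    assignments."""
    # squared Euclidean distance from every descriptor to every centre
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = np.argmin(d2, axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
descs = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.2]])
h = bof_histogram(descs, codebook)
print(h)  # [0.5 0.5]
```

The resulting fixed-length histogram is what a Support Vector Machine can consume, regardless of how many local descriptors each video segment produced.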
Shahidha Banu S. and Maheswari N.
Abstract
Purpose
Background modelling plays an imperative role in moving object detection, as the basis of foreground extraction during video analysis and surveillance in many real-time applications. It is usually done by background subtraction, a method based on a mathematical model with a fixed static background, where the background image is fixed and the foreground object moves over it. This image is taken as the background model and compared against every new frame of the input video sequence. In this paper, the authors present a renewed background modelling method for foreground segmentation. The principal objective of the work is to perform foreground object detection only in the premeditated region of interest (ROI). The ROI is calculated using the proposed reducing and raising by half (RRH) algorithm. In this algorithm, the coordinates of a circle with the frame width as its diameter are traversed to find the pixel difference. A change in pixel intensity is considered to indicate the foreground object, and its position is determined from the pixel location. Most techniques apply their updates to the pixels of the complete frame, which may increase the false rate; the proposed system addresses this flaw by restricting processing to the ROI (the region where background subtraction is performed) and thus extracts the correct foreground by exactly categorizing pixels as foreground and mining the precise foreground object. Broad experimental results and evaluation parameters of the proposed approach were compared against the most recent background subtraction approaches. Moreover, the efficiency of the authors' method is analyzed in different situations to show that it is suitable for real-time videos as well as for videos from the 2014 change detection challenge data set.
Design/methodology/approach
In this paper, the authors present a fresh background modelling method for foreground segmentation. The main objective of the work is to perform foreground object detection only on the premeditated ROI. The region for foreground extraction is calculated using the proposed RRH algorithm. Most techniques apply their updates to the pixels of the complete frame, which may increase the false rate; the most challenging case is a slow-moving object that is absorbed into the background model too quickly for the foreground region to be detected. The proposed system addresses this flaw by restricting processing to the ROI (the region where background subtraction is performed) and thus extracts the correct foreground by exactly categorizing pixels as foreground and mining the precise foreground object.
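The core idea of restricting background subtraction to an ROI can be sketched as follows; the rectangular ROI and threshold here are illustrative stand-ins, and the RRH algorithm that derives the ROI is not reproduced:

```python
import numpy as np

def roi_foreground_mask(frame, background, roi, threshold=25):
    """Background subtraction restricted to a region of interest.

    Only pixels inside `roi` (y0, y1, x0, x1) are compared against
    the background model; changes outside the ROI are ignored, which
    is the source of the reduced false rate discussed above.
    """
    y0, y1, x0, x1 = roi
    mask = np.zeros(frame.shape, dtype=bool)
    diff = np.abs(frame[y0:y1, x0:x1].astype(np.int16)
                  - background[y0:y1, x0:x1].astype(np.int16))
    mask[y0:y1, x0:x1] = diff > threshold
    return mask

bg = np.zeros((10, 10), dtype=np.uint8)
fr = bg.copy()
fr[2:4, 2:4] = 200      # object inside the ROI
fr[8, 8] = 200          # change outside the ROI is ignored
mask = roi_foreground_mask(fr, bg, roi=(0, 6, 0, 6))
print(mask.sum())  # 4
```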
Findings
Originality/value
The algorithms used in this work were proposed by the authors and are used for the experimental evaluations.
Rajashekhar U., Neelappa and Harish H.M.
Abstract
Purpose
The principles of natural control, feedback, stimuli and protection founded this project. Via properly conducted experiments, a multilayer computer rehabilitation system was created that integrated natural interaction assisted by electroencephalogram (EEG), enabling movements in both a virtual environment and a real wheelchair. This paper expounds the proper methodology for blind wheelchair operator patients. The outcomes prove that virtual reality (VR) with EEG signals has the potential to improve the quality of life and independence of blind wheelchair users.
Design/methodology/approach
Individuals face numerous challenges with many disorders, particularly when multiple dysfunctions are diagnosed, and especially visually affected wheelchair users. In reality, this scenario creates a degree of incapacity on the part of the wheelchair user in performing simple activities. Confined patients are treated in a modified manner based on their specific medical needs. Independent navigation must be secured for individuals with vision and motor disabilities, and the need for communication justifies the use of VR in this navigation situation; for effective integration, locomotion must be under natural guidance. EEG, which uses random brain impulses, has made significant progress in the field of health. This study demonstrates, through an experiment, the use of an automated audio announcement system adapted with the help of VR and EEG for locomotion training and individualized interaction of wheelchair users with visual disability, enabling patients who were otherwise deemed incapacitated to participate in social activities, the aim being to establish efficient connections.
Findings
To protect soldiers' lives directly and to address these issues, a military system should have a high-speed, highly precise portable prototype device for monitoring soldier health, recognizing soldier location and reporting health data to the concerned system. A field programmable gate array (FPGA)-based soldier health monitoring and position recognition system is proposed in this paper. The soldier's health is monitored on a systematic basis, relying on heart rate derived from EEG signals. The whole work is carried out in the Vivado Design Suite, developed in the Verilog hardware description language (HDL) and executed on an Artix-7 FPGA development board (part XC7ACSG100t). The proposed architecture comprises artifact elimination, abnormality identification based on feature extraction, classification of different abnormalities, and cloud storage of the EEG along with the type of abnormality. Abnormal conditions are detected by the developed prototype system, which alerts the physically challenged (PHC) individual via an audio announcement. An effective method for eradicating motion artifacts from EEG signals that contain anomalies in the PHC person's brain has been established, and the resulting system is a portable device that can report differences in brain signal intensity. Artifact removal is completed in two stages: first the EEG signals are acquired and the undesirable artifacts are removed; then features are extracted by the discrete wavelet transform (DWT). The anomalies in the signal are detected and recognized using a machine learning algorithm, a multirate support vector machine classifier, after the features have been extracted using a combination of a hidden Markov model (HMM) and a Gaussian mixture model (GMM). For a capable declaration about the action taken by a blind person, these result signals are stored in storage devices and conveyed to the controller.
Simulating daily motion schedules allows the affected EEG signals to be captured. For validation of the planned system, a database of numerous recorded EEG signals is used. The quantitative analysis illustrates that the proposed strategy performs better in restoring the theta, delta, alpha and beta complexes of the original EEG, with less alteration and a higher signal-to-noise ratio (SNR). The proposed method used Verilog HDL and MATLAB software for both implementation and verification of results. The achieved results show a 32% enhancement in SNR, a 14% enhancement in mean squared error (MSE) and a 65% enhancement in recognition of anomalies; hence the design is effectively verified for standard EEG signal data sets on FPGA.
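The DWT feature-extraction stage described above can be sketched with a one-level Haar transform and per-level detail energies; the Haar wavelet, number of levels and synthetic signal are assumptions for illustration (the paper does not specify the wavelet family):

```python
import numpy as np

def haar_dwt_level(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient arrays."""
    s = np.asarray(signal, dtype=np.float64)
    if len(s) % 2:                            # pad odd-length signals
        s = np.append(s, s[-1])
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)    # low-pass / approximation
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)    # high-pass / detail
    return a, d

def band_energy_features(eeg, levels=4):
    """Relative energy of the detail coefficients at each level --
    a simple feature vector one could feed to an SVM classifier."""
    feats = []
    a = np.asarray(eeg, dtype=np.float64)
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        feats.append(float(np.sum(d ** 2)))
    total = sum(feats) + float(np.sum(a ** 2))
    return [e / total for e in feats]

t = np.linspace(0, 1, 256, endpoint=False)
eeg = np.sin(2 * np.pi * 10 * t)              # synthetic 10 Hz rhythm
feats = band_energy_features(eeg)
```

Because the Haar transform is orthonormal, the band energies partition the total signal energy, so the features are directly comparable across recordings.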
Originality/value
The proposed system can be used in military applications, as it is high-speed and highly precise in identifying abnormalities; the developed system is portable and very accurate. An FPGA-based soldier health monitoring and position recognition system is proposed in this paper. The soldier's health is monitored on a systematic basis, relying on heart rate derived from EEG signals. The proposed system is developed in the Verilog HDL programming language, executed on an Artix-7 FPGA development board (part XC7ACSG100t) and synthesized using the Vivado Design Suite software tool.
Deepika Kishor Nagthane and Archana M. Rajurkar
Abstract
Purpose
One of the main reasons for the increase in mortality rate in women is breast cancer. Accurate early detection seems to be the only solution for breast cancer diagnosis. In the field of breast cancer research, many new computer-aided diagnosis systems have been developed to reduce diagnostic false positives caused by the subtle appearance of breast cancer tissues. The purpose of this study is to develop a diagnosis technique for breast cancer using the LCFS and TreeHiCARe classifier models.
Design/methodology/approach
The proposed diagnosis methodology starts with the pre-processing procedure. Subsequently, feature extraction is performed, in which image features that preserve the characteristics of the breast tissues are extracted. Feature selection is then performed by the proposed least-mean-square (LMS)-Cuckoo search feature selection (LCFS) algorithm: features are selected from the vast range extracted from the images with the help of the optimal cut point provided by the LCFS algorithm. Then, an image transaction database table is developed using the keywords of the training images and the feature vectors. Each transaction resembles an itemset, and association rules with high conviction ratio and lift are generated from the transaction representation based on the a priori algorithm. After association rule generation, the proposed TreeHiCARe classifier model completes the diagnosis methodology. In the TreeHiCARe classifier, a new feature index is developed for the selection of a central feature for the decision tree, centered on which the classification of images into normal or abnormal is performed.
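The rule-quality measures named above (support, confidence, lift) can be computed over a transaction table as in this sketch; the item names are hypothetical examples, and a priori candidate generation itself is not reproduced:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift of an association rule
    antecedent -> consequent over a list of transaction item-sets."""
    n = len(transactions)
    a, c = set(antecedent), set(consequent)
    n_a = sum(1 for t in transactions if a <= set(t))
    n_c = sum(1 for t in transactions if c <= set(t))
    n_ac = sum(1 for t in transactions if (a | c) <= set(t))
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    lift = confidence / (n_c / n) if n_c else 0.0
    return support, confidence, lift

# hypothetical keyword transactions for four training images
tx = [{"dense", "spiculated", "abnormal"},
      {"dense", "abnormal"},
      {"fatty", "normal"},
      {"dense", "normal"}]
s, conf, lift = rule_metrics(tx, {"dense"}, {"abnormal"})
print(round(s, 2), round(conf, 2), round(lift, 2))  # 0.5 0.67 1.33
```

A lift above 1 indicates the antecedent and consequent co-occur more often than independence would predict, which is why high-lift rules are kept for the classifier.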
Findings
The performance of the proposed method is validated against existing works using accuracy, sensitivity and specificity measures. Experimentation with the proposed method on the Mammographic Image Analysis Society database resulted in the classification of normal and abnormal cancerous mammogram images with an accuracy of 0.8289, a sensitivity of 0.9333 and a specificity of 0.7273.
Originality/value
This paper proposes a new approach for the breast cancer diagnosis system by using mammogram images. The proposed method uses two new algorithms: LCFS and TreeHiCARe. LCFS is used to select optimal feature split points, and TreeHiCARe is the decision tree classifier model based on association rule agreements.
Sharanabasappa and Suvarna Nandyal
Abstract
Purpose
To prevent accidents during driving, driver drowsiness detection systems have become a hot topic for researchers. Various types of features can be used to detect drowsiness: detection can be done using behavioral data, physiological measurements or vehicle-based data. The existing deep convolutional neural network (CNN) ensemble approach analyzed behavioral data comprising eye, face or head movement captured using camera images or videos. However, the developed model suffered from high computational cost because it used approximately 140 million parameters.
Design/methodology/approach
The proposed model selects significant features from the extracted feature set using the ReliefF, Infinite, Correlation and Term Variance feature-selection methods. The selected features then undergo classification using an ensemble classifier.
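As a minimal stand-in for one of the filter methods named above (Correlation; ReliefF and the others are not reproduced), features can be ranked by absolute Pearson correlation with the label, as in this sketch over synthetic data:

```python
import numpy as np

def correlation_feature_ranks(X, y, k=2):
    """Rank features by absolute Pearson correlation with the label
    and return the indices of the top-k features."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    r = np.abs((Xc * yc[:, None]).sum(axis=0) / denom)
    return np.argsort(-r)[:k]

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=100).astype(np.float64)
X = rng.normal(size=(100, 4))
X[:, 1] = y + 0.1 * rng.normal(size=100)   # feature 1 tracks the label
top = correlation_feature_ranks(X, y, k=1)
print(top)  # [1]
```

Filter methods like this are cheap relative to training, which is how they help the work above avoid the computational cost of very large end-to-end models.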
Findings
The outputs of these models are classified into drowsiness or non-drowsiness categories.
Research limitations/implications
This research work requires high-end cameras to collect videos, which is not cost-effective. Therefore, researchers are encouraged to use the existing datasets.
Practical implications
This paper overcomes the limitations of the earlier approach. The developed model applies complex deep learning models to a small dataset and also extracts additional features, thereby providing a more satisfying result.
Originality/value
Drowsiness can be detected at the earliest stage using the ensemble model, which restricts the number of accidents.
K. Thirumalaisamy and A. Subramanyam Reddy
Abstract
Purpose
The analysis of fluid flow and thermal transport performance inside the cavity has found numerous applications in various engineering fields, such as nuclear reactors and solar collectors. Nowadays, researchers are concentrating on improving heat transfer by using ternary nanofluids. With this motivation, the present study analyzes the natural convective flow and heat transfer efficiency of ternary nanofluids in different types of porous square cavities.
Design/methodology/approach
The cavity inclination angle is fixed ω = 0 in case (I) and
Findings
The average heat transfer rate is computed for four combinations of ternary nanofluids:
Practical implications
The purpose of this study is to determine whether ternary nanofluids may be used to achieve high thermal transmission in nuclear power systems, generators and electronic device applications.
Social implications
The current analysis is useful to improve the thermal features of nuclear reactors, solar collectors, energy storage and hybrid fuel cells.
Originality/value
To the best of the authors’ knowledge, no research has been carried out related to the magneto-hydrodynamic natural convective