Search results
1 – 10 of 23

Abhishek Das and Mihir Narayan Mohanty
Abstract
Purpose
Timely and accurate detection of cancer can save the life of the person affected. According to the World Health Organization (WHO), breast cancer has the highest incidence among all cancers, while it ranks fifth in mortality. Among the many image processing techniques, several works have focused on convolutional neural networks (CNNs) for processing these images. However, deep learning models remain to be explored more thoroughly.
Design/methodology/approach
In this work, multivariate statistics-based kernel principal component analysis (KPCA) is used to extract the essential features. KPCA simultaneously helps denoise the data. These features are processed through a heterogeneous ensemble model that consists of three base models: a recurrent neural network (RNN), long short-term memory (LSTM) and a gated recurrent unit (GRU). The outcomes of these base learners are fed to a fuzzy adaptive resonance theory mapping (ARTMAP) model for decision making; nodes are added to the F_2^a layer only if the winning criteria are fulfilled, which makes the ARTMAP model more robust.
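As a rough illustration of the feature-extraction step, the following is a minimal kernel PCA sketch in plain NumPy. This is not the authors' code: the synthetic data, RBF kernel and component count are assumptions chosen only to show how KPCA projects high-dimensional inputs onto a few nonlinear components before they are fed to the ensemble.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # pairwise squared distances -> RBF (Gaussian) kernel matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components, gamma=None):
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # center the kernel matrix in feature space
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # keep the top components
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas                           # projected features

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))   # stand-in for flattened image patches
features = kernel_pca(X, n_components=16)
print(features.shape)            # (100, 16)
```

Discarding the low-variance directions in this projection is what gives KPCA its denoising effect; the 16-dimensional feature vectors would then serve as inputs to the RNN/LSTM/GRU base learners.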
Findings
The proposed model is verified using the breast histopathology image dataset publicly available on Kaggle. The model provides 99.36% training accuracy and 98.72% validation accuracy. It utilizes data processing in all aspects: image denoising to reduce data redundancy, and training by ensemble learning to provide better results than single models. The final classification by a fuzzy ARTMAP model, which controls the number of nodes depending upon performance, yields robust and accurate classification.
Research limitations/implications
Research in the field of medical applications is an ongoing process. More advanced algorithms are being developed for better classification. Still, there is scope to design models with better performance, practicability and cost efficiency in the future. Also, the ensemble models may be chosen with different combinations and characteristics. Signals, instead of images, may also be verified with the proposed model. Experimental analysis shows the improved performance of the proposed model. This method needs to be verified using practical models, and practical implementation will be carried out to assess its real-time performance and cost efficiency.
Originality/value
The proposed model is utilized for denoising and to reduce data redundancy, with feature selection done using KPCA. Training and classification are performed using a heterogeneous ensemble model designed with RNN, LSTM and GRU as base classifiers to provide better results than single models. Use of the adaptive fuzzy mapping model makes the final classification accurate. The effectiveness of combining these methods into a single model is analyzed in this work.
J.F. Aviles-Viñas, I. Lopez-Juarez and R. Rios-Cabrera
Abstract
Purpose
The purpose of this paper was to propose a method based on an Artificial Neural Network and a real-time vision algorithm, to learn welding skills in industrial robotics.
Design/methodology/approach
By using an optic camera to measure the bead geometry (width and height), the authors propose a real-time computer vision algorithm to extract training patterns and to enable an industrial robot to acquire and learn autonomously the welding skill. To test the approach, an industrial KUKA robot and a welding gas metal arc welding machine were used in a manufacturing cell.
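The bead-geometry measurement step can be sketched as follows. This is a hypothetical, minimal example, not the authors' vision algorithm: a synthetic binary mask stands in for a segmented camera frame, and the pixel-to-millimetre calibration factor is assumed.

```python
import numpy as np

# synthetic binarized frame: True where weld-bead pixels were detected
mask = np.zeros((60, 80), dtype=bool)
mask[20:35, 10:70] = True                 # toy weld-bead blob

rows = np.flatnonzero(mask.any(axis=1))   # rows containing bead pixels
cols = np.flatnonzero(mask.any(axis=0))   # columns containing bead pixels
height_px = rows[-1] - rows[0] + 1        # bead height (extent) in pixels
width_px = cols[-1] - cols[0] + 1         # bead width (extent) in pixels

MM_PER_PX = 0.1                           # assumed camera calibration factor
print(width_px * MM_PER_PX, height_px * MM_PER_PX)   # ~6.0 mm wide, ~1.5 mm high
```

Pairs of such (width, height) measurements, labelled with the welding parameters that produced them, would form the training patterns from which the network learns the skill.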
Findings
Several data analyses are described, showing empirically that industrial robots can acquire the skill even if the specific welding parameters are unknown.
Research limitations/implications
The approach considers only stringer beads. Weave bead and bead penetration are not considered.
Practical implications
With the proposed approach, it is possible to learn specific welding parameters regardless of the material, type of robot or welding machine. This is because the feedback system produces automatic measurements that are labelled prior to the learning process.
Originality/value
The main contribution is that the complex learning process is reduced into an input-process-output system, where the process part is learnt automatically without human supervision, by registering the patterns with an automatically calibrated vision system.
David West and Paul Mangiameli
Abstract
In treating both sewage and storm runoff, wastewater treatment plants are important to maintaining a healthy environment. If plant operations managers do not respond correctly to plant conditions, environmental damage and a resulting deterioration of human health may follow. Unfortunately, there are no formal models to help these managers; they rely upon their own intuition to manage the plants. The purpose of this paper is to investigate the effectiveness of various models, originally used for manufacturing, in detecting process conditions in wastewater treatment facilities. We compare and contrast the performance of five statistical models and three neural network architectures. The data used in the research are 527 daily measurements of 38 sensor readings of the process state variables of an urban wastewater treatment plant.
Mario Peña‐Cabrera, Ismael Lopez‐Juarez, Reyes Rios‐Cabrera and Jorge Corona‐Castuera
Abstract
Purpose
To present a novel methodology for online recognition and classification of pieces in robotic assembly tasks and its application in an intelligent manufacturing cell.
Design/methodology/approach
The performance of industrial robots working in unstructured environments can be improved using visual perception and learning techniques. Object recognition is accomplished using an artificial neural network (ANN) architecture which receives a descriptive vector called CFD&POSE as the input. Experiments were carried out within a manufacturing cell using assembly parts.
Findings
This vector represents an innovative methodology for classification and identification of pieces in robotic tasks, obtaining fast recognition and pose estimation information in real time. The vector compresses 3D object data from assembly parts; it is invariant to scale, rotation and orientation, and it supports a wide range of illumination levels.
Research limitations/implications
The approach provides vision guidance in assembly tasks. Current work addresses the use of ANNs for assembly and object recognition separately; future work is oriented toward using the same neural controller for all the different sensorial modes.
Practical implications
Intelligent manufacturing cells developed with multimodal sensor capabilities might use this methodology for future industrial applications, including robotic fixtureless assembly. The approach, in combination with the fast learning capability of ART networks, indicates its suitability for industrial robot applications, as demonstrated through experimental results.
Originality/value
This paper introduces a novel method which uses collections of 2D images to obtain a fast feature descriptor of an object – the "current frame descriptor vector" – by using image projections and canonical-form geometry grouping for invariant object recognition.
V. Srilakshmi, K. Anuradha and C. Shoba Bindu
Abstract
Purpose
This paper aims to model a technique that categorizes the texts from huge documents. The progression of internet technologies has increased document accessibility, and the documents available online have become countless. Text documents comprise research articles, journal papers, newspapers, technical reports and blogs. These large documents are useful and valuable for real-time applications and are used in several retrieval methods. Text classification plays a vital role in information retrieval technologies and is considered an active field for processing massive applications. The aim of text classification is to categorize large documents into different categories on the basis of their contents. Numerous text-related tasks, such as profiling users, sentiment analysis and spam identification, are treated as supervised learning problems and addressed with a text classifier.
Design/methodology/approach
At first, the input documents are pre-processed using stop word removal and stemming, so that the input is made effective and suitable for feature extraction. In the feature extraction process, features are extracted using the vector space model (VSM); then, feature selection is done to pick the most relevant features for text categorization. Once the features are selected, text categorization is performed using a deep belief network (DBN). The training of the DBN is performed using the proposed grasshopper crow optimization algorithm (GCOA), which integrates the grasshopper optimization algorithm (GOA) and the crow search algorithm (CSA). Moreover, a hybrid weight bounding model is devised using the proposed GCOA and range degree. Thus, the proposed GCOA + DBN is used for classifying the text documents.
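The pre-processing and VSM steps can be illustrated with a toy pipeline. This is a hedged sketch, not the paper's implementation: a tiny stop-word list and a crude suffix stripper stand in for a full stemmer (e.g. Porter), and TF-IDF weighting is one common way to realize the vector space model.

```python
from collections import Counter
import math

STOP_WORDS = {"the", "a", "is", "of", "and", "to", "in"}

def stem(word):
    # crude suffix stripping in place of a real stemming algorithm
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(doc):
    # stop word removal followed by stemming
    tokens = [w for w in doc.lower().split() if w not in STOP_WORDS]
    return [stem(w) for w in tokens]

def tfidf_vectors(docs):
    toks = [preprocess(d) for d in docs]
    vocab = sorted({w for t in toks for w in t})
    df = Counter(w for t in toks for w in set(t))   # document frequency
    n = len(docs)
    vectors = []
    for t in toks:
        tf = Counter(t)
        vectors.append([tf[w] * math.log((1 + n) / (1 + df[w])) for w in vocab])
    return vocab, vectors

vocab, vecs = tfidf_vectors(["the cats sleeping in the sun",
                             "a dog chased the cats"])
print(len(vecs), len(vocab))   # 2 documents, 5 vocabulary terms
```

The resulting document vectors are what a subsequent feature-selection stage (here, the proposed GCOA) would prune before classification by the DBN.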
Findings
The performance of the proposed technique is evaluated using accuracy, precision and recall, and compared with existing techniques such as naive Bayes, k-nearest neighbors, support vector machine, deep convolutional neural network (DCNN) and Stochastic Gradient-CAViaR + DCNN. The proposed GCOA + DBN shows improved performance, with values of 0.959, 0.959 and 0.96 for precision, recall and accuracy, respectively.
Originality/value
This paper proposes a technique that categorizes the texts from massive documents. The findings show that the proposed GCOA-based DBN effectively classifies text documents.
Bengi Aygün and Vehbi Cagri Gungor
Abstract
Purpose
The purpose of this paper is to provide a contemporary look at the current state‐of‐the‐art in wireless sensor networks (WSNs) for structure health monitoring (SHM) applications and discuss the still‐open research issues in this field and, hence, to make the decision‐making process more effective and direct.
Design/methodology/approach
This paper presents a comprehensive review of WSNs for SHM. It also introduces research challenges, opportunities, existing and potential applications. Network architecture and the state‐of‐the‐art wireless sensor communication technologies and standards are explained. Hardware and software of the existing systems are also clarified.
Findings
Existing applications and systems are presented along with their advantages and disadvantages. A comparison landscape and open research issues are also presented.
Originality/value
The paper presents a comprehensive and recent review of WSN systems for SHM applications along with open research issues.
Tomasz Chady, Ryszard Sikora, Mariusz Szwagiel, Bogdan Grzywacz, Leszek Misztal, Pawel Waszczuk, Michal Szydlowski and Barbara Szymanik
Abstract
Purpose
The purpose of this paper is to describe a multisource system for nondestructive inspection of welded elements used in the aircraft industry, developed at the West Pomeranian University of Technology, Szczecin, within the framework of the CASELOT project. The system's task is to support the operator in identifying flaws in welded aircraft elements using data obtained from X-ray inspection and 3D triangulation laser scanners.
Design/methodology/approach
For proper defect detection, a set of special processing algorithms was developed. For easier system operation and integration of all components, a user-friendly interface was designed in the LabVIEW environment.
Findings
It is possible to create a fully independent, intelligent system for the detection of flaws in welds. This kind of technology might be crucial in the further development of the aircraft industry.
Originality/value
In this paper, a number of innovative solutions (new algorithms and combinations of algorithms) for the detection of defects in welds are presented. All of these solutions form the basis of the presented complete system. One of the main original solutions is the combination of systems based on a 3D triangulation laser scanner and X-ray testing.
Michele Cedolin and Mujde Erol Genevois
Abstract
Purpose
The research objective is to increase the computational efficiency of the automated teller machine (ATM) cash demand forecasting problem. It proposes a practical decision-making process that uses aggregated time series of a bank's ATM network. The purpose is to decrease the number of ATMs forecasted by individual models by finding the cluster of machines for which the forecasting results of the aggregated series are appropriate to use.
Design/methodology/approach
A comparative statistical forecasting approach is proposed in order to reduce the computational complexity of an ATM network, using the NN5 competition data set. An autoregressive integrated moving average (ARIMA) model and its seasonal version, SARIMA, are fitted to each time series. Then, averaged time series are introduced to simplify the forecasting process carried out for each ATM. The ATMs that are forecastable with the averaged series are identified by calculating the change in forecasting accuracy for each machine.
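The aggregation idea can be sketched as follows. This is an illustrative toy, not the paper's method: the data are synthetic, and a least-squares AR(1) model stands in for the ARIMA/SARIMA models actually used.

```python
import numpy as np

rng = np.random.default_rng(1)
base = 50 + 10 * np.sin(np.linspace(0, 8 * np.pi, 200))   # shared demand cycle
noise = rng.normal(size=(3, 200)) * np.array([[1.0], [1.0], [6.0]])
atms = base + noise            # three synthetic ATMs; the third deviates strongly

H = 20                         # forecast horizon

def ar1_fit(train):
    # least-squares fit of y_t = phi * y_{t-1} + c
    x, y = train[:-1], train[1:]
    A = np.vstack([x, np.ones_like(x)]).T
    phi, c = np.linalg.lstsq(A, y, rcond=None)[0]
    return phi, c

def ar1_forecast(phi, c, last, horizon):
    preds = []
    for _ in range(horizon):
        last = phi * last + c
        preds.append(last)
    return np.array(preds)

agg = atms.mean(axis=0)        # aggregated series of the whole network
phi_a, c_a = ar1_fit(agg[:-H]) # one model fitted to the aggregate

for i, series in enumerate(atms):
    phi_i, c_i = ar1_fit(series[:-H])   # individual model for this ATM
    mae_ind = np.mean(np.abs(series[-H:] - ar1_forecast(phi_i, c_i, series[-H - 1], H)))
    mae_agg = np.mean(np.abs(series[-H:] - ar1_forecast(phi_a, c_a, series[-H - 1], H)))
    # a small accuracy loss suggests this ATM can join the "aggregated" cluster
    print(f"ATM {i}: individual MAE {mae_ind:.2f}, aggregated-model MAE {mae_agg:.2f}")
```

ATMs whose accuracy loss under the aggregated model stays within tolerance would be forecasted by the single shared model, saving one model fit per machine.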
Findings
The proposed approach is evaluated by different error metrics and is compared to the literature findings. The results show that the ATMs that have tolerable accuracy loss may be considered as a cluster and can be forecasted with a single model based on the aggregated series.
Research limitations/implications
The research is based on a public data set. Financial institutions prefer not to share their ATM transaction data; therefore, accessible data are limited.
Practical implications
The proposed practical approach will be beneficial for financial institutions that hold a large number of ATMs, because it reduces the computational time and resources allocated to the forecasting process.
Originality/value
This study offers an effective simplified methodology to the challenging cash demand forecasting process by introducing an aggregated time series approach.
Nageswara Rao Eluri, Gangadhara Rao Kancharla, Suresh Dara and Venkatesulu Dondeti
Abstract
Purpose
Gene selection is considered a fundamental process in the bioinformatics field. The existing methodologies pertaining to cancer classification are mostly clinically based, and their diagnostic capability is limited. Nowadays, significant problems of cancer diagnosis are being solved by the utilization of gene expression data. Researchers have introduced many possibilities for diagnosing cancer appropriately and effectively. This paper aims to develop cancer data classification using gene expression data.
Design/methodology/approach
The proposed classification model involves three main phases: "(1) feature extraction, (2) optimal feature selection and (3) classification". Initially, five benchmark gene expression datasets are collected, and feature extraction is performed on them. To diminish the length of the feature vectors, optimal feature selection is performed using a new meta-heuristic algorithm termed the quantum-inspired immune clone optimization algorithm (QICO). Once the relevant features are selected, classification is performed by a deep learning model, the recurrent neural network (RNN). Finally, the experimental analysis reveals that the proposed QICO-based feature selection model outperforms the other heuristic-based feature selection methods, and the optimized RNN outperforms the other machine learning methods.
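The general shape of wrapper-style optimal feature selection can be sketched as below. Everything here is a stand-in: synthetic data replace gene expression profiles, a random binary search replaces the QICO metaheuristic, and a nearest-centroid classifier replaces the RNN; only the fitness-driven subset-search loop reflects the described pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, informative = 120, 30, 5
y = rng.integers(0, 2, n)                 # two synthetic classes
X = rng.normal(size=(n, d))
X[:, :informative] += y[:, None] * 2.0    # only the first 5 features carry signal

def accuracy(mask):
    # fitness of a feature subset: nearest-centroid training accuracy
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean()

best_mask = np.ones(d, bool)
best_acc = accuracy(best_mask)
for _ in range(300):                      # fitness-driven search loop
    cand = rng.random(d) < 0.3            # candidate feature subset
    acc = accuracy(cand)
    if acc > best_acc or (acc == best_acc and cand.sum() < best_mask.sum()):
        best_mask, best_acc = cand, acc

print(best_mask.sum(), round(best_acc, 3))
```

A metaheuristic such as QICO replaces the random proposals with guided ones (cloning and mutating high-fitness subsets), but the evaluate-and-keep-the-best loop is the same.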
Findings
The proposed QICO-RNN acquires the best outcomes at any learning percentage. At a learning percentage of 85, the accuracy of the proposed QICO-RNN was 3.2% better than RNN, 4.3% better than RF, 3.8% better than NB and 2.1% better than KNN for Dataset 1. For Dataset 2, at a learning percentage of 35, the accuracy of the proposed QICO-RNN was 13.3% better than RNN, 8.9% better than RF and 14.8% better than NB and KNN. Hence, the developed QICO algorithm performs well in accurately classifying cancer data using gene expression data.
Originality/value
This paper introduces a new optimal feature selection model using QICO, together with a QICO-based RNN, for effective classification of cancer data using gene expression data. This is the first work to utilize this combination for the task.
Alexandr Seleznyov and Seppo Puuronen
Abstract
Nowadays, computer and network intrusions have become more common and more complicated, challenging intrusion detection systems. Network traffic has also been constantly increasing. As a consequence, the amount of data to be processed by an intrusion detection system has been growing, making it difficult to detect intrusions efficiently online. This paper proposes an approach for continuous user authentication based on the user's behaviour, aiming at the development of an efficient and portable anomaly intrusion detection system. A prototype of a host-based intrusion detection system was built. It detects masqueraders by comparing the current user behaviour with his/her stored behavioural model. The model itself is represented by a number of patterns that describe sequential and temporal behavioural regularities of the users. The paper also discusses implementation issues, describes the authors' solutions and provides performance results of the prototype.
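A minimal sketch of the sequential-pattern idea, assuming shell commands as the behavioural events (this is an illustration, not the authors' prototype): a user's model is the set of command bigrams seen frequently in their history, and a session whose bigrams rarely match the model is flagged as a possible masquerader.

```python
from collections import Counter

def bigrams(seq):
    # consecutive event pairs capture sequential regularities
    return list(zip(seq, seq[1:]))

def build_model(history, min_count=2):
    # keep only bigrams the user produces repeatedly
    counts = Counter(bigrams(history))
    return {bg for bg, c in counts.items() if c >= min_count}

def anomaly_score(model, session):
    # fraction of the session's bigrams unseen in the stored model
    bgs = bigrams(session)
    unseen = sum(1 for bg in bgs if bg not in model)
    return unseen / len(bgs)

history = ["ls", "cd", "ls", "vim", "make",
           "ls", "cd", "ls", "vim", "make"]
model = build_model(history)
print(anomaly_score(model, ["ls", "cd", "ls", "vim"]))      # 0.0: matches habits
print(anomaly_score(model, ["nc", "chmod", "wget", "sh"]))  # 1.0: unfamiliar
```

A real system would combine many such pattern types (including temporal ones) and threshold the score before raising an alarm.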