Search results

1 – 10 of 330
Article
Publication date: 13 August 2020

Chandra Sekhar Kolli and Uma Devi Tatavarthi


Abstract

Purpose

Fraud transaction detection has become a significant factor in communication technologies and electronic commerce systems, as it affects the usage of electronic payment. Even though various fraud detection methods have been developed, enhancing the performance of electronic payment by detecting fraudsters remains a great challenge in bank transactions.

Design/methodology/approach

This paper aims to design a fraud detection mechanism using the proposed Harris water optimization-based deep recurrent neural network (HWO-based deep RNN). The proposed fraud detection strategy includes three phases: pre-processing, feature selection and fraud detection. Initially, the input transactional data is pre-processed using the Box-Cox transformation to remove redundant and noisy values. The pre-processed data is passed to the feature selection phase, where the essential and suitable features are selected using the wrapper model; the selected features enable the classifier to achieve better detection performance. Finally, the selected features are fed to the detection phase, where a deep recurrent neural network classifier performs the fraud detection, with the classifier trained by the proposed Harris water optimization algorithm, an integration of water wave optimization and Harris hawks optimization.
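The Box-Cox pre-processing step described above can be made concrete. The sketch below is a generic Box-Cox power transform in Python, not the authors' implementation; the choice of λ and the example transaction amounts are illustrative assumptions (the abstract does not say how λ is selected).

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform for a positive value x.

    For lam != 0: (x**lam - 1) / lam; for lam == 0: ln(x).
    """
    if x <= 0:
        raise ValueError("Box-Cox requires strictly positive inputs")
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# Transform a column of transaction amounts (hypothetical values).
amounts = [12.5, 99.0, 3.2, 450.0]
transformed = [box_cox(a, lam=0.5) for a in amounts]
```

In practice λ is usually fitted by maximum likelihood (e.g. `scipy.stats.boxcox` does this) rather than fixed by hand.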

Findings

The proposed HWO-based deep RNN achieved better performance in terms of accuracy, sensitivity and specificity, with values of 0.9192, 0.7642 and 0.9943, respectively.

Originality/value

An effective fraud detection method named HWO-based deep RNN is designed to detect frauds in bank transactions. The optimal features selected using the wrapper model enable the classifier to find fraudulent activities more efficiently. The detection result is evaluated through the optimization model based on a fitness measure, such that the solution with the minimal error value is declared the best, as it yields better detection results.

Article
Publication date: 16 September 2021

Sireesha Jasti


Abstract

Purpose

The internet has undergone tremendous change with the advancement of new technologies, which has led its users to post comments regarding services and products. Sentiment classification is the process of analyzing such reviews to help users decide whether to purchase a product.

Design/methodology/approach

A rider feedback artificial tree optimization-enabled deep recurrent neural network (RFATO-enabled deep RNN) is developed for the effective classification of sentiments into various grades. The proposed RFATO algorithm is modeled by integrating the feedback artificial tree (FAT) algorithm into the rider optimization algorithm (ROA), and it is used to train the deep RNN classifier for sentiment classification of review data. Pre-processing is performed by stemming and stop-word removal to reduce redundancy for smoother processing of the data. Features including SentiWordNet-based features, a variant of term frequency-inverse document frequency (TF-IDF) features and spam-word-based features are extracted from the review data to form the feature vector. Feature fusion is performed based on the entropy of the extracted features. The metrics employed for evaluation of the proposed RFATO algorithm are accuracy, sensitivity and specificity.
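The TF-IDF part of the feature extraction above can be illustrated with a minimal sketch. This is plain TF-IDF, not the paper's specific variant and without its SentiWordNet or spam-word features; the toy documents are invented.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF vectors for a list of tokenized documents.

    tf = term count / document length; idf = log(N / document frequency).
    """
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        length = len(doc)
        vectors.append({t: (c / length) * math.log(n / df[t])
                        for t, c in counts.items()})
    return vectors

# Two toy tokenized reviews.
docs = [["good", "phone", "good"], ["bad", "phone"]]
vecs = tf_idf(docs)
# "phone" occurs in every document, so its idf (and weight) is zero.
```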

Findings

By using the proposed RFATO algorithm, the evaluation metrics of accuracy, sensitivity and specificity are maximized compared to existing algorithms.

Originality/value

The proposed RFATO algorithm is modeled by integrating the FAT algorithm into the ROA, and it is used to train the deep RNN classifier for sentiment classification of review data. Pre-processing is performed by stemming and stop-word removal to reduce redundancy for smoother processing of the data. Features including SentiWordNet-based features, a variant of TF-IDF features and spam-word-based features are extracted from the review data to form the feature vector, and feature fusion is performed based on the entropy of the extracted features.

Details

International Journal of Web Information Systems, vol. 17 no. 6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 17 January 2020

Wei Feng, Yuqin Wu and Yexian Fan


Abstract

Purpose

The purpose of this paper is to address the shortcomings of existing methods for the prediction of network security situations (NSS). Because conventional methods for the prediction of NSS, such as support vector machine and particle swarm optimization, lack accuracy, robustness and efficiency, the authors propose a new method for the prediction of NSS based on a recurrent neural network (RNN) with gated recurrent units.

Design/methodology/approach

This method first extracts internal and external information features from the original time-series network data. The extracted features are then applied to the deep RNN model for training and validation. After iteration and optimization, accurate predictions of NSS are obtained from the well-trained model, which is robust to unstable network data.
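The abstract does not specify what the internal and external features are, but the usual framing of time-series data before it is fed to an RNN can be sketched as a sliding window over the series; the traffic values below are hypothetical.

```python
def make_windows(series, window, horizon=1):
    """Slice a univariate series into (input window, target) pairs,
    the standard supervised framing for RNN-based forecasting."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]          # the last `window` observations
        y = series[i + window + horizon - 1]  # the value to predict
        pairs.append((x, y))
    return pairs

# Hypothetical hourly network-traffic measurements.
traffic = [10, 12, 11, 15, 14, 13, 18]
samples = make_windows(traffic, window=3)
```

Each `(x, y)` pair becomes one training example for the recurrent model.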

Findings

Experiments on a benchmark data set show that the proposed method obtains more accurate and robust prediction results than conventional models. Although the deep RNN models require more training time, they guarantee the accuracy and robustness of prediction in return.

Originality/value

In the prediction of NSS time-series data, the proposed internal and external information features describe the original data well, and the deep RNN model outperforms state-of-the-art models.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 9 March 2022

G.L. Infant Cyril and J.P. Ananth


Abstract

Purpose

Banks are an imperative part of the market economy. The failure or success of an institution relies on the industry's ability to compute credit risk. Loan eligibility prediction models analyze past and current information about a credit user to make predictions. However, precise loan prediction with risk and assessment analysis remains a major challenge in loan eligibility prediction.

Design/methodology/approach

The aim of this research is to present a new method, namely the Social Border Collie Optimization (SBCO)-based deep neuro-fuzzy network, for loan eligibility prediction. In this method, the Box-Cox transformation is applied to the input loan data to make the data apt for further processing. The transformed data undergoes wrapper-based feature selection to choose suitable features and boost the performance of loan eligibility calculation. Once the features are chosen, naive Bayes (NB) is adopted for feature fusion. During NB training, the classifier builds a probability index table from the input data features and group values; testing of the NB classifier uses the posterior probability ratio, considering the conditional probability of the normalization constant with class evidence. Finally, loan eligibility prediction is achieved by a deep neuro-fuzzy network trained with the designed SBCO, which is devised by combining the social ski driver (SSD) algorithm and Border Collie Optimization (BCO) to produce the most precise result.
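The naive Bayes step can be sketched generically. The code below is a tiny categorical naive Bayes with Laplace smoothing, not the authors' exact probability index table or posterior ratio test; the loan rows, feature values and labels are invented.

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Build per-class priors and per-feature conditional value counts."""
    priors = Counter(labels)
    cond = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            cond[(j, y)][v] += 1
    return priors, cond

def posterior(priors, cond, row, cls, n_values=2):
    """Unnormalized P(cls | row) with add-one (Laplace) smoothing."""
    p = priors[cls] / sum(priors.values())
    for j, v in enumerate(row):
        counts = cond[(j, cls)]
        p *= (counts[v] + 1) / (sum(counts.values()) + n_values)
    return p

# Toy loan records: (income level, existing-loan flag) -> eligibility label.
rows = [("low", "yes"), ("high", "no"), ("high", "yes"), ("low", "no")]
labels = ["eligible", "not", "eligible", "not"]
priors, cond = train_nb(rows, labels)
scores = {c: posterior(priors, cond, ("low", "yes"), c) for c in priors}
```

The class with the larger unnormalized posterior wins; the ratio of the two scores is the posterior probability ratio mentioned in the abstract.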

Findings

The analysis is performed using the accuracy, sensitivity and specificity parameters. The designed method achieves the highest accuracy of 95%, with sensitivity of 95.4% and specificity of 97.3%, when compared to existing methods such as fuzzy neural network (Fuzzy NN), multiple partial least squares regression (Multi_PLS), instance-based entropy fuzzy support vector machine (IEFSVM), deep recurrent neural network (Deep RNN) and whale social optimization algorithm-based deep RNN (WSOA-based Deep RNN).

Originality/value

This paper devises the SBCO-based deep neuro-fuzzy network for predicting loan eligibility. Here, the deep neuro-fuzzy network is trained with the proposed SBCO, which combines the SSD and BCO algorithms to produce the most precise result for loan eligibility prediction.

Details

Kybernetes, vol. 52 no. 8
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 11 March 2022

Snehal R. Rathi and Yogesh D. Deshpande


Abstract

Purpose

Affective states in learning have gained immense attention in education. Precise affective-state prediction can increase the learning gain by adapting targeted interventions that adjust to changes in individual students' affective states. Several techniques have been devised for predicting affective states from audio, video and biosensors. Still, systems that rely on analyzing audio and video cannot guarantee anonymity and are subject to privacy problems.

Design/methodology/approach

A new strategy, termed rider squirrel search algorithm-based deep long short-term memory (RiderSSA-based deep LSTM), is devised for affective-state prediction, with the deep LSTM trained by the proposed RiderSSA. The RiderSSA-based deep LSTM effectively predicts affective states such as confusion, engagement, frustration, anger, happiness, disgust, boredom and surprise. In addition, learning styles are predicted from the extracted features using the rider neural network (RideNN), based on the Felder–Silverman learning-style model (FSLSM); here, RideNN classifies the learners. Finally, the course ID, student ID, affective state, learning style, exam score and course completion are taken as output data to determine the correlative study.

Findings

The proposed RiderSSA-based deep LSTM provided enhanced efficiency, with an elevated accuracy of 0.962 and a highest correlation of 0.406.

Originality/value

The proposed method based on affective prediction obtained maximal accuracy and the highest correlation. Thus, the method can be applied to the course recommendation system based on affect prediction.

Details

Kybernetes, vol. 52 no. 9
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 1 April 2022

Qiong Jia, Ying Zhu, Rui Xu, Yubin Zhang and Yihua Zhao


Abstract

Purpose

Abundant studies of outpatient visits apply traditional recurrent neural network (RNN) approaches; more recent methods, such as the deep long short-term memory (DLSTM) model, have yet to be implemented in efforts to forecast key hospital data. Therefore, the current study aims to report on an application of the DLSTM model to forecast multiple streams of healthcare data.

Design/methodology/approach

Static and dynamic DLSTM models, among the most advanced machine learning (ML) methods, aim to forecast time-series data such as daily patient visits. With a comparative analysis conducted in a high-level urban Chinese hospital, this study tests the proposed DLSTM model against several widely used time-series analyses as reference models.

Findings

The empirical results show that the static DLSTM approach outperforms seasonal autoregressive integrated moving average (SARIMA), single and multiple RNN, deep gated recurrent units (DGRU), traditional long short-term memory (LSTM) and dynamic DLSTM models, with smaller mean absolute, root mean square, mean absolute percentage and root mean square percentage errors (RMSPE). In particular, static DLSTM outperforms all other models for predicting daily patient visits, the number of daily medical examinations and prescriptions.
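The four error measures used in the comparison can be stated concretely; the sketch below assumes equal-length series with no zero actual values, and the visit counts are hypothetical.

```python
import math

def forecast_errors(actual, predicted):
    """Return (MAE, RMSE, MAPE, RMSPE) for two equal-length series.

    MAE  = mean absolute error          RMSE  = root mean square error
    MAPE = mean absolute % error        RMSPE = root mean square % error
    """
    n = len(actual)
    diffs = [p - a for a, p in zip(actual, predicted)]
    mae = sum(abs(d) for d in diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    pct = [d / a for a, d in zip(actual, diffs)]  # relative errors
    mape = sum(abs(e) for e in pct) / n
    rmspe = math.sqrt(sum(e * e for e in pct) / n)
    return mae, rmse, mape, rmspe

# Hypothetical daily visit counts vs. model forecasts.
mae, rmse, mape, rmspe = forecast_errors([100, 200], [110, 190])
```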

Practical implications

With these results, hospitals can achieve more precise predictions of outpatient visits, medical examinations and prescriptions, which can inform hospitals' construction plans and increase the efficiency with which the hospitals manage relevant information.

Originality/value

To address a persistent gap in smart hospital and ML literature, this study offers evidence of the best forecasting models with a comparative analysis. The study extends predictive methods for forecasting patient visits, medical examinations and prescriptions and advances insights into smart hospitals by testing a state-of-the-art, deep learning neural network method.

Details

Industrial Management & Data Systems, vol. 122 no. 10
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 3 November 2020

Jagroop Kaur and Jaswinder Singh


Abstract

Purpose

Normalization is an important step in all natural language processing applications that handle social media text. Text from social media poses different kinds of problems that are not present in regular text. Recently, a considerable amount of work has been done in this direction, but mostly for the English language. People who do not speak English code-mix the text with their native language and post on social media using the Roman script. This kind of text further aggravates the normalization problem. This paper aims to discuss the concept of normalization with respect to code-mixed social media text, and a model is proposed to normalize such text.

Design/methodology/approach

The system is divided into two phases: candidate generation and most-probable-sentence selection. The candidate generation task is treated as a machine translation task, where Roman text is the source language and Gurmukhi text is the target language. A character-based translation system is proposed to generate candidate tokens. Once candidates are generated, the second phase uses the beam search method to select the most probable sentence based on a hidden Markov model.
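The second-phase selection can be sketched generically. The beam search below picks a sentence from per-token candidate lists; the toy scoring function stands in for the hidden-Markov-model transition probabilities, and the candidate tokens are illustrative assumptions, not the authors' system.

```python
def beam_search(candidates, score, beam_width=2):
    """Pick the most probable sentence from per-token candidate lists.

    `candidates` is a list of lists of candidate tokens; `score(prev, cur)`
    returns an additive (log-probability-style) transition score, as a
    hidden-Markov-model bigram would.
    """
    beams = [([], 0.0)]
    for options in candidates:
        expanded = []
        for seq, s in beams:
            prev = seq[-1] if seq else "<s>"
            for tok in options:
                expanded.append((seq + [tok], s + score(prev, tok)))
        # Keep only the best `beam_width` partial sentences.
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy scorer: reward candidates sharing the previous token's first letter.
def toy_score(prev, tok):
    return 1.0 if prev[0] == tok[0] else 0.0

best = beam_search([["kar", "car"], ["karo", "baro"]], toy_score)
```

A real system would replace `toy_score` with learned HMM transition and emission probabilities.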

Findings

Character error rate (CER) and bilingual evaluation understudy (BLEU) score are reported. The proposed system has been compared with the Akhar software and the RB_R2G system, which are also capable of transliterating Roman text to Gurmukhi. The proposed system outperforms the Akhar software. The CER and BLEU scores are 0.268121 and 0.6807939, respectively, for ill-formed text.

Research limitations/implications

It was observed that the system produces dialectical variations of a word, or words with minor errors such as missing diacritics. A spell checker can improve the output of the system by correcting these minor errors. Extensive experimentation is needed to optimize the language identifier, which will further improve the output. The language model also merits further exploration. Inclusion of wider context, particularly from social media text, is an important area that deserves further investigation.

Practical implications

The practical implications of this study are: (1) development of a parallel dataset containing Roman and Gurmukhi text; (2) development of a dataset annotated with language tags; (3) development of the normalization system, which is the first of its kind, proposes a translation-based solution for normalizing noisy social media text from Roman to Gurmukhi script and can be extended to any pair of scripts; and (4) applicability of the proposed system to better analysis of social media text. Theoretically, this study helps in better understanding text normalization in the social media context and opens the door for further research in multilingual social media text normalization.

Originality/value

Existing research focuses on normalizing monolingual text. This study contributes towards the development of a normalization system for multilingual text.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 2 February 2021

Hao Wang, Guangming Dong and Jin Chen


Abstract

Purpose

The purpose of this paper is to build a regression model related to tool wear; the regression model is then used to identify the state of tool wear.

Design/methodology/approach

In this paper, genetic programming (GP), originally used to solve symbolic regression problems, is used to build the regression model related to tool wear, given its strong regression ability. GP is improved in its genetic operations and weighting matrix. The performance of GP is verified on the tool vibration, force and acoustic emission data provided by the 2010 Prognostics and Health Management (PHM) data challenge.
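The symbolic-regression idea behind GP can be sketched. The code below uses random search over small expression trees rather than full GP with crossover and selection, so it only illustrates the expression representation and fitness evaluation; the target function is a stand-in for a wear signal, not the paper's data.

```python
import random

# Expressions are nested tuples:
#   ("x",) | ("const", c) | ("+", left, right) | ("*", left, right)

def evaluate(expr, x):
    """Recursively evaluate an expression tree at input x."""
    op = expr[0]
    if op == "x":
        return x
    if op == "const":
        return expr[1]
    a, b = evaluate(expr[1], x), evaluate(expr[2], x)
    return a + b if op == "+" else a * b

def random_expr(depth=2):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return ("x",) if random.random() < 0.5 else ("const", random.randint(0, 3))
    op = random.choice(["+", "*"])
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def fitness(expr, xs, ys):
    """Sum of squared errors; lower is better."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))

# Target: y = x * x + 1 (a stand-in for a wear signal).
xs = [0, 1, 2, 3]
ys = [x * x + 1 for x in xs]
random.seed(0)
best = min((random_expr() for _ in range(2000)), key=lambda e: fitness(e, xs, ys))
```

Full GP would evolve a population of such trees via selection, crossover and mutation instead of sampling them independently.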

Findings

As a result, the regression model discovered by GP can identify the state of tool wear. Compared to other regression algorithms, e.g. support vector regression and polynomial regression, the identification by GP is more precise.

Research limitations/implications

The regression models built in this paper can only assess the current wear state from the current signals of the tool; they cannot predict or estimate tool wear beyond the current state. In addition, the generalization of the models has some limitations: their performance is only proven on signals from the same type of tools under the same working conditions, and different tools and different working conditions may influence the performance of the models.

Originality/value

In this study, the discovered regression model identifies the state of tool wear precisely, and the identification performance of the model applied to other tools is also excellent. It provides significant information about the health of the tool, so tools can be replaced or repaired in time and losses caused by tool damage can be avoided.

Details

Engineering Computations, vol. 38 no. 6
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 25 October 2021

Venkata Dasu Marri, Veera Narayana Reddy P. and Chandra Mohan Reddy S.


Abstract

Purpose

Image classification is a fundamental form of digital image processing in which pixels are labeled into one of the object classes present in the image. Multispectral image classification is a challenging task due to complexities associated with images captured by satellites. Accurate image classification is highly essential in remote sensing applications. However, existing machine learning and deep learning-based classification methods could not provide the desired accuracy. The purpose of this paper is to classify the objects in satellite images with greater accuracy.

Design/methodology/approach

This paper proposes a deep learning-based automated method for classifying multispectral images. Data sets collected from public databases are first divided into a number of patches, and features are extracted from each patch. The features extracted from the patches are then concatenated before a classification method is used to classify the objects in the image.

Findings

The performance of the proposed modified velocity-based colliding bodies optimization method is compared with existing methods in terms of type-1 measures, such as sensitivity, specificity, accuracy, negative predictive value, F1 score and Matthews correlation coefficient, and type-2 measures, such as false discovery rate and false positive rate. The statistical results obtained from the proposed method show better performance than existing methods.

Originality/value

In this work, multispectral image classification accuracy is improved with an optimization algorithm called modified velocity-based colliding bodies optimization.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 5
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 19 February 2021

Zulkifli Halim, Shuhaida Mohamed Shuhidan and Zuraidah Mohd Sanusi


Abstract

Purpose

In previous studies of financial distress prediction, deep learning techniques performed better than traditional techniques on time-series data. This study investigates the performance of the deep learning models recurrent neural network, long short-term memory and gated recurrent unit for financial distress prediction among Malaysian public listed corporations over time-series data. This study also compares the performance of logistic regression, support vector machine, neural network, decision tree and the deep learning models on single-year data.

Design/methodology/approach

The data used are the financial data of public listed companies that have been classified as PN17 status (distressed) and non-PN17 (not distressed) in Malaysia. This study was conducted using machine learning libraries of the Python programming language.

Findings

The findings indicate that all deep learning models used in this study achieved 90% accuracy and above, with long short-term memory (LSTM) and gated recurrent unit (GRU) reaching 93% accuracy. In addition, the deep learning models consistently performed well compared to the other models on single-year data: LSTM and GRU reached 90% accuracy and the recurrent neural network (RNN) 88%. The results also show that LSTM and GRU achieve better precision and recall than RNN. These findings show that the deep learning approach leads to better performance in financial distress prediction studies. In addition, time-series data should be highlighted in any financial distress prediction study, since it has a big impact on credit risk assessment.

Research limitations/implications

The first limitation of this study is that hyperparameter tuning was applied only to the deep learning models. Second, the time-series data are used only for the deep learning models, since the other models fit optimally on single-year data.

Practical implications

This study recommends deep learning as a new approach that leads to better performance in financial distress prediction studies. Besides that, time-series data should be highlighted in any financial distress prediction study, since the data have a big impact on the assessment of credit risk.

Originality/value

To the best of the authors' knowledge, this article is the first study that uses the gated recurrent unit in financial distress prediction based on time-series data for Malaysian public listed companies. The findings of this study can help financial institutions and investors find a better and more accurate approach for credit risk assessment.

Details

Business Process Management Journal, vol. 27 no. 4
Type: Research Article
ISSN: 1463-7154

Keywords
