Search results
1 – 10 of over 4,000

Krishna Teja Perannagari and Shaphali Gupta
Abstract
Artificial neural networks (ANNs), computational models that simulate biological neural systems, have become a dominant paradigm for solving complex analytical problems. ANN applications have been employed in various disciplines such as psychology, computer science, mathematics, engineering, medicine, manufacturing, and business studies. Academic research on ANNs is witnessing considerable publication activity, and there is a need to track the intellectual structure of the existing research for a better comprehension of the domain. The current study applies a bibliometric approach to the ANN business literature extracted from the Web of Science database. The study also performs a chronological review using science mapping and examines the evolution trajectory to determine research areas relevant to future research. The authors suggest that researchers focus on ANN deep learning models, as the bibliometric results predict expeditious growth of the research topic in the upcoming years. The findings reveal that business research on ANNs is flourishing and suggest further work on domains such as back-propagation neural networks, support vector machines, and predictive modeling. By providing a systematic and dynamic understanding of ANN business research, the current study enhances readers' understanding of existing reviews and complements the domain knowledge.
Details
Keywords
Mithun B. Patil and Rekha Patil
Abstract
Purpose
The vertical handoff (VHO) mechanism has become very popular because of improvements in mobility models. These developments, however, are limited to certain circumstances and thus do not support generic mobility, yet vertical handover management in heterogeneous wireless networks (HWNs) is crucial and challenging. Hence, this paper introduces a vertical handoff management approach based on an effective network selection scheme.
Design/methodology/approach
This paper aims to improve on the working principle of previous methods and make VHO more efficient and reliable for the HWN. Initially, the handover triggering technique is modelled to identify an appropriate place to initiate handover based on the computed coverage area of the cellular base station or wireless local area network (WLAN) access point. Then, inappropriate networks are eliminated to determine the better network for performing handover. Accordingly, a network selection approach is introduced on the basis of the Fractional Dolphin Echolocation-based support vector neural network (Fractional-DE-based SVNN). The Fractional-DE is designed by integrating fractional calculus (FC) into Dolphin Echolocation (DE), thereby modifying the update rule of the DE algorithm based on the location of the solutions in past iterations. The proposed Fractional-DE algorithm is used to train the support vector neural network (SVNN) to select the best weights. Several parameters, such as bit error rate (BER), end-to-end delay (EED), jitter, packet loss, and energy consumption, are considered for choosing the best network.
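The abstract does not reproduce the exact Fractional-DE update rule, but one common way fractional calculus enters such metaheuristic updates is to weight a short memory of past positions with Grünwald-Letnikov-style binomial coefficients. The sketch below is purely illustrative: the sphere objective, the coefficients, and the three-step memory are assumptions, not the authors' algorithm.

```python
import numpy as np

def fractional_update(history, step, alpha=0.6):
    """Blend a short memory of past positions using Grunwald-Letnikov-style
    binomial weights (illustrative; not the paper's exact rule)."""
    w = [alpha,
         alpha * (1 - alpha) / 2,
         alpha * (1 - alpha) * (2 - alpha) / 6]
    # Most recent position gets the largest weight
    memory = sum(wi * h for wi, h in zip(w, reversed(history)))
    return memory + step

def sphere(x):
    return float(np.sum(x ** 2))   # toy objective standing in for network quality

rng = np.random.default_rng(0)
x = rng.normal(size=2)
history = [x.copy()]
best = sphere(x)
for _ in range(200):
    step = rng.normal(scale=0.1, size=2) - 0.1 * x   # crude exploratory move
    cand = fractional_update(history, step)
    if sphere(cand) < best:
        x, best = cand, sphere(cand)
    history = (history + [x.copy()])[-3:]            # keep the last 3 positions
```

The fractional weights make each new position depend on several past iterations rather than only the current one, which is the behaviour the abstract attributes to the modified DE update.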
Findings
The performance of the proposed VHO mechanism based on Fractional-DE is evaluated in terms of delay, energy consumption, staytime, and throughput. The proposed Fractional-DE method achieves a minimal delay of 0.0100 sec, a minimal energy consumption of 0.348, a maximal staytime of 4.373 sec, and a maximal throughput of 109.20 kbps.
Originality/value
In this paper, a network selection approach is introduced on the basis of the Fractional Dolphin Echolocation-based support vector neural network (Fractional-DE-based SVNN). The Fractional-DE is designed by integrating fractional calculus (FC) into Dolphin Echolocation (DE), thereby modifying the update rule of the DE algorithm based on the location of the solutions in past iterations. The proposed Fractional-DE algorithm is used to train the SVNN to select the best weights. Several parameters, such as bit error rate (BER), end-to-end delay (EED), jitter, packet loss, and energy consumption, are considered for choosing the best network. The performance of the proposed VHO mechanism based on Fractional-DE is evaluated in terms of delay, energy consumption, staytime, and throughput, and the proposed method offers the best performance.
Joseph Awoamim Yacim and Douw Gert Brand Boshoff
Abstract
Purpose
The paper introduced the use of a hybrid system of neural networks and support vector machines (NNSVMs), consisting of artificial neural networks (ANNs) and support vector machines (SVMs), to price single-family properties.
Design/methodology/approach
The mechanism of the hybrid system is such that its output is given by the SVMs, which utilise the results of the ANNs as their input. The results are compared with those of other property pricing modelling techniques, including the standalone ANNs, SVMs, geographically weighted regression (GWR), the spatial error model (SEM), the spatial lag model (SLM) and ordinary least squares (OLS). The techniques were applied to a dataset of 3,225 properties sold between January 2012 and May 2014 in Cape Town, South Africa.
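The two-stage mechanism (ANN output feeding the SVM) can be sketched with scikit-learn, assumed available; the synthetic data and hyperparameters below are illustrative, not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# Synthetic stand-in for property features and prices
X = rng.uniform(-1, 1, size=(300, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2]

# Stage 1: the ANN learns the price surface from the raw features
ann = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                   max_iter=2000, random_state=0)
ann.fit(X, y)

# Stage 2: the SVM's only input is the ANN's prediction, as in the hybrid
svm = SVR(C=10.0, epsilon=0.01)
svm.fit(ann.predict(X).reshape(-1, 1), y)

y_hat = svm.predict(ann.predict(X).reshape(-1, 1))
mae = float(np.mean(np.abs(y_hat - y)))
```

The design choice mirrors the abstract: the SVM does not see the raw features at all, only the ANN's output, so it acts as a corrective second stage.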
Findings
The results demonstrate that the hybrid system performed better than the ANNs, SVMs and the OLS. However, in comparison with the spatial models (GWR, SEM and SLM), the hybrid system performed poorly, with the SEM favoured as the best pricing technique.
Originality/value
The findings extend the debate in the body of knowledge that the results of the OLS can be significantly improved through the use of spatial models that correct biased estimates and vary prices across the different property locations. Additionally, use of the hybrid system's results is affected by the black-box nature of the ANNs and SVMs, limiting it to serving as a check on estimates predicted by the regression-based models.
Abstract
Purpose
To identify and analyze the occurrence of Internet financial market risk, data mining technology is combined with deep learning for processing and analysis. The aim of Internet market risk management is to improve the management of Internet financial risk, improve Internet financial supervision policy and promote the healthy development of Internet finance.
Design/methodology/approach
In this exploration, data mining technology is combined with deep learning to mine Internet financial data, warn of potential risks in the market and provide targeted risk management measures. Therefore, in this article, to improve the ability of data mining to deal with Internet financial risk management, a radial basis function (RBF) neural network algorithm optimized by ant colony optimization (ACO) is proposed.
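A minimal NumPy sketch of the RBF side: a Gaussian basis layer with least-squares output weights, fitting a nonlinear function as in the paper's fitting test. In place of full ant colony optimization, a simplified ACO-style search (candidate widths sampled around the best found so far, loosely mimicking pheromone reinforcement) tunes the basis width; the centres, target function and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_fit_predict(x, y, centers, width):
    """Gaussian RBF layer plus least-squares output weights."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ w

x = np.linspace(-3, 3, 120)
y = np.sin(x)                        # nonlinear target function
centers = np.linspace(-3, 3, 10)     # fixed, evenly spaced centres

# ACO-style width search: sample candidates around the current best
# ("pheromone peak"), keeping whichever lowers the fitting error
best_width, best_err = 1.0, np.inf
for _ in range(60):
    cand = max(1e-3, abs(best_width + rng.normal(scale=0.3)))
    err = float(np.mean((rbf_fit_predict(x, y, centers, cand) - y) ** 2))
    if err < best_err:
        best_width, best_err = cand, err
```

In the paper the ACO search would cover the full RBF parameter set rather than a single width; this sketch only shows the shape of the interaction between the optimizer and the network.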
Findings
The results show that the actual error of the ACO-optimized RBF neural network is 0.249, a difference of 0.149 from the target error, indicating that the optimized algorithm makes the calculation results more accurate. The fitting results of the RBF neural network and the ACO-optimized RBF neural network for a nonlinear function are compared. Compared with the performance of other algorithms, the error of the ACO-optimized RBF neural network is 0.249, the running time is 2.212 s and the number of iterations is 36, far less than the actual results of the other two algorithms.
Originality/value
The optimized algorithm has better spatial mapping and generalization ability and can achieve higher accuracy in short-term training. Therefore, the ACO-optimized RBF neural network algorithm designed in this exploration has high accuracy for the prediction of Internet financial market risk.
Abstract
Purpose
Automated crop prediction is needed for the following reasons: First, agricultural yields were previously decided by a farmer's ability to work in a certain field and with a particular crop, and farmers were not always able to predict a crop and its yield on that basis alone. Second, seed firms frequently monitor how well new plant varieties grow in certain settings. Third, predicting agricultural production is critical for addressing emerging food security concerns, especially in the face of global climate change. Accurate production forecasts not only assist farmers in making informed economic and management decisions but also aid in the prevention of famine. This results in efficiency and productivity gains for farming systems, as well as reduced risk from environmental factors.
Design/methodology/approach
This research paper proposes a machine learning technique for effective autonomous crop and yield prediction, which makes use of solution encoding to create solutions randomly; then, for every generated solution, fitness is evaluated to reach the highest accuracy. The major focus of the proposed work is to optimize the weight parameter in the input data. The algorithm continues until the optimal agent, or optimal weight, is selected, which contributes to maximum accuracy in automated crop prediction.
Findings
Performance of the proposed work is compared with that of different existing algorithms, such as random forest, support vector machine (SVM) and artificial neural network (ANN). The proposed method, a support vector neural network (SVNN) with a gravitational search agent (GSA), is analysed using different performance metrics, such as accuracy, sensitivity, specificity, CPU memory usage and training time, and the maximum performance is determined.
Research limitations/implications
Rather than real-time data collected by Internet of Things (IoT) devices, this research relies solely on historical data; the proposed work does not employ IoT-based smart farming, which enhances the overall agriculture system by monitoring the field in real time. The present study only predicts the sort of crop to sow, not crop production.
Originality/value
The paper proposes a novel optimization algorithm based on the law of gravity and mass interactions. The search agents in the proposed algorithm are a cluster of weights that interact with one another according to Newtonian gravity and the laws of motion. A comparison was made between the suggested method and various existing strategies. The obtained results confirm its high performance in solving diverse nonlinear functions.
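The gravity-and-motion mechanics described above can be sketched as a minimal gravitational search in NumPy. A sphere objective stands in for the fitness of a candidate weight vector, and the gravitational constant, schedule and agent count are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    return float(np.sum(w ** 2))   # stand-in for the prediction error of weights w

n_agents, dim, iters = 8, 4, 100
X = rng.uniform(-5, 5, size=(n_agents, dim))   # candidate weight vectors
V = np.zeros_like(X)
start_best = min(fitness(x) for x in X)
gbest = start_best

for t in range(iters):
    f = np.array([fitness(x) for x in X])
    gbest = min(gbest, float(f.min()))
    best, worst = f.min(), f.max()
    m = (worst - f) / (worst - best + 1e-12)   # better fitness -> larger mass
    M = m / (m.sum() + 1e-12)
    G = 10 * np.exp(-3 * t / iters)            # decaying gravitational constant
    A = np.zeros_like(X)
    for i in range(n_agents):
        for j in range(n_agents):
            if i != j:
                diff = X[j] - X[i]
                r = np.linalg.norm(diff) + 1e-9
                # Newtonian pull of agent i toward the more "massive" agent j
                A[i] += rng.random() * G * M[j] * diff / r
    V = rng.random(size=V.shape) * V + A       # stochastic velocity update
    X = X + V
```

Heavy (well-fitting) agents attract the others, so the population drifts toward good weight regions as the gravitational constant decays.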
Armin Mahmoodi, Leila Hashemi, Milad Jasemi, Jeremy Laliberté, Richard C. Millar and Hamed Noshadi
Abstract
Purpose
In this research, the main purpose is to use a suitable structure to predict the trading signals of the stock market with high accuracy. For this purpose, two models for the analysis of technical adaptation were used in this study.
Design/methodology/approach
A support vector machine (SVM) is used with particle swarm optimization (PSO), where PSO serves as a fast and accurate classifier to search the problem-solving space; finally, the results are compared with the neural network's performance.
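The PSO search component can be sketched in NumPy. Here a convex toy objective stands in for the cross-validated classification error of an SVM configuration, and the inertia and acceleration coefficients are common textbook values rather than the study's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def objective(p):
    # Toy stand-in for validation error as a function of two parameters
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

n, iters = 12, 80
pos = rng.uniform(-5, 5, size=(n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    # Each particle is pulled toward its own best and the swarm's best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

In the study's setting the objective evaluation would train and score an SVM at each candidate point, which is where PSO's speed matters.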
Findings
Based on the results, the authors can say that both new models are trustworthy over 6 days; however, SVM-PSO is better than the basic research. The hit rate of SVM-PSO is 77.5%, whereas the hit rate of the neural network (basic research) is 74.2%.
Originality/value
In this research, two approaches have been developed to generate input data for the model: raw-based and signal-based. For comparison, the hit rate is considered as the percentage of correct predictions over 16 days.
Hera Khan, Ayush Srivastav and Amit Kumar Mishra
Abstract
A detailed description will be provided of all the classification algorithms that have been widely used in the domain of medical science. The foundation will be laid by giving a comprehensive overview of the background and history of classification algorithms. This will be followed by an extensive discussion of the various classification techniques in machine learning (ML), concluding with their applications to data analysis in medical science and health care. The initial sections of this chapter will cover the basic fundamentals required for a profound understanding of classification techniques in ML, comprising the underlying differences between unsupervised and supervised learning, followed by the basic terminology of classification and its history. Further, it will include the types of classification algorithms, ranging from linear classifiers like Logistic Regression and Naïve Bayes to Nearest Neighbour, Support Vector Machine, tree-based classifiers, and Neural Networks, together with their respective mathematics. Ensemble algorithms such as Majority Voting, Boosting, Bagging, and Stacking will also be discussed at length, along with their relevant applications. Furthermore, this chapter will incorporate a comprehensive elucidation of the areas of application of such classification algorithms in the field of biomedicine and health care and their contribution to decision-making systems and predictive analysis. To conclude, this chapter will contribute substantially to research and development, as it provides a thorough insight into classification algorithms and their applications in the healthcare development sector.
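Several of the classifier families surveyed above, combined by majority voting, can be illustrated with scikit-learn (assumed available); the synthetic data set is a stand-in for a medical classification task.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a biomedical data set
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One member from each family discussed in the chapter
members = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier()),
    ("svm", SVC()),
    ("tree", DecisionTreeClassifier(random_state=0)),
]
vote = VotingClassifier(members)   # hard majority voting over member predictions
vote.fit(X_tr, y_tr)
acc = vote.score(X_te, y_te)
```

Majority voting is the simplest of the ensemble schemes mentioned; Boosting, Bagging and Stacking replace the vote with weighted, resampled or learned combinations.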
Kalyan Nagaraj, Biplab Bhattacharjee, Amulyashree Sridhar and Sharvani GS
Abstract
Purpose
Phishing is one of the major threats affecting businesses worldwide in current times. Organizations and customers face the hazards arising out of phishing attacks because of anonymous access to vulnerable details. Such attacks often result in substantial financial losses. Thus, there is a need for effective intrusion detection techniques to identify and possibly nullify the effects of phishing. Classifying phishing and non-phishing web content is a critical task in information security protocols, and foolproof mechanisms have yet to be implemented in practice. The purpose of the current study is to present an ensemble machine learning model for classifying phishing websites.
Design/methodology/approach
A publicly available data set comprising 10,068 instances of phishing and legitimate websites was used to build the classifier model. Feature extraction was performed by deploying a group of methods, and relevant features extracted were used for building the model. A twofold ensemble learner was developed by integrating results from random forest (RF) classifier, fed into a feedforward neural network (NN). Performance of the ensemble classifier was validated using k-fold cross-validation. The twofold ensemble learner was implemented as a user-friendly, interactive decision support system for classifying websites as phishing or legitimate ones.
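One way to realise the RF-into-NN twofold ensemble with scikit-learn (assumed available) is to feed the random forest's class probabilities to a feedforward network as features; the synthetic data and parameters below are illustrative, not the study's phishing data set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the phishing/legitimate feature set
X, y = make_classification(n_samples=600, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Stage 1: random forest classifier
rf = RandomForestClassifier(n_estimators=100, random_state=1)
rf.fit(X_tr, y_tr)

# Stage 2: feedforward NN consumes the RF's class probabilities as its input
nn = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=1000, random_state=1)
nn.fit(rf.predict_proba(X_tr), y_tr)

acc = nn.score(rf.predict_proba(X_te), y_te)
```

The second stage lets the network learn how much to trust the forest's probability estimates, rather than simply thresholding them.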
Findings
Experimental simulations were performed to assess and compare the performance of the ensemble classifiers. The statistical tests estimated that the RF_NN model gave superior performance, with an accuracy of 93.41 per cent and a minimal mean squared error of 0.000026.
Research limitations/implications
The research data set used in this study is publicly available and easy to analyze. Comparative analysis with other real-time data sets of recent origin must be performed to ensure generalization of the model against various security breaches. Different variants of phishing threats should be detected, rather than focusing solely on phishing website detection.
Originality/value
To the best of the authors' knowledge, the twofold ensemble model has not been applied to the classification of phishing websites in any previous study.
Renze Zhou, Zhiguo Xing, Haidou Wang, Zhongyu Piao, Yanfei Huang, Weiling Guo and Runbo Ma
Abstract
Purpose
With the development of deep learning-based analytical techniques, increased research has focused on fatigue data analysis methods based on deep learning, which are gaining in popularity. However, the application of deep neural networks in the material science domain is mainly inhibited by data availability. In this paper, to overcome the difficulty of multifactor fatigue life prediction with small data sets, a multiple neural network ensemble (MNNE) approach is proposed.
Design/methodology/approach
A multiple neural network ensemble (MNNE) with a general and flexible explicit function is developed to accurately quantify the complicated relationships hidden in multivariable data sets. Moreover, a variational autoencoder-based data generator is trained with small sample sets to expand the size of the training data set. In addition, a filtering rule based on the R2 score is proposed and applied in the training process of the MNNE; this approach has a beneficial effect on the prediction accuracy and generalization ability.
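The R2-based filtering rule can be sketched with scikit-learn (assumed available): several networks are trained, only members whose training R2 clears a threshold are kept, and the survivors' predictions are averaged. The data, threshold and network sizes are illustrative assumptions, and the VAE-based data generator is omitted.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Synthetic stand-in for a small multivariable fatigue data set
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]

members, kept = [], []
for seed in range(6):
    net = MLPRegressor(hidden_layer_sizes=(24,), solver="lbfgs",
                       max_iter=2000, random_state=seed)
    net.fit(X, y)
    members.append(net)
    if r2_score(y, net.predict(X)) >= 0.8:   # filtering rule on the R2 score
        kept.append(net)

kept = kept or members   # fall back to all members if none clears the bar
ensemble_pred = np.mean([net.predict(X) for net in kept], axis=0)
r2 = r2_score(y, ensemble_pred)
```

Discarding poorly fit members before averaging is what gives the filtering rule its reported benefit to accuracy and generalization.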
Findings
A comparative study involving the proposed method and traditional models is performed. The comparative experiment confirms that the use of hybrid data can improve the accuracy and generalization ability of the deep neural network and that the MNNE outperforms support vector machines, multilayer perceptron and deep neural network models based on the goodness of fit and robustness in the small sample case.
Practical implications
The experimental results imply that the proposed algorithm is a sophisticated and promising multivariate method for predicting the contact fatigue life of a coating when data availability is limited.
Originality/value
A data generation model based on a variational autoencoder was used to compensate for the lack of data. An MNNE method was proposed for the small-data case of fatigue life prediction.
Abstract
Introduction: The insurance industry is one of the most lucrative sectors of the economy, but it is volatile because of the large volume of data generated by the transactions taking place daily. Every bit of this data contributes to the market trends that stock investors use to predict returns. Specialised data mining techniques act as a solution for decision-making, reducing its uncertainty.
Purpose: There are limited studies to date that have examined the efficiency and effectiveness of data mining techniques across companies in the insurance industry. To enable companies to take full benefit of data mining techniques in insurance, the present study focuses on investigating the efficiency of the artificial neural network (ANN) and the support vector machine (SVM) across the insurance companies of the CNX 500.
Method: For the predictive models, various technical indicators were considered independent variables, and the change in return, i.e. increase or decrease, was deemed the dependent variable. The indicators were computed from four years of daily raw data on the insurance companies' stock values. We formed 90 data sets of varied periods for building the models, specifically six months, one year, two years and four years, for the six selected insurance companies.
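The transformation from raw daily prices to indicator features and an up/down label can be sketched in NumPy; the simple moving average and n-day momentum below are generic examples of technical indicators, not necessarily the study's set, and the price series is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic daily closing prices (geometric random walk)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=250)))

window = 10
# Technical indicators as independent variables
sma = np.convolve(prices, np.ones(window) / window, mode="valid")   # moving avg
momentum = prices[window - 1:] - prices[:len(prices) - window + 1]  # n-day change

# Dependent variable: does the next day's price rise (1) or fall (0)?
aligned = prices[window - 1:]
direction = (aligned[1:] > aligned[:-1]).astype(int)

# Drop the last indicator row, which has no next-day label
features = np.column_stack([sma[:-1], momentum[:-1]])
```

The resulting (features, direction) pairs are the kind of inputs an ANN or SVM classifier would be trained on to predict the change in return.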
Findings: The study's findings revealed that ANN performed best for the ICICIPRULI data model in terms of hit ratio, whereas the performance of SVM was best for the ICICIGI data model. In pairwise comparisons among the six selected Indian insurance companies from the CNX 500, the evaluated data showed eight significantly different pairs based on hit ratio for the ANN models and nine significantly different pairs for the SVM models.