Search results
1 – 10 of 46

Abhishek Das and Mihir Narayan Mohanty
Abstract
Purpose
Timely and accurate detection of cancer can save the life of the affected person. According to the World Health Organization (WHO), breast cancer has the highest incidence among all cancers, while it ranks fifth in mortality. Among the many image processing techniques, several works have applied convolutional neural networks (CNNs) to these images. However, deep learning models remain to be explored more thoroughly.
Design/methodology/approach
In this work, multivariate statistics-based kernel principal component analysis (KPCA) is used to extract essential features; KPCA simultaneously helps denoise the data. These features are processed through a heterogeneous ensemble model consisting of three base models: a recurrent neural network (RNN), a long short-term memory (LSTM) network and a gated recurrent unit (GRU). The outcomes of these base learners are fed to a fuzzy adaptive resonance theory mapping (ARTMAP) model for decision making; nodes are added to the F2a layer only when the winning criterion is fulfilled, which makes the ARTMAP model more robust.
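The decision stage can be pictured with a minimal sketch in which simple probability averaging stands in for the fuzzy ARTMAP combiner; the function name, the per-class probability format and the three base learners' outputs are illustrative assumptions, not the paper's implementation:

```python
def combine_base_learners(prob_outputs):
    """Average per-class probability vectors from several base learners
    (e.g. RNN, LSTM, GRU) and return the index of the winning class.
    A simple stand-in for the paper's fuzzy ARTMAP decision stage."""
    n_models = len(prob_outputs)
    n_classes = len(prob_outputs[0])
    avg = [sum(p[c] for p in prob_outputs) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])
```

With hypothetical outputs [0.9, 0.1], [0.6, 0.4] and [0.2, 0.8] from the three base learners, the averaged probabilities favour class 0.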
Findings
The proposed model is verified on the breast histopathology image dataset publicly available on Kaggle. It achieves 99.36% training accuracy and 98.72% validation accuracy. The proposed model addresses data processing at every stage: image denoising to reduce data redundancy, ensemble training to provide higher accuracy than single models, and final classification by a fuzzy ARTMAP model that controls the number of nodes according to performance, yielding robust and accurate classification.
Research limitations/implications
Research in the field of medical applications is ongoing, and more advanced algorithms are continually being developed for better classification. There remains scope to improve the models in terms of performance, practicability and cost efficiency. The ensemble may also be built from different combinations of base models with different characteristics, and signals, rather than images, may be verified with the proposed model. Experimental analysis shows the improved performance of the proposed model, but it still needs to be verified on practical systems; a practical implementation will examine its real-time performance and cost efficiency.
Originality/value
KPCA is utilized for denoising and for reducing data redundancy while selecting features. Training and classification are performed by a heterogeneous ensemble of RNN, LSTM and GRU base classifiers, which provides higher accuracy than single models, and the adaptive fuzzy ARTMAP model makes the final classification accurate. The effectiveness of combining these methods into a single model is analyzed in this work.
Abstract
Purpose
Deep learning (DL) is a new and relatively unexplored field that finds immense applications in many industries, especially ones that must make detailed observations, inferences and predictions based on extensive and scattered datasets. The purpose of this paper is to answer the following questions: (1) To what extent has DL penetrated the research being done in finance? (2) What areas of financial research have applications of DL, and what quality of work has been done in the niches? (3) What areas still need to be explored and have scope for future research?
Design/methodology/approach
This paper employs bibliometric analysis, a potent yet simple methodology with numerous applications in literature reviews. This paper focuses on citation analysis, author impacts, relevant and vital journals, co-citation analysis, bibliometric coupling and co-occurrence analysis. The authors collected 693 articles published in 2000–2022 from journals indexed in the Scopus database. Multiple software (VOSviewer, RStudio (biblioshiny) and Excel) were employed to analyze the data.
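The co-occurrence analysis named above reduces, at its core, to counting how often keyword pairs appear together across the collected articles. A generic sketch of that counting step, not the VOSviewer or biblioshiny implementation; the list-of-keyword-lists input format is an assumption:

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(papers):
    """Count how often each unordered keyword pair co-occurs across papers.
    `papers` is a list of keyword lists, one per article."""
    counts = Counter()
    for keywords in papers:
        # sort so (a, b) and (b, a) map to the same pair
        for pair in combinations(sorted(set(keywords)), 2):
            counts[pair] += 1
    return counts
```

The resulting pair counts are exactly what co-occurrence mapping tools visualize as edge weights.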
Findings
The findings reveal significant and renowned authors' impact in the field. The analysis indicated that the application of DL in finance has been on an upward track since 2017. The authors find four broad research areas (neural networks and stock market simulations; portfolio optimization and risk management; time series analysis and forecasting; high-frequency trading) with different degrees of intertwining and emerging research topics with the application of DL in finance. This article contributes to the literature by providing a systematic overview of the DL developments, trajectories, objectives and potential future research topics in finance.
Research limitations/implications
The findings of this paper act as a guide for literature review for anyone interested in doing research in the intersection of finance and DL. The article also explores multiple areas of research that have yet to be studied to a great extent and have abundant scope.
Originality/value
Very few studies have explored applications of DL, a much more specialized subset of machine learning (ML), in finance. The authors look at the problem from the perspective of the different DL techniques that have been used in finance. This is the first qualitative (content analysis) and quantitative (bibliometric analysis) assessment of current research on DL in finance.
Tongzheng Pu, Chongxing Huang, Haimo Zhang, Jingjing Yang and Ming Huang
Abstract
Purpose
Forecasting population movement trends is crucial for implementing effective policies to regulate labor force growth and understand demographic changes. Combining migration theory expertise and neural network technology can bring a fresh perspective to international migration forecasting research.
Design/methodology/approach
This study proposes a conditional generative adversarial network model incorporating migration knowledge (MK-CGAN). By using migration knowledge to design the parameters, MK-CGAN can effectively address the limited-data problem, thereby enhancing the accuracy of migration forecasts.
Findings
The model was tested by forecasting migration flows between different countries and showed good generalizability and validity. The results are robust: the proposed solution achieves lower mean absolute error, mean squared error, root mean square error and mean absolute percentage error values, and a higher R2 (reaching 0.9855), than long short-term memory (LSTM), gated recurrent unit (GRU), generative adversarial network (GAN) and the traditional gravity model.
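The traditional gravity model used as a baseline estimates a flow from the two population sizes and their separation. A minimal sketch; the scaling constant g and distance exponent beta are assumed free parameters, not values from the paper:

```python
def gravity_flow(pop_origin, pop_dest, distance, g=1.0, beta=2.0):
    """Classic gravity-model estimate of migration flow between two
    countries: proportional to both populations, decaying with distance."""
    return g * pop_origin * pop_dest / distance ** beta
```

In practice g and beta are fitted to observed flows; the learned models above replace this fixed functional form entirely.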
Originality/value
This study is significant because it demonstrates a highly effective technique for predicting international migration using conditional GANs. By incorporating migration knowledge into our models, we improve prediction accuracy and gain valuable insights into the differences between various model characteristics. We used SHapley Additive exPlanations (SHAP) to enhance our understanding of these differences and to provide clear, concise explanations of the model predictions. The results demonstrate the theoretical significance and practical value of the MK-CGAN model in predicting international migration.
Manpreet Kaur, Amit Kumar and Anil Kumar Mittal
Abstract
Purpose
In past decades, artificial neural network (ANN) models have revolutionised various stock market operations due to their superior ability to deal with nonlinear data, and they have garnered considerable attention from researchers worldwide. The present study aims to synthesize the research field concerning ANN applications in the stock market to (a) systematically map the research trends, key contributors, scientific collaborations and knowledge structure and (b) uncover the challenges and future research areas in the field.
Design/methodology/approach
To provide a comprehensive appraisal of the extant literature, the study adopted the mixed approach of quantitative (bibliometric analysis) and qualitative (intensive review of influential articles) assessment to analyse 1,483 articles published in the Scopus and Web of Science indexed journals during 1992–2022. The bibliographic data was processed and analysed using VOSviewer and R software.
Findings
The results revealed the proliferation of articles since 2018, with China as the dominant country, Wang J as the most prolific author, “Expert Systems with Applications” as the leading journal, “computer science” as the dominant subject area, and “stock price forecasting” as the predominantly explored research theme in the field. Furthermore, “portfolio optimization”, “sentiment analysis”, “algorithmic trading”, and “crisis prediction” are found as recently emerged research areas.
Originality/value
To the best of the authors' knowledge, the current study is a novel attempt to holistically assess the existing literature on ANN applications across the entire domain of the stock market. Its main contribution lies in discussing the challenges along with viable methodological solutions and in providing application-area-wise knowledge gaps for future studies.
Nehal Elshaboury, Eslam Mohammed Abdelkader, Abobakr Al-Sakkaf and Ashutosh Bagchi
Abstract
Purpose
The energy efficiency of buildings has been emphasized along with the continual development in the building and construction sector that consumes a significant amount of energy. To this end, the purpose of this research paper is to forecast energy consumption to improve energy resource planning and management.
Design/methodology/approach
This study proposes the application of the convolutional neural network (CNN) for estimating the electricity consumption in the Grey Nuns building in Canada. The performance of the proposed model is compared against that of long short-term memory (LSTM) and multilayer perceptron (MLP) neural networks. The models are trained and tested using monthly electricity consumption records (i.e. from May 2009 to December 2021) available from Concordia’s facility department. Statistical measures (e.g. determination coefficient [R2], root mean squared error [RMSE], mean absolute error [MAE] and mean absolute percentage error [MAPE]) are used to evaluate the outcomes of models.
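The four evaluation measures listed above are standard and can be computed directly; a minimal library-free sketch, with the function name and result layout as illustrative choices:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R^2, RMSE, MAE and MAPE (%) for equal-length sequences."""
    n = len(y_true)
    mean_true = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return {
        "R2": 1 - ss_res / ss_tot,
        "RMSE": math.sqrt(ss_res / n),
        "MAE": sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n,
        "MAPE": 100 * sum(abs((t - p) / t)
                          for t, p in zip(y_true, y_pred)) / n,
    }
```

Note that MAPE is undefined when a true value is zero, which rarely matters for building-level electricity consumption.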
Findings
The results reveal that the CNN model outperforms the other models' predictions 6 and 12 months ahead. It improves on the performance metrics reported by the LSTM and MLP models for R2, RMSE, MAE and MAPE by more than 4%, 6%, 42% and 46%, respectively. The proposed model therefore uses the available data to predict electricity consumption 6 and 12 months ahead. For June and December 2022, overall electricity consumption is estimated at 195,312 kWh and 254,737 kWh, respectively.
Originality/value
This study discusses the development of an effective time-series model that can forecast future electricity consumption in a Canadian heritage building. Deep learning techniques are being used for the first time to anticipate the electricity consumption of the Grey Nuns building in Canada. Additionally, it evaluates the effectiveness of deep learning and machine learning methods for predicting electricity consumption using established performance indicators. Recognizing electricity consumption in buildings is beneficial for utility providers, facility managers and end users by improving energy and environmental efficiency.
Sangeetha Yempally, Sanjay Kumar Singh and S. Velliangiri
Abstract
Purpose
Selecting and using the right health monitoring devices for a particular problem is a tedious task. This paper aims to provide a comprehensive review of 40 research papers on smart health monitoring systems using the Internet of Things (IoT) and deep learning.
Design/methodology/approach
Health monitoring systems play a significant role in the healthcare sector, and the development and testing of health monitoring devices using IoT and deep learning dominate it.
Findings
A detailed discussion and investigation of the reviewed papers is carried out by technique and development framework. The authors identify the research gap and present future research directions in IoT, edge computing and deep learning.
Originality/value
The gathered research articles are examined, and the gaps and issues that the current research papers confront are discussed. In addition, based on the various research gaps, this assessment proposes the primary future scope for deep learning and IoT health monitoring models.
Suneetha Ch, Srinivasa Rao S and K.S. Ramesh
Abstract
Purpose
Electronic devices aid communication in new communication paradigms, and cognitive radio networks have changed those paradigms through efficient use of the spectrum. The communication model of cognitive radio networks defines user roles as primary and secondary in the context of spectrum allocation and use: users holding a licence for the spectrum are primary users, while other eligible users who access the corresponding spectrum are secondary users.
Design/methodology/approach
Improper scheduling of spectrum bands between primary and secondary users can negatively influence multiple factors of transmission service quality. Contemporary literature contains considerable contributions on spectrum band scheduling under spectrum sensing; however, the majority of scheduling models manage only a limited number of transmission service quality factors. Moreover, these service quality factors are functional and derived algorithmically from the current corresponding spectrum, and there is evidence of a credible performance deficiency in contemporary spectrum sensing methods.
Findings
This article portrays a fuzzy-guided, integrated-factors-based scheme for sharing spectrum bands within the spectrum used by secondary users, and explains the significance of the proposal compared with other contemporary models.
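Fuzzy guidance of this kind typically rests on membership functions that grade a factor (say, channel load or link quality) between 0 and 1. A generic triangular membership sketch, not the authors' specific design; the parameter names are illustrative:

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at or below a, rising linearly
    to 1 at the peak b, falling back to 0 at or above c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

A fuzzy scheduler would evaluate several such memberships per spectrum band and combine them with fuzzy rules to rank candidate allocations.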
Originality/value
The fuzzy-guided, integrated-factors-based spectrum band sharing scheme for secondary users is the article's original contribution, and its significance is established by comparison with other contemporary models.
Gang Yu, Zhiqiang Li, Ruochen Zeng, Yucong Jin, Min Hu and Vijayan Sugumaran
Abstract
Purpose
Accurate prediction of the structural condition of urban critical infrastructure is crucial for predictive maintenance. However, existing prediction methods lack precision, owing to limitations in utilizing heterogeneous sensing data and domain knowledge, and generalize poorly because of limited data samples. This paper integrates implicit, qualitative expert knowledge into quantifiable values for tunnel condition assessment and proposes a tunnel structure prediction algorithm that augments a state-of-the-art attention-based long short-term memory (LSTM) model with expert rating knowledge, achieving robust predictions that support reasonable allocation of maintenance resources.
Design/methodology/approach
Through formalizing domain experts' knowledge into quantitative tunnel condition index (TCI) with analytic hierarchy process (AHP), a fusion approach using sequence smoothing and sliding time window techniques is applied to the TCI and time-series sensing data. By incorporating both sensing data and expert ratings, an attention-based LSTM model is developed to improve prediction accuracy and reduce the uncertainty of structural influencing factors.
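The AHP step converts expert pairwise comparisons into priority weights. A minimal sketch using the normalized-column-average approximation; the paper's actual comparison matrices and any consistency checking are not shown:

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix
    via normalized column averages. matrix[i][j] states how much more
    important criterion i is than criterion j (reciprocal matrix)."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]
```

For a perfectly consistent matrix this matches the principal-eigenvector weights; real expert matrices also need a consistency-ratio check before the weights are trusted.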
Findings
The empirical experiment in Dalian Road Tunnel in Shanghai, China showcases the effectiveness of the proposed method, which can comprehensively evaluate the tunnel structure condition and significantly improve prediction performance.
Originality/value
This study proposes a novel structure condition prediction algorithm that augments a state-of-the-art attention-based LSTM model with expert rating knowledge for robust prediction of structure condition of complex projects.
Suvarna Abhijit Patil and Prasad Kishor Gokhale
Abstract
Purpose
With the advent of AI-federated technologies, it is feasible to perform complex tasks in the industrial Internet of Things (IIoT) environment by enhancing network throughput and reducing the latency of transmitted data. Communications in IIoT and Industry 4.0 require the handshaking of multiple technologies to support heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications generate vast sets of data with diverse characteristics, and obtaining optimum throughput in an IIoT environment requires efficient processing of these applications over communication channels. Because computing resources in the IIoT are limited, IIoT applications need equitable resource allocation with the least possible delay. Although some existing scheduling strategies address delay concerns, faster data transmission and optimal throughput should be addressed alongside transmission delay. Hence, this study focuses on a fair mechanism that handles throughput, transmission delay and faster data transmission together. The proposed work provides a link-scheduling algorithm, termed delay-aware resource allocation, that allocates computing resources to computation-sensitive tasks while reducing overall latency and increasing the overall throughput of the network. First, a multi-hop delay model with multistep delay prediction is developed using an AI-federated long short-term memory (LSTM) neural network, which serves as a foundation for the design. Then, a link-scheduling algorithm is designed for efficient data routing. Extensive experimental results reveal that the proposed strategy minimizes the average end-to-end delay, considering processing, propagation, queueing and transmission delays.
Experiments show that advances in machine learning have led to a smart, collaborative link-scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of the AI-federated LSTM is compared with existing approaches, which it outperforms, achieving 98.2% accuracy.
Design/methodology/approach
With the increase in IoT devices, the demand for IoT gateways has grown, which increases the cost of network infrastructure; the proposed system therefore uses low-cost intermediate gateways. Each gateway may use a different communication technology for data transmission within an IoT network, so gateways are heterogeneous, with hardware support limited to the technologies associated with the underlying wireless sensor networks. Fair data communication at each gateway is achieved by treating dynamic IoT traffic and link scheduling as joint problems for effective resource allocation. A two-phase solution addresses these problems for improved data communication in heterogeneous networks while achieving fairness. In the first phase, dynamic traffic is predicted using an LSTM network model. In the second phase, efficient per-technology link selection and link scheduling are performed based on the predicted load, the distance between gateways, link capacity and the time required by the supported technologies, such as Bluetooth, Wi-Fi and Zigbee. This enhances data transmission fairness across all gateways, resulting in more data transmitted and maximum throughput. Simulations demonstrate that the proposed approach achieves maximum network throughput with less packet delay.
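Before the first-phase LSTM can be fitted, past traffic must be framed as supervised (window, target) pairs. A library-free sketch of that windowing step, with lookback and horizon as assumed hyperparameters rather than the paper's settings:

```python
def sliding_windows(series, lookback, horizon):
    """Frame a univariate series into (input window, multi-step target)
    pairs for sequence-model training."""
    pairs = []
    for i in range(len(series) - lookback - horizon + 1):
        window = series[i:i + lookback]
        target = series[i + lookback:i + lookback + horizon]
        pairs.append((window, target))
    return pairs
```

A multistep delay predictor would use horizon > 1, so each training target spans several future time steps.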
Findings
Simulations demonstrate that the proposed approach achieves maximum network throughput and lower packet delay. They also show that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.
Originality/value
The concept is part of the original research work and can be adopted by Industry 4.0 for easy and seamless connectivity of AI- and IoT-federated devices.
Volodymyr Novykov, Christopher Bilson, Adrian Gepp, Geoff Harris and Bruce James Vanstone
Abstract
Purpose
Machine learning (ML), and deep learning in particular, is gaining traction across a myriad of real-life applications. Portfolio management is no exception. This paper provides a systematic literature review of deep learning applications for portfolio management. The findings are likely to be valuable for industry practitioners and researchers alike, experimenting with novel portfolio management approaches and furthering investment management practice.
Design/methodology/approach
This review follows the guidance and methodology of Linnenluecke et al. (2020), Massaro et al. (2016) and Fisch and Block (2018) to first identify relevant literature based on an appropriately developed search phrase, filter the resultant set of publications and present descriptive and analytical findings of the research itself and its metadata.
Findings
The authors find a strong dominance of reinforcement learning algorithms applied to the field, given their through-time portfolio management capabilities. Other well-known deep learning models, such as the convolutional neural network (CNN), the recurrent neural network (RNN) and its derivatives, have proved well suited to time-series forecasting. Most recently, the number of papers published in the field has been increasing, potentially driven by computational advances, hardware accessibility and data availability. The review shows several promising applications and identifies future research opportunities, including better balance on the risk-reward spectrum, novel ways to reduce data dimensionality and pre-process the inputs, a stronger focus on direct weights generation, novel deep learning architectures and consistent data choices.
Originality/value
Several systematic reviews have been conducted with a broader focus of ML applications in finance. However, to the best of the authors’ knowledge, this is the first review to focus on deep learning architectures and their applications in the investment portfolio management problem. The review also presents a novel universal taxonomy of models used.