Search results

Article
Publication date: 8 January 2024

Indranil Ghosh, Rabin K. Jana and Dinesh K. Sharma

Abstract

Purpose

Owing to highly volatile and chaotic external events, predicting future movements of cryptocurrencies is a challenging task. This paper advances a granular hybrid predictive modeling framework for predicting the future figures of Bitcoin (BTC), Litecoin (LTC), Ethereum (ETH), Stellar (XLM) and Tether (USDT) during normal and pandemic regimes.

Design/methodology/approach

Initially, the major temporal characteristics of the price series are examined. In the second stage, ensemble empirical mode decomposition (EEMD) and maximal overlap discrete wavelet transformation (MODWT) are used to decompose the original time series into two distinct sets of granular subseries. In the third stage, long short-term memory (LSTM) networks and extreme gradient boosting (XGB) are applied to the decomposed subseries to estimate the initial forecasts. Lastly, sequential quadratic programming (SQP) is used to obtain the final forecast by combining the initial forecasts.
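
As a rough illustration of this decompose-predict-combine idea, the sketch below (not the authors' code; PyEMD and SciPy are one plausible library choice, and a naive persistence forecaster stands in for the LSTM/XGB learners) shows how EEMD subseries forecasts can be recombined with SLSQP, a standard SQP solver:

import numpy as np
from PyEMD import EEMD
from scipy.optimize import minimize

prices = np.cumsum(np.random.randn(500)) + 100.0   # placeholder price series

# Stage 2: decompose the series into granular subseries (IMFs) via EEMD.
imfs = EEMD().eemd(prices)

# Stage 3: one initial forecast per subseries (naive persistence here,
# standing in for the paper's LSTM/XGB learners).
initial = np.array([imf[-1] for imf in imfs])

# Stage 4: SQP-style combination -- convex weights minimising squared
# error against a held-out target value.
target = prices[-1]
n = len(initial)
res = minimize(lambda w: (w @ initial - target) ** 2,
               np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
final_forecast = res.x @ initial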

Findings

Rigorous performance assessment and the outcome of Diebold-Mariano pairwise statistical tests demonstrate the efficacy of the suggested predictive framework. The framework also yields commendable predictive performance specifically during the COVID-19 pandemic timeline. Future trends of BTC and ETH are found to be relatively easier to predict, while USDT is relatively difficult to predict.

Originality/value

The robustness of the proposed framework can be leveraged for practical trading and for managing investments in the crypto market. Empirical properties of the temporal dynamics of the chosen cryptocurrencies provide deeper insights.

Details

China Finance Review International, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-1398

Article
Publication date: 9 October 2023

Manish Bansal

Abstract

Purpose

This paper undertakes an extensive and systematic review of the literature on earnings management (EM) over the past three decades (1992–2022). Furthermore, the study identifies emerging research themes and proposes future avenues for further investigation in the realm of EM.

Design/methodology/approach

For this study, a comprehensive collection of 2,775 articles on EM published between 1992 and 2022 was extracted from the Scopus database. The author employed various tools, including Microsoft Excel, RStudio, Gephi and VOSviewer (visualization of similarities viewer), to conduct bibliometric, content, thematic and cluster analyses. Additionally, the study examined the literature across three distinct periods: prior to the enactment of the Sarbanes-Oxley Act (1992–2001), subsequent to the implementation of the Sarbanes-Oxley Act (2002–2012), and after the adoption of International Financial Reporting Standards (2013–2022) to draw more inferences and insights on EM research.
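
As a hedged illustration of this period split (the column names and file name are assumptions, not details from the paper), a Scopus export can be bucketed into the three regulatory eras as follows:

import pandas as pd

df = pd.read_csv("scopus_em_export.csv")   # hypothetical Scopus export
bins = [1991, 2001, 2012, 2022]
labels = ["pre-SOX (1992-2001)", "post-SOX (2002-2012)", "post-IFRS (2013-2022)"]
df["period"] = pd.cut(df["Year"], bins=bins, labels=labels)
print(df.groupby("period")["Title"].count())   # publications per era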

Findings

The study identifies three major themes, namely the operationalization of EM constructs, the trade-off between EM tools (accrual EM, real EM and classification shifting) and the role of corporate governance in mitigating EM in emerging markets. Existing literature in these areas presents mixed and inconclusive findings, suggesting the need for further theoretical development. Further, the findings reveal a shift in research focus over time: initially, understanding manipulation techniques; then, evaluating regulatory measures; and more recently, investigating the impact of global accounting standards. Several emerging research themes (technology advancements, cross-cultural and cross-national studies, sustainability, behavioral aspects and non-financial indicators of EM) have been identified. The study's subsequent analysis reveals an evolving EM landscape, with researchers from disciplines like data science, computer science and engineering applying their analytical expertise to detect EM anomalies. Furthermore, this study offers significant insights into sophisticated EM detection techniques such as neural networks, machine learning techniques and hidden Markov models, among others, as well as relevant theories including dynamic capabilities theory, learning curve theory, psychological contract theory and normative institutional theory. These techniques and theories demonstrate the need for further advancement in the field of EM. Lastly, the findings shed light on prominent EM journals, authors and countries.

Originality/value

This study conducts quantitative bibliometric and thematic analyses of the existing literature on EM while identifying areas that require further development to advance EM research.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 17 October 2022

Santosh Kumar B. and Krishna Kumar E.

Abstract

Purpose

Deep learning techniques are unavoidable in a variety of domains such as health care, computer vision, cyber-security and so on. These algorithms demand high-volume data transfers but face bottlenecks in achieving high-speed, low-latency synchronization when implemented on real hardware architectures. Though the direct memory access controller (DMAC) has attracted considerable research attention for achieving bulk data transfers, existing direct memory access (DMA) systems still face challenges in achieving high-speed communication. The purpose of this study is to develop an adaptively configured DMA architecture for bulk data transfer with high throughput and less time-delayed computation.

Design/methodology/approach

The proposed methodology consists of a heterogeneous computing system integrated with specialized hardware and software. For the hardware, the authors propose a field-programmable gate array (FPGA)-based DMAC, which transfers the data to the graphics processing unit (GPU) using PCI Express. The workload characterization technique is designed using Python software and is implementable on the Advanced RISC Machine (ARM) Cortex architecture with a suitable communication interface. This module offloads the input streams of data to the FPGA and initiates the FPGA to control the flow of data to the GPU, thereby achieving efficient processing.

Findings

This paper presents an evaluation of a configurable, workload-based DMA controller for collecting data from input devices and concurrently feeding it to the GPU architecture via PCI Express, bypassing extraneous hardware and software copies and bottlenecks. It also investigates the use of adaptive DMA memory buffer allocation and workload characterization techniques. The proposed DMA architecture is compared with existing DMA architectures; the proposed DMAC outperforms traditional DMA by achieving 96% throughput and 50% lower synchronization latency.

Originality/value

The proposed gated recurrent unit (GRU) has produced 95.6% accuracy in characterizing workloads as heavy, medium or normal. The proposed model has outperformed the other algorithms and proves its strength for workload characterization.
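
A minimal sketch of a GRU-based workload classifier in the spirit of this result follows (feature dimensions, sequence length and the three-class scheme are assumptions; this is not the authors' implementation):

import torch
import torch.nn as nn

class WorkloadGRU(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_classes=3):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # heavy / medium / normal

    def forward(self, x):              # x: (batch, time, features)
        _, h = self.gru(x)             # h: (1, batch, hidden)
        return self.head(h.squeeze(0))

model = WorkloadGRU()
logits = model(torch.randn(4, 20, 8))  # 4 workload traces, 20 timesteps each
pred = logits.argmax(dim=1)            # predicted workload class per trace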

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 25 September 2023

R.S. Sreerag and Prasanna Venkatesan Shanmugam

Abstract

Purpose

The choice of a sales channel for fresh vegetables is an important decision a farmer makes. Typically, farmers rely on their personal experience in directing the produce to a sales channel. This study examines how sales forecasting of fresh vegetables along multiple channels enables marginal and small-scale farmers to maximize their revenue by proportionately allocating the produce, considering its short shelf life.

Design/methodology/approach

Machine learning models, namely long short-term memory (LSTM) and convolutional neural network (CNN), and traditional methods, namely autoregressive integrated moving average (ARIMA) and weighted moving average (WMA), are developed and tested for demand forecasting of vegetables through three different channels: direct (Jaivasree), regulated (World market) and cooperative (Horticorp).
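
For illustration, a hedged sketch of the two traditional baselines on one channel follows (the series, ARIMA order and WMA window are assumptions, not the paper's settings):

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

sales = pd.Series(np.random.poisson(40, 130).astype(float))  # 130 weekly observations

# ARIMA(1,1,1) one-step-ahead forecast.
arima_fc = ARIMA(sales, order=(1, 1, 1)).fit().forecast(steps=1)

# Weighted moving average over the last four weeks, newest weighted most.
w = np.array([1.0, 2.0, 3.0, 4.0])
wma_fc = (sales.iloc[-4:].to_numpy() * w).sum() / w.sum()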

Findings

The results show that the machine learning methods (LSTM/CNN) provide better forecasts for the regulated (World market) and cooperative (Horticorp) channels, while the traditional moving average yields a better result for the direct (Jaivasree) channel, where the sales volume is lower than in the other two channels.

Research limitations/implications

The price of vegetables is not considered, as the government sets the base price for vegetables.

Originality/value

The existing literature lacks models and approaches to predict the sales of fresh vegetables for marginal and small-scale farmers of developing economies like India. In this research, the authors forecast the sales of commonly used fresh vegetables for small-scale farmers of Kerala in India based on a set of 130 weekly time series data obtained from the Kerala Horticorp.

Details

Journal of Agribusiness in Developing and Emerging Economies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-0839

Article
Publication date: 13 February 2024

Aleena Swetapadma, Tishya Manna and Maryam Samami

Abstract

Purpose

A novel method has been proposed to reduce the false alarm rate for arrhythmia patients regarding life-threatening conditions in the intensive care unit. For this purpose, the arterial blood pressure, photoplethysmogram (PLETH), electrocardiogram (ECG) and respiratory (RESP) signals are considered as input signals.

Design/methodology/approach

Three machine learning approaches, namely the feed-forward artificial neural network (ANN), an ensemble learning method and the k-nearest neighbors (k-NN) search method, are used to detect false alarms. The proposed method has been implemented using Arduino and MATLAB/SIMULINK for real-time monitoring data of ICU arrhythmia patients.
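
A toy sketch of the three classifier families named above (the feature matrix, labels and hyperparameters are placeholders, not the study's configuration):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X = np.random.randn(200, 12)       # e.g. ABP/PLETH/ECG/RESP-derived features
y = np.random.randint(0, 2, 200)   # 1 = true alarm, 0 = false alarm

for clf in (MLPClassifier(max_iter=500),           # feed-forward ANN
            RandomForestClassifier(),              # ensemble method
            KNeighborsClassifier(n_neighbors=5)):  # k-NN search
    print(type(clf).__name__, clf.fit(X, y).score(X, y))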

Findings

The proposed method detects false alarms with an accuracy of 99.4 per cent during asystole, 100 per cent during ventricular flutter, 98.5 per cent during ventricular tachycardia, 99.6 per cent during bradycardia and 100 per cent during tachycardia. The proposed framework is adaptive in many scenarios, easy to implement, computationally friendly, and highly accurate and robust, with no overfitting issues.

Originality/value

As ECG signals consist of the PQRST wave pattern, any deviation from the normal pattern may signify an alarming condition. These deviations can be utilized directly as input to classifiers for the detection of false alarms; hence, there is no need for other feature extraction techniques. The feed-forward ANN with the Levenberg–Marquardt algorithm has shown a higher rate of convergence than other neural network algorithms, which helps provide better accuracy with no overfitting.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 26 December 2023

Farshad Peiman, Mohammad Khalilzadeh, Nasser Shahsavari-Pour and Mehdi Ravanshadnia

Abstract

Purpose

Earned value management (EVM)–based models for estimating project actual duration (AD) and cost at completion using various methods are continuously developed to improve the accuracy and actualization of predicted values. This study primarily aimed to examine natural gradient boosting (NGBoost-2020) with the classification and regression trees (CART) base model (base learner). To the best of the authors' knowledge, this concept has never been applied to the EVM AD forecasting problem. Consequently, the authors compared this method to the single K-nearest neighbor (KNN) method, the ensemble method of extreme gradient boosting (XGBoost-2016) with the CART base model, and the optimal equation of EVM, the earned schedule (ES) equation with the performance factor equal to 1 (ES1). The paper also sought to determine the extent to which the World Bank's two legal factors affect countries and how these two legal causes of delay (related to institutional flaws) influence AD prediction models.

Design/methodology/approach

In this paper, data from 30 construction projects of various building types in Iran, Pakistan, India, Turkey, Malaysia and Nigeria (due to the high number of delayed projects and the detrimental effects of these delays in these countries) were used to develop three models. The target variable of the models was a dimensionless output, the ratio of estimated duration to completion (ETC(t)) to planned duration (PD). Furthermore, 426 tracking periods were used to build the three models, with 353 samples and 23 projects in the training set, 73 patterns (17% of the total) and six projects (21% of the total) in the testing set. Furthermore, 17 dimensionless input variables were used, including ten variables based on the main variables and performance indices of EVM and several other variables detailed in the study. The three models were subsequently created using Python and several GitHub-hosted codes.
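
A minimal sketch of the NGBoost-with-CART setup described above, using the open-source ngboost package (the feature matrix, target values and hyperparameters here are placeholders, not the paper's configuration):

import numpy as np
from ngboost import NGBRegressor
from sklearn.tree import DecisionTreeRegressor

X = np.random.rand(353, 17)   # 17 dimensionless EVM/WGI-style inputs
y = np.random.rand(353)       # target ratio ETC(t)/PD

ngb = NGBRegressor(Base=DecisionTreeRegressor(max_depth=4),  # CART base learner
                   n_estimators=500, learning_rate=0.01)
ngb.fit(X, y)
point = ngb.predict(X[:5])     # point forecasts of ETC(t)/PD
dist = ngb.pred_dist(X[:5])    # full predictive distribution (for intervals)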

Findings

For the testing set of the optimal model (NGBoost), the better percentage mean (better%) of the prediction error (based on projects with a lower error percentage) of NGBoost compared to the two single models (KNN and ES1), as well as the total mean absolute percentage error (MAPE) and mean lags (MeLa) (indicating model stability), were 100, 83.33, 5.62 and 3.17%, respectively. Notably, the total MAPE and MeLa for the NGBoost model testing set, which had ten EVM-based input variables, were 6.74 and 5.20%, respectively. The ensemble artificial intelligence (AI) models exhibited a much lower MAPE than ES1. Additionally, ES1 was less stable in prediction than NGBoost. Excessive and unusual MAPE and MeLa values occurred only in the two single models. However, on some data sets, ES1 outperformed the AI models. NGBoost also outperformed the other models, especially the single models, for most developing countries, and was more accurate than previously presented optimized models. In addition, sensitivity analysis was conducted on the NGBoost-predicted outputs of the 30 projects using the SHapley Additive exPlanations (SHAP) method. All variables demonstrated an effect on ETC(t)/PD. The results revealed that the most influential input variables, in order of importance, were actual time (AT) to PD, regulatory quality (RQ), earned duration (ED) to PD, schedule cost index (SCI), planned complete percentage, rule of law (RL), actual complete percentage (ACP) and ETC(t) of the ES optimal equation to PD. The probabilistic hybrid model was selected based on the outputs predicted by the NGBoost and XGBoost models and the MAPE values from the three AI models. The 95% prediction interval of the NGBoost–XGBoost model revealed that 96.10 and 98.60% of the actual output values of the testing and training sets, respectively, fall within this interval.

Research limitations/implications

Due to the use of projects performed in different countries, it was not possible to distribute the questionnaire to the managers and stakeholders of 30 projects in six developing countries. Due to the low number of EVM-based projects in various references, it was unfeasible to utilize other types of projects. Future prospects include evaluating the accuracy and stability of NGBoost for timely and non-fluctuating projects (mostly in developed countries), considering a greater number of legal/institutional variables as input, using legal/institutional/internal/inflation inputs for complex projects with extremely high uncertainty (such as bridge and road construction) and integrating these inputs and NGBoost with new technologies (such as blockchain, radio frequency identification (RFID) systems, building information modeling (BIM) and Internet of things (IoT)).

Practical implications

The legal/institutional recommendations made to governments are strict control of prices, adequate supervision, removal of additional rules, removal of unfair regulations, clarification of the future trend of a law change, strict monitoring of property rights, simplification of the processes for obtaining permits and elimination of unnecessary changes, particularly in developing countries and at the onset of irregular projects with limited information and numerous uncertainties. Furthermore, the managers and stakeholders of this group of projects are informed of the significance of seven construction variables (institutional/legal external risks, internal factors and inflation) at an early stage: using time-series (dynamic) models to predict AD, accurately calculating progress percentage variables, the effectiveness of building type in non-residential projects, regularly updating inflation during implementation, the effectiveness of employer type in the early stage of public projects as well as the late stage of private projects, and allocating a reserve duration (buffer) in order to respond to institutional/legal risks.

Originality/value

Ensemble methods were optimized in 70% of references. To the authors' knowledge, NGBoost, from the set of ensemble methods, has not previously been used to estimate construction project duration and delays. NGBoost is an effective method for considering uncertainties in irregular projects, which are often implemented in developing countries. Furthermore, existing AD estimation models fail to incorporate RQ and RL from the World Bank's worldwide governance indicators (WGI) as risk-based inputs. In addition, the various WGI, EVM and inflation variables have not been combined as inputs despite substantial degrees of institutional delay risk. Consequently, given the existence of critical and complex risks in different countries, it is vital to consider legal and institutional factors. This is especially recommended if an in-depth, accurate and reality-based method like SHAP is used for analysis.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 26 May 2022

Ismail Abiodun Sulaimon, Hafiz Alaka, Razak Olu-Ajayi, Mubashir Ahmad, Saheed Ajayi and Abdul Hye

Abstract

Purpose

Road traffic emissions are generally believed to contribute immensely to air pollution, but the effect of road traffic data sets on air quality (AQ) predictions has not been fully investigated. This paper aims to investigate the effect traffic data sets have on the performance of machine learning (ML) predictive models in AQ prediction.

Design/methodology/approach

To achieve this, the authors have set up an experiment with the control data set having only the AQ data set and meteorological (Met) data set, while the experimental data set is made up of the AQ data set, Met data set and traffic data set. Several ML models (such as extra trees regressor, eXtreme gradient boosting regressor, random forest regressor, K-neighbors regressor and two others) were trained, tested and compared on these individual combinations of data sets to predict the volume of PM2.5, PM10, NO2 and O3 in the atmosphere at various times of the day.
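
The control/experimental design above amounts to training the same regressor with and without traffic features; a hedged sketch follows (all data and names are invented for illustration):

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

n = 1000
aq_met = np.random.randn(n, 6)    # AQ + meteorological features (control)
traffic = np.random.randn(n, 3)   # traffic features (experimental add-on)
pm25 = np.random.rand(n)          # target pollutant concentration

for name, X in (("control (AQ+Met)", aq_met),
                ("experimental (AQ+Met+traffic)", np.hstack([aq_met, traffic]))):
    Xtr, Xte, ytr, yte = train_test_split(X, pm25, random_state=0)
    r2 = ExtraTreesRegressor(random_state=0).fit(Xtr, ytr).score(Xte, yte)
    print(name, round(r2, 3))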

Findings

The results obtained showed that the various ML algorithms react differently to the traffic data set, although it generally improved the performance of all the ML algorithms considered in this study by at least 20% and reduced error by at least 18.97%.

Research limitations/implications

This research is limited in terms of the study area, and the result cannot be generalized outside of the UK as some of the inherent conditions may not be similar elsewhere. Additionally, only the ML algorithms commonly used in literature are considered in this research, therefore, leaving out a few other ML algorithms.

Practical implications

This study reinforces the belief that the traffic data set has a significant effect on improving the performance of air pollution ML prediction models. Hence, there is an indication that ML algorithms behave differently when trained with a traffic data set in the development of an AQ prediction model. This implies that developers and researchers in AQ prediction need to identify the ML algorithms that benefit most from traffic data before implementation.

Originality/value

The result of this study will enable researchers to focus more on algorithms of benefit when using traffic data sets in AQ prediction.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Open Access
Article
Publication date: 20 February 2024

Li Chen, Dirk Ifenthaler, Jane Yin-Kim Yau and Wenting Sun

Abstract

Purpose

The study aims to identify the status quo of artificial intelligence in entrepreneurship education with a view to identifying potential research gaps, especially in the adoption of certain intelligent technologies and pedagogical designs applied in this domain.

Design/methodology/approach

A scoping review was conducted using six inclusion and exclusion criteria agreed upon by the author team. The collected studies, which focused on the adoption of AI in entrepreneurship education, were analysed by the team with regard to various aspects, including the definition of intelligent technology, research question, educational purpose, research method, sample size, research quality and publication. The results of this analysis are presented in tables and figures.

Findings

Educators have introduced big data and machine learning algorithms into entrepreneurship education. Big data analytics uses multimodal data to improve the effectiveness of entrepreneurship education and spot entrepreneurial opportunities. Entrepreneurial analytics analyses entrepreneurial projects at low cost and with high effectiveness. Machine learning eases educators' burdens and improves the accuracy of assessment. However, AI in entrepreneurship education needs more sophisticated pedagogical designs in diagnosis, prediction, intervention, prevention and recommendation, combined with specific entrepreneurial learning content and procedures, in keeping with entrepreneurial pedagogy.

Originality/value

This study holds significant implications as it can shift the focus of entrepreneurs and educators towards the educational potential of artificial intelligence, prompting them to consider the ways in which it can be used effectively. By providing valuable insights, the study can stimulate further research and exploration, potentially opening up new avenues for the application of artificial intelligence in entrepreneurship education.

Details

Education + Training, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0040-0912

Article
Publication date: 16 November 2023

Nenavath Sreenu

Abstract

Purpose

This research study aims to delve into the enduring relationship between housing property prices and economic policy uncertainty across eight major Indian cities.

Design/methodology/approach

Using the panel non-linear autoregressive distributed lag model, this study meticulously investigates the asymmetric impact of economic policy uncertainty on apartment and house (unit) prices in India during the period from 2000 to 2022.
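
For reference, asymmetry in a non-linear ARDL setting is typically captured by decomposing economic policy uncertainty (EPU) into positive and negative partial sums (a standard formulation; the exact specification used by the author may differ):

EPU_t^+ = \sum_{j=1}^{t} \max(\Delta EPU_j, 0), \qquad EPU_t^- = \sum_{j=1}^{t} \min(\Delta EPU_j, 0)

so that the long-run relation for house prices HP in city i takes the form

HP_{it} = \alpha_i + \beta^+ EPU_{it}^+ + \beta^- EPU_{it}^- + \gamma^\top X_{it} + \varepsilon_{it},

with asymmetry corresponding to \beta^+ \neq \beta^-.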

Findings

The findings of this study indicate that economic policy uncertainty exerts a negative influence on property prices, but noteworthy asymmetry is observed, with positive changes in uncertainty having a more pronounced impact than negative changes. This asymmetric effect is particularly prominent in the case of unit prices.

Originality/value

This research reveals that long-run price trends are also influenced by factors such as interest rates, building costs and housing loans. Through a comprehensive analysis of these factors and their interplay with property prices, this research paper contributes valuable insights to the understanding of the real estate market dynamics in Indian cities.

Details

International Journal of Housing Markets and Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1753-8270

Article
Publication date: 15 September 2023

Kaili Wang, Ke Dong, Jiachun Wu and Jiang Wu

Abstract

Purpose

The purpose of this paper is to identify the historical trends and status of the national development of artificial intelligence (AI) from a nationwide perspective and to enable governments at different administrative levels to promote AI development through policymaking.

Design/methodology/approach

This paper analyzed 248 Chinese AI policies (36 issued by the state agencies and 212 by the regional agencies). Policy bibliometrics, policy instruments and network analysis were used to reveal the AI policy patterns. Three aspects were analyzed: the spatiotemporal distribution of issued policies, the policy foci and instruments of policy contents and the cooperation and citation among policy-issuing agencies.
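
A toy sketch of the cooperation-network step (the agency names and weights are invented for illustration; this is not the paper's data):

import networkx as nx

G = nx.Graph()
# An edge joins two agencies that co-issued at least one AI policy;
# the weight counts their co-issued policies.
G.add_edge("State Council", "MIIT", weight=3)
G.add_edge("MIIT", "MOST", weight=2)
G.add_edge("Zhejiang Gov.", "Shanghai Gov.", weight=1)

# Degree centrality highlights the most connected policy issuers.
print(nx.degree_centrality(G))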

Findings

Results indicate that Chinese AI development is still in its initial phase. During the policymaking processes, the state and regional policy foci show strong consistency; however, the coordination among state and regional agencies needs to be strengthened. Judging by the issuing dates of AI policies, Chinese AI development is in line with the global situation and has witnessed unprecedented growth in the last five years. The coastal provinces have issued more targeted policies than the middle and western provinces. Governments at the state and regional levels have emphasized similar policy foci and played the role of policymakers, with regional governments also functioning as policy executors. According to the three-dimension instrument coding, the authors found an uneven structure of policy instruments at both levels. Furthermore, cooperation is weak at the state level, and little cooperation is found among regional agencies. Regional governments cite state policies, leading to top-down diffusion but a lack of bottom-up diffusion.

Originality/value

The paper contributes to the literature by characterizing policy patterns from both external attributes and semantic contents, thus revealing features of policy distribution, contents and agencies. What is more, this research analyzes Chinese AI policies from a nationwide perspective, which contributes to clarifying the overall status and multi-level relationships of policies. The findings also benefit the coordinated development of governments during further policymaking processes.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831
