Search results

1 – 10 of 51
Article
Publication date: 17 February 2022

Prajakta Thakare and Ravi Sankar V.

Agriculture is the backbone of many economies, contributing a major share of economic output worldwide. The need for precision agriculture is essential in evaluating…

Abstract

Purpose

Agriculture is the backbone of many economies, contributing a major share of economic output worldwide. Precision agriculture is essential for evaluating crop conditions with the aim of selecting the proper pesticides. Conventional pest-detection methods are unstable and offer limited prediction accuracy. This paper aims to propose an automatic pest detection module for the accurate detection of pests using a hybrid optimization-controlled deep learning model.

Design/methodology/approach

The paper proposes an advanced pest detection strategy based on deep learning, operating over a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink, is organized into clusters. Each cluster comprises a cluster head (CH) and a number of nodes; the CH transfers data to the sink node of the WSN and is selected using the fractional ant bee colony optimization (FABC) algorithm. Routing is executed using the protruder optimization algorithm, which transfers image data to the sink node through the optimal CH. The sink node acts as the data aggregator, and the collected image data forms the input database that is processed to identify the type of pest in the agricultural field. The image data is pre-processed to remove artifacts, and the pre-processed image is then subjected to feature extraction, through which the significant local directional pattern, local binary pattern, local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are then fed to a deep convolutional neural network (CNN) to detect the type of pests in the agricultural field. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, which combines the characteristics of navigating search agents and swarming search agents.
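
Of the texture descriptors named above, the local binary pattern (LBP) is the simplest to illustrate. The sketch below is a minimal, self-contained version: each pixel is encoded by thresholding its eight neighbours against the centre value, and the codes are pooled into a histogram feature. It is illustrative only; the paper's pipeline combines LBP with the directional, ternary and optimal-oriented variants.

```python
# Minimal local binary pattern (LBP) sketch on a 2D grayscale image
# given as nested lists. Illustrative only.

def lbp_code(img, r, c):
    """8-bit LBP code for interior pixel (r, c)."""
    centre = img[r][c]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Tiny 3x3 patch: only the centre pixel has a full neighbourhood.
patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
code_val = lbp_code(patch, 1, 1)
```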

Findings

The analysis using the insect identification from habitus images database, based on performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis proves that the proposed classifier using the FABC + protruder optimization-based data aggregation strategy obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, which are higher than those of existing methods.

Originality/value

The proposed MFGHO optimization-based deep CNN is used for the detection of pests in crop fields to ensure the better selection of cost-effective pesticides and thereby increase production. The proposed MFGHO algorithm is developed with the integrated characteristics of navigating search agents and swarming search agents to facilitate optimal tuning of the hyperparameters in the deep CNN classifier for the detection of pests in crop fields.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Keywords

Article
Publication date: 19 January 2024

Ping Huang, Haitao Ding, Hong Chen, Jianwei Zhang and Zhenjia Sun

The growing availability of naturalistic driving datasets (NDDs) presents a valuable opportunity to develop various models for autonomous driving. However, while current NDDs…

Abstract

Purpose

The growing availability of naturalistic driving datasets (NDDs) presents a valuable opportunity to develop various models for autonomous driving. However, while current NDDs include data on vehicles with and without intended driving behavior changes, they do not explicitly include data on vehicles that intend to change their driving behavior but do not execute it because of safety, efficiency or other factors. This missing data is essential for autonomous driving decisions. This study aims to extract driving data with implicit intentions to support the development of decision-making models.

Design/methodology/approach

According to Bayesian inference, drivers who have the same intended changes are likely to share similar influencing factors and states. Building on this principle, this study proposes an approach to extract data on vehicles that intended to execute specific behaviors but failed to do so. This is achieved by computing driving similarities between candidate vehicles and benchmark vehicles using standard similarity metrics that take into account the surrounding vehicles' location topology and individual vehicle motion states. By doing so, the method enables a more comprehensive analysis of driving behavior and intention.
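
The mechanics of such a similarity comparison can be sketched as follows: each vehicle is summarised by a feature vector (its own speed and acceleration plus relative positions of surrounding vehicles), and candidates are scored against a benchmark vehicle with cosine similarity. The exact metric and feature encoding in the paper may differ; the numbers below are invented.

```python
# Hedged sketch: cosine similarity between vehicle-state vectors that
# combine ego motion and surrounding-vehicle topology.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def vehicle_vector(speed, accel, rel_positions):
    """Flatten motion state + relative (dx, dy) positions into one vector."""
    vec = [speed, accel]
    for dx, dy in rel_positions:
        vec.extend([dx, dy])
    return vec

benchmark = vehicle_vector(28.0, -0.5, [(15.0, 0.0), (-20.0, 3.5)])
candidate = vehicle_vector(27.5, -0.4, [(14.0, 0.2), (-19.0, 3.3)])
unrelated = vehicle_vector(10.0, 2.0, [(60.0, -3.5), (5.0, 0.0)])

sim_close = cosine_similarity(benchmark, candidate)  # near 1: similar context
sim_far = cosine_similarity(benchmark, unrelated)    # lower: different context
```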

Findings

The proposed method is verified on the Next Generation SIMulation (NGSIM) dataset, which confirms its ability to reveal similarities between vehicles executing similar behaviors during the decision-making process in naturalistic data. The approach is also validated using simulated data, achieving an accuracy of 96.3% in recognizing vehicles with specific driving behavior intentions that are not executed.

Originality/value

This study provides an innovative approach to extract driving data with implicit intentions and offers strong support to develop data-driven decision-making models for autonomous driving. With the support of this approach, the development of autonomous vehicles can capture more real driving experience from human drivers moving towards a safer and more efficient future.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 11 July 2023

Abhinandan Chatterjee, Pradip Bala, Shruti Gedam, Sanchita Paul and Nishant Goyal

Depression is a mental health problem characterized by a persistent sense of sadness and loss of interest. EEG signals are regarded as the most appropriate instruments for…

Abstract

Purpose

Depression is a mental health problem characterized by a persistent sense of sadness and loss of interest. EEG signals are regarded as the most appropriate instruments for diagnosing depression because they reflect the operating status of the human brain. The purpose of this study is the early detection of depression among people using EEG signals.

Design/methodology/approach

(i) Artifacts are removed by filtering, and linear and non-linear features are extracted; (ii) feature scaling is done using a standard scaler, while principal component analysis (PCA) is used for feature reduction; (iii) the linear features, the non-linear features and the combination of both (only where accuracy is highest) are taken for further analysis, where several ML and DL classifiers are applied for the classification of depression; and (iv) a total of 15 distinct ML and DL methods, including KNN, SVM, bagging SVM, RF, GB, Extreme Gradient Boosting, MNB, AdaBoost, bagging RF, BootAgg, Gaussian NB, RNN, 1DCNN, RBFNN and LSTM, which have been effectively utilized to handle a variety of real-world issues, are applied as classifiers.
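
Step (ii) above, standard scaling followed by PCA, can be sketched concisely with numpy. Feature counts and data here are synthetic stand-ins for the EEG features; the study's actual pipeline and library choices may differ.

```python
# Minimal numpy sketch of standard scaling + PCA (via SVD) for feature
# reduction, as in step (ii). Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 200 epochs x 10 features

# Standard scaling: zero mean, unit variance per feature.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA by SVD of the standardized data; keep the components that
# together explain 95% of the variance.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
X_reduced = Xs @ Vt[:k].T               # projected, reduced feature matrix
```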

Findings

1. Among all features, alpha, alpha asymmetry, gamma and gamma asymmetry give the best results among the linear features, while RWE, DFA, CD and AE give the best results among the non-linear features. 2. Among the linear features, gamma and alpha asymmetry give 99.98% accuracy with bagging RF, while gamma asymmetry gives 99.98% accuracy with BootAgg. 3. For non-linear features, RWE and DFA achieve 99.84% accuracy with RF, DFA achieves 99.97% accuracy with XGBoost and RWE achieves 99.94% accuracy with BootAgg. 4. Using DL on linear features, gamma asymmetry gives more than 96% accuracy with RNN and 91% accuracy with LSTM; for non-linear features, 89% accuracy is achieved for CD and AE with LSTM. 5. Combining linear and non-linear features, the highest accuracy is achieved by bagging RF (98.50%) with gamma asymmetry + RWE. In DL, alpha + RWE, gamma asymmetry + CD and gamma asymmetry + RWE achieve 98% accuracy with LSTM.

Originality/value

A novel dataset was collected from the Central Institute of Psychiatry (CIP), Ranchi, recorded using 128 channels, whereas major previous studies used fewer channels. The details of the study participants are summarized, and a model is developed for statistical analysis using N-way ANOVA. Artifacts are removed by high- and low-pass filtering of epoch data, followed by re-referencing and independent component analysis for noise removal. Linear features, namely band power and interhemispheric asymmetry, and non-linear features, namely relative wavelet energy, wavelet entropy, approximate entropy, sample entropy, detrended fluctuation analysis and correlation dimension, are extracted. The model utilizes 213,072 epochs of 5 s EEG data, which allows it to train for longer, thereby increasing the efficiency of the classifiers. Feature scaling is done using a standard scaler rather than normalization because it helps increase the accuracy of the models (especially for deep learning algorithms), while PCA is used for feature reduction. The linear features, the non-linear features and the combination of both are taken for extensive analysis in conjunction with ML and DL classifiers for the classification of depression. The combination of linear and non-linear features (only for those whose accuracy is highest) is used for the best detection results.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Keywords

Article
Publication date: 25 January 2022

Anil Kumar Maddali and Habibulla Khan

Currently, the design and technological features of voice signals, and their analysis across various applications, are being simulated to meet requirements for communicating at a greater distance…

Abstract

Purpose

Currently, the design and technological features of voice signals, and their analysis across various applications, are being simulated to meet requirements for communicating at a greater distance or more discreetly. The purpose of this study is to explore how voices and their analyses are used in modern literature to generate a variety of solutions, of which only a few successful models exist.

Design/methodology/approach

The mel-frequency cepstral coefficient (MFCC), average magnitude difference function, cepstrum analysis and other voice characteristics are modeled and implemented using mathematical models with variable weight parameters for each algorithm, which can be used with or without noise. The design characteristics and their weights are improved with different supervised algorithms that regulate the design model simulation.
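
Of the voice characteristics listed, the average magnitude difference function (AMDF) is the easiest to make concrete: for each candidate lag, the mean absolute difference between the signal and its shifted copy dips near the true pitch period. The sketch below runs on a synthetic tone; it is an illustration of the classic AMDF idea, not the paper's implementation.

```python
# Pure-Python AMDF pitch-period sketch on a synthetic 100 Hz tone.
import math

def amdf(signal, lag):
    """Mean absolute difference between the signal and its lagged copy."""
    n = len(signal) - lag
    return sum(abs(signal[i] - signal[i + lag]) for i in range(n)) / n

def estimate_period(signal, min_lag, max_lag):
    """Lag in [min_lag, max_lag] minimising the AMDF (the pitch period)."""
    return min(range(min_lag, max_lag + 1), key=lambda lag: amdf(signal, lag))

# 100 Hz tone sampled at 8 kHz -> true period of 80 samples.
fs, f0 = 8000, 100
tone = [math.sin(2 * math.pi * f0 * t / fs) for t in range(1600)]
period = estimate_period(tone, 40, 120)
```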

Findings

Different data models have been influenced by the parametric range and solution analysis in different space parameters, such as frequency or time model, with features such as without, with and after noise reduction. The frequency response of the current design can be analyzed through the Windowing techniques.

Originality/value

A new model and its implementation scenario with pervasive computational algorithms (PCA), such as the hybrid PCA with AdaBoost (HPCA), PCA with bag of features and improved PCA with bag of features, relate the different features such as MFCC, power spectrum, pitch and windowing techniques, which are calculated using the HPCA. The features are accumulated in matrix formulations and govern the design feature comparison and its feature classification for improved performance parameters, as mentioned in the results.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 28 September 2023

Moh. Riskiyadi

This study aims to compare machine learning models, datasets and splitting training-testing using data mining methods to detect financial statement fraud.


Abstract

Purpose

This study aims to compare machine learning models, datasets and splitting training-testing using data mining methods to detect financial statement fraud.

Design/methodology/approach

This study uses a quantitative approach from secondary data on the financial reports of companies listed on the Indonesia Stock Exchange in the last ten years, from 2010 to 2019. Research variables use financial and non-financial variables. Indicators of financial statement fraud are determined based on notes or sanctions from regulators and financial statement restatements with special supervision.

Findings

The findings show that the Extremely Randomized Trees (ERT) model performs better than the other machine learning models. The original-sampling dataset performs best compared to the other dataset treatments, and the 80:10 training-testing split performs best compared to the other splitting treatments. Thus, the ERT model with an original-sampling dataset and an 80:10 training-testing split is the most appropriate for detecting future financial statement fraud.

Practical implications

This study can be used by regulators, investors, stakeholders and financial crime experts to add insight into better methods of detecting financial statement fraud.

Originality/value

This study proposes a machine learning model that has not been discussed in previous studies and performs comparisons to obtain the best financial statement fraud detection results. Practitioners and academics can use findings for further research development.

Details

Asian Review of Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1321-7348

Keywords

Open Access
Article
Publication date: 22 June 2023

Ignacio Manuel Luque Raya and Pablo Luque Raya

Having defined liquidity, the aim is to assess the predictive capacity of its representative variables, so that economic fluctuations may be better understood.

Abstract

Purpose

Having defined liquidity, the aim is to assess the predictive capacity of its representative variables, so that economic fluctuations may be better understood.

Design/methodology/approach

Conceptual variables that are representative of liquidity will be used to formulate the predictions. The results of various machine learning models will be compared, leading to some reflections on the predictive value of the liquidity variables, with a view to defining their selection.
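The comparison described above, fitting models on liquidity-representative variables and weighing their predictive value against a baseline, can be sketched on synthetic data. The predictors, coefficients and split below are invented for illustration; the study uses real liquidity variables and a wider set of machine learning models.

```python
# Hedged sketch: fit ordinary least squares on liquidity-style predictors
# and compare held-out error against a naive mean baseline.
import numpy as np

rng = np.random.default_rng(42)
n = 300
X = rng.normal(size=(n, 3))                       # 3 synthetic liquidity proxies
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

split = int(0.8 * n)                              # 80/20 hold-out
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(split), X_tr])
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
pred = np.column_stack([np.ones(n - split), X_te]) @ coef

mse_model = float(np.mean((y_te - pred) ** 2))    # model error
mse_naive = float(np.mean((y_te - y_tr.mean()) ** 2))  # baseline error
```

A predictor variable earns its place in the selection when it drives `mse_model` meaningfully below `mse_naive` on held-out data.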

Findings

The predictive capacity of the model was also found to vary depending on the source of the liquidity, in so far as the data on liquidity within the private sector contributed more than the data on public sector liquidity to the prediction of economic fluctuations. International liquidity was seen as a more diffuse concept, and the standardization of its definition could be the focus of future studies. A benchmarking process was also performed when applying the state-of-the-art machine learning models.

Originality/value

Better understanding of these variables might help us toward a deeper understanding of the operation of financial markets. Liquidity, one of the key financial market variables, is neither well-defined nor standardized in the existing literature, which calls for further study. Hence, the novelty of an applied study employing modern data science techniques can provide a fresh perspective on financial markets.

Liquidity, whether in financial markets or in the real economy, is one of the clearest predictors of market trends.

Liquidity is therefore an extremely important concept for understanding economic cycles and economic development. This study seeks to make progress in the price prediction of safe assets, which represent the real state of the economy, in particular the 10-year US Treasury bond.

Purpose

Having defined liquidity above, this study assesses the predictive capacity of its representative variables so that economic fluctuations may be better understood.

Design/methodology/approach

Conceptual variables representative of liquidity are used to formulate the predictions. The results of various machine learning models are compared, prompting reflections on the predictive value of the liquidity variables with a view to defining their selection.

Findings

The predictive capacity of the model was found to vary depending on the source of the liquidity, in that data on private-sector liquidity contributed more than data on public-sector liquidity to the prediction of economic fluctuations. International liquidity was seen as a more diffuse concept, and the standardization of its definition could be the focus of future studies. A benchmarking process was also performed when applying state-of-the-art machine learning models.

Originality/value

A better understanding of these variables may lead us toward a deeper understanding of the operation of financial markets. Liquidity, one of the key financial market variables, is neither well defined nor standardized in the existing literature, which calls for further study. Hence, an applied study employing modern data science techniques can provide a fresh perspective on financial markets.

Details

European Journal of Management and Business Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2444-8451

Keywords

Article
Publication date: 12 December 2022

Godoyon Ebenezer Wusu, Hafiz Alaka, Wasiu Yusuf, Iofis Mporas, Luqman Toriola-Coker and Raphael Oseghale

Several factors influence OSC adoption, but extant literature has not articulated the dominant barriers or drivers influencing adoption. Therefore, this research has not only…

Abstract

Purpose

Several factors influence OSC adoption, but extant literature has not articulated the dominant barriers or drivers influencing adoption. Therefore, this research has not only ventured into analyzing the core influencing factors but has also employed one of the best-known predictive means, machine learning, to identify the most influential OSC adoption factors.

Design/methodology/approach

The research approach is deductive in nature, focusing on identifying the most critical factors through a literature review and reinforcing the factors through a 5-point Likert-scale survey questionnaire. The responses received were tested for reliability before being run through machine learning algorithms to determine the most influential OSC factors within the Nigerian Construction Industry (NCI).
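
The abstract does not name the reliability test; a common choice for Likert-scale batteries is Cronbach's alpha, sketched below in pure Python. Treat the statistic choice and the response data as assumptions for illustration only.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
# A common (assumed, not confirmed) reliability check for Likert items.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, one respondent score per entry."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]    # per-respondent total score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Five respondents answering three 5-point Likert items (made-up data).
responses = [
    [4, 5, 3, 4, 2],   # item 1
    [4, 4, 3, 5, 2],   # item 2
    [5, 5, 2, 4, 1],   # item 3
]
alpha = cronbach_alpha(responses)
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency.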

Findings

The research outcome identifies seven (7) best-performing algorithms for predicting OSC adoption: Decision Tree, Random Forest, K-Nearest Neighbour, Extra-Trees, AdaBoost, Support Vector Machine and Artificial Neural Network. It also reported finance, awareness, use of Building Information Modeling (BIM) and belief in OSC as the main influencing factors.

Research limitations/implications

Data were primarily collected among the NCI professionals/workers and the whole exercise was Nigeria region-based. The research outcome, however, provides a foundation for OSC adoption potential within Nigeria, Africa and beyond.

Practical implications

The research concluded that with detailed attention paid to the identified factors, OSC usage could find its footing in Nigeria and, consequently, Africa. The models can also serve as a template for other regions where OSC adoption is being considered.

Originality/value

The research establishes the most effective algorithms for the prediction of OSC adoption possibilities as well as critical influencing factors to successfully adopting OSC within the NCI as a means to surmount its housing shortage.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Keywords

Article
Publication date: 3 April 2024

Rizwan Ali, Jin Xu, Mushahid Hussain Baig, Hafiz Saif Ur Rehman, Muhammad Waqas Aslam and Kaleem Ullah Qasim

This study endeavours to decode artificial intelligence (AI)-based tokens' complex dynamics and predictability using a comprehensive multivariate framework that integrates…

Abstract

Purpose

This study endeavours to decode artificial intelligence (AI)-based tokens' complex dynamics and predictability using a comprehensive multivariate framework that integrates technical and macroeconomic indicators.

Design/methodology/approach

Using advanced machine learning techniques, such as gradient boosting regression (GBR), random forest (RF) and, notably, long short-term memory (LSTM) networks, this research provides a nuanced understanding of the factors driving the performance of AI tokens. The study's comparative analysis highlights the superior predictive capability of LSTM models, as evidenced by their performance across various AI digital tokens such as AGIX (SingularityNET), Cortex and Numeraire (NMR).

Findings

Through an intricate exploration of feature importance and the impact of speculative behaviour, the study elucidates the long-term patterns and resilience of AI-based tokens against economic shifts. The SHapley Additive exPlanations (SHAP) analysis shows that technical factors and some macroeconomic factors play a dominant role in price prediction. The study also examines the potential of these models for strategic investment and hedging, underscoring their relevance in an increasingly digital economy.

Originality/value

To our knowledge, there is a clear absence of research frameworks for forecasting and modelling the current leading AI tokens. Owing to the lack of studies on the relationship between the AI token market and other factors, forecasting is exceptionally demanding. This study provides a robust predictive framework to accurately identify the changing trends of AI tokens within a multivariate context and fills the gaps in existing research. Detailed predictive analytics with modern AI algorithms and careful model interpretation can elaborate on the behaviour patterns of emerging decentralised AI-based token prices.

Details

Journal of Economic Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0144-3585

Keywords

Article
Publication date: 26 December 2023

Farshad Peiman, Mohammad Khalilzadeh, Nasser Shahsavari-Pour and Mehdi Ravanshadnia

Earned value management (EVM)–based models for estimating project actual duration (AD) and cost at completion using various methods are continuously developed to improve the…

Abstract

Purpose

Earned value management (EVM)-based models for estimating project actual duration (AD) and cost at completion using various methods are continuously developed to improve the accuracy and actualization of predicted values. This study primarily aimed to examine natural gradient boosting (NGBoost-2020) with the classification and regression trees (CART) base model (base learner). To the best of the authors' knowledge, this concept has never been applied to the EVM AD forecasting problem. Consequently, the authors compared this method to the single K-nearest neighbor (KNN) method, the ensemble method of extreme gradient boosting (XGBoost-2016) with the CART base model and the optimal equation of EVM, the earned schedule (ES) equation with the performance factor equal to 1 (ES1). The paper also sought to determine the extent to which the World Bank's two legal factors affect countries and how the two legal causes of delay (related to institutional flaws) influence AD prediction models.

Design/methodology/approach

In this paper, data from 30 construction projects of various building types in Iran, Pakistan, India, Turkey, Malaysia and Nigeria (due to the high number of delayed projects and the detrimental effects of these delays in these countries) were used to develop three models. The target variable of the models was a dimensionless output, the ratio of estimated duration to completion (ETC(t)) to planned duration (PD). Furthermore, 426 tracking periods were used to build the three models, with 353 samples and 23 projects in the training set, 73 patterns (17% of the total) and six projects (21% of the total) in the testing set. Furthermore, 17 dimensionless input variables were used, including ten variables based on the main variables and performance indices of EVM and several other variables detailed in the study. The three models were subsequently created using Python and several GitHub-hosted codes.
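The ES1 benchmark against which the learned models are compared has a simple closed form. In standard earned-schedule notation (the paper's exact formulation may differ), the time estimate at completion is EAC(t) = AT + (PD − ES)/PF, with ES1 fixing the performance factor PF = 1; dividing by PD gives a dimensionless ratio of the kind used as the models' target. The values below are illustrative, not drawn from the 30 studied projects.

```python
# Worked sketch of the ES1 earned-schedule estimate and the dimensionless
# duration ratio. Illustrative values only.

def earned_schedule_eac(at, pd, es, pf=1.0):
    """Time estimate at completion from actual time AT, planned duration PD,
    earned schedule ES and performance factor PF (ES1 uses PF = 1)."""
    return at + (pd - es) / pf

AT, PD, ES = 12.0, 20.0, 9.0          # months; project is 3 months behind plan
eac_t = earned_schedule_eac(AT, PD, ES)   # 12 + (20 - 9) / 1 = 23 months
ratio = eac_t / PD                        # dimensionless: 23 / 20 = 1.15
```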

Findings

For the testing set of the optimal model (NGBoost), the better percentage mean (better%) of the prediction error (based on projects with a lower error percentage) of NGBoost compared to the two single models, KNN and ES1, as well as the total mean absolute percentage error (MAPE) and mean lags (MeLa) (indicating model stability), were 100%, 83.33%, 5.62% and 3.17%, respectively. Notably, the total MAPE and MeLa for the NGBoost model testing set, which had ten EVM-based input variables, were 6.74% and 5.20%, respectively. The ensemble artificial intelligence (AI) models exhibited a much lower MAPE than ES1, and ES1 was less stable in prediction than NGBoost. Excessive and unusual MAPE and MeLa values occurred only in the two single models; however, on some datasets, ES1 outperformed the AI models. NGBoost also outperformed the other models, especially the single models, for most developing countries and was more accurate than previously presented optimized models. In addition, sensitivity analysis was conducted on the NGBoost-predicted outputs of the 30 projects using the SHapley Additive exPlanations (SHAP) method. All variables demonstrated an effect on ETC(t)/PD. The results revealed that the most influential input variables, in order of importance, were actual time (AT) to PD, regulatory quality (RQ), earned duration (ED) to PD, schedule cost index (SCI), planned complete percentage, rule of law (RL), actual complete percentage (ACP) and the ETC(t) of the optimal ES equation to PD. The probabilistic hybrid model was selected based on the outputs predicted by the NGBoost and XGBoost models and the MAPE values from the three AI models. The 95% prediction interval of the NGBoost-XGBoost model revealed that 96.10% and 98.60% of the actual output values of the testing and training sets, respectively, fall within this interval.

Research limitations/implications

Due to the use of projects performed in different countries, it was not possible to distribute the questionnaire to the managers and stakeholders of 30 projects in six developing countries. Due to the low number of EVM-based projects in various references, it was unfeasible to utilize other types of projects. Future prospects include evaluating the accuracy and stability of NGBoost for timely and non-fluctuating projects (mostly in developed countries), considering a greater number of legal/institutional variables as input, using legal/institutional/internal/inflation inputs for complex projects with extremely high uncertainty (such as bridge and road construction) and integrating these inputs and NGBoost with new technologies (such as blockchain, radio frequency identification (RFID) systems, building information modeling (BIM) and Internet of things (IoT)).

Practical implications

The legal/institutional recommendations made to governments are strict control of prices, adequate supervision, removal of additional rules, removal of unfair regulations, clarification of the future trend of law changes, strict monitoring of property rights, simplification of the processes for obtaining permits and elimination of unnecessary changes, particularly in developing countries and at the onset of irregular projects with limited information and numerous uncertainties. Furthermore, the managers and stakeholders of this group of projects are informed, at an early stage, of the significance of seven construction variables (institutional/legal external risks, internal factors and inflation): using time series (dynamic) models to predict AD, accurate calculation of progress-percentage variables, the effectiveness of building type in non-residential projects, regular updating of inflation during implementation, the effectiveness of employer type in the early stage of public projects in addition to the late stage of private projects, and allocation of reserve duration (buffer) in order to respond to institutional/legal risks.

Originality/value

Ensemble methods were optimized in 70% of references. To the authors' knowledge, NGBoost, from the set of ensemble methods, has not been used to estimate construction project duration and delays. NGBoost is an effective method for handling uncertainties in irregular projects, which are often implemented in developing countries. Furthermore, existing AD estimation models fail to incorporate RQ and RL from the World Bank's Worldwide Governance Indicators (WGI) as risk-based inputs. In addition, the various WGI, EVM and inflation variables have not been combined with substantial degrees of institutional delay risks as inputs. Consequently, given the critical and complex risks present in different countries, it is vital to consider legal and institutional factors. This is especially recommended if an in-depth, accurate and reality-based method like SHAP is used for analysis.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 31 July 2023

Chetanya Singh, Manoj Kumar Dash, Rajendra Sahu and Anil Kumar

Artificial intelligence (AI) is increasingly applied by businesses to optimize their processes and decision-making, develop effective and efficient strategies, and positively…

Abstract

Purpose

Artificial intelligence (AI) is increasingly applied by businesses to optimize their processes and decision-making, develop effective and efficient strategies, and positively influence customer behaviors. Businesses use AI to foster behaviors such as customer retention (CR). The existing literature on "AI and CR" is vastly scattered. This paper aims to systematically review the present research on AI in CR and suggest future research directions to further develop the field.

Design/methodology/approach

The Scopus database is used to collect the data for systematic review and bibliometric analysis using the VOSviewer tool. The paper performs the following analysis: (1) year-wise publications and citations, (2) co-authorship analysis of authors, countries, and affiliations, (3) citation analysis of articles and journals, (4) co-occurrence visualization of binding terms, and (5) bibliographic coupling of articles.
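
The co-occurrence visualization in item (4) rests on a simple counting step: tally how often each pair of terms appears together across article keyword lists. The sketch below shows that counting core with invented keywords; VOSviewer additionally normalizes and lays out the resulting network.

```python
# Term co-occurrence counting, the core of a keyword co-occurrence map.
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how many articles each keyword pair appears in together."""
    pairs = Counter()
    for kws in keyword_lists:
        # sorted() + set() gives one canonical, deduplicated pair per article
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

articles = [
    ["AI", "customer retention", "churn prediction"],
    ["AI", "customer retention", "sentiment analysis"],
    ["AI", "big data", "customer retention"],
]
links = cooccurrence(articles)
```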

Findings

Five research themes are identified, namely, (1) AI and customer churn prediction in CR, (2) AI and customer service experience in CR, (3) AI and customer sentiment analysis in CR, (4) AI and customer (big data) analytics in CR, and (5) AI privacy and ethical concerns in CR. Based on the research themes, fifteen future research objectives and a future research framework are suggested.

Research limitations/implications

The paper has important implications for researchers and managers as it reveals vital insights into the latest trends and paths in AI-CR research and practices. It focuses on privacy and ethical issues of AI; hence, it will help the government develop policies for sustainable AI adoption for CR.

Originality/value

To the authors' best knowledge, this paper is the first attempt to comprehensively review the existing research on "AI and CR" using bibliometric analysis.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Keywords
