Search results
1 – 10 of 229

Valeriia Baklanova, Aleksei Kurkin and Tamara Teplova
Abstract
Purpose
The primary objective of this research is to provide a precise interpretation of the constructed machine learning model and produce definitive summaries that can evaluate the influence of investor sentiment on the overall sales of non-fungible token (NFT) assets. To achieve this objective, the NFT hype index was constructed, and several explainable AI (XAI) approaches were employed to interpret the black-box models and assess the magnitude and direction of the impact of the features used.
Design/methodology/approach
The research involved the construction of a sentiment index, termed the NFT hype index, which aims to measure the influence of market actors within the NFT industry. This index was created by analyzing written content posted by 62 high-profile individuals and opinion leaders on the social media platform Twitter. The authors collected posts from these Twitter accounts and classified them by tonality with the help of the natural language processing model VADER. Machine learning methods and XAI approaches (feature importance, permutation importance and SHAP) were then applied to explain the obtained results.
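The tonality-classification step can be sketched with a toy lexicon-based scorer in the spirit of VADER; the mini-lexicon and neutrality threshold below are illustrative assumptions, not the authors' actual pipeline:

```python
# Toy lexicon-based tonality scorer, loosely in the spirit of VADER.
# The lexicon entries and the neutrality threshold are illustrative only.
LEXICON = {
    "bullish": 2.0, "moon": 1.5, "great": 1.8, "love": 1.9,
    "bearish": -2.0, "scam": -2.5, "crash": -1.8, "rug": -2.2,
}

def tonality(post: str, threshold: float = 0.05) -> str:
    """Classify a post as positive, negative or neutral by averaging
    lexicon scores over its tokens (unknown tokens score 0)."""
    tokens = post.lower().split()
    if not tokens:
        return "neutral"
    score = sum(LEXICON.get(t, 0.0) for t in tokens) / len(tokens)
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

print(tonality("this collection looks bullish"))  # → positive
```

The real VADER model additionally handles negation, punctuation emphasis and intensifiers; this sketch only shows the lexicon-averaging idea.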
Findings
The built index was subjected to rigorous analysis using the gradient boosting regressor model and explainable AI techniques, which confirmed its significant explanatory power. Remarkably, the NFT hype index exhibited a higher degree of predictive accuracy compared to the well-known sentiment indices.
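The explanatory-power check, a gradient boosting regressor explained via permutation importance, can be sketched as follows; the synthetic data and feature roles are invented for illustration and are not the study's actual variables:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins: a strong "hype" signal and an uninformative feature.
hype_index = rng.normal(size=n)
noise = rng.normal(size=n)
X = np.column_stack([hype_index, noise])
y = 3.0 * hype_index + 0.1 * rng.normal(size=n)  # sales driven by hype

model = GradientBoostingRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
# Shuffling the hype feature should degrade the fit far more than
# shuffling the noise feature, so its mean importance dominates.
print(result.importances_mean)
```

SHAP values would add per-observation attributions on top of this global ranking, but the permutation test alone already ranks the features by explanatory power.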
Practical implications
The NFT hype index, constructed from Twitter textual data, functions as an innovative, sentiment-based indicator for investment decision-making in the NFT market. It offers investors unique insights into the market sentiment that can be used alongside conventional financial analysis techniques to enhance risk management, portfolio optimization and overall investment outcomes within the rapidly evolving NFT ecosystem. Thus, the index plays a crucial role in facilitating well-informed, data-driven investment decisions and ensuring a competitive edge in the digital assets market.
Originality/value
The authors developed a novel index of investor interest for NFT assets (NFT hype index) based on text messages posted by market influencers and compared it to conventional sentiment indices in terms of their explanatory power. With the application of explainable AI, it was shown that sentiment indices may perform as significant predictors for NFT sales and that the NFT hype index works best among all sentiment indices considered.
Yoonjae Hwang, Sungwon Jung and Eun Joo Park
Abstract
Purpose
Initiator crimes, also known as near-repeat crimes, occur in places with known risk factors and vulnerabilities based on prior crime-related experiences or information. Consequently, the environment in which initiator crimes occur might be different from more general crime environments. This study aimed to analyse the differences between the environments of initiator crimes and general crimes, confirming the need for predicting initiator crimes.
Design/methodology/approach
We compared predictive models using, as dependent variables, data corresponding to initiator crimes and to all residential burglaries without considering repetitive crime patterns. Using random forest and gradient boosting, representative ensemble models, predictive models were compared utilising various environmental factor data. Subsequently, we evaluated the performance of each predictive model and derived feature importance and partial dependence from the most predictive model.
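The feature-importance and partial-dependence workflow can be sketched with scikit-learn; the synthetic "environmental factors" below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(1)
n = 400
# Hypothetical environmental factors (e.g. lighting, foot traffic,
# land-use mix); the names and the response are illustrative only.
X = rng.uniform(size=(n, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importances = rf.feature_importances_          # impurity-based, sums to 1
pd_result = partial_dependence(rf, X, features=[0], grid_resolution=10)
# pd_result["average"] traces how the prediction changes as feature 0
# sweeps its grid, holding the other features at observed values.
```

Feature importance ranks the variables globally; partial dependence then shows the direction and shape of each high-importance variable's effect, which is how the study contrasts initiator crimes with general burglary.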
Findings
By analysing environmental factors affecting overall residential burglary and initiator crimes, we observed notable differences in high-importance variables. Further analysis of the partial dependence of total residential burglary and initiator crimes based on these variables revealed distinct impacts on each crime. Moreover, initiator crimes took place in environments consistent with well-known theories in the field of environmental criminology.
Originality/value
Our findings indicate that an initiator crime prediction model may identify results that do not appear through existing theft crime prediction methods. Emphasising the importance of investigating the environments in which initiator crimes occur, this study underscores the potential of artificial intelligence (AI)-based approaches in creating a safe urban environment. By effectively preventing potential crimes, AI-driven prediction of initiator crimes can significantly contribute to enhancing urban safety.
Konstantinos Kalodanis, Panagiotis Rizomiliotis and Dimosthenis Anagnostopoulos
Abstract
Purpose
The purpose of this paper is to highlight the key technical challenges that derive from the recently proposed European Artificial Intelligence Act and specifically, to investigate the applicability of the requirements that the AI Act mandates to high-risk AI systems from the perspective of AI security.
Design/methodology/approach
This paper presents the main points of the proposed AI Act, with emphasis on the compliance requirements for high-risk systems. It matches known AI security threats with the relevant technical requirements, demonstrates the impact that these security threats can have on the AI Act's technical requirements and evaluates the applicability of these requirements based on the effectiveness of existing security protection measures. Finally, the paper highlights the necessity of an integrated framework for AI system evaluation.
Findings
The findings of the EU AI Act technical assessment highlight the gap between the proposed requirements and the available AI security countermeasures as well as the necessity for an AI security evaluation framework.
Keywords
AI Act, high-risk AI systems, security threats, security countermeasures
Heba Al Kailani, Ghaleb J. Sweis, Farouq Sammour, Wasan Omar Maaitah, Rateb J. Sweis and Mohammad Alkailani
Abstract
Purpose
The process of predicting construction costs and forecasting price fluctuations is a significant and challenging undertaking for project managers. This study aims to develop a construction cost index (CCI) for Jordan’s construction industry using fuzzy analytic hierarchy process (FAHP) and predict future CCI values using traditional and machine learning (ML) techniques.
Design/methodology/approach
The most influential cost items were selected by conducting a literature review and confirmatory expert interviews. The cost items’ weights were calculated using FAHP to develop the CCI formula.
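A fixed-weight cost index of this kind is typically a weighted sum of cost-item price relatives, scaled so the base period equals 100. A minimal sketch follows; the weights, cost items and prices are invented placeholders, not the study's FAHP outputs:

```python
def cost_index(weights, base_prices, current_prices, base_value=100.0):
    """Weighted cost index: sum of FAHP-style weights times each cost
    item's price relative, scaled so the base period equals base_value."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    relatives = [c / b for c, b in zip(current_prices, base_prices)]
    return base_value * sum(w * r for w, r in zip(weights, relatives))

# Illustrative cost items: steel, cement, labour (weights are made up).
w = [0.5, 0.3, 0.2]
base = [700.0, 90.0, 25.0]
now = [770.0, 99.0, 25.0]
print(cost_index(w, base, now))  # steel and cement up 10%, labour flat
```

FAHP supplies the weights by aggregating fuzzy pairwise comparisons from the expert interviews; once defuzzified and normalized, they plug directly into a formula of this shape.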
Findings
The results showed that the random forest model had the lowest mean absolute percentage error (MAPE) of 1.09%, followed by Extreme Gradient Boosting and K-nearest neighbours with MAPEs of 1.41% and 1.46%, respectively.
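The accuracy metric used above, MAPE, can be computed as follows; the actual and predicted CCI values below are illustrative, not the study's series:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, expressed in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical CCI values versus a model's forecasts.
actual = [100.0, 102.0, 105.0, 110.0]
predicted = [101.0, 101.0, 106.0, 108.0]
print(round(mape(actual, predicted), 2))  # → 1.19
```

Lower MAPE means forecasts deviate from the realized index by a smaller average percentage, which is why the 1.09% random forest result ranks first.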
Originality/value
The novelty of this study lies within the use of FAHP to address the ambiguity of the impact of various cost items on CCI. The developed CCI equation and ML models are expected to significantly benefit construction managers, investors and policymakers in making informed decisions by enhancing their understanding of cost trends in the construction industry.
Rizwan Ali, Jin Xu, Mushahid Hussain Baig, Hafiz Saif Ur Rehman, Muhammad Waqas Aslam and Kaleem Ullah Qasim
Abstract
Purpose
This study aims to decode the complex dynamics and predictability of artificial intelligence (AI)-based tokens using a comprehensive multivariate framework that integrates technical and macroeconomic indicators.
Design/methodology/approach
In this study, advanced machine learning techniques were used, such as gradient boosting regression (GBR), random forest (RF) and, notably, long short-term memory (LSTM) networks, to provide a nuanced understanding of the factors driving the performance of AI tokens. The study's comparative analysis highlights the superior predictive capabilities of LSTM models, as evidenced by their performance across various AI digital tokens such as AGIX (SingularityNET), Cortex and Numeraire (NMR).
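For the tree-based baselines (GBR/RF), the multivariate time-series problem is commonly framed as lagged supervised learning. The sketch below illustrates that framing only, with a synthetic price series and an invented macro indicator; it is not the study's pipeline, and the LSTM branch is omitted:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
T = 300
price = np.cumsum(rng.normal(size=T)) + 100   # synthetic token price walk
macro = rng.normal(size=T)                    # stand-in macro indicator

def make_lagged(series, exog, n_lags=5):
    """Build rows of n_lags past prices plus one exogenous feature,
    with the next price as the regression target."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(list(series[t - n_lags:t]) + [exog[t]])
        y.append(series[t])
    return np.array(X), np.array(y)

X, y = make_lagged(price, macro)
model = GradientBoostingRegressor(random_state=0).fit(X[:-50], y[:-50])
preds = model.predict(X[-50:])                # held-out forecasts
```

An LSTM consumes the same windows as sequences rather than flat feature vectors, which is what lets it capture longer-range temporal structure.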
Findings
Through an intricate exploration of feature importance and the impact of speculative behaviour, the research elucidates the long-term patterns and resilience of AI-based tokens against economic shifts. The SHapley Additive exPlanations (SHAP) analysis shows that technical and some macroeconomic factors play a dominant role in price prediction. The study also examines the potential of these models for strategic investment and hedging, underscoring their relevance in an increasingly digital economy.
Originality/value
To the authors' knowledge, no research framework exists for forecasting and modelling the current leading AI tokens. Owing to the lack of study on the relationship between the AI token market and other factors, forecasting is exceptionally demanding. This study provides a robust predictive framework to accurately identify the changing trends of AI tokens within a multivariate context and fills the gaps in existing research. Detailed predictive analytics, using modern AI algorithms and careful model interpretation, can elaborate on the behaviour patterns of evolving decentralised digital AI-based token prices.
Haider Jouma, Muhamad Mansor, Muhamad Safwan Abd Rahman, Yong Jia Ying and Hazlie Mokhlis
Abstract
Purpose
This study aims to investigate the daily performance of the proposed microgrid (MG), which comprises photovoltaic panels and wind turbines and is connected to the main grid. The load demand is a residential area comprising 20 houses.
Design/methodology/approach
The daily operational strategy of the proposed MG allows energy to be freely sold to and procured from the main grid. Each consumer's smart meter provides the supplier with the daily consumption pattern, which is amended by demand-side management (DSM). The daily operational cost (DOC), CO2 emissions and other measures are used to evaluate the system performance. A grey wolf optimizer was employed to minimize DOC, comprising the cost of procuring energy from the main grid and the emission cost, minus the revenue from energy sold to the main grid.
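The objective that the grey wolf optimizer minimizes can be sketched as follows; the prices and hourly quantities are invented placeholders, not the study's parameters:

```python
def daily_operational_cost(bought_kwh, sold_kwh, emitted_kg,
                           buy_price=0.12, sell_price=0.10,
                           emission_price=0.02):
    """DOC = cost of energy procured from the main grid
           + emission cost
           - revenue from energy sold to the main grid.
    All prices are illustrative placeholders ($/kWh, $/kg CO2)."""
    return (sum(b * buy_price for b in bought_kwh)
            + sum(e * emission_price for e in emitted_kg)
            - sum(s * sell_price for s in sold_kwh))

# 24 hourly values; a negative DOC means the MG earned net revenue,
# matching the negative winter-day figures reported in the findings.
bought = [1.0] * 24
sold = [2.0] * 24
emitted = [0.5] * 24
print(daily_operational_cost(bought, sold, emitted))
```

The optimizer's job is then to choose the hourly buy/sell schedule (subject to generation and demand constraints) that minimizes this function; DSM reshapes the consumption pattern that feeds into it.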
Findings
The results for winter and summer days revealed that DSM significantly improved the system performance from economic and environmental perspectives. With DSM, DOC on a winter day was −26.93 $/kWh and on a summer day 10.59 $/kWh, whereas without DSM, DOC on a winter day was −25.42 $/kWh and on a summer day 14.95 $/kWh.
Originality/value
As opposed to previous research that predominantly addressed long-term operation, the value of the proposed research lies in investigating the short-term (24-hour) operation of an MG that copes with vital contingencies associated with selling energy to and procuring energy from the main grid while considering the environmental cost. Notably, the proposed research engaged consumers through smart meters to apply demand-side management (DSM), whereas previous studies largely focused on supply-side management.
Isaac Akomea-Frimpong, Jacinta Rejoice Ama Delali Dzagli, Kenneth Eluerkeh, Franklina Boakyewaa Bonsu, Sabastina Opoku-Brafi, Samuel Gyimah, Nana Ama Sika Asuming, David Wireko Atibila and Augustine Senanu Kukah
Abstract
Purpose
Recent United Nations Climate Change Conferences recognise extreme climate events such as heatwaves, floods and droughts as risks threatening the resilience and success of public–private partnership (PPP) infrastructure projects. Such conferences, together with available project reports and empirical studies, recommend that project managers and practitioners adopt smart technologies and develop robust measures to tackle climate risk exposure. Artificial intelligence (AI) risk management tools are comparatively better at mitigating climate risk, but they have been inadequately explored in the PPP sector. Thus, this study aims to explore the tools and roles of AI in climate risk management of PPP infrastructure projects.
Design/methodology/approach
Systematically, this study compiles and analyses 36 peer-reviewed journal articles sourced from Scopus, Web of Science, Google Scholar and PubMed.
Findings
The results identify deep learning, building information modelling, robotic automation, remote sensors and fuzzy logic as the major AI-based risk models (tools) for PPP infrastructure. The roles of AI in climate risk management of PPPs include risk detection, analysis, control and prediction.
Research limitations/implications
For researchers, the findings provide relevant guide for further investigations into AI and climate risks within the PPP research domain.
Practical implications
This article highlights the AI tools for mitigating the climate crisis in PPP infrastructure management.
Originality/value
This article provides strong arguments for the utilisation of AI in understanding and managing numerous challenges related to climate change in PPP infrastructure projects.
Daniel Page, Yudhvir Seetharam and Christo Auret
Abstract
Purpose
This study investigates whether the skilled minority of active equity managers in emerging markets can be identified using a machine learning (ML) framework that incorporates a large set of performance characteristics.
Design/methodology/approach
The study uses a cross-section of South African active equity managers from January 2002 to December 2021. The performance characteristics are analysed using ML models, with a particular focus on gradient boosters, and naïve selection techniques such as momentum and style alpha. The out-of-sample nominal, excess and risk-adjusted returns are evaluated, and precision tests are conducted to assess the accuracy of the performance predictions.
Findings
A minority of active managers exhibit skill that generates alpha even after accounting for fees. ML models, particularly gradient boosters, are superior at identifying non-linearities. LightGBM (LG) achieves the highest out-of-sample nominal, excess and risk-adjusted returns and proves to be the most accurate predictor of performance in precision tests. Naïve selection techniques, such as momentum and style alpha, outperform most ML models in forecasting emerging market active manager performance.
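A precision test of this kind can be sketched as top-k precision: of the k managers a model ranks highest, what fraction actually outperformed. The scores and outcome flags below are illustrative, not the study's data:

```python
def precision_at_k(scores, outperformed, k):
    """Fraction of the k highest-scored managers that truly outperformed."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sum(outperformed[i] for i in ranked[:k]) / k

# Hypothetical model scores and realized outperformance flags (1 = beat
# the benchmark out of sample, 0 = did not).
scores = [0.9, 0.1, 0.8, 0.4, 0.7]
outperformed = [1, 0, 1, 0, 0]
print(precision_at_k(scores, outperformed, k=3))  # top 3: managers 0, 2, 4
```

Comparing this figure across LightGBM, other ML models and the naïve momentum/style-alpha rankings is how the relative predictive accuracy claims above are assessed.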
Originality/value
The authors contribute to the literature by demonstrating that an ML approach incorporating a large set of performance characteristics can be used to identify skilled active equity managers in emerging markets. The findings suggest that both ML models and naïve selection techniques can be used to predict performance, but the former are more accurate in predicting ex ante performance. This study has practical implications for investment practitioners and academics interested in active asset manager performance in emerging markets.
Thamaraiselvan Natarajan, P. Pragha, Krantiraditya Dhalmahapatra and Deepak Ramanan Veera Raghavan
Abstract
Purpose
The metaverse, which is now revolutionizing how brands strategize their business needs, necessitates understanding individual opinions. Sentiment analysis deciphers emotions and uncovers a deeper understanding of user opinions and trends within this digital realm. Further, sentiments signify the underlying factor that triggers one’s intent to use technology like the metaverse. Positive sentiments often correlate with positive user experiences, while negative sentiments may signify issues or frustrations. Brands may consider these sentiments and implement them on their metaverse platforms for a seamless user experience.
Design/methodology/approach
The current study adopts machine learning sentiment analysis techniques using Support Vector Machine, Doc2Vec, RNN, and CNN to explore the sentiment of individuals toward metaverse in a user-generated context. The topics were discovered using the topic modeling method, and sentiment analysis was performed subsequently.
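The SVM branch of this setup can be sketched with scikit-learn; the corpus, labels and pipeline choices below are illustrative assumptions, not the authors' data or exact configuration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny hand-labelled corpus (invented for illustration).
posts = [
    "the metaverse experience is amazing and immersive",
    "love the virtual orientation events",
    "worried about data leaks in the metaverse",
    "the metaverse economy feels like a scam",
]
labels = ["positive", "positive", "negative", "negative"]

# TF-IDF features feeding a linear support vector classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["immersive and amazing experience"]))
```

The Doc2Vec, RNN and CNN variants swap the TF-IDF vectorizer for learned embeddings and the linear classifier for a neural network, but the supervised sentiment-labelling loop is the same; topic modeling runs beforehand to group posts by theme.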
Findings
The results revealed that users had a positive notion of the experience and orientation of the metaverse while holding a negative attitude towards its economy, data and cybersecurity. The accuracy of each model was analyzed, and it was concluded that CNN provides the best accuracy, averaging 89%, compared to the other models.
Research limitations/implications
Analyzing sentiment can reveal how the general public perceives the metaverse. Positive sentiment may suggest enthusiasm and readiness for adoption, while negative sentiment might indicate skepticism or concerns. Given the positive user notions about the metaverse’s experience and orientation, developers should continue to focus on creating innovative and immersive virtual environments. At the same time, users' concerns about data, cybersecurity and the economy are critical. The negative attitude toward the metaverse’s economy suggests a need for innovation in economic models within the metaverse. Also, developers and platform operators should prioritize robust data security measures. Implementing strong encryption and two-factor authentication and educating users about cybersecurity best practices can address these concerns and enhance user trust.
Social implications
In terms of societal dynamics, the metaverse could revolutionize communication and relationships by altering traditional notions of proximity and the presence of its users. Further, virtual economies might emerge, with virtual assets having real-world value, presenting both opportunities and challenges for industries and regulators.
Originality/value
The current study contributes to research as it is the first of its kind to explore the sentiments of individuals toward the metaverse using deep learning techniques and evaluate the accuracy of these models.
Abhinandan Chatterjee, Pradip Bala, Shruti Gedam, Sanchita Paul and Nishant Goyal
Abstract
Purpose
Depression is a mental health problem characterized by a persistent sense of sadness and loss of interest. EEG signals are regarded as the most appropriate instruments for diagnosing depression because they reflect the operating status of the human brain. The purpose of this study is the early detection of depression among people using EEG signals.
Design/methodology/approach
(i) Artifacts are removed by filtering, and linear and non-linear features are extracted; (ii) feature scaling is done using a standard scaler, while principal component analysis (PCA) is used for feature reduction; (iii) the linear features, the non-linear features and the combination of both (only where accuracy is highest) are taken for further analysis, where ML and DL classifiers are applied for the classification of depression; and (iv) a total of 15 distinct ML and DL methods (KNN, SVM, bagging SVM, RF, GB, Extreme Gradient Boosting, MNB, AdaBoost, bagging RF, BootAgg, Gaussian NB, RNN, 1DCNN, RBFNN and LSTM) are utilized as classifiers.
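Step (ii), scaling followed by PCA, can be sketched with scikit-learn; the feature matrix dimensions and component count below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
features = rng.normal(size=(200, 12))   # stand-in EEG feature matrix

# StandardScaler centers each feature to mean 0 and unit variance,
# which the study prefers over min-max normalization.
scaled = StandardScaler().fit_transform(features)

# PCA then projects the 12 scaled features onto 5 components.
pca = PCA(n_components=5)
reduced = pca.fit_transform(scaled)
```

The reduced matrix is what the downstream ML and DL classifiers consume; scaling before PCA matters because PCA is variance-based and would otherwise be dominated by whichever feature has the largest raw scale.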
Findings
1. Among linear features, alpha, alpha asymmetry, gamma and gamma asymmetry give the best results, while RWE, DFA, CD and AE give the best results among non-linear features.
2. For linear features, gamma and alpha asymmetry gave 99.98% accuracy with bagging RF, while gamma asymmetry gave 99.98% accuracy with BootAgg.
3. For non-linear features, 99.84% accuracy was achieved for RWE and DFA with RF, 99.97% for DFA with XGBoost and 99.94% for RWE with BootAgg.
4. Using DL on linear features, gamma asymmetry gave more than 96% accuracy with RNN and 91% with LSTM; for non-linear features, 89% accuracy was achieved for CD and AE with LSTM.
5. Combining linear and non-linear features, the highest accuracy was achieved with bagging RF (98.50%) for gamma asymmetry + RWE; in DL, alpha + RWE, gamma asymmetry + CD and gamma asymmetry + RWE achieved 98% accuracy with LSTM.
Originality/value
A novel dataset was collected from the Central Institute of Psychiatry (CIP), Ranchi, recorded using 128 channels, whereas major previous studies used fewer channels. The details of the study participants are summarized, and a model is developed for statistical analysis using N-way ANOVA. Artifacts are removed by high- and low-pass filtering of epoch data, followed by re-referencing and independent component analysis for noise removal. Linear features, namely band power and interhemispheric asymmetry, and non-linear features, namely relative wavelet energy, wavelet entropy, approximate entropy, sample entropy, detrended fluctuation analysis and correlation dimension, are extracted. The model utilizes 213,072 epochs of 5 s EEG data, which allows the model to train for longer, thereby increasing the efficiency of the classifiers. Feature scaling is done using a standard scaler rather than normalization because it helps increase the accuracy of the models (especially for deep learning algorithms), while PCA is used for feature reduction. The linear features, the non-linear features and the combination of both are taken for extensive analysis in conjunction with ML and DL classifiers for the classification of depression. The combination of linear and non-linear features (only those with the highest accuracy) is used for the best detection results.