Search results
1 – 10 of 424
Jianxiang Qiu, Jialiang Xie, Dongxiao Zhang and Ruping Zhang
Abstract
Purpose
Twin support vector machine (TSVM) is an effective machine learning technique. However, the TSVM model does not consider the influence of different data samples on the optimal hyperplane, which results in its sensitivity to noise. To solve this problem, this study proposes a twin support vector machine model based on fuzzy systems (FSTSVM).
Design/methodology/approach
This study designs an effective fuzzy membership assignment strategy based on fuzzy systems. It describes the relationship between three inputs and the fuzzy membership of a sample by defining fuzzy inference rules and then exports the sample's fuzzy membership. Combining this strategy with TSVM yields the proposed FSTSVM. Moreover, to speed up model training, this study employs a coordinate descent strategy with shrinking by active set. To evaluate the performance of FSTSVM, this study conducts experiments on artificial data sets and UCI data sets.
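The abstract does not spell out the paper's fuzzy inference rules, so the following is only a generic illustration of the underlying idea of fuzzy-membership weighting: samples far from their class center, which are more likely noise, receive smaller memberships. The class-center heuristic and the linear decay are assumptions, not the authors' rule-based strategy.

```python
import math

def fuzzy_memberships(points, delta=1e-6):
    """Assign each sample a membership in (0, 1] that shrinks with its
    distance from the class center, so probable outliers weigh less in
    training. A generic stand-in, not the paper's inference rules."""
    n, dim = len(points), len(points[0])
    center = [sum(p[i] for p in points) / n for i in range(dim)]
    dists = [math.dist(p, center) for p in points]
    radius = max(dists)
    return [1.0 - d / (radius + delta) for d in dists]
```

In an FSVM-style objective these memberships scale each sample's slack penalty, so a single noisy point can no longer pull the separating hyperplane far away from the clean data.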
Findings
The experimental results affirm the effectiveness of FSTSVM in addressing binary classification problems with noise, demonstrating its superior robustness and generalization performance compared to existing learning models. This can be attributed to the proposed fuzzy membership assignment strategy based on fuzzy systems, which effectively mitigates the adverse effects of noise.
Originality/value
This study designs a fuzzy membership assignment strategy based on fuzzy systems that effectively reduces the negative impact caused by noise and then proposes the noise-robust FSTSVM model. Moreover, the model employs a coordinate descent strategy with shrinking by active set to accelerate the training speed of the model.
Thamaraiselvan Natarajan, P. Pragha, Krantiraditya Dhalmahapatra and Deepak Ramanan Veera Raghavan
Abstract
Purpose
The metaverse, which is now revolutionizing how brands strategize their business needs, necessitates understanding individual opinions. Sentiment analysis deciphers emotions and uncovers a deeper understanding of user opinions and trends within this digital realm. Further, sentiments signify the underlying factor that triggers one’s intent to use technology like the metaverse. Positive sentiments often correlate with positive user experiences, while negative sentiments may signify issues or frustrations. Brands may consider these sentiments and implement them on their metaverse platforms for a seamless user experience.
Design/methodology/approach
The current study adopts machine learning sentiment analysis techniques using Support Vector Machine, Doc2Vec, RNN, and CNN to explore the sentiment of individuals toward metaverse in a user-generated context. The topics were discovered using the topic modeling method, and sentiment analysis was performed subsequently.
Findings
The results revealed that users had a positive notion about the experience and orientation of the metaverse while holding a negative attitude towards the economy, data and cyber security. The accuracy of each model was analyzed, and CNN was found to provide better accuracy, averaging 89%, than the other models.
Research limitations/implications
Analyzing sentiment can reveal how the general public perceives the metaverse. Positive sentiment may suggest enthusiasm and readiness for adoption, while negative sentiment might indicate skepticism or concerns. Given the positive user notions about the metaverse’s experience and orientation, developers should continue to focus on creating innovative and immersive virtual environments. At the same time, users' concerns about data, cybersecurity and the economy are critical. The negative attitude toward the metaverse’s economy suggests a need for innovation in economic models within the metaverse. Also, developers and platform operators should prioritize robust data security measures. Implementing strong encryption and two-factor authentication and educating users about cybersecurity best practices can address these concerns and enhance user trust.
Social implications
In terms of societal dynamics, the metaverse could revolutionize communication and relationships by altering traditional notions of proximity and the presence of its users. Further, virtual economies might emerge, with virtual assets having real-world value, presenting both opportunities and challenges for industries and regulators.
Originality/value
The current study contributes to research as it is the first of its kind to explore the sentiments of individuals toward the metaverse using deep learning techniques and evaluate the accuracy of these models.
Xiaojie Xu and Yun Zhang
Abstract
Purpose
The Chinese housing market has witnessed rapid growth during the past decade and the significance of housing price forecasting has undoubtedly elevated, becoming an important issue to investors and policymakers. This study aims to examine neural networks (NNs) for office property price index forecasting from 10 major Chinese cities for July 2005–April 2021.
Design/methodology/approach
The authors aim at building simple and accurate NNs to contribute to pure technical forecasts of the Chinese office property market. To facilitate the analysis, the authors explore different model settings over algorithms, delays, hidden neurons and data-splitting ratios.
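The "delays" in such a network are lagged values of the index fed in as inputs, and the data-splitting ratios divide the series chronologically into training, validation and testing phases. A minimal sketch of that preprocessing follows; the 70/15/15 default mirrors common practice and is an assumption, not the authors' exact setting.

```python
def make_delay_features(series, delays):
    """Build (input, target) pairs where each input vector holds the
    previous `delays` observations of the price index."""
    X, y = [], []
    for t in range(delays, len(series)):
        X.append(series[t - delays:t])
        y.append(series[t])
    return X, y

def chronological_split(X, y, ratios=(0.70, 0.15, 0.15)):
    """Train/validation/test split without shuffling, so the model
    never trains on observations that follow its test targets."""
    n = len(X)
    a = int(n * ratios[0])
    b = a + int(n * ratios[1])
    return (X[:a], y[:a]), (X[a:b], y[a:b]), (X[b:], y[b:])
```

With three delays, as in the reported best model, each forecast of month t is a function of the index values at months t-1, t-2 and t-3.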
Findings
The authors reach a simple NN with three delays and three hidden neurons, which leads to stable performance of about 1.45% average relative root mean square error across the 10 cities for the training, validation and testing phases.
Originality/value
The results could be used on a standalone basis or combined with fundamental forecasts to form perspectives of office property price trends and conduct policy analysis.
Daniel Šandor and Marina Bagić Babac
Abstract
Purpose
Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using the approach of machine and deep learning.
Design/methodology/approach
For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
Findings
The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, the BERT-based model, outperformed the other machine and deep learning models.
Originality/value
This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.
Magdalena Saldana-Perez, Giovanni Guzmán, Carolina Palma-Preciado, Amadeo Argüelles-Cruz and Marco Moreno-Ibarra
Abstract
Purpose
Climate change is a problem that concerns all of us. Despite the information produced by organizations such as the Expert Team on Climate Change Detection and Indices and the United Nations, only a few cities have been planned taking climate change indices into account. This paper aims to study climatic variations, how climate conditions might change in the future and how these changes will affect activities and living conditions in cities, specifically focusing on Mexico City.
Design/methodology/approach
In this approach, two distinct machine learning regression models, k-Nearest Neighbors and Support Vector Regression, were used to predict variations in climate change indices within select urban areas of Mexico City. The calculated indices are based on maximum, minimum and average temperature data collected from the National Water Commission in Mexico and the Scientific Research Center of Ensenada. The methodology involves pre-processing temperature data to create a training data set for the regression algorithms. It then computes predictions for each temperature parameter and ultimately assesses the performance of these algorithms based on precision metric scores.
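As a concrete sketch of the kNN regression step and the R2 metric used to compare the two algorithms, here is a minimal plain-Python version; it is an illustration of the technique, not the study's implementation, and the toy data in the usage below are invented.

```python
import math

def knn_predict(X_train, y_train, x, k=3):
    """Predict by averaging the targets of the k training points
    nearest (in Euclidean distance) to the query point x."""
    order = sorted(range(len(X_train)),
                   key=lambda i: math.dist(X_train[i], x))
    return sum(y_train[i] for i in order[:k]) / k

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1.0 means a perfect fit."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

For example, `knn_predict([[0.0], [1.0], [2.0]], [10.0, 11.0, 12.0], [1.1], k=1)` returns the target of the single nearest training point, 11.0.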
Findings
This paper combines a geospatial perspective with computational tools and machine learning algorithms. Among the two regression algorithms used, it was observed that k-Nearest Neighbors produced superior results, achieving an R2 score of 0.99, in contrast to Support Vector Regression, which yielded an R2 score of 0.74.
Originality/value
The potential of machine learning algorithms has not been fully harnessed for predicting climate indices. This paper also identifies the strengths and weaknesses of each algorithm and how the generated estimations can be considered in the decision-making process.
Hossein Sohrabi and Esmatullah Noorzai
Abstract
Purpose
The present study aims to develop a risk-supported case-based reasoning (RS-CBR) approach for water-related projects by incorporating various uncertainties and risks in the revision step.
Design/methodology/approach
The cases were extracted by studying 68 water-related projects. This research employs earned value management (EVM) factors to capture time and cost features; economic, natural, technical and project risks to account for uncertainties; and supervised learning models to estimate cost overrun. Time-series algorithms were also used to predict construction cost indexes (CCI) and improve future forecasts. Outliers were removed during pre-processing. Next, the data sets were split into training and testing sets, and the algorithms were implemented. The accuracy of the different models was measured with the mean absolute percentage error (MAPE) and the normalized root mean square error (NRMSE) criteria.
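For reference, the two reported error criteria can be computed as follows. Note that NRMSE normalization conventions vary; dividing by the range of the actual values is one common choice and an assumption here, not necessarily the study's.

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    n = len(actual)
    return 100.0 / n * sum(abs((a - p) / a)
                           for a, p in zip(actual, predicted))

def nrmse(actual, predicted):
    """Root mean square error normalized by the range of the actuals
    (one common convention; others divide by the mean)."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    return math.sqrt(mse) / (max(actual) - min(actual))
```

Because MAPE is scale-free, it lets cost-overrun models trained on projects of very different sizes be compared on one footing.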
Findings
The findings show an improvement in the accuracy of predictions using datasets that consider uncertainties, and ensemble algorithms such as Random Forest and AdaBoost had higher accuracy. Also, among the single algorithms, the support vector regressor (SVR) with the sigmoid kernel outperformed the others.
Originality/value
This research is the first attempt to develop a case-based reasoning model based on various risks and uncertainties. The developed model shows an encouraging overlap with machine learning models in predicting cost overruns. The model has been implemented on the collected water-related projects and the results have been reported.
Marko Kureljusic and Erik Karger
Abstract
Purpose
Accounting information systems are mainly rule-based, and data are usually available and well-structured. However, many accounting systems are yet to catch up with current technological developments. Thus, artificial intelligence (AI) in financial accounting is often applied only in pilot projects. Using AI-based forecasts in accounting enables proactive management and detailed analysis. However, thus far, there is little knowledge about which prediction models have already been evaluated for accounting problems. Given this lack of research, our study aims to summarize existing findings on how AI is used for forecasting purposes in financial accounting. Therefore, the authors aim to provide a comprehensive overview and agenda for future researchers to gain more generalizable knowledge.
Design/methodology/approach
The authors identify existing research on AI-based forecasting in financial accounting by conducting a systematic literature review. For this purpose, the authors used Scopus and Web of Science as scientific databases. The data collection resulted in a final sample size of 47 studies. These studies were analyzed regarding their forecasting purpose, sample size, period and applied machine learning algorithms.
Findings
The authors identified three application areas and presented details regarding the accuracy and AI methods used. Our findings show that sociotechnical and generalizable knowledge is still missing. Therefore, the authors also develop an open research agenda that future researchers can address to enable the more frequent and efficient use of AI-based forecasts in financial accounting.
Research limitations/implications
Owing to the rapid development of AI algorithms, our results can only provide an overview of the current state of research. Therefore, it is likely that new AI algorithms will be applied, which have not yet been covered in existing research. However, interested researchers can use our findings and future research agenda to develop this field further.
Practical implications
Given the high relevance of AI in financial accounting, our results have several implications and potential benefits for practitioners. First, the authors provide an overview of AI algorithms used in different accounting use cases. Based on this overview, companies can evaluate the AI algorithms that are most suitable for their practical needs. Second, practitioners can use our results as a benchmark of what prediction accuracy is achievable and should strive for. Finally, our study identified several blind spots in the research, such as ensuring employee acceptance of machine learning algorithms, which companies should consider to implement AI in financial accounting successfully.
Originality/value
To the best of our knowledge, no study has yet been conducted that provided a comprehensive overview of AI-based forecasting in financial accounting. Given the high potential of AI in accounting, the authors aimed to bridge this research gap. Moreover, our cross-application view provides general insights into the superiority of specific algorithms.
Ruchi Kejriwal, Monika Garg and Gaurav Sarin
Abstract
Purpose
The stock market has always been lucrative for investors but, because of its speculative nature, it is difficult to predict price movements. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis helps to study the structured data of a company, while technical analysis helps to study price trends; the increasing, easy availability of unstructured data has also made it important to study market sentiment. Market sentiment has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.
Design/methodology/approach
The research includes text mining and then creating various models for classification. The accuracy of these models is checked using a confusion matrix.
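A confusion matrix tallies actual against predicted classes, and accuracy is the share of predictions on its diagonal. A minimal sketch, with the three sentiment labels chosen for illustration:

```python
def confusion_matrix(y_true, y_pred, labels):
    """Matrix with one row per actual class and one column per
    predicted class; cell [i][j] counts samples of class i that the
    model labeled as class j."""
    idx = {label: i for i, label in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def accuracy(matrix):
    """Share of all predictions that fall on the matrix diagonal."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total
```

Beyond accuracy, the off-diagonal cells show which sentiment classes the model confuses most, which is useful when the classes are imbalanced.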
Findings
Out of the six machine learning techniques used to create the classification model, kernel support vector machine gave the highest accuracy of 68%. This model can now be used to analyse tweets, news and various other unstructured data to predict price movements.
Originality/value
This study will help investors classify a news or a tweet into “positive”, “negative” or “neutral” quickly and determine the stock price trends.
Emerson Norabuena-Figueroa, Roger Rurush-Asencio, K. P. Jaheer Mukthar, Jose Sifuentes-Stratti and Elia Ramírez-Asís
Abstract
The development of information technologies has led to a considerable transformation in human resource management, from conventional personnel management to a modern one. Data mining technology, which has been widely used in several applications, including those that function on the web, includes clustering algorithms as a key component. Web intelligence is a recent academic field that calls for sophisticated analytics and machine learning techniques to facilitate information discovery, particularly on the web. Human resource data gathered from the web are typically enormous, highly complex, dynamic and unstructured, and traditional clustering methods are ineffective on them and need to be upgraded. To address this difficulty, swarm intelligence, a subset of nature-inspired computing, enhances and extends standard clustering algorithms with optimization capabilities. We collect the initial raw human resource data and preprocess them, wherein data cleaning, data normalization and data integration take place. The proposed K-C-means data-driven cuckoo bat optimization algorithm (KCM-DCBOA) is used for clustering the human resource data. Feature extraction is done using principal component analysis (PCA), and the classification of human resource data is done using support vector machine (SVM). Other approaches from the literature were contrasted with the suggested approach. According to the experimental findings, the suggested technique has extremely promising features in terms of clustering quality and execution time.
Samirasadat Samadi and Mohammad Saeed Taslimi
Abstract
Purpose
This study aims to review the features and challenges of the flood relief chain, identify administrative measures during and after a flood occurrence and prioritize them using machine learning (ML) and analytic hierarchy process (AHP) methods. This paper aims to provide a prioritization program based on flood conditions that optimizes flood management and improves society's resilience against flood occurrence.
Design/methodology/approach
The collected database in this paper has been trained using ML algorithms, including support vector machine (SVM), Naive Bayes (NB) and k-nearest neighbors (kNN), to create a prioritization program. Furthermore, the administrative measures in the two phases, during and after the flood, are prioritized using the AHP method and questionnaires completed by experts and relief workers in flood management.
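The AHP step turns pairwise comparisons of relief measures, gathered from the expert questionnaires, into priority weights. A minimal sketch using the geometric-mean approximation of the principal eigenvector; the comparison matrix in the usage below is illustrative, not the study's questionnaire data.

```python
import math

def ahp_weights(pairwise):
    """Priority vector of a pairwise-comparison matrix via the
    geometric-mean (approximate eigenvector) method; the returned
    weights sum to 1."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

For a perfectly consistent matrix such as `[[1, 2, 2], [0.5, 1, 1], [0.5, 1, 1]]`, where the first measure is judged twice as important as each of the others, the method recovers the weights 0.5, 0.25 and 0.25.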
Findings
Among the ML algorithms, the SVM method was selected with 91.37% accuracy. The prioritization program provided by the model, which distinguishes it from other existing models, considers five conditions of the flood occurrence to prioritize actions (season, population affected, area affected, damage to houses and human lives lost). Therefore, the model presents a specific plan for each flood with different occurrence conditions.
Research limitations/implications
The main limitation is the lack of a comprehensive data set to determine the effect of all flood conditions on the prioritization program and the relief activities that have been done in previous flood disasters.
Originality/value
The originality of this paper is the use of ML methods to prioritize administrative measures during and after the flood and presents a prioritization program based on each flood’s conditions. Therefore, through this program, the authority and society can control the adverse impacts of flood more effectively and help to reduce human and financial losses as much as possible.