Search results

1 – 10 of 15
Open Access
Article
Publication date: 21 May 2024

Yaohao Peng and João Gabriel de Moraes Souza

Abstract

Purpose

This study aims to evaluate the effectiveness of machine learning models in yielding profitability above the market benchmark, notably in periods of systemic instability, such as the ongoing war between Russia and Ukraine.

Design/methodology/approach

This study conducted computational experiments using support vector machine (SVM) classifiers to predict stock price movements in three financial markets and to construct profitable trading strategies that support investors’ decision-making.
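
As a rough illustration of this kind of experiment, the hedged Python sketch below trains an SVM classifier on lagged returns to predict next-day price direction and applies a simple long/flat rule; the features, hyperparameters and data are illustrative assumptions, not the authors’ actual setup.

    # Illustrative sketch only: an SVM direction classifier on lagged returns plus a
    # simple long/flat rule. Features, hyperparameters and data are placeholders,
    # not the authors' experimental setup.
    import numpy as np
    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def direction_strategy(prices: pd.Series, train_frac: float = 0.7) -> float:
        rets = prices.pct_change().dropna()
        # Lagged returns as features; next-day up/down move as the target.
        X = pd.concat({f"lag_{k}": rets.shift(k) for k in range(1, 6)}, axis=1).dropna()
        y = (rets.reindex(X.index).shift(-1) > 0).astype(int).iloc[:-1]
        X = X.iloc[:-1]

        split = int(len(X) * train_frac)
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        model.fit(X.iloc[:split], y.iloc[:split])

        # Go long when an up move is predicted, stay flat otherwise.
        signal = model.predict(X.iloc[split:])
        strat_rets = rets.reindex(X.index).shift(-1).iloc[split:-1] * signal[:-1]
        return strat_rets.add(1).prod() - 1  # cumulative out-of-sample strategy return

    # Hypothetical usage with simulated prices:
    # prices = pd.Series(np.cumprod(1 + np.random.normal(0, 0.01, 500)))
    # print(direction_strategy(prices))

In the study itself, the resulting strategy return would then be compared against the market benchmark over the pre-war and wartime windows.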

Findings

On average, machine learning models outperformed the market benchmarks during the more volatile period of the Russia–Ukraine war, but not during the period before the conflict. Moreover, the hyperparameter combinations that yielded superior profitability were found to be highly sensitive to small variations during model training.

Practical implications

Investors should proceed with caution when applying machine learning models for stock price forecasting and trading recommendations, as their superior performance for volatile periods – in terms of generating abnormal gains over the market – was not observed for a period of relative stability in the economy.

Originality/value

This paper’s search for financial strategies that outperform the market provides empirical evidence on the effectiveness of state-of-the-art machine learning techniques before and after the outbreak of the conflict, which is of potential value to researchers in quantitative finance and to professionals who operate in financial markets.

Details

Revista de Gestão, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1809-2276

Open Access
Article
Publication date: 28 November 2022

Ruchi Kejriwal, Monika Garg and Gaurav Sarin

Abstract

Purpose

The stock market has always been lucrative for investors, but its speculative nature makes price movements difficult to predict. Investors use both fundamental and technical analysis to predict prices: fundamental analysis examines a company’s structured data, while technical analysis examines price trends. The increasing and easy availability of unstructured data has made it important to study market sentiment, which has a major impact on prices in the short run. Hence, the purpose of this study is to gauge market sentiment in a timely and effective manner.

Design/methodology/approach

The research applies text mining and then builds several classification models, whose accuracy is evaluated using a confusion matrix.
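
A minimal sketch of such a pipeline is shown below, assuming TF-IDF features and a kernel SVM (the best performer reported in the findings); the tiny corpus and labels are invented placeholders, not the study’s data.

    # Minimal sketch: text mining via TF-IDF, a kernel SVM classifier and a confusion
    # matrix. The tiny corpus and labels are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import confusion_matrix
    from sklearn.svm import SVC

    texts = ["Stock surges after strong earnings", "Shares tumble on weak guidance",
             "Company schedules board meeting", "Record profits lift the index",
             "Regulator opens probe into the firm", "Quarterly results due next week"]
    labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

    vec = TfidfVectorizer(lowercase=True, stop_words="english")
    clf = SVC(kernel="rbf").fit(vec.fit_transform(texts), labels)

    # In-sample confusion matrix, purely to illustrate the evaluation step.
    pred = clf.predict(vec.transform(texts))
    print(confusion_matrix(labels, pred, labels=["positive", "negative", "neutral"]))
    print(clf.predict(vec.transform(["Profits beat expectations"])))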

Findings

Of the six machine learning techniques used to build classification models, the kernel support vector machine gave the highest accuracy, at 68%. This model can now be used to analyse tweets, news and other unstructured data to predict price movements.

Originality/value

This study will help investors quickly classify a news item or a tweet as “positive”, “negative” or “neutral” and determine stock price trends.

Details

Vilakshan - XIMB Journal of Management, vol. 21 no. 1
Type: Research Article
ISSN: 0973-1954

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is literally said, making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and it is largely dependent on context, which makes it difficult to analyse computationally. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using machine learning and deep learning approaches.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector classifiers and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
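
For illustration only, the sketch below compares two of the classical baselines named above (logistic regression and a linear support vector classifier) on TF-IDF features; the four example comments stand in for the 1.3 million-comment data set, and the BiLSTM and BERT models are omitted.

    # Sketch comparing two of the classical baselines named above on TF-IDF features.
    # The four comments below are invented stand-ins for the real data set.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    comments = ["Oh great, another Monday. Just what I needed.",
                "Congratulations on the well-deserved promotion!",
                "Sure, because waiting in line for hours is so much fun.",
                "The documentation was clear and easy to follow."]
    sarcastic = [1, 0, 1, 0]  # 1 = sarcastic, 0 = not sarcastic

    X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(comments)
    for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                        ("linear SVC", LinearSVC())]:
        scores = cross_val_score(model, X, sarcastic, cv=2)  # 2-fold CV on the toy data
        print(name, scores.mean())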

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of various machine and deep learning models in the task of sarcasm detection using a data set of 1.3 million social media comments.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Open Access
Article
Publication date: 21 May 2024

Ahmed Ali A. Shohan, Ahmed Bindajam, Mohammed Al-Shayeb and Hang Thi

Abstract

Purpose

This study aims to quantify and analyse the dynamics of land use and land cover (LULC) changes over three decades in the rapidly urbanizing city of Abha, Saudi Arabia, and to assess urban growth using Morphological Spatial Pattern Analysis (MSPA).

Design/methodology/approach

Support Vector Machine (SVM) classification in Google Earth Engine was used to accurately assess changes in land use in Abha between 1990 and 2020. This method leverages cloud computing to enhance the efficiency and accuracy of big data analysis. Additionally, MSPA was employed in Google Colab to analyse urban growth patterns.
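
A hedged sketch of an SVM land-cover classification in the Google Earth Engine Python API is given below; the Landsat collection, band names, training-point asset ID and kernel parameters are illustrative assumptions, not the authors’ exact configuration.

    # Hedged sketch in the Google Earth Engine Python API. Asset IDs, bands and SVM
    # parameters are illustrative assumptions.
    import ee

    ee.Initialize()  # assumes Earth Engine authentication is already set up

    # Hypothetical Landsat 5 surface-reflectance composite over the study area.
    image = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
             .filterDate("1990-01-01", "1990-12-31")
             .median())
    bands = ["SR_B1", "SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B7"]

    # Hypothetical labelled training points with a 'class' property (placeholder asset).
    training_points = ee.FeatureCollection("users/example/abha_training_points")

    samples = image.select(bands).sampleRegions(
        collection=training_points, properties=["class"], scale=30)

    classifier = ee.Classifier.libsvm(kernelType="RBF", gamma=0.5, cost=10).train(
        features=samples, classProperty="class", inputProperties=bands)

    classified = image.select(bands).classify(classifier)  # per-pixel LULC map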

Findings

The study demonstrates significant expansion of urban areas in Abha, growing from 62.46 km² in 1990 to 271.45 km² in 2020, while aquatic habitats decreased from 1.36 km² to 0.52 km². MSPA revealed a notable increase in urban core areas from 41.66 km² in 2001 to 194.97 km² in 2021, showcasing the nuanced dynamics of urban sprawl and densification.

Originality/value

The novelty of this study lies in its integrated approach, combining LULC and MSPA analyses within a cloud computing framework to capture the dynamics of the city and its environment. The insights from this study are poised to influence policy and planning decisions, particularly in fostering sustainable urban environments that accommodate growth while preserving natural habitats. This approach is crucial for devising strategies that can adapt to and mitigate the environmental impacts of urban expansion.

Details

Frontiers in Engineering and Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-2499

Open Access
Article
Publication date: 2 April 2024

Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman

Abstract

Purpose

In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.

Design/methodology/approach

On a sample of over 230,000 records with close to 12,000 distinct DDC classes, the open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: a lexical algorithm, a support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted to investigate the value and inter-rater agreement of automatically assigned classes, on a sample of 60 records.
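
The Annif configuration itself is not reproduced here; the sketch below only illustrates, in scikit-learn, the idea behind the support vector classifier backend – mapping bibliographic text to DDC classes – with invented titles and class numbers.

    # Illustration only (not Annif itself): the support-vector-classifier idea behind
    # one of the listed backends, mapping bibliographic text to DDC classes.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    records = ["Introduction to organic chemistry",
               "A history of medieval Sweden",
               "Machine learning for information retrieval",
               "Principles of macroeconomics"]
    ddc = ["547", "948", "025", "339"]  # three-digit DDC classes (illustrative)

    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(records, ddc)
    print(model.predict(["Quantitative easing and national income"]))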

Findings

The best results were obtained with the ensemble approach, which achieved 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.

Originality/value

The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 23 January 2024

Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista

Abstract

Purpose

This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, within very austere time constraints, even though these results determine the company’s bid quality and are contractually binding.

Design/methodology/approach

This paper seeks to compile applications of machine learning (ML) and natural language processing in the architecture, engineering and construction (AEC) sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.

Findings

This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.

Originality/value

Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.

Details

Construction Innovation, vol. 24 no. 7
Type: Research Article
ISSN: 1471-4175

Open Access
Article
Publication date: 26 April 2024

Luís Jacques de Sousa, João Poças Martins and Luís Sanhudo

Abstract

Purpose

Factors like bid price, submission time and number of bidders influence the procurement process in public projects. These factors and the award criteria may impact the project’s financial compliance. Predicting budget compliance in construction projects has traditionally been challenging, but machine learning (ML) techniques have revolutionised such estimation.

Design/methodology/approach

In this study, Portuguese Public Procurement Data (PPPData) was utilised as the model’s input. Notably, this dataset exhibited a substantial imbalance in the target feature. To address this issue, the study evaluated three distinct data balancing techniques: oversampling, undersampling, and the SMOTE method. Next, a comprehensive feature selection process was conducted, leading to the testing of five different algorithms for forecasting budget compliance. Finally, a secondary test was conducted, refining the features to include only those elements that procurement technicians can modify while also considering the two most accurate predictors identified in the previous test.
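
As a hedged sketch of the balancing-and-classification step described above, the code below applies SMOTE and trains an ANN with the Adam optimiser on synthetic imbalanced data standing in for the procurement records; the features and class ratio are assumptions.

    # Hedged sketch: SMOTE oversampling followed by an ANN trained with the Adam
    # optimiser. The synthetic data stands in for the procurement records.
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.metrics import precision_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Imbalanced stand-in data set (roughly 9:1 class ratio).
    X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)  # balance classes

    ann = MLPClassifier(hidden_layer_sizes=(32, 16), solver="adam", max_iter=500,
                        random_state=0)
    ann.fit(X_bal, y_bal)
    print("precision:", precision_score(y_test, ann.predict(X_test)))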

Findings

The findings indicate that employing the SMOTE method on the scraped data can produce a balanced dataset. Furthermore, the results demonstrate that the ANN trained with the Adam optimiser outperformed the other algorithms, achieving a precision of 68.1%.

Practical implications

The model can aid procurement technicians during the tendering phase by using historical data and analogous projects to predict performance.

Social implications

Although the study reveals that ML algorithms cannot accurately predict budget compliance using procurement data, they can still provide project owners with insights into the most suitable criteria, aiding decision-making. Further research should assess the model’s impact and capacity within the procurement workflow.

Originality/value

Previous research predominantly focused on forecasting budgets by leveraging data from the private construction execution phase. While some investigations incorporated procurement data, this study distinguishes itself by using an imbalanced dataset and anticipating compliance rather than predicting budgetary figures. The model predicts budget compliance by analysing qualitative and quantitative characteristics of public project contracts. The research paper explores various model architectures and data treatment techniques to develop a model to assist the Client in tender definition.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 13
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 21 May 2024

Vinicius Muraro and Sergio Salles-Filho

Abstract

Purpose

Foresight studies are currently being adapted to incorporate new techniques based on big data and machine learning (BDML), which has led to new approaches and conceptual changes regarding uncertainty and how to prospect the future. The purpose of this study is to explore the effects of BDML on foresight practice and on conceptual changes in uncertainty.

Design/methodology/approach

The methodology is twofold: a bibliometric analysis of BDML-supported foresight studies collected from Scopus up to 2021 and a survey analysis with 479 foresight experts to gather opinions and expectations from academics and practitioners related to BDML in foresight studies. These approaches provide a comprehensive understanding of the current landscape and future paths of BDML-supported foresight research, using quantitative analysis of literature and qualitative input from experts in the field, and discuss potential theoretical changes related to uncertainty.

Findings

The number of prospective studies that use BDML techniques is still small but increasing, and these techniques are often integrated into traditional foresight methodologies. Although BDML is expected to boost data analysis, there are concerns about possibly biased results. Data literacy will be required from the foresight team to leverage the potential and mitigate risks. The article also discusses the extent to which BDML is expected to affect uncertainty, both theoretically and in foresight practice.

Originality/value

This study contributes to the conceptual debate on decision-making under uncertainty and raises public understanding of the opportunities and challenges of using BDML for foresight and decision-making.

Details

foresight, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-6689

Open Access
Article
Publication date: 18 October 2023

Ivan Soukal, Jan Mačí, Gabriela Trnková, Libuse Svobodova, Martina Hedvičáková, Eva Hamplova, Petra Maresova and Frank Lefley

Abstract

Purpose

The primary purpose of this paper is to identify the so-called core authors and their publications according to pre-defined criteria and thereby direct the users to the fastest and easiest way to get a picture of the otherwise pervasive field of bankruptcy prediction models. The authors aim to present state-of-the-art bankruptcy prediction models assembled by the field's core authors and critically examine the approaches and methods adopted.

Design/methodology/approach

The authors conducted a literature search in November 2022 through scientific databases Scopus, ScienceDirect and the Web of Science, focussing on a publication period from 2010 to 2022. The database search query was formulated as “Bankruptcy Prediction” and “Model or Tool”. However, the authors intentionally did not specify any model or tool to make the search non-discriminatory. The authors reviewed over 7,300 articles.

Findings

This paper has addressed the research questions: (1) What are the most important publications of the core authors in terms of the target country, size of the sample, sector of the economy and specialization in SMEs? (2) What are the most used methods for deriving or adjusting models appearing in the articles of the core authors? (3) To what extent do the core authors include accounting-based variables, non-financial or macroeconomic indicators in their prediction models? Despite the advantages of new-age methods, based on the information in the articles analyzed, it can be deduced that conventional methods will continue to be beneficial, mainly due to their greater ease of use and the transferability of the derived models.

Research limitations/implications

The authors identify several gaps in the literature which this research does not address but could be the focus of future research.

Practical implications

The authors provide practitioners and academics with an extract from a wide range of studies, available in scientific databases, on bankruptcy prediction models or tools, resulting in a large number of records being reviewed. This research will be of interest to shareholders, corporations and financial institutions that use financial distress or bankruptcy prediction models to help identify troubled firms in the early stages of distress.

Social implications

Bankruptcy is a major concern for society in general, especially in today's economic environment. Therefore, being able to predict possible business failure at an early stage will give an organization time to address the issue and maybe avoid bankruptcy.

Originality/value

To the authors’ knowledge, this is the first paper to identify the core authors in the field of bankruptcy prediction models and methods. The primary value of the study is the current overview and analysis of the theoretical and practical development of knowledge in this field, in the form of the construction of new models using classical or new-age methods. The paper also adds value by critically examining existing models and their modifications, including a discussion of the benefits of using non-accounting variables.

Details

Central European Management Journal, vol. 32 no. 1
Type: Research Article
ISSN: 2658-0845

Open Access
Article
Publication date: 20 November 2023

Devesh Singh

Abstract

Purpose

This study aims to examine foreign direct investment (FDI) factors and develops a rational framework for FDI inflow in Western European countries such as France, Germany, the Netherlands, Switzerland, Belgium and Austria.

Design/methodology/approach

Data for this study were collected from the World Development Indicators (WDI) database from 1995 to 2018. Factors that influence FDI decisions – such as economic growth, pollution, trade, domestic capital investment, gross value added and the country’s financial stability – were selected from the empirical literature. A framework was developed using interpretable machine learning (IML), decision trees and three-stage least squares simultaneous equation methods for FDI inflow in Western Europe.
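
A hedged sketch of the interpretable-ML component is given below: a decision tree fitted to WDI-style indicators with permutation importance as the interpretation step. The indicator names and data are placeholders, and the three-stage least squares stage is not reproduced.

    # Hedged sketch: a decision tree on WDI-style indicators with permutation
    # importance. Indicator names and data are placeholders.
    import numpy as np
    import pandas as pd
    from sklearn.inspection import permutation_importance
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    features = ["gdp_growth", "co2_emissions", "trade_openness",
                "domestic_capital", "gross_value_added", "financial_stability"]
    X = pd.DataFrame(rng.normal(size=(144, len(features))), columns=features)
    # Synthetic FDI inflow driven mainly by growth and trade, for illustration.
    fdi_inflow = 0.6 * X["gdp_growth"] + 0.3 * X["trade_openness"] + rng.normal(0, 0.1, 144)

    tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, fdi_inflow)
    imp = permutation_importance(tree, X, fdi_inflow, n_repeats=20, random_state=0)
    for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
        print(f"{name:>20s}: {score:.3f}")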

Findings

The findings of this study show that there is a difference between the most important and trusted factors for FDI inflow. Additionally, this study shows that machine learning (ML) models can perform better than conventional linear regression models.

Research limitations/implications

This research has several limitations. Ideally, classification accuracies should be higher, and the current scope of this research is limited to examining the performance of FDI determinants within Western Europe.

Practical implications

Through this framework, the national government can understand how investors make their capital allocation decisions in their country. The framework developed in this study can help policymakers better understand the rationality of FDI inflows.

Originality/value

An IML framework has not been developed in prior studies to analyze FDI inflows. Additionally, the author demonstrates the applicability of the IML framework for estimating FDI inflows in Western Europe.

Details

Journal of Economics, Finance and Administrative Science, vol. 29 no. 57
Type: Research Article
ISSN: 2077-1886
