Search results

1 – 10 of over 2000
Open Access
Article
Publication date: 17 October 2023

Abdelhadi Ifleh and Mounime El Kabbouri

Abstract

Purpose

The prediction of stock market (SM) indices is a fascinating task. An in-depth analysis in this field can provide valuable information to investors, traders and policy makers in attractive SMs. This article aims to apply a correlation feature selection model to identify important technical indicators (TIs), which are combined with multiple deep learning (DL) algorithms for forecasting SM indices.

Design/methodology/approach

The methodology involves using a correlation feature selection model to select the most relevant features. These features are then used to predict the fluctuations of six markets using various DL algorithms, and the results are compared with predictions made using all features by using a range of performance measures.
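The correlation-based selection step can be sketched as follows; a minimal NumPy illustration assuming, since the abstract does not specify the exact criterion, that indicators are ranked by absolute Pearson correlation with the target index and kept above a cutoff. The `select_by_correlation` helper and the 0.5 threshold are hypothetical, not the authors' own:

```python
import numpy as np

def select_by_correlation(X, y, threshold=0.5):
    """Return column indices whose |Pearson r| with y exceeds the threshold."""
    selected = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) > threshold:
            selected.append(j)
    return selected

# Toy data: one indicator strongly correlated with the target, one pure noise.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = np.column_stack([y + 0.1 * rng.normal(size=200),  # correlated indicator
                     rng.normal(size=200)])            # uninformative indicator
print(select_by_correlation(X, y))  # → [0]
```

The surviving columns would then be fed to the ANN, CNN and LSTM models in place of the full indicator set.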

Findings

The experimental results show that the combination of TIs selected through correlation and Artificial Neural Network (ANN) provides good results in the MADEX market. The combination of selected indicators and Convolutional Neural Network (CNN) in the NASDAQ 100 market outperforms all other combinations of variables and models. In other markets, the combination of all variables with ANN provides the best results.

Originality/value

This article makes several significant contributions, including the use of a correlation feature selection model to select pertinent variables, comparison between multiple DL algorithms (ANN, CNN and Long Short-Term Memory (LSTM)), combining selected variables with algorithms to improve predictions, evaluation of the suggested model on six datasets (MASI, MADEX, FTSE 100, SP500, NASDAQ 100 and EGX 30) and application of various performance measures (Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Squared Logarithmic Error (MSLE) and Root Mean Squared Logarithmic Error (RMSLE)).
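The five performance measures listed above can be computed directly; a minimal NumPy sketch, where the `regression_metrics` helper is our own naming and MSLE follows the common `log1p` convention, which the abstract does not spell out:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, MSLE and RMSLE for non-negative targets."""
    err = y_true - y_pred
    log_err = np.log1p(y_true) - np.log1p(y_pred)  # log1p: log(1 + x)
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    msle = np.mean(log_err ** 2)
    return {"MAE": mae, "MSE": mse, "RMSE": np.sqrt(mse),
            "MSLE": msle, "RMSLE": np.sqrt(msle)}

# A perfect prediction drives every measure to zero.
m = regression_metrics(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
print(m["RMSE"])  # → 0.0
```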

Details

Arab Gulf Journal of Scientific Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-9899

Open Access
Article
Publication date: 31 May 2023

Xiaojie Xu and Yun Zhang

Abstract

Purpose

For policymakers and participants of financial markets, predictions of trading volumes of financial indices are important issues. This study aims to address such a prediction problem based on the CSI300 nearby futures, using high-frequency data recorded each minute from the launch date of the futures to roughly two years after all constituent stocks of the futures became shortable, a period that witnessed significantly increased trading activity.

Design/methodology/approach

To answer the following questions, this study adopts a neural network to model the irregular trading volume series of the CSI300 nearby futures: can the lags of the trading volume series be used to make predictions; if so, how far ahead and how accurately; can predictive information from trading volumes of the CSI300 spot and first distant futures improve prediction accuracy, and by what magnitude; how sophisticated is the model; and how robust are its predictions?

Findings

The results of this study show that a simple neural network model with 10 hidden neurons can robustly predict the trading volume of the CSI300 nearby futures using 1–20 min ahead trading volume data. The model leads to a root mean square error of about 955 contracts. Utilizing additional predictive information from trading volumes of the CSI300 spot and first distant futures further improves prediction accuracy, with gains of about 1–2%. This benefit is particularly significant when the trading volume of the CSI300 nearby futures is close to zero. Another benefit, at the cost of the model becoming slightly more sophisticated with more hidden neurons, is that predictions can be generated from 1–30 min ahead trading volume data.
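The lag-based setup described above can be illustrated with a toy example; a sketch assuming each prediction uses the previous 20 one-minute volumes as inputs (the `make_lagged` helper and the toy series are ours, not the authors'):

```python
import numpy as np

def make_lagged(series, n_lags=20):
    """Build rows [v[t-n_lags], ..., v[t-1]] with target v[t]."""
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

vol = np.arange(100.0)            # toy minute-by-minute volume series
X, y = make_lagged(vol, n_lags=20)
print(X.shape, y.shape)           # → (80, 20) (80,)
```

Each row of `X` would then feed the 10-hidden-neuron network, with `y` as the one-step-ahead target.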

Originality/value

The results of this study could be used for multiple purposes, including designing financial index trading systems and platforms, monitoring systematic financial risks and building financial index price forecasts.

Details

Asian Journal of Economics and Banking, vol. 8 no. 1
Type: Research Article
ISSN: 2615-9821

Open Access
Article
Publication date: 6 April 2023

Karlo Puh and Marina Bagić Babac

Abstract

Purpose

Predicting the stock market's prices has always been an interesting topic since it is closely related to making money. Recently, advances in natural language processing (NLP) have opened new perspectives for solving this task. The purpose of this paper is to show a state-of-the-art natural language approach to using language in predicting the stock market.

Design/methodology/approach

In this paper, the conventional statistical models for time-series prediction are implemented as a benchmark. Then, for methodological comparison, various state-of-the-art natural language models ranging from the baseline convolutional and recurrent neural network models to the most advanced transformer-based models are developed, implemented and tested.

Findings

Experimental results show that there is a correlation between the textual information in the news headlines and stock price prediction. The model based on the GRU (gated recurrent unit) cell with one linear layer, which takes pairs of the historical prices and the sentiment score calculated using transformer-based models, achieved the best result.
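The winning architecture can be sketched at the level of a single recurrent step; a minimal NumPy illustration of one common GRU cell convention, with randomly initialised weights and hypothetical (price, sentiment) input pairs. The real model's dimensions, training procedure and final linear layer are not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step; z = update gate, r = reset gate, n = candidate state."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h) + b["n"])
    return (1 - z) * n + z * h          # blend candidate with previous state

rng = np.random.default_rng(1)
d_in, d_h = 2, 4                        # input: (price, sentiment); hidden size 4
W = {k: rng.normal(scale=0.1, size=(d_h, d_in)) for k in "zrn"}
U = {k: rng.normal(scale=0.1, size=(d_h, d_h)) for k in "zrn"}
b = {k: np.zeros(d_h) for k in "zrn"}

h = np.zeros(d_h)
for x in [np.array([101.2, 0.3]), np.array([100.8, -0.1])]:  # toy (price, sentiment)
    h = gru_cell(x, h, W, U, b)
print(h.shape)  # → (4,)
```

In the paper's setup, the final hidden state would pass through a linear layer to produce the price prediction.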

Originality/value

This study provides an insight into how to use NLP to improve stock price prediction and shows that there is a correlation between news headlines and stock price prediction.

Details

American Journal of Business, vol. 38 no. 2
Type: Research Article
ISSN: 1935-5181

Open Access
Article
Publication date: 28 April 2023

Himanshu Goel and Bhupender Kumar Som

Abstract

Purpose

This study aims to predict the Indian stock market (Nifty 50) by employing macroeconomic variables as input variables identified from the literature for two sub periods, i.e. the pre-coronavirus disease 2019 (COVID-19) (June 2011–February 2020) and during the COVID-19 (March 2020–June 2021).

Design/methodology/approach

Secondary data on macroeconomic variables and the Nifty 50 index, spanning the last ten years from 2011 to 2021, have been collected from various government and regulatory websites. Also, an artificial neural network (ANN) model was trained with the scaled conjugate gradient algorithm for predicting the National Stock Exchange's (NSE) flagship index, Nifty 50.

Findings

The findings of the study reveal that the Scaled Conjugate Gradient (SCG) algorithm achieved 96.99% accuracy in predicting the Indian stock market in the pre-COVID-19 scenario. By comparison, the proposed ANN model achieved 99.85% accuracy during the COVID-19 period. The findings of this study have implications for investors, portfolio managers, domestic and foreign institutional investors, etc.

Originality/value

The novelty of this study lies in the fact that there are hardly any studies that forecast the Indian stock market using artificial neural networks in the pre- and during-COVID-19 periods.

Details

EconomiA, vol. 24 no. 1
Type: Research Article
ISSN: 1517-7580

Open Access
Article
Publication date: 19 August 2022

Bedour M. Alshammari, Fairouz Aldhmour, Zainab M. AlQenaei and Haidar Almohri

Abstract

Purpose

There is a gap in knowledge about the Gulf Cooperation Council (GCC) because most studies are undertaken in countries outside the Gulf region, such as China, India, the US and Taiwan. The stock market contains rich, valuable and considerable data, and these data need careful analysis so that good decisions can be made that increase the efficiency of a business. Data mining techniques offer data processing tools and applications used to enhance decision-making. This study aims to predict the Kuwait stock market by applying big data mining.

Design/methodology/approach

The methodology used is quantitative: mathematical and statistical models that describe a varied array of relationships among variables. To predict the direction of stock market returns, four techniques were implemented: logistic regression, decision trees, support vector machine and random forest.
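The binary target these four classifiers learn, the direction of the return, can be constructed as follows; a minimal sketch with toy prices (the `direction_labels` helper name is ours):

```python
import numpy as np

def direction_labels(prices):
    """1 if the next-period return is positive, else 0."""
    returns = np.diff(prices) / prices[:-1]   # simple period-over-period returns
    return (returns > 0).astype(int)

prices = np.array([100.0, 102.0, 101.0, 101.5])
print(direction_labels(prices))  # → [1 0 1]
```

The resulting 0/1 vector is what logistic regression, decision trees, SVM and random forest would each be fitted against.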

Findings

The results show that all variables are statistically significant at the 5% level except the gold price and oil price. The variables that do not influence the direction of the rate of return of Boursa Kuwait are the money supply and gold price, unlike the Kuwait index, which has the highest coefficient. Furthermore, the variable with the highest score affecting the direction of the rate of return is the firms, and the overall accuracy of the four models is nearly 50%.

Research limitations/implications

Some of the limitations identified for this study are as follows: (1) location limitation: the Kuwait Stock Exchange; (2) time limitation: the amount of time available to accomplish the study, which was completed within the academic years 2019–2020 and 2020–2021. During 2020, the coronavirus pandemic (COVID-19), a major obstacle, occurred during data collection and analysis; (3) data limitation: the Kuwait Stock Exchange data were collected from May 2019 to March 2020, while the factors affecting the stock exchange data were collected in July 2020 due to the coronavirus pandemic.

Originality/value

The study used new titles, variables and techniques, such as data mining, to predict the Kuwait stock market. There are no adequate studies that predict the stock market by data mining in the GCC, especially in Kuwait. There is a gap in knowledge in the GCC, as most studies are in foreign countries such as China, India, the US and Taiwan.

Details

Arab Gulf Journal of Scientific Research, vol. 40 no. 2
Type: Research Article
ISSN: 1985-9899

Open Access
Article
Publication date: 23 August 2022

Armin Mahmoodi, Leila Hashemi, Milad Jasemi, Jeremy Laliberté, Richard C. Millar and Hamed Noshadi

Abstract

Purpose

In this research, the main purpose is to use a suitable structure to predict the trading signals of the stock market with high accuracy. For this purpose, two models for the analysis of technical adaptation were used in this study.

Design/methodology/approach

Support vector machine (SVM) is used with particle swarm optimization (PSO), where PSO serves as a fast and accurate classifier to search the problem-solving space; finally, the results are compared with the performance of a neural network.

Findings

Based on the results, both new models are trustworthy within 6 days; however, SVM-PSO outperforms the basic research. The hit rate of SVM-PSO is 77.5%, while the hit rate of the neural network (basic research) is 74.2%.

Originality/value

In this research, two approaches, raw-based and signal-based, have been developed to generate input data for the model. For comparison, the hit rate is defined as the percentage of correct predictions over 16 days.
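The hit-rate measure used for comparison can be sketched directly; a minimal illustration with hypothetical 16-day signal sequences (the function name and the toy signals are ours):

```python
def hit_rate(predicted, actual):
    """Share of periods whose buy/sell signal was predicted correctly."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical 16-day signal sequences (1 = buy, 0 = sell).
predicted = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
actual    = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0]
print(hit_rate(predicted, actual))  # → 0.8125
```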

Details

Asian Journal of Economics and Banking, vol. 7 no. 1
Type: Research Article
ISSN: 2615-9821

Open Access
Article
Publication date: 8 December 2023

Armin Mahmoodi, Leila Hashemi, Amin Mahmoodi, Benyamin Mahmoodi and Milad Jasemi

Abstract

Purpose

The proposed model aims to predict stock market signals with high accuracy. In this sense, the stock market is analysed by the technical analysis of Japanese candlesticks, which is combined with support vector machine (SVM) and the following meta-heuristic algorithms: particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA).

Design/methodology/approach

In addition, among the developed algorithms, the most effective one is chosen to determine probable sell and buy signals. Moreover, the authors provide comparative results to validate the designed model against the basic models of three past articles. In the first model, PSO is used as a classification method to search the solution space thoroughly and at high speed. In the second model, SVM and ICA are examined, where ICA tunes the SVM parameters. Finally, in the third model, SVM and GA are studied, where GA acts as an optimizer and feature-selection agent.
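The PSO search itself can be sketched generically; a minimal NumPy particle swarm minimising a toy objective. In the paper it would instead tune SVM hyperparameters, which are not reproduced here, and every constant below (inertia 0.7, acceleration 1.5, 20 particles) is a conventional default, not the authors':

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimise f over R^dim with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val               # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()  # update global best
    return gbest, pbest_val.min()

best, val = pso(lambda p: np.sum(p ** 2))         # sphere function, optimum 0
print(val < 1e-2)  # → True
```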

Findings

Results indicate that the prediction accuracy of all new models is high for only six days; however, with respect to the confusion matrix results, the SVM-GA and SVM-ICA models have correctly predicted more sell signals, and the SVM-PSO model has correctly predicted more buy signals. Nevertheless, SVM-ICA has shown better performance than the other models considering the execution of the implemented models.

Research limitations/implications

In this study, the long time span of the analysed data, covering the years 2013–2021, makes the input data analysis challenging, as the data must be adjusted with respect to changing conditions.

Originality/value

In this study, two methods have been developed within a candlestick model: raw-based and signal-based approaches, for which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

Journal of Capital Markets Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-4774

Open Access
Article
Publication date: 16 December 2021

Heba M. Ezzat

Abstract

Purpose

Since the beginning of 2020, economies have faced many changes as a result of the coronavirus disease 2019 (COVID-19) pandemic. The effect of COVID-19 on the Egyptian Exchange (EGX) is investigated in this research.

Design/methodology/approach

To explore the impact of COVID-19, three periods were considered: (1) 17 months before the spread of COVID-19 and the start of the lockdown, (2) 17 months after the spread of COVID-19 and during the lockdown and (3) 34 months comprehending the whole period (before and during COVID-19). Due to the large number of variables that could be considered, a dimensionality reduction method, principal component analysis (PCA), is followed. This method helps in determining the individual stocks contributing most to the main EGX index (EGX 30). PCA also addresses the multicollinearity between the variables under investigation. Additionally, a principal component regression (PCR) model is developed to predict the future behavior of the EGX 30.
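The variance-explained figures that PCA yields can be reproduced in miniature; a sketch computing per-component variance shares via SVD on centred data, with synthetic stock-like columns standing in for the EGX constituents (all names and data below are illustrative):

```python
import numpy as np

def explained_variance_ratio(X):
    """Variance share of each principal component (PCA via SVD on centred data)."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values
    return s ** 2 / np.sum(s ** 2)

# Five toy "stocks" driven by one common factor, so one PC dominates.
rng = np.random.default_rng(2)
base = rng.normal(size=(300, 1))
X = np.hstack([base + 0.05 * rng.normal(size=(300, 1)) for _ in range(5)])
ratios = explained_variance_ratio(X)
print(ratios[0] > 0.95)  # → True
```

Summing the first few entries of `ratios` is exactly how a figure like "the first three PCs explain 89% of the variability" would be read off.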

Findings

The results demonstrate that the first three principal components (PCs) explain 89%, 85% and 88% of data variability for (1) before COVID-19, (2) during COVID-19 and (3) the whole period, respectively. Furthermore, the food and beverage, basic resources and real estate sectors have not been affected by COVID-19. The resulting principal component regression (PCR) model performs very well, as can be concluded by comparing the observed values of the EGX 30 with the predicted ones (R-squared estimated as 0.99).

Originality/value

To the best of our knowledge, no research has been conducted to investigate the effect of COVID-19 on the EGX following an unsupervised machine learning method.

Details

Journal of Humanities and Applied Social Sciences, vol. 5 no. 5
Type: Research Article
ISSN: 2632-279X

Open Access
Book part
Publication date: 9 December 2021

Marina Da Bormida

Abstract

Advances in Big Data, artificial intelligence and data-driven innovation bring enormous benefits for society overall and for different sectors. By contrast, their misuse can lead to data workflows bypassing the intent of privacy and data protection law, as well as of ethical mandates. This may be referred to as the 'creep factor' of Big Data, and it needs to be tackled right away, especially considering that we are moving towards the 'datafication' of society, where devices to capture, collect, store and process data are becoming ever cheaper and faster, whilst computational power is continuously increasing. If using Big Data in truly anonymisable ways, within an ethically sound and societally focussed framework, is capable of acting as an enabler of sustainable development, using Big Data outside such a framework poses a number of threats, potential hurdles and multiple ethical challenges. Some examples are the impact on privacy caused by new surveillance tools and data-gathering techniques, including group privacy, high-tech profiling, automated decision-making and discriminatory practices.

In our society, everything can be given a score, and critical life-changing opportunities are increasingly determined by such scoring systems, often obtained through secret predictive algorithms applied to data to determine who has value. It is therefore essential to guarantee the fairness and accuracy of such scoring systems and that the decisions relying upon them are made in a legal and ethical manner, avoiding the risk of stigmatisation capable of affecting individuals' opportunities. Likewise, it is necessary to prevent the so-called 'social cooling': the long-term negative side effects of data-driven innovation, in particular of such scoring systems and of the reputation economy. It is reflected, for instance, in self-censorship, risk-aversion and lack of exercise of free speech generated by increasingly intrusive Big Data practices lacking an ethical foundation.

Another key ethics dimension pertains to human-data interaction in Internet of Things (IoT) environments, which is increasing the volume of data collected, the speed of the process and the variety of data sources. It is urgent to further investigate aspects like the 'ownership' of data and other hurdles, especially considering that the regulatory landscape is developing at a much slower pace than IoT and the evolution of Big Data technologies.

These are only some examples of the issues and consequences that Big Data raises, which require adequate measures in response to the 'data trust deficit': moving not towards the prohibition of data collection but rather towards the identification and prohibition of its misuse and of unfair behaviours and treatments once governments and companies have such data. At the same time, the debate should further investigate 'data altruism', deepening how the increasing amounts of data in our society can be concretely used for public good, and the best implementation modalities.

Details

Ethical Issues in Covert, Security and Surveillance Research
Type: Book
ISBN: 978-1-80262-414-4

Open Access
Article
Publication date: 11 April 2018

Chao Yu, Yueting Chai and Yi Liu

Abstract

Purpose

Collective intelligence has drawn many scientists' attention over the centuries. This paper reviews the study of collective intelligence from the perspective of crowd science.

Design/methodology/approach

After summarizing the chronological development of related research, different points of view on the measurement of collective intelligence and their modeling methods are outlined.

Findings

The authors present recent research focusing on collective intelligence optimization. Studies on the application of collective intelligence and its future potential are also discussed.

Originality/value

This paper will help researchers in crowd science gain a better picture of this closely related frontier interdiscipline.

Details

International Journal of Crowd Science, vol. 2 no. 1
Type: Research Article
ISSN: 2398-7294
