Search results
1 – 10 of over 7000
Abstract
Purpose
This study examines the behaviour of stock prices in the Bahrain Stock Exchange (BSE), which is expected to follow a random walk, in order to test the market's weak‐form efficiency.
Design/methodology/approach
Random walk models such as unit root and Dickey‐Fuller tests are used as basic stochastic tests for non‐stationarity of the daily prices of all the listed companies in the BSE. In addition, autoregressive integrated moving average (ARIMA) and exponential smoothing methods are applied. Cross‐sectional time‐series data are used for the 40 listed companies over the period 1 June 1990 to 31 December 2000.
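As a rough illustration of this kind of unit-root testing, the sketch below applies statsmodels' augmented Dickey‐Fuller test to a synthetic random-walk series; the BSE data are not reproduced here, so the series and parameters are illustrative assumptions only.

```python
# Sketch of an augmented Dickey-Fuller unit-root test on a synthetic
# random-walk "price" series (stand-in for actual daily stock prices).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 2000))  # random walk, no drift

stat, pvalue, *_ = adfuller(prices, regression="c")  # "c": constant, no trend
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value means the unit-root null (non-stationarity, i.e. a random
# walk) cannot be rejected -- the pattern consistent with weak-form efficiency.
```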
Findings
A random walk with no drift or trend is confirmed for all daily stock prices and for each individual sector. Other tests, such as ARIMA (AR1), autocorrelation tests and exponential smoothing, also support the weak‐form efficiency of the BSE.
Practical implications
The findings of the study are a necessary piece of information for all investors, whether in Bahrain or dealing with the Bahrain stock market. Listed firms could also benefit from the findings by seeing the true picture of their stock prices. Since Bahrain is considered an emerging market, the new methodologies used here could be replicated in other emerging markets. In addition, the findings provide a basis for testing the market's efficiency in the semi‐strong form, which has not yet been tested by any researcher.
Originality/value
This study adds value to the literature on market efficiency in emerging markets, since it is the only study that covers all the listed companies over a long period of time. It is also unique in using five different methods in the same paper to confirm weak‐form efficiency in Bahrain, a combination not found in the previous literature.
Claire G. Gilmore and Ginette M. McManus
Abstract
The existence of weak‐form efficiency in the equity markets of the three main Central European transition economies (the Czech Republic, Hungary, and Poland) is examined for the period July 1995 through September 2000, using weekly Investable and Comprehensive indexes developed by the International Finance Corporation. Several different approaches are used. Univariate and multivariate tests provide some evidence that stock prices in these exchanges exhibit a random walk, which constitutes evidence for weak‐form efficiency. This differs in some cases from studies using data for the initial years of these markets. The variance ratio test (VR) of Lo and MacKinlay (1988) yields somewhat mixed results concerning the random‐walk properties of the indexes. A model‐comparison test compares forecasts from a NAÏVE model with ARIMA and GARCH alternatives. Results from the model‐comparison approach are consistent in rejecting the random‐walk hypothesis for the three Central European equity markets.
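For concreteness, a minimal sketch of the Lo and MacKinlay (1988) variance-ratio idea follows, computed on a synthetic series; the bias corrections and heteroskedasticity-robust standard errors of the full test are omitted.

```python
# Under a random walk, the variance of q-period log returns is q times the
# 1-period variance, so the variance ratio VR(q) should be close to 1.
import numpy as np

def variance_ratio(prices, q):
    r1 = np.diff(np.log(prices))                    # 1-period log returns
    rq = np.log(prices[q:]) - np.log(prices[:-q])   # overlapping q-period returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(1)
prices = np.exp(np.cumsum(rng.normal(0, 0.01, 1500)))  # synthetic random walk
for q in (2, 4, 8, 16):
    print(f"VR({q}) = {variance_ratio(prices, q):.3f}")  # ~1 under the null
```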
Andrew B. Martinez, Jennifer L. Castle and David F. Hendry
Abstract
We investigate whether smooth robust methods for forecasting can help mitigate pronounced and persistent failure across multiple forecast horizons. We demonstrate that naive predictors are interpretable as local estimators of the long-run relationship with the advantage of adapting quickly after a break, but at a cost of additional forecast error variance. Smoothing over naive estimates helps retain these advantages while reducing the costs, especially for longer forecast horizons. We derive the performance of these predictors after a location shift, and confirm the results using simulations. We apply smooth methods to forecasts of UK productivity and US 10-year Treasury yields and show that they can dramatically reduce persistent forecast failure exhibited by forecasts from macroeconomic models and professional forecasters.
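A hedged sketch of the intuition, not the authors' exact estimators: a naive forecast carries the last observation forward and adapts quickly after a location shift, while smoothing the forecast origin over the last k observations trades slower post-break adjustment for lower forecast-error variance. All data and window sizes below are illustrative.

```python
# Compare a naive (last-value) predictor with a smoothed-origin variant on a
# synthetic series containing a location shift at t = 100.
import numpy as np

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])

def naive(y, t):              # forecast of y[t] made at origin t-1
    return y[t - 1]

def smooth_naive(y, t, k=5):  # average the last k values as the origin
    return y[t - k:t].mean()

errs_naive = [y[t] - naive(y, t) for t in range(1, len(y))]
errs_smooth = [y[t] - smooth_naive(y, t) for t in range(5, len(y))]
print("naive  RMSE:", np.sqrt(np.mean(np.square(errs_naive))))
print("smooth RMSE:", np.sqrt(np.mean(np.square(errs_smooth))))
```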
Marija Vištica, Ani Grubišic and Branko Žitko
Abstract
Purpose
In order to initialize a student model in intelligent tutoring systems, some form of initial knowledge test should be given to the student. Since the authors cannot include all domain knowledge in that initial test, a subset of the domain knowledge should be selected. The paper aims to discuss this issue.
Design/methodology/approach
In order to generate a knowledge sample that truly represents a certain domain knowledge, the authors can use sampling algorithms. In this paper, the authors present five sampling algorithms (Random Walk, Metropolis-Hastings Random Walk, Forest Fire, Snowball and Represent algorithm) and investigate which structural properties of the domain knowledge sample are preserved after the sampling process is conducted.
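As a concrete illustration, the sketch below implements the simplest of these, plain random-walk sampling, on a synthetic stand-in graph; the graph and sample size are assumptions, not the paper's data.

```python
# Random-walk graph sampling: repeatedly move to a uniformly random
# neighbour and return the induced subgraph on the visited nodes.
import random
import networkx as nx

def random_walk_sample(G, sample_size, seed=0):
    rng = random.Random(seed)
    node = rng.choice(list(G.nodes))
    visited = {node}
    while len(visited) < sample_size:
        node = rng.choice(list(G.neighbors(node)))
        visited.add(node)
    return G.subgraph(visited).copy()

G = nx.barabasi_albert_graph(1000, 3, seed=0)  # stand-in "domain knowledge" graph
sample = random_walk_sample(G, 100)
print(sample.number_of_nodes(), sample.number_of_edges())
```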
Findings
The samples obtained using these algorithms are compared on their cumulative node degree distributions, clustering coefficients and shortest-path lengths in the sampled graph in order to find the best one.
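A minimal sketch of how such a comparison might be computed with networkx follows; the metrics match those named above, while the graph is an illustrative stand-in.

```python
# Report the structural properties used to compare a sample against the
# full graph: clustering, average shortest path, and top node degrees.
import networkx as nx

def structure_report(G, name):
    degrees = sorted((d for _, d in G.degree()), reverse=True)
    print(f"{name}: clustering = {nx.average_clustering(G):.3f}, "
          f"avg shortest path = {nx.average_shortest_path_length(G):.3f}, "
          f"top degrees = {degrees[:5]}")

G = nx.barabasi_albert_graph(1000, 3, seed=0)
structure_report(G, "full graph")
# structure_report(sample, "sample")  # repeat for each sampled subgraph
```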
Originality/value
This approach is original as the authors could not find any similar work that uses graph sampling methods for student modeling.
Islam A. ElShaarawy, Essam H. Houssein, Fatma Helmy Ismail and Aboul Ella Hassanien
Abstract
Purpose
The purpose of this paper is to propose an enhanced elephant herding optimization (EEHO) algorithm by improving the exploration phase to overcome the fast, unjustified convergence toward the origin of the native EHO. The exploration and exploitation of the proposed EEHO are achieved by updating both the clan and separation operators.
Design/methodology/approach
The original EHO shows fast, unjustified convergence toward the origin. Specifically, a constant function is used as a benchmark for inspecting the biased convergence of evolutionary algorithms. Furthermore, the star discrepancy measure is adopted to quantify the quality of the exploration phase of evolutionary algorithms in general.
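The following toy sketch illustrates the constant-function benchmark; the shrinking update is a deliberate stand-in for a biased operator, not EHO's actual equations.

```python
# On f(x) = c all candidates are equally fit, so any systematic drift of the
# population reveals bias in the update rules rather than genuine progress.
# Here x <- 0.9 * x stands in for a biased operator; fitness never changes,
# yet the population collapses toward the origin.
import numpy as np

rng = np.random.default_rng(3)
pop = rng.uniform(-100, 100, size=(50, 10))  # population in [-100, 100]^10
for generation in range(50):
    pop = 0.9 * pop                          # biased stand-in update
print("mean distance from origin:", np.linalg.norm(pop, axis=1).mean())
# An unbiased algorithm would leave this distance essentially unchanged.
```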
Findings
In experiments, EEHO has shown a better convergence-rate performance than the original EHO. The reasons behind this performance are that EEHO adopts a more exploitative search method than the one used in EHO, together with balanced control of exploration and exploitation based on fixing the clan updating and separating operators. The operator γ added to EEHO helps the algorithm escape from local optima, which commonly exist in the search space. The proposed EEHO controls the convergence rate and the random walk independently. Eventually, the quantitative and qualitative results reveal that the proposed EEHO outperforms the original EHO.
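A hedged sketch of the kind of update described, assuming a standard clan-style rule; the parameter names (alpha, gamma) and equations below are illustrative, not the paper's exact EEHO operators.

```python
# Clan-style update with independent controls: alpha pulls each elephant
# toward the clan's best solution (convergence rate), while a separate gamma
# scales a random-walk perturbation that helps escape local optima.
import numpy as np

def clan_update(pop, alpha=0.5, gamma=0.1, rng=None):
    rng = rng or np.random.default_rng()
    best = pop[0]  # assume pop is sorted so pop[0] is the clan's best elephant
    r = rng.random(pop.shape)                  # per-coordinate uniform draws
    step = alpha * r * (best - pop)            # convergence-rate term
    walk = gamma * rng.normal(size=pop.shape)  # independent random-walk term
    return pop + step + walk

rng = np.random.default_rng(4)
pop = rng.uniform(-100, 100, size=(20, 10))
pop = clan_update(pop, alpha=0.5, gamma=0.1, rng=rng)
```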
Research limitations/implications
The pros and cons of EEHO relative to EHO are reported as follows. Pros: (1) unbiased exploration of the whole search space, thanks to the proposed update operator, which fixes the unjustified convergence of EHO toward the origin, and the proposed separating operator, which fixes the tendency of EHO to introduce new elephants at the boundary of the search space; and (2) the ability to control the exploration–exploitation trade-off by independently controlling the convergence rate and the random walk using different parameters. Cons: (1) suitable values for three parameters (rather than two) have to be found in order to use EEHO.
Originality/value
As the original EHO shows fast, unjustified convergence toward the origin, the search method adopted in EEHO is more exploitative than the one used in EHO because of the balanced control of exploration and exploitation based on fixing the clan updating and separating operators. Further, the star discrepancy measure is adopted to quantify the quality of the exploration phase of evolutionary algorithms in general. The operator γ added to EEHO allows successive local and global searching (exploitation and exploration) and helps the algorithm escape from local minima, which commonly exist in the search space.
Andrew Adamatzky and Owen Holland
Abstract
Attempts to characterise some aspects of the new wave of reaction‐diffusion and ant‐based computation, and to discuss their place in the class of fully distributed load‐balancing algorithms that solve the dynamic load‐balancing problem of communication networks. The main question of the paper is: what are the advantages of the intellectualisation of the control agents, and what are the costs of smartness? We start our investigation with random walk techniques and the electricity paradigm, carry on with the reaction‐diffusion approach, and finish the construction of the computational hierarchy with the ant paradigm and smart agents.
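As a toy illustration of the "dumb" end of this hierarchy, the sketch below lets load units take blind random-walk steps on a small network; the topology and step counts are illustrative assumptions, not the paper's model.

```python
# Blind random-walk load balancing: load units hop to random neighbours with
# no agent intelligence; on a regular graph the load evens out in expectation.
import random
import networkx as nx

def random_walk_balance(G, load, steps, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        node = rng.choice(list(G.nodes))
        if load[node] > 0:                  # move one unit to a neighbour
            neighbour = rng.choice(list(G.neighbors(node)))
            load[node] -= 1
            load[neighbour] += 1
    return load

G = nx.cycle_graph(10)
load = {n: 0 for n in G}
load[0] = 100                               # all load starts at one node
load = random_walk_balance(G, load, steps=5000)
print(sorted(load.values()))                # roughly evened-out loads
```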
Zhe Jing, Yan Luo, Xiaotong Li and Xin Xu
Abstract
Purpose
A smart city is a potential solution to the problems caused by the unprecedented speed of urbanization. However, the increasing availability of big data is a challenge for transforming a city into a smart one. Conventional statistics and econometric methods may not work well with big data. One promising direction is to leverage advanced machine learning tools in analyzing big data about cities. In this paper, the authors propose a model to learn region embeddings. The learned embeddings can be used for more accurate prediction by representing discrete variables as continuous vectors that encode the meaning of a region.
Design/methodology/approach
The authors use the random walk and skip-gram methods to learn embeddings and to update the preliminary embeddings generated by a graph convolutional network (GCN). The authors apply this model to a real-world dataset from Manhattan, New York, and use the learned embeddings for crime event prediction.
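A minimal DeepWalk-style sketch of the random-walk plus skip-gram step follows, using networkx and gensim on a toy grid of "regions"; the GCN pre-embedding and the Manhattan data are omitted, and all sizes are illustrative assumptions.

```python
# Truncated random walks over a region graph become "sentences", and
# Word2Vec's skip-gram learns one embedding vector per region.
import random
import networkx as nx
from gensim.models import Word2Vec

def walks(G, num_walks=10, length=20, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(num_walks):
        for node in G.nodes:
            walk = [node]
            for _ in range(length - 1):
                walk.append(rng.choice(list(G.neighbors(walk[-1]))))
            out.append([str(n) for n in walk])  # Word2Vec expects str tokens
    return out

G = nx.grid_2d_graph(10, 10)                    # toy grid of "regions"
model = Word2Vec(walks(G), vector_size=64, window=5, sg=1, min_count=1)
vector = model.wv[str((0, 0))]                  # embedding for region (0, 0)
```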
Findings
This study’s results show that the proposed model can learn multi-dimensional city data more accurately. Thus, it facilitates cities to transform themselves into smarter ones that are more sustainable and efficient.
Originality/value
The authors propose an embedding model that can learn multi-dimensional city data for improving predictive analytics and urban operations. This model can learn more dimensions of city data, reduce the amount of computation and leverage distributed computing for smart city development and transformation.
Alexander M. Goulielmos, Constantine Giziakis and Michalis Pasarzis
Abstract
The purpose of this article is to answer the question: why have marine accidents that result in lost ships been concentrated, over the years, in two main areas by numbers? Indeed, 367 ships were lost in the British Isles/North Sea/E. Channel‐Biscay Bay area between 1992 and 1999, and 433 ships were lost in S. China and the E. Indies, whereas only five ships were lost in Cape Horn and the Panama Canal over the same period. This strange "attraction" of accidents to only two sea areas has led us to assume that the phenomenon probably cannot be explained by random‐walk statistical/mathematical methods, but rather by non‐linear chaotic methods, especially Hurst rescaled range analysis and spectrum analysis. Our numerical results, based on rather limited data, show that a non‐random factor or factors have acted in these ship losses, which further investigation may reveal. We consider this an important fact with wide applications, e.g. to road accidents on national highways, where, strangely enough, the majority of accidents occur at certain locations. Another important conclusion is that man cannot, and does not wish to, interfere with "randomness", and simply accepts it, doing nothing and transferring responsibility from his own shoulders to destiny. Things that are not random must be prevented, and chaos/complexity theory helps us see whether or not they are random. Our analysis, we believe, is of special interest to marine insurance companies.
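For readers unfamiliar with the technique, a minimal sketch of rescaled range (R/S) estimation of the Hurst exponent follows; the window sizes and data are illustrative, not the article's accident series.

```python
# For windows of size n, compute the range of the cumulative mean-adjusted
# series divided by its standard deviation; the slope of log(R/S) against
# log(n) estimates the Hurst exponent H (H close to 0.5 for a random walk,
# H > 0.5 for the persistent, non-random behaviour the article looks for).
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())       # cumulative deviations
            r, s = z.max() - z.min(), w.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_n, log_rs, 1)[0]    # slope = estimated H

rng = np.random.default_rng(5)
print(f"H for white noise: {hurst_rs(rng.normal(size=4096)):.2f}")  # close to 0.5
```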
Andreas Schwab and William H. Starbuck
Abstract
This chapter reports on a rapidly growing trend in data analysis – analytic comparisons between baseline models and explanatory models. Baseline models estimate values for the dependent variable in the absence of hypothesized causal effects. Thus, the baseline models discussed in this chapter differ from the baseline models commonly used in sequential regression analyses.

Baseline modelling entails iteration: (1) Researchers develop baseline models to capture key patterns in the empirical data that are independent of the hypothesized effects. (2) They compare these patterns with the patterns implied by their explanatory models. (3) They use the derived insights to improve their explanatory models. (4) They iterate by comparing their improved explanatory models with modified baseline models.

The chapter draws on methodological literature in economics, applied psychology, and the philosophy of science to point out fundamental features of baseline modelling. Examples come from research in international business and management, emerging market economies and developing countries.

Baseline modelling offers substantial advantages for theory development. Although analytic comparisons with baseline models originated in some research fields as early as the 1960s, they have not been widely discussed or applied in international management. Baseline modelling takes a more inductive and iterative approach to modelling and theory development. Because baseline modelling holds substantial potential, international-management scholars should explore its opportunities for advancing scientific progress.
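A minimal sketch of the comparison pattern follows, on assumed synthetic data: an intercept-only baseline embodying no hypothesized effect is pitted against a simple explanatory model on held-out observations.

```python
# Baseline-versus-explanatory comparison: the explanatory model earns its
# keep only if it beats a baseline that assumes no causal effect.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)               # true effect to be detected
x_tr, x_te, y_tr, y_te = x[:150], x[150:], y[:150], y[150:]

baseline_pred = np.full_like(y_te, y_tr.mean())  # intercept-only baseline
b = np.polyfit(x_tr, y_tr, 1)                    # simple explanatory model
model_pred = np.polyval(b, x_te)

def rmse(errors):
    return np.sqrt(np.mean(np.square(errors)))

print("baseline RMSE:", rmse(y_te - baseline_pred))
print("model    RMSE:", rmse(y_te - model_pred))
```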