Search results
1 – 10 of over 82,000
Xiaoli Su, Lijun Zeng, Bo Shao and Binlong Lin
Abstract
Purpose
The production planning problem with fine-grained information has hardly been considered in practice. The purpose of this study is to investigate the data-driven production planning problem when a manufacturer can observe historical demand data with high-dimensional mixed-frequency features, which provides fine-grained information.
Design/methodology/approach
In this study, a two-step data-driven optimization model is proposed to examine production planning with the exploitation of mixed-frequency demand data. First, an Unrestricted MIxed DAta Sampling approach with a Group LASSO Penalty (GP-U-MIDAS) is proposed. The use of high-frequency, large-scale demand information is analytically justified to significantly improve predictive ability without sacrificing goodness-of-fit. Then, integrated with the GP-U-MIDAS approach, the authors develop a multiperiod production planning model with a rolling cycle. Performance is evaluated in terms of forecasting outcomes, production planning decisions, service levels and total cost.
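As a rough illustration of the unrestricted MIDAS (U-MIDAS) idea, and not the authors' GP-U-MIDAS estimator itself, the sketch below pairs each low-frequency observation with its most recent high-frequency lags and fits the resulting regression by ordinary least squares; the helper name `umidas_design` and the OLS stand-in for the group LASSO penalty are illustrative assumptions.

```python
import numpy as np

def umidas_design(y_low, x_high, m=3, n_lags=6):
    """Build an unrestricted MIDAS design matrix: each low-frequency
    observation y[t] is paired with the n_lags most recent
    high-frequency values of x (m high-frequency periods per
    low-frequency period)."""
    X, y = [], []
    for t in range(len(y_low)):
        end = (t + 1) * m                 # index just past period t's last HF obs
        if end - n_lags < 0:
            continue                      # not enough HF history yet
        X.append(x_high[end - n_lags:end][::-1])  # most recent lag first
        y.append(y_low[t])
    return np.array(X), np.array(y)

# toy data: 8 quarters of target, 24 months of predictor
rng = np.random.default_rng(0)
x_high = rng.normal(size=24)
y_low = rng.normal(size=8)
X, y = umidas_design(y_low, x_high, m=3, n_lags=6)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS stand-in for group LASSO
print(X.shape)  # (7, 6): the first quarter is dropped for lack of history
```

With a group LASSO penalty in place of plain least squares, blocks of lag coefficients belonging to the same feature would be selected or dropped together, which is how a GP-U-MIDAS-style approach screens high-dimensional features.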
Findings
Numerical results show that the key variables influencing market demand can be completely recognized through the GP-U-MIDAS approach; in particular, the selection accuracy for crucial features exceeds 92%. Furthermore, the proposed approach performs well in both in-sample fitting and out-of-sample forecasting over most horizons. Taking the total cost and service level obtained under the actual demand as the benchmark, the mean deviations of the service level and total cost are both reduced to less than 2.4%. This indicates that when faced with fluctuating demand, the manufacturer can adopt the proposed model to effectively manage total costs while maintaining an enhanced service level.
Originality/value
Compared with previous studies, the authors develop a two-step data-driven optimization model by directly incorporating a potentially large number of features; the model can help manufacturers effectively identify the key features of market demand, improve the accuracy of demand estimation and make informed production decisions. Moreover, demand forecasting and optimal production decisions behave robustly under shifting demand and different cost structures, providing manufacturers with an effective method for solving production planning problems under demand uncertainty.
Weiqing Wang, Zengbin Zhang, Liukai Wang, Xiaobo Zhang and Zhenyu Zhang
Abstract
Purpose
The purpose of this study is to forecast the development performance of important economies in a smart city using mixed-frequency data.
Design/methodology/approach
This study introduces reverse unrestricted mixed-data sampling (RUMIDAS) into support vector regression (SVR) to develop a novel RUMIDAS-SVR model, which is estimated via a quadratic programming problem. The authors then use the RUMIDAS-SVR model to forecast the development performance of all high-tech listed companies in Shanghai, an important sector of the economy reflecting the potential and dynamism of urban economic development, using the mixed-frequency consumer price index (CPI), producer price index (PPI) and consumer confidence index (CCI) as predictors.
Findings
The empirical results show that the established RUMIDAS-SVR is superior to the competing models with regard to mean absolute error (MAE) and root-mean-squared error (RMSE), and that multi-source macroeconomic predictors contribute to the development performance forecast of important economies.
Practical implications
Smart city policy makers should create a favourable macroeconomic environment, such as controlling inflation or stabilising prices for companies within the city, and companies within the city's important economic sectors should take the initiative to shoulder their responsibility to support the construction of the smart city.
Originality/value
This study contributes to smart city monitoring by proposing and developing a new model, RUMIDAS-SVR, to help the construction of smart cities. It also empirically provides strategic insights for smart city stakeholders.
Michael Bleaney and Zhiyong Li
Abstract
Purpose
This paper aims to investigate the performance of estimators of the bid-ask spread in a wide range of circumstances and sampling frequencies. The bid-ask spread is important for many reasons. Because spread data are not always available, many methods have been suggested for estimating the spread. Existing papers focus on the performance of the estimators either under ideal conditions or in real data. The gap between ideal conditions and the properties of real data is usually ignored, as is the consistency of the estimates across various sampling frequencies.
Design/methodology/approach
The estimators and the possible errors are analysed theoretically. Then we perform simulation experiments, reporting the bias, standard deviation and root-mean-square estimation error of each estimator. More specifically, we assess the effects of the following factors on the performance of the estimators: the magnitude of the spread relative to returns volatility, randomly varying spreads, the autocorrelation of mid-price returns, and mid-price changes caused by trade directions and feedback trading.
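As one concrete example of the kind of spread estimator such simulations evaluate, here is a minimal sketch of Roll's (1984) autocovariance estimator; the paper does not list its estimator set, so this is illustrative only.

```python
import numpy as np

def roll_spread(prices):
    """Roll (1984) estimator: the bid-ask bounce induces negative
    first-order autocovariance in price changes, so the implied spread
    is 2 * sqrt(-cov). Returns nan when the sample covariance is
    non-negative (a known failure mode of the estimator)."""
    dp = np.diff(prices)
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-cov) if cov < 0 else float("nan")

# simulate: a random-walk mid-price plus a bid/ask bounce of half-spread s/2
rng = np.random.default_rng(1)
s = 0.10                                   # true spread
mid = 100 + np.cumsum(rng.normal(0, 0.01, 10_000))
trades = mid + (s / 2) * rng.choice([-1, 1], size=10_000)
print(roll_spread(trades))                 # should be close to the true 0.10
```

The simulation illustrates the paper's point about sample size: with fewer observations, the sampling noise in the autocovariance (the standard deviation of the estimator) dominates the estimation error.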
Findings
The best estimates come from using the highest frequency of data available. The relative performance of estimators can vary quite markedly with the sampling frequency. In small samples, the standard deviation can contribute more to the estimation error than the bias; in large samples, the opposite tends to be true.
Originality/value
There is a conspicuous lack of simulation evidence on the comparative performance of different estimators of the spread under the less than ideal conditions that are typical of real-world data. This paper aims to fill this gap.
Xuebiao Wang, Xi Wang, Bo Li and Zhiqi Bai
Abstract
Purpose
The purpose of this paper is to build a volatility model grounded in the market's observed volatility characteristics, so that the model is more reasonable and its description of volatility has greater explanatory power.
Design/methodology/approach
This paper analyzes the basic characteristics of market return volatility based on five-minute trading data of the Chinese CSI300 stock index futures from 2012 to 2017, using the Hurst index and GPH test, the A-J and J-O jump tests and the Realized-EGARCH model, respectively. The results show that return volatility in the CSI300 stock index futures market has obvious nonlinear characteristics, including long memory, jumps and asymmetry.
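The realized-measure building blocks behind such jump analyses can be sketched generically: realized variance (RV) sums squared intraday returns, bipower variation (BV) is robust to jumps, and RV − BV estimates the jump contribution. This is a minimal illustration, not the paper's exact LHAR-RV-CJ specification.

```python
import numpy as np

def realized_variance(intraday_returns):
    """Daily realized variance: sum of squared intraday (e.g. 5-min) returns."""
    return np.sum(np.asarray(intraday_returns) ** 2)

def bipower_variation(intraday_returns):
    """Barndorff-Nielsen/Shephard bipower variation, robust to jumps;
    RV - BV then estimates the jump contribution to total variance."""
    r = np.abs(np.asarray(intraday_returns))
    return (np.pi / 2) * np.sum(r[1:] * r[:-1])

rng = np.random.default_rng(2)
r = rng.normal(0, 0.001, 48)   # 48 five-minute returns in a trading day
r[20] += 0.01                  # inject one large jump
rv = realized_variance(r)
bv = bipower_variation(r)
jump = max(rv - bv, 0.0)       # jump component, truncated at zero
print(rv > bv)                 # the jump inflates RV far more than BV
```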
Findings
This paper finds that the LHAR-RV-CJ model has a better prediction effect on the volatility of CSI300 stock index futures. The research shows that the CSI300 stock index futures market is heterogeneous, meaning that long-term investors focus on long-term market fluctuations rather than short-term ones; that the influence of the short-term jump component on market volatility is limited, while long jumps have a greater negative influence on market fluctuations; and that the negative impact of returns on short-term market fluctuations is limited, while, as the period extends, this negative influence gradually increases.
Research limitations/implications
This paper has research limitations in variable measurement and data selection.
Practical implications
This study is based on the application of high-frequency data in financial modeling and analysis, especially the study of asset price volatility. Compared with low-frequency data such as daily, weekly or monthly data, high-frequency data make full use of all kinds of information contained in the price process and can more accurately guide financial asset pricing, risk management and effective asset allocation.
Originality/value
Existing research on futures market volatility using high-frequency data mainly focuses on single-feature analysis; comprehensive comparative analysis of volatility characteristics is scarce. At the same time, forecasting models are often specified without reference to these basic characteristics, so model construction tends to be relatively subjective. In this paper, a model built on the observed volatility characteristics is more reasonable, and its characterization of volatility has greater explanatory power. The difference between this paper and the existing literature is that this paper establishes a prediction model based on the basic characteristics of market return volatility and conducts a description and prediction study of volatility.
Vicente Ramos, Woraphon Yamaka, Bartomeu Alorda and Songsak Sriboonchitta
Abstract
Purpose
This paper aims to illustrate the potential of high-frequency data for tourism and hospitality analysis, through two research objectives: First, this study describes and tests a novel high-frequency forecasting methodology applied to big data characterized by fine-grained time and spatial resolution; Second, this paper elaborates on those estimates' usefulness for visitors and for public and private tourism stakeholders, whose decisions increasingly focus on short time horizons.
Design/methodology/approach
This study uses the technical communications between mobile devices and WiFi networks to build high-frequency, precisely geolocated big data. The empirical section compares the forecasting accuracy of several artificial intelligence and time series models.
Findings
The results robustly indicate the superiority of the long short-term memory (LSTM) network model, both for in-sample and out-of-sample forecasting. Hence, the proposed methodology provides estimates that are remarkably better than making short-term decisions based on the current number of residents and visitors (Naïve I model).
Practical implications
A discussion section exemplifies how high-frequency forecasts can be incorporated into tourism information and management tools to improve visitors’ experience and tourism stakeholders’ decision-making. Particularly, the paper details its applicability to managing overtourism and Covid-19 mitigating measures.
Originality/value
High-frequency forecasting is new in tourism studies, and the discussion sheds light on the relevance of this time horizon for dealing with some current tourism challenges. For many tourism-related issues, deciding what to do next no longer means deciding what to do tomorrow or next week.
Plain Language Summary
This research initiates high-frequency forecasting in tourism and hospitality studies. Additionally, we detail several examples of how anticipating urban crowdedness requires high-frequency data and can improve visitors’ experience and public and private decision-making.
Matt Brigida and William R. Pratt
Abstract
Purpose
This paper aims to investigate the quickness, and test the accuracy, of liquidity-taking high-frequency traders (HFTs). This gives us important insights into a class of market participants that has come to be very influential in present-day markets.
Design/methodology/approach
The authors use the weekly natural gas (NG) storage report for the test because the information contained in the release often has a large effect on prices. Moreover, the NG market is heavily traded, liquid and prone to high volatility. These factors make trading in this market attractive to HFTs. The authors test for the profitability of those who trade in the first milliseconds after the report's release, and for information leakage prior to the report.
Findings
The authors find those who trade within the first 50 ms accurately incorporate the information contained in the storage report into prices, and earn the majority of profits. In fact, HFT profits are decreasing in the time it takes them to trade after the announcement (measured to 200 ms). Further tests find no evidence of informed trading prior to the release of the report, and so the HFT reaction to the report incorporates the information contained therein into prices.
Originality/value
This is one of the few analyses of the profitability of liquidity-taking HFTs, and the only analysis that uses millisecond NG data. The data used are the exchange's original FIX/FAST messages.
Abstract
Purpose
This study is the first to investigate the causal relationship between Bitcoin and equity price returns by sectors. Previous studies have focused on aggregated indices such as S&P500, Nasdaq and Dow Jones, but this study uses mixed frequency and disaggregated data at the sectoral level. This allows the authors to examine the nature, direction and strength of causality between Bitcoin and equity prices in different sectors in more detail.
Design/methodology/approach
This paper utilizes an Unrestricted Asymmetric Mixed Data Sampling (U-AMIDAS) model to investigate the effect of high-frequency Bitcoin returns on a low-frequency series of equity returns. This study also examines causality running from equity returns to Bitcoin returns by sector. The sample period covers United States (US) data from 3 Jan 2011 to 14 April 2023 across nine sectors: materials, energy, financial, industrial, technology, consumer staples, utilities, health and consumer discretionary.
Findings
The study found that there is no causality running from Bitcoin to equity returns in any sector except for the technology sector. In the tech sector, lagged Bitcoin returns Granger cause changes in future equity prices asymmetrically. This means that falling Bitcoin prices significantly influence the tech sector during market pullbacks, but the opposite cannot be said during market rallies. The findings are consistent with those of other studies that have established that during market pullbacks, individual asset prices have a tendency to decline together, whereas during market rallies, they have a tendency to rise independently. In contrast, this study finds evidence of causality running from all sectors of the equity market to Bitcoin.
Practical implications
The findings have significant implications for investors and fund managers, emphasizing the need to consider the asymmetric causality between Bitcoin and the tech sector. Investors should avoid excessive exposure to both Bitcoin and tech stocks in their portfolio, as this may lead to significant drawdowns during market corrections. Diversification across different asset classes and sectors may be a more prudent strategy to mitigate such risks.
Originality/value
The study's findings underscore the need for investors to pay close attention to the frequency and disaggregation of data by sector in order to fully understand the true extent of the relationship between Bitcoin and the equity market.
Lukas Koelbl, Alexander Braumann, Elisabeth Felsenstein and Manfred Deistler
Abstract
This paper is concerned with estimation of the parameters of a high-frequency VAR model using mixed-frequency data, both for the stock and for the flow case. Extended Yule–Walker estimators and (Gaussian) maximum likelihood type estimators based on the EM algorithm are considered. Properties of these estimators are derived, partly analytically and by simulations. Finally, the loss of information due to mixed-frequency data when compared to the high-frequency situation as well as the gain of information when using mixed-frequency data relative to low-frequency data is discussed.
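The Yule–Walker idea that the paper extends can be illustrated in its simplest single-frequency form: an AR(1) coefficient estimated from sample autocovariances. The extended estimators for mixed-frequency data solve additional (cross-)covariance equations; this hedged sketch shows only the base case.

```python
import numpy as np

def yule_walker_ar1(x):
    """Standard Yule-Walker estimate of the AR(1) coefficient:
    phi = gamma(1) / gamma(0), using sample autocovariances."""
    x = np.asarray(x) - np.mean(x)
    g0 = np.mean(x * x)          # sample variance, gamma(0)
    g1 = np.mean(x[1:] * x[:-1]) # sample lag-1 autocovariance, gamma(1)
    return g1 / g0

# simulate an AR(1) process with phi = 0.7 and estimate phi back
rng = np.random.default_rng(3)
phi, n = 0.7, 50_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(abs(yule_walker_ar1(x) - phi) < 0.02)  # estimate recovers phi closely
```

The mixed-frequency extension replaces some of these moment equations with ones involving only the covariances that remain observable when part of the data is sampled at a lower frequency.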
Changhai Lin, Zhengyu Song, Sifeng Liu, Yingjie Yang and Jeffrey Forrest
Abstract
Purpose
The purpose of this paper is to analyze the mechanism and filter efficacy of accumulation generation operator (AGO)/inverse accumulation generation operator (IAGO) in the frequency domain.
Design/methodology/approach
The time-domain AGO/IAGO is transformed into the frequency domain via the Fourier transform. Based on the consistency between the mathematical expressions of the AGO/IAGO in the gray system and those of digital filters in digital signal processing, an equivalent filter model of the AGO/IAGO is established. Spectrum analysis, a method characteristic of digital signal processing systems, is then carried out on the AGO/IAGO in the frequency domain.
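In the time domain the operators themselves are elementary: the 1-AGO is a running cumulative sum and the 1-IAGO is first differencing, which is why the 1-AGO behaves as an accumulating (low-pass) filter in the frequency domain. A minimal sketch:

```python
import numpy as np

def ago(x):
    """First-order accumulation generation operator (1-AGO):
    running cumulative sum of the sequence."""
    return np.cumsum(x)

def iago(x1):
    """First-order inverse AGO (1-IAGO): first differences restore
    the original sequence (the first element is kept as-is)."""
    return np.concatenate(([x1[0]], np.diff(x1)))

x = np.array([2.0, 3.0, 5.0, 4.0])
x1 = ago(x)                       # [2, 5, 10, 14]
print(np.allclose(iago(x1), x))   # True: the 1-IAGO undoes the 1-AGO
```

Because cumulative summation averages out short-period wiggles while letting long-period trends accumulate, the accumulated sequence is smoother than the original, which is the time-domain counterpart of the frequency-domain analysis in the paper.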
Findings
Through theoretical study and a practical example, the benefit of spectrum analysis is explained, and the mechanism and filter efficacy of the AGO/IAGO are quantitatively analyzed. The study indicates that the AGO is particularly suitable for acting on system behavior time series in which long-period components are the main factor; the resulting sequence has good noise immunity.
Practical implications
The AGO/IAGO works well on the processing of some statistical data: most statistical data related to economic growth, crop production, climate and atmospheric change are mainly affected by long-period factors (i.e. low-frequency content), while most disturbances are short-period factors (high-frequency content). After processing by the 1-AGO, the high-frequency content is suppressed and the low-frequency content is amplified. In terms of information theory, this two-way effect greatly improves the signal-to-noise ratio and reduces the proportion of noise/interference in the new sequence. Information mining and extrapolation prediction based on the 1-AGO-processed sequence therefore perform well.
Originality/value
The authors find that the 1-AGO works well for processing a data sequence. When the 1-AGO acts on a data sequence X, its low-pass filtering effect helps remove fluctuations and reduce high-frequency noise/interference, so the data show a clear exponential trend. However, it is not suitable for excessive use, because its equivalent filter has a pole at zero frequency: this pole effect means the 1-AGO greatly amplifies the low-frequency components of the information while simultaneously suppressing the high-frequency components.