Search results
1 – 10 of 328
Xiangfei Chen, David Trafimow, Tonghui Wang, Tingting Tong and Cong Wang
Abstract
Purpose
The authors derive the necessary mathematics, provide computer simulations, provide links to free and user-friendly computer programs, and analyze real data sets.
Design/methodology/approach
Cohen's d, which indexes the difference in means in standard deviation units, is the most popular effect size measure in the social sciences and economics. Not surprisingly, researchers have developed statistical procedures for estimating the sample sizes needed to have a desirable probability of rejecting the null hypothesis given assumed values for Cohen's d, or for estimating the sample sizes needed to have a desirable probability of obtaining a confidence interval of a specified width. However, for researchers interested in using the sample Cohen's d to estimate the population value, these are insufficient. Therefore, it would be useful to have a procedure for obtaining the sample sizes needed to be confident that the sample Cohen's d to be obtained is close to the population parameter the researcher wishes to estimate, an expansion of the a priori procedure (APP). The authors derive the necessary mathematics, provide computer simulations and links to free and user-friendly computer programs, and analyze real data sets to illustrate the main results.
Findings
In this paper, the authors answer two questions. The precision question: how close do I want my sample Cohen's d to be to the population value? The confidence question: what probability do I want of being within the specified distance?
Originality/value
To the best of the authors’ knowledge, this is the first paper to estimate Cohen's effect size using the APP method. The online computing packages make the procedure convenient for researchers and practitioners.
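As an illustration of the APP logic (not the authors' closed-form derivation), the precision and confidence questions can be answered by brute-force simulation: grow the per-group sample size n until the simulated probability that the sample Cohen's d lands within f of the population value reaches the desired confidence c. All function names and defaults below are illustrative.

```python
import random
import statistics

def sample_d(n, true_d, rng):
    # Two equal-size groups from N(0, 1) and N(true_d, 1); pooled-SD Cohen's d
    g1 = [rng.gauss(0.0, 1.0) for _ in range(n)]
    g2 = [rng.gauss(true_d, 1.0) for _ in range(n)]
    sp = ((statistics.variance(g1) + statistics.variance(g2)) / 2.0) ** 0.5
    return (statistics.mean(g2) - statistics.mean(g1)) / sp

def min_n(true_d, f, c, reps=1000, seed=1):
    # Smallest per-group n (in steps of 10) such that the simulated
    # P(|sample d - true_d| <= f) reaches at least c
    rng = random.Random(seed)
    n = 10
    while True:
        hits = sum(abs(sample_d(n, true_d, rng) - true_d) <= f
                   for _ in range(reps))
        if hits / reps >= c:
            return n
        n += 10
```

For example, `min_n(0.5, 0.4, 0.9)` asks for the per-group n that puts the sample d within 0.4 of a population d of 0.5 with 90% probability.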
Abstract
Purpose
This article examines the accuracy and bias inherent in the wisdom of crowd effect. The purpose is to clarify what kind of bias crowds have when they make predictions. In the theoretical inquiry, the effect of the accumulated absolute deviation was simulated. In the empirical study, the observed biases were examined using data from forecasting foreign exchange rates.
Design/methodology/approach
In the theoretical inquiry, the effect of the accumulated absolute deviation was simulated based on mathematical propositions. In the empirical study, data from 2004 to 2011 were provided by Nikkei, which holds the “Nikkei Yen Derby” competition. In total, 3,657 groups forecast the foreign exchange rate; the first round of predictions was made in early May to forecast the rate at the end of May, and the second round took place in June in a similar manner.
Findings
The average absolute deviation in May was smaller than that in June; the first round of predictions was more accurate than the second. Predictors were affected by the observable real exchange rate, modifying their forecasts by reference to the actual data in early June. An actuality bias arose as the participants lost their diversity of prospects. Since the standard deviations of the June forecasts were smaller than those of May, the fact-convergence effect was supported.
Originality/value
This article reports novel findings that affect the wisdom of crowd effect—referred to as actuality bias and fact-convergence effect. The former refers to a forecasting bias toward the observable rate near the forecasting date. The latter implies that predictors, as a whole, indicate smaller forecast deviations by observing the realized foreign exchange rate.
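The two statistics behind these findings are simple to compute: the mean absolute deviation of the forecasts from the realized rate (accuracy) and the standard deviation of the forecasts themselves (diversity). A sketch with fictitious yen-rate forecasts; the real Nikkei data are not reproduced here.

```python
import statistics

def crowd_stats(forecasts, realized):
    # Mean absolute deviation from the realized rate, plus the spread
    # (standard deviation) of the forecasts themselves
    mad = statistics.mean(abs(f - realized) for f in forecasts)
    return mad, statistics.stdev(forecasts)

# Hypothetical yen/dollar forecasts for two rounds (illustration only)
may = [104.2, 106.8, 103.5, 107.1, 105.0]
june = [105.1, 105.6, 104.9, 105.4, 105.2]

mad_may, sd_may = crowd_stats(may, 105.3)
mad_june, sd_june = crowd_stats(june, 105.3)
```

With these fictitious numbers the June forecasts are both more accurate and less dispersed, mirroring the fact-convergence pattern described above.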
Manuel E. Rademaker, Florian Schuberth and Theo K. Dijkstra
Abstract
Purpose
The purpose of this paper is to enhance consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors.
Design/methodology/approach
Correction for attenuation as originally applied by PLSc is modified to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted.
Findings
In the presence of population measurement error correlation, estimated parameter bias is generally small for original and modified PLSc, with the latter outperforming the former for large sample sizes. In terms of the root mean squared error, the results are virtually identical for both original and modified PLSc. Only for relatively large sample sizes, high population measurement error correlation, and low population composite reliability are the increased standard errors associated with the modification outweighed by a smaller bias. These findings are regarded as initial evidence that original PLSc is comparatively robust with respect to misspecification of the structure of measurement error correlations within blocks of indicators.
Originality/value
Introducing and investigating a new approach to address measurement error correlation within blocks of indicators in PLSc, this paper contributes to the ongoing development and assessment of recent advancements in partial least squares path modeling.
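PLSc's starting point is the classic correction for attenuation. The paper's modification for correlated measurement errors is not reproduced here, but the textbook formula itself is a one-liner: divide the observed correlation by the square root of the product of the two reliabilities.

```python
def disattenuate(r_xy, rel_x, rel_y):
    # Classic correction for attenuation: measurement error shrinks the
    # observed correlation, so dividing by sqrt(rel_x * rel_y) recovers
    # an estimate of the latent (error-free) correlation.
    return r_xy / (rel_x * rel_y) ** 0.5

# e.g. an observed r of 0.42 with reliabilities 0.8 and 0.7
latent_r = disattenuate(0.42, 0.8, 0.7)
```

With perfect reliabilities the correction leaves the correlation unchanged; lower reliabilities inflate it toward the latent value.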
Cleyton Farias and Marcelo Silva
Abstract
Purpose
The authors explore the hypothesis that some movements in commodity prices are anticipated (news shocks) and can trigger aggregate fluctuations in small open emerging economies.
Design/methodology/approach
The authors build a multi-sector dynamic stochastic general equilibrium model with endogenous commodity production. There are five exogenous processes: a country-specific interest rate shock that responds to commodity price fluctuations, a productivity (TFP) shock for each sector and a commodity price shock. Both TFP and commodity price shocks are composed of unanticipated and anticipated components.
Findings
The authors show that news shocks to commodity prices lead to higher output, investment and consumption, and a countercyclical movement in the trade-balance-to-output ratio. The authors also show that commodity price news shocks explain about 24% of output aggregate fluctuations in the small open economy.
Practical implications
Given the importance of both anticipated and unanticipated commodity price shocks, policymakers should pay attention to developments in commodity markets when designing policies to attenuate business cycles. Future research should investigate the design of optimal fiscal and monetary policies in small open economies subject to news shocks in commodity prices.
Originality/value
This paper contributes to the knowledge of the sources of fluctuations in emerging economies highlighting the importance of a new source: news shocks in commodity prices.
Mahmoud Alsaid, Rania M. Kamal and Mahmoud M. Rashwan
Abstract
Purpose
This paper presents economic and economic–statistical designs of the adaptive exponentially weighted moving average (AEWMA) control chart for monitoring the process mean. It also aims to compare the effect of estimated process parameters on the economic performance of three charts, which are Shewhart, exponentially weighted moving average and AEWMA control charts with economic–statistical design.
Design/methodology/approach
The optimal parameters of the control charts are obtained by applying the Lorenzen and Vance (1986) cost function. Comparisons between the economic–statistical and economic designs of the AEWMA control chart are performed in terms of expected cost and statistical measures. Comparisons are also made between the economic performance of the three competing charts in terms of the average expected cost and the standard deviation of the expected cost.
Findings
This paper concludes that taking both economic factors and statistical properties into account when designing the AEWMA control chart leads to a slight increase in cost but, in return, a substantial improvement in statistical performance. In addition, under the estimated-parameters case, the comparisons reveal that, from an economic point of view, the AEWMA chart is the most efficient of the three when detecting shifts of different sizes.
Originality/value
The importance of the study stems from designing the AEWMA chart from both economic and statistical points of view because it has not been tackled before. In addition, this paper contributes to the literature by studying the effect of the estimated parameters on the performance of control charts with economic–statistical design.
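The AEWMA recursion itself, in the common Huber-score form of Capizzi and Masarotto (2003), is straightforward to state. The sketch below is that textbook form, not the paper's economic–statistical optimization: small forecast errors are smoothed with weight lam, while errors beyond k receive near-full weight so large shifts are picked up quickly.

```python
def aewma(xs, lam=0.1, k=3.0, target=0.0):
    # Adaptive EWMA: z_t = z_{t-1} + phi(e_t) with a Huber-type score.
    # |e| <= k  -> phi = lam * e        (ordinary EWMA smoothing)
    # |e| >  k  -> phi = e -/+ (1-lam)k (near-Shewhart reaction)
    z = target
    out = []
    for x in xs:
        e = x - z
        if abs(e) <= k:
            phi = lam * e
        else:
            phi = e - (1 - lam) * k * (1 if e > 0 else -1)
        z = z + phi
        out.append(z)
    return out
```

A small observation moves the statistic by lam times the error, whereas a large outlying observation pulls it almost all the way to the new level.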
Beibei Xiong, Yongli Li, Ernesto D.R. Santibanez Gonzalez and Malin Song
Abstract
Purpose
The purpose of this paper is to measure the eco-efficiency of Chinese industries during 2006–2013. Chinese industry has achieved rapid growth in recent decades, but overconsumption of energy and environmental pollution have become serious problems. To address them, many studies have used data envelopment analysis (DEA) to measure the Chinese industry's eco-efficiency. However, because these studies usually set the furthest target a province must reach to become efficient, the target may be hard for any province to accept.
Design/methodology/approach
This paper builds a new “closest target method” based on an additive DEA model that accounts for undesirable outputs. The method is formulated as a mixed-integer programming problem that can measure the ecological efficiency of provinces and, more importantly, guide a province to perform efficiently with minimum effort.
Findings
The results show that the eco-efficiency of Chinese provinces increased on average, but the deviations across provinces remained large. Compared with “furthest” target methods, the targets produced by the proposed approach are more acceptable for a province seeking to improve both its economic and its environmental performance.
Originality/value
This study is the first attempt to introduce the closest targets concept to measure the eco-efficiency and set the target for each provincial industry in China.
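The closest-versus-furthest distinction can be illustrated on a toy finite set of efficient benchmarks; the paper's actual model is a mixed-integer additive DEA program over convex combinations, which is not reproduced here.

```python
def l1_distance(a, b):
    # Total slack between two points in input/output space
    return sum(abs(x - y) for x, y in zip(a, b))

def pick_targets(dmu, efficient_points):
    # Toy version of the idea: among efficient benchmarks, the furthest
    # one maximizes total slack (the classic additive-DEA projection),
    # while the closest one demands the minimum effort to become efficient.
    dists = {p: l1_distance(dmu, p) for p in efficient_points}
    closest = min(dists, key=dists.get)
    furthest = max(dists, key=dists.get)
    return closest, furthest
```

For an inefficient unit at (4, 2) with efficient benchmarks (2, 2), (3, 4) and (1, 1), the closest target is (2, 2) and the furthest is (1, 1): the same efficiency goal, but a very different amount of required change.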
Jing Wang, Nathan N. Huynh and Edsel Pena
Abstract
Purpose
This paper evaluates an alternative queuing concept for marine container terminals that utilize a truck appointment system (TAS). Instead of having all lanes serve trucks with appointments, this study considers the case where walk-in lanes are provided to serve trucks without appointments, as well as trucks with appointments that arrived late because of traffic congestion.
Design/methodology/approach
To enable the analysis of the proposed alternative queuing strategy, the queuing system is shown mathematically to be stationary. Due to the complexity of the model, a discrete event simulation (DES) model is used to obtain the average waiting number of trucks per lane for both types of service lanes: TAS-lanes and walk-in lanes.
Findings
The numerical experiment results indicated that the considered queuing strategy is most beneficial when the utilization of the TAS lanes is expected to be much higher than that of the walk-in lanes.
Originality/value
The novelty of this study is that it examines the scenario where trucks with appointments switch to the walk-in lanes upon arrival if the TAS-lane server is occupied and the walk-in lane server is not. This queuing strategy/policy could reduce the average waiting time of trucks at marine container terminals. Approximation equations are provided to help practitioners calculate the average truck queue length and the average truck queuing time for this type of queuing system.
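A minimal slotted-time sketch of the switching policy, as a toy stand-in for the paper's discrete event simulation; all rates are illustrative assumptions.

```python
import random

def simulate(T=20000, p_appt=0.6, p_arr=0.5, p_done=0.7, seed=7):
    # Slotted-time toy: an arriving appointment truck uses the TAS lane
    # unless that lane is occupied while the walk-in lane is idle, in
    # which case it switches. Each lane is one server that completes its
    # current truck with probability p_done per slot.
    rng = random.Random(seed)
    q = {"tas": 0, "walk": 0}
    area = {"tas": 0, "walk": 0}  # time-integrated number in each lane
    for _ in range(T):
        if rng.random() < p_arr:  # at most one arrival per slot
            if rng.random() < p_appt:
                lane = "walk" if (q["tas"] > 0 and q["walk"] == 0) else "tas"
            else:
                lane = "walk"
            q[lane] += 1
        for lane in ("tas", "walk"):
            if q[lane] > 0 and rng.random() < p_done:
                q[lane] -= 1
            area[lane] += q[lane]
    return {k: v / T for k, v in area.items()}  # average number per lane
```

With arrival and service rates like those above, both lanes stay lightly loaded; raising `p_appt` pushes more load onto the TAS lane and makes the switching option more valuable, which is the regime the findings describe.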
Florian Follert and Werner Gleißner
Abstract
Purpose
From the buying club’s perspective, the transfer of a player can be interpreted as an investment from which the club expects uncertain future benefits. This paper aims to develop a decision-oriented approach for the valuation of football players that could theoretically help clubs determine the subjective value of investing in a player to assess its potential economic advantage.
Design/methodology/approach
We build on a semi-investment-theoretical risk-value model and elaborate an approach that can be applied in imperfect markets under uncertainty. Furthermore, we illustrate the valuation process with a numerical example based on fictitious data. Because of this explicit focus on decision support, our approach differs fundamentally from much of the literature, which is empirically based and attempts to explain observable figures through various influencing factors.
Findings
We propose a semi-investment-theoretical valuation approach that is based on a two-step model, namely, a first valuation at the club level and a final calculation to determine the decision value for an individual player. In contrast to the previous literature, we do not rely on an econometric framework that attempts to explain observable past variables but rather present a general, forward-looking decision model that can support managers in their investment decisions.
Originality/value
This approach is the first to show managers how to make an economically rational investment decision by determining the maximum payable price. Nevertheless, there is no normative requirement for the decision-maker. The club will obviously have to supplement the calculus with nonfinancial objectives. Overall, our paper can constitute a first step toward decision-oriented player valuation and for theoretical comparison with practical investment decisions in football clubs, which obviously take into account other specific sports team decisions.
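One simple way to cast a risk-value calculus is as a certainty-equivalent sum: shrink each period's uncertain benefit by a risk deduction before discounting. The sketch below uses fictitious numbers and an illustrative risk-aversion parameter `lam`; it is a generic certainty-equivalent sketch, not the authors' two-step model.

```python
def decision_value(cash_flows, r=0.08, lam=0.3):
    # Certainty-equivalent valuation: each year's uncertain net benefit
    # (mean mu, standard deviation sd) is reduced by a risk deduction
    # lam * sd, then discounted at rate r. lam and r are illustrative.
    return sum((mu - lam * sd) / (1 + r) ** t
               for t, (mu, sd) in enumerate(cash_flows, start=1))

# Fictitious expected net benefits (EUR m) over a three-year contract
player = [(5.0, 2.0), (6.0, 2.5), (4.0, 3.0)]
```

The resulting figure is a subjective maximum payable price for this decision-maker: a riskier benefit stream (larger sd, or a larger `lam`) lowers it, holding expected benefits fixed.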
Fuad Fuad, Agung Juliarto and Puji Harto
Abstract
Purpose
This study aims to examine whether International Financial Reporting Standards (IFRS) convergence process adds value to the accounting quality dimensions, including accruals quality, earnings smoothing, timely loss recognition and earnings persistence.
Design/methodology/approach
It analyzes the hypothesis of accounting quality changes post-IFRS convergence using univariate and multivariate statistics. In particular, the authors rely on panel data analyses of Indonesian industrial companies' data from 2008 until 2014, comprising 3,861 firm-year observations.
Findings
The results indicate that there is no conclusive evidence that all accounting quality dimensions including accruals quality, earnings smoothing, timely loss recognition and earnings persistence increased in post-IFRS convergence.
Practical implications
The findings of this study may help regulators and standard setters consider future adoption of IFRS, particularly in identifying the best “formula” for increasing the usefulness of accounting information post-IFRS convergence.
Originality/value
Rather than doing piecemeal work, the current study examines IFRS convergence across a broader set of accounting quality dimensions. It also focuses on the convergence process of IFRS as an alternative to full adoption, which has been the focus of many research studies.
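One standard earnings-smoothing proxy in this literature divides the variability of changes in net income by the variability of changes in operating cash flow; values well below one suggest smoothing. A sketch with fictitious figures (the paper's full battery of quality measures is broader):

```python
import statistics

def smoothing_ratio(net_income, cash_flow):
    # Earnings-smoothing proxy: stdev of the change in net income scaled
    # by stdev of the change in operating cash flow. Lower values mean
    # income is smoother than the underlying cash flows.
    d_ni = [b - a for a, b in zip(net_income, net_income[1:])]
    d_cfo = [b - a for a, b in zip(cash_flow, cash_flow[1:])]
    return statistics.stdev(d_ni) / statistics.stdev(d_cfo)

# Fictitious annual series for one firm (currency units are arbitrary)
ratio = smoothing_ratio([10, 11, 10.5, 11.2, 11.0], [9, 12, 8, 13, 9])
```

Here net income varies far less than cash flow, so the ratio falls well below one, the pattern a smoothing firm would exhibit.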
Jan Frederick Hausner and Gary van Vuuren
Abstract
Purpose
Using a portfolio comprising liquid global stocks and bonds, this study aims to limit absolute risk to that of a standardised benchmark and determine whether this has a significant impact on expected return in both high-volatility (HV) and low-volatility (LV) periods.
Design/methodology/approach
Using a traditional benchmark comprising 40% equity and 60% bonds, a constant tracking error (TE) frontier was constructed and implemented. Portfolio performance for different TE constraints and different economic periods (expansion and contraction) was explored.
Findings
Results indicate that during HV, replicating the benchmark portfolio's risk produces portfolios that outperform both the maximum return (MR) portfolio and the benchmark, whereas in LV the MR portfolio outperforms portfolios carrying the same risk as the benchmark. The MR portfolio weights assets to obtain the highest return on the TE frontier. During HV, the benchmark-risk-replicating portfolio reached a higher absolute risk value than the MR portfolio because of an inefficient benchmark, and it favoured intermediate-maturity treasury bills.
Originality/value
There is a dearth of literature exploring the performance of active portfolios subject to TE constraints. This work addresses this gap and demonstrates, for the first time, the relative portfolio performance of several standard portfolio choices on the frontier.
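The TE constraint at the heart of such a frontier is the ex-ante tracking error, the square root of (w − wb)′Σ(w − wb) for active weights w, benchmark weights wb and covariance matrix Σ. A sketch with an illustrative two-asset covariance matrix (the numbers are assumptions, not the study's data):

```python
def tracking_error(w, w_bench, cov):
    # Ex-ante tracking error: sqrt of (w - wb)' Sigma (w - wb)
    d = [a - b for a, b in zip(w, w_bench)]
    var = sum(d[i] * cov[i][j] * d[j]
              for i in range(len(d)) for j in range(len(d)))
    return var ** 0.5

# 40/60 equity-bond benchmark vs a tilted active portfolio
cov = [[0.04, 0.002], [0.002, 0.01]]  # illustrative annual covariances
te = tracking_error([0.55, 0.45], [0.40, 0.60], cov)
```

Holding the benchmark exactly gives zero tracking error; the constant-TE frontier of the study consists of the highest-return portfolios for which this quantity equals a chosen constant.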