Search results
1–10 of over 9,000 results
Steven F. Lehrer and Louis-Pierre Lepage
Abstract
Prior analyses of racial bias in New York City's Stop-and-Frisk program implicitly assumed that potential bias of police officers did not vary by crime type and that their decision of which type of crime to report as the basis for the stop did not exhibit any bias. In this paper, we first extend the hit rates model to consider crime type heterogeneity in racial bias and police officer decisions of reported crime type. Second, we reevaluate the program while accounting for heterogeneity in bias across crime types and for the sample selection which may arise from conditioning on crime type. We present evidence that differences in biases across crime types are substantial, and specification tests support incorporating corrections for selective crime reporting. However, the main findings on racial bias do not differ sharply once we account for this choice-based selection.
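The outcome-test logic behind the hit rates model can be illustrated with a small simulation (all data below are synthetic, not Stop-and-Frisk records, and the group and crime-type probabilities are invented purely for illustration): when hit rates differ by crime type and groups are stopped under different reported crime types at different rates, pooled hit-rate comparisons can diverge even though rates within each crime type are identical.

```python
import numpy as np

# Synthetic stop data: none of this is NYPD data, and the probabilities
# below are invented purely to illustrate the confounding mechanism.
rng = np.random.default_rng(0)
stops = 20000
group = rng.integers(0, 2, stops)                       # two driver groups
# Group 1 is stopped more often under crime type 1 (an assumption)
crime = (rng.random(stops) < np.where(group == 1, 0.7, 0.3)).astype(int)
# Hit probability depends on the crime type only: no within-type bias
hit = rng.random(stops) < np.where(crime == 0, 0.12, 0.30)

pooled = [hit[group == g].mean() for g in (0, 1)]
within = [[hit[(group == g) & (crime == c)].mean() for g in (0, 1)]
          for c in (0, 1)]
print("pooled hit rates:", np.round(pooled, 3))   # differ across groups
print("within-type rates:", np.round(within, 3))  # equal within each type
```

This pooled-versus-within gap is exactly the kind of confounding that motivates allowing bias to vary by crime type.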
Abstract
This chapter examines the use of mathematical programming to remove systematic bias from demand forecasts. A debiasing methodology is developed and applied to demand data from an actual service operation. The accuracy of the proposed methodology is compared to the accuracy of a well-known approach that utilizes ordinary least squares regression. Results indicate that the proposed method outperforms the least squares approach.
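A minimal sketch of the kind of OLS benchmark the chapter compares against (sometimes called a Mincer-Zarnowitz correction): regress actuals on forecasts and use the fitted line to remove systematic bias. The demand data here are made up, and the chapter's own mathematical-programming method is not reproduced.

```python
import numpy as np

# Made-up demand series with a systematic forecast bias
rng = np.random.default_rng(0)
actual = rng.normal(100, 10, 200)
forecast = 0.8 * actual + 15 + rng.normal(0, 2, 200)

# Mincer-Zarnowitz style correction: fit actual = a + b * forecast by
# ordinary least squares, then pass forecasts through the fitted line.
X = np.column_stack([np.ones_like(forecast), forecast])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
corrected = X @ beta

print("mean bias before:", round(np.mean(forecast - actual), 2))
print("mean bias after: ", round(np.mean(corrected - actual), 10))
```

Because the regression includes an intercept, the corrected forecasts are mean-unbiased on the fitting sample by construction.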
Ingo Hoffmann and Christoph J. Börner
Abstract
Purpose
This paper aims to evaluate the accuracy of a quantile estimate. Especially when estimating high quantiles from few data, the quantile estimator is itself a random variable with its own distribution. This distribution is first determined, and it is then shown how the accuracy of the quantile estimation can be assessed in practice.
Design/methodology/approach
The paper considers the situation in which the parent distribution of the data is unknown, the tail is modeled with the generalized Pareto distribution and the quantile is finally estimated using the fitted tail model. Based on well-known theoretical preliminary studies, the finite sample distribution of the quantile estimator is determined and the accuracy of the estimator is quantified.
Findings
The algebraic representation of the finite sample distribution of the quantile estimator was found in general form. With this distribution, all statistical quantities can be determined. In particular, the expected value, the variance and the bias of the quantile estimator are calculated to evaluate the accuracy of the estimation process. Scaling laws were derived, and it turns out that with a fat tail and few data, the bias and the variance increase massively.
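The sampling variability described above is easy to reproduce in a small simulation (illustrative parameter choices; this uses scipy's generic GPD fit rather than the paper's algebraic finite-sample distribution): fitting a GPD tail to small fat-tailed samples and reading off a high quantile yields an estimator that is itself visibly dispersed.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

def pot_quantile(sample, p=0.99, threshold_q=0.90):
    """Peaks-over-threshold estimate of the p-quantile via a fitted GPD tail."""
    u = np.quantile(sample, threshold_q)
    exceed = sample[sample > u] - u
    xi, _, sigma = genpareto.fit(exceed, floc=0)
    frac = len(exceed) / len(sample)
    return u + sigma / xi * ((frac / (1 - p)) ** xi - 1)

# Fat-tailed parent (GPD, shape 0.3): with only 200 observations per
# sample, the estimated 99% quantile is a widely spread random variable.
true_q = genpareto.ppf(0.99, 0.3)
estimates = np.array([pot_quantile(genpareto.rvs(0.3, size=200, random_state=rng))
                      for _ in range(200)])
ok = estimates[np.isfinite(estimates)]
print("true:", round(true_q, 2), "mean est:", round(ok.mean(), 2),
      "std of estimator:", round(ok.std(), 2))
```

Repeating the exercise with larger samples or a thinner tail shrinks both the spread and the bias, in line with the scaling behaviour the paper reports.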
Research limitations/implications
Currently, the research is limited to the form of the tail, which is interesting for the financial sector. Future research might consider problems where the tail has a finite support or the tail is over-fat.
Practical implications
The ability to calculate error bands and the bias of the quantile estimator is important for financial institutions as well as for regulators and auditors.
Originality/value
Understanding the quantile estimator as a random variable and analyzing and evaluating it based on its distribution gives researchers, regulators, auditors and practitioners new opportunities to assess risk.
Nada R. Sanders and Larry P. Ritzman
Abstract
Accurate forecasting has become a challenge for companies operating in today's business environment, characterized by high uncertainty and short response times. Rapid technological innovations and e‐commerce have created an environment where historical data are often of limited value in predicting the future. In business organizations, the marketing function typically generates sales forecasts based on judgmental methods that rely heavily on subjective assessments and “soft” information, while operations rely more on quantitative data. Forecast generation rarely involves the pooling of information from these two functions. Increasingly, successful forecasting warrants the use of composite methodologies that incorporate a range of information, from the traditional quantitative computations usually used by operations to marketing's judgmental assessments of markets. The purpose of this paper is to develop a framework for the integration of marketing's judgmental forecasts with traditional quantitative forecasting methods. Four integration methodologies are presented and evaluated relative to their appropriateness in combining forecasts within an organizational context. Our assessment considers human factors such as ownership and the location of final forecast generation within the organization. Although each methodology has its strengths and weaknesses, not every methodology is appropriate for every organizational context.
Abstract
This chapter analyzes the properties of an alternative least-squares based estimator for linear panel data models with general predetermined regressors. This approach uses backward means of regressors to approximate individual specific fixed effects (FE). The author analyzes sufficient conditions for this estimator to be asymptotically efficient, and argues that, in comparison with the FE estimator, the use of backward means leads to a non-trivial bias-variance tradeoff. The theoretical analysis is complemented with an extensive Monte Carlo study, which finds that some of the currently available results for the restricted AR(1) model cannot be easily generalized and should be extrapolated with caution.
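The backward-mean construction itself is straightforward to sketch (synthetic panel data; the estimator's regression step and the efficiency conditions are the chapter's and are not reproduced here): for each unit i and period t, average the regressor over periods 1 through t.

```python
import numpy as np

# Synthetic balanced panel: N units observed over T periods, one regressor
rng = np.random.default_rng(2)
N, T = 4, 6
x = rng.normal(size=(N, T))

# Backward mean: for each unit i and period t, the average of
# x[i, 0], ..., x[i, t]. The chapter uses such means in place of
# estimated unit-specific fixed effects.
backward_mean = np.cumsum(x, axis=1) / np.arange(1, T + 1)

print(backward_mean.shape)  # one backward mean per (unit, period) cell
```

At t = 1 the backward mean equals the regressor itself, and at t = T it equals the full within-unit average, which is where the bias-variance tradeoff against the standard FE transformation comes from.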
Francieli Tonet Maciel and Ana Maria Hermeto C. Oliveira
Abstract
Purpose
The purpose of this paper is to examine the effects of changes in the relative composition and in the segmentation between formal and informal labour on earnings differentials among women over the last decade in Brazil.
Design/methodology/approach
The authors follow Machado and Mata's method to decompose the changes along the earnings distribution, with correction for sample selection, using microdata from the Demographic Censuses of 2000 and 2010. Informal labour was divided into informal salaried labour and self-employment, and each group was compared with formal labour separately.
Findings
The results indicate, in both cases, an increase in earnings differentials at the bottom of the earnings distribution due to segmentation, suggesting that the returns to formal labour grew relative to informal labour during the period. On the other hand, earnings differentials decrease as one moves up the earnings distribution due to the composition effect, which is stronger at the top of the distribution than at the bottom. Furthermore, there are compensating differentials for self-employed women above the 30th quantile, which contributed to reducing the inequality between this group and formal workers.
Originality/value
The paper contributes to a better understanding of the changes taking place in female labour, shedding some light on how they affect different points along the earnings distribution. Furthermore, the adopted approach proposes a new application of sample-selection correction in the context of quantile regression, employing a multinomial logit and using the Demographic Census data.
Abstract
Purpose
Current publication practices in the scholarly (International) Business and Management community are overwhelmingly anti-Popperian, which fundamentally frustrates the production of scientific progress. This is the result of at least five related biases: the verification, novelty, normal science, evidence, and market biases. As a result, no one is really interested in replicating anything. In this essay, the author argues at length what he believes is wrong, why that is so, and what we might do about it. The paper aims to discuss these issues.
Design/methodology/approach
This is an essay, combining a literature review with polemic argumentation.
Findings
Only a tiny fraction of published studies involve a replication effort. Moreover, journal authors, editors, reviewers and readers are not interested in seeing nulls and negatives in print. This replication crisis implies that Popper's critical falsification principle is actually thrown into the scientific community's dustbin. Behind the façade of all these so-called new discoveries, false positives abound, as do questionable research practices meant to produce all these allegedly cutting-edge and groundbreaking significant findings. If this dismal state of affairs does not change for the good, (International) Business and Management research will end up in a deadlock.
Research limitations/implications
A radical cultural change in the scientific community, including (International) Business and Management, is badly needed. It should be in the community’s DNA to engage in the quest for the “truth” – nothing more, nothing less. Such a change must involve all stakeholders: scholars, editors, reviewers, and students, but also funding agencies, research institutes, university presidents, faculty deans, department chairs, journalists, policymakers, and publishers. In the words of Ioannidis (2012, p. 647): “Safeguarding scientific principles is not something to be done once and for all. It is a challenge that needs to be met successfully on a daily basis both by single scientists and the whole scientific establishment.”
Practical implications
Publication practices have to change radically. For instance, editorial policies should dispose of their current overly dominant pro-novelty and pro-positives biases, and explicitly encourage the publication of replication studies, including failed and unsuccessful ones that report null and negative findings.
Originality/value
This is an explicit plea to change the way the scientific research community operates, offering a series of concrete recommendations on what to do before it is too late.
Mateus Canniatti Ponchio, Nelson Lerner Barth and Felipe Zambaldi
Kamil Krasuski, Janusz Cwiklak and Marek Grzegorzewski
Abstract
Purpose
This paper aims to present the problem of integrating global positioning system (GPS) and global navigation satellite system (GLONASS) data for aircraft position determination.
Design/methodology/approach
The aircraft coordinates were obtained from GPS and GLONASS code observations using the single point positioning (SPP) method. The numerical computations were executed in the aircraft positioning software (APS) package. The observation equations of the SPP method were solved using least-squares estimation in a stochastic process. In the research experiment, raw global navigation satellite system data from an onboard Topcon HiperPro receiver were applied.
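The iterated least-squares core of the SPP method can be sketched as follows (the satellite geometry, pseudoranges and clock bias below are synthetic; the APS package and the real Topcon HiperPro data are not reproduced): linearize the pseudorange equations around a current position guess and solve for the position and receiver-clock update.

```python
import numpy as np

# Synthetic satellite positions (metres, roughly GNSS-like altitudes)
sats = np.array([[15600e3,   7540e3, 20140e3],
                 [18760e3,   2750e3, 18610e3],
                 [17610e3,  14630e3, 13480e3],
                 [19170e3,    610e3, 18390e3],
                 [17800e3, -11000e3, 15000e3]])
truth = np.array([6371e3, 0.0, 0.0])      # receiver on the Earth's surface
clock = 2.0e-5 * 299792458.0              # invented clock bias, in metres
pseudo = np.linalg.norm(sats - truth, axis=1) + clock  # noise-free pseudoranges

# Iterated least squares for [x, y, z, clock bias], starting from the
# Earth's centre: linearize the pseudorange model and solve for the update.
est = np.zeros(4)
for _ in range(10):
    rho = np.linalg.norm(sats - est[:3], axis=1)
    H = np.column_stack([(est[:3] - sats) / rho[:, None], np.ones(len(sats))])
    residual = pseudo - (rho + est[3])
    est += np.linalg.lstsq(H, residual, rcond=None)[0]

print("position error (m):", np.round(est[:3] - truth, 6))
print("clock error (m):  ", round(est[3] - clock, 6))
```

With noise-free synthetic ranges the iteration recovers the position and clock bias essentially exactly; real code observations add metre-level noise, which is where accuracy figures like those reported below come from.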
Findings
The mean errors of the aircraft position from APS were under 3 m. In addition, the accuracy of aircraft positioning was better than 6 m. The integrity term for the horizontal protection level and vertical protection level parameters in the flight test was below 16 m.
Research limitations/implications
The paper presents only the application of GPS/GLONASS observations in aviation, without satellite data from other navigation systems.
Practical implications
The presented research method can be used in an aircraft-based augmentation system in Polish aviation.
Social implications
The paper is addressed to persons who work in aviation and air transport.
Originality/value
The paper presents the SPP method as a satellite technique for the recovery of an aircraft position in an aviation test.