Search results

1 – 10 of over 6000
Book part
Publication date: 10 April 2019

Steven F. Lehrer and Louis-Pierre Lepage

Abstract

Prior analyses of racial bias in New York City's Stop-and-Frisk program implicitly assumed that the potential bias of police officers did not vary by crime type and that their decision of which type of crime to report as the basis for a stop did not exhibit any bias. In this paper, we first extend the hit rates model to allow for crime-type heterogeneity in racial bias and in police officers' decisions about the reported crime type. Second, we reevaluate the program while accounting for heterogeneity in bias across crime types and for the sample selection that may arise from conditioning on crime type. We present evidence that differences in biases across crime types are substantial, and specification tests support incorporating corrections for selective crime reporting. However, the main findings on racial bias do not differ sharply once this choice-based selection is accounted for.

Details

The Econometrics of Complex Survey Data
Type: Book
ISBN: 978-1-78756-726-9

Book part
Publication date: 14 November 2011

Joanne Utley

Abstract

This chapter examines the use of mathematical programming to remove systematic bias from demand forecasts. A debiasing methodology is developed and applied to demand data from an actual service operation. The accuracy of the proposed methodology is compared to the accuracy of a well-known approach that utilizes ordinary least squares regression. Results indicate that the proposed method outperforms the least squares approach.
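The least squares benchmark mentioned here — regressing actuals on forecasts and using the fitted line to correct future forecasts — can be sketched as follows. This illustrates only the well-known OLS approach the chapter compares against, not the chapter's mathematical-programming method; the data and function name are invented for illustration.

```python
import numpy as np

def ols_debias(forecasts, actuals):
    """Fit actual = a + b * forecast by ordinary least squares and
    return a function that adjusts future forecasts with the fitted
    line.  (Hypothetical helper; the regression benchmark, not the
    chapter's mathematical-programming method.)"""
    X = np.column_stack([np.ones_like(forecasts), forecasts])
    a, b = np.linalg.lstsq(X, actuals, rcond=None)[0]
    return lambda f: a + b * np.asarray(f)

# Simulated forecasts that systematically overshoot demand by ~10 units
rng = np.random.default_rng(0)
actual = rng.uniform(80, 120, size=50)
forecast = actual + 10 + rng.normal(0, 2, size=50)

adjust = ols_debias(forecast, actual)
print(np.mean(forecast - actual))          # systematic bias before: about 10
print(np.mean(adjust(forecast) - actual))  # bias after correction: about 0
```

Because the regression includes an intercept, the corrected in-sample forecasts have exactly zero mean error; the open question the chapter addresses is how such corrections perform out of sample.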

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-0-85724-959-3

Article
Publication date: 2 July 2020

Ingo Hoffmann and Christoph J. Börner

Abstract

Purpose

This paper aims to evaluate the accuracy of a quantile estimate. Especially when high quantiles are estimated from only a few data points, the quantile estimator is itself a random variable with its own distribution. This distribution is first determined, and it is then shown how the accuracy of the quantile estimate can be assessed in practice.

Design/methodology/approach

The paper considers the situation in which the parent distribution of the data is unknown, the tail is modeled with the generalized Pareto distribution, and the quantile is finally estimated using the fitted tail model. Based on well-known theoretical preliminary studies, the finite-sample distribution of the quantile estimator is determined and the accuracy of the estimator is quantified.
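The peaks-over-threshold approach described here — fitting a generalized Pareto distribution to the exceedances over a high threshold and reading the quantile off the fitted tail — can be sketched roughly as follows. This is a generic illustration with simulated fat-tailed data, not the paper's estimator or its finite-sample analysis.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
data = rng.standard_t(df=4, size=2000)   # fat-tailed sample (illustrative)

# Peaks-over-threshold: model exceedances over a high threshold u with a GPD
u = np.quantile(data, 0.95)
exceed = data[data > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0)   # shape xi, scale beta

# Tail quantile q_p: P(X > u) is estimated empirically, the conditional
# exceedance distribution comes from the fitted GPD
p = 0.999
p_exceed = np.mean(data > u)
q_p = u + genpareto.ppf(1 - (1 - p) / p_exceed, xi, loc=0, scale=beta)
print(q_p)
```

Refitting on bootstrap resamples of `data` would show the point the paper makes: with a fat tail and few exceedances, the estimate `q_p` itself scatters widely.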

Findings

An algebraic representation of the finite-sample distribution of the quantile estimator was found in the general case. With this distribution, all statistical quantities can be determined. In particular, the expected value, the variance and the bias of the quantile estimator are calculated to evaluate the accuracy of the estimation process. Scaling laws could be derived, and it turns out that with a fat tail and few data, the bias and the variance increase massively.

Research limitations/implications

Currently, the research is limited to the form of the tail, which is interesting for the financial sector. Future research might consider problems where the tail has a finite support or the tail is over-fat.

Practical implications

The ability to calculate error bands and the bias of the quantile estimator is important for financial institutions as well as for regulators and auditors.

Originality/value

Understanding the quantile estimator as a random variable and analyzing and evaluating it based on its distribution gives researchers, regulators, auditors and practitioners new opportunities to assess risk.

Details

The Journal of Risk Finance, vol. 21 no. 3
Type: Research Article
ISSN: 1526-5943


Details

Review of Marketing Research
Type: Book
ISBN: 978-0-85724-727-8

Article
Publication date: 1 May 2004

Nada R. Sanders and Larry P. Ritzman

Abstract

Accurate forecasting has become a challenge for companies operating in today's business environment, characterized by high uncertainty and short response times. Rapid technological innovations and e‐commerce have created an environment where historical data are often of limited value in predicting the future. In business organizations, the marketing function typically generates sales forecasts based on judgmental methods that rely heavily on subjective assessments and “soft” information, while operations rely more on quantitative data. Forecast generation rarely involves the pooling of information from these two functions. Increasingly, successful forecasting warrants the use of composite methodologies that incorporate a range of information from traditional quantitative computations usually used by operations, to marketing's judgmental assessments of markets. The purpose of this paper is to develop a framework for the integration of marketing's judgmental forecasts with traditional quantitative forecasting methods. Four integration methodologies are presented and evaluated relative to their appropriateness in combining forecasts within an organizational context. Our assessment considers human factors such as ownership, and the location of final forecast generation within the organization. Although each methodology has its strengths and weaknesses, not every methodology is appropriate for every organizational context.
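The simplest mechanical way to pool the two functions' forecasts is a weighted average. This is only an illustrative baseline — the paper's four integration methodologies concern organizational context, ownership, and where the final forecast is generated, not just arithmetic — and all numbers below are hypothetical.

```python
import numpy as np

def combine(judgmental, quantitative, w=0.5):
    """Weighted average of a judgmental and a quantitative forecast.
    Equal weighting (w=0.5) is a common, robust default in the
    forecast-combination literature; this is a mechanical baseline,
    not one of the paper's four integration methodologies."""
    return w * np.asarray(judgmental) + (1 - w) * np.asarray(quantitative)

marketing = [105, 98, 110]      # judgmental sales forecasts (hypothetical)
operations = [101, 96, 104]     # model-based forecasts (hypothetical)
print(combine(marketing, operations))  # → [103.  97. 107.]
```

The weight `w` is where the organizational questions enter: who sets it, and whether marketing adjusts the combined number afterward, is exactly the kind of ownership issue the paper's framework addresses.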

Details

International Journal of Operations & Production Management, vol. 24 no. 5
Type: Research Article
ISSN: 0144-3577

Article
Publication date: 9 April 2018

Francieli Tonet Maciel and Ana Maria Hermeto C. Oliveira

Abstract

Purpose

The purpose of this paper is to examine the effects of changes in the relative composition and in the segmentation between formal and informal labour on earnings differentials among women over the last decade in Brazil.

Design/methodology/approach

The authors follow Machado and Mata's method to decompose the changes along the earnings distribution, with correction for sample selection, using microdata from the Demographic Censuses of 2000 and 2010. Informal labour was divided into informal salaried labour and self-employment, and each group was compared with formal labour separately.
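The building block of a Machado-Mata-style decomposition is quantile regression estimated at many points of the conditional earnings distribution. A minimal sketch on simulated data, minimizing the pinball (check) loss directly — not the authors' implementation, which additionally applies the multinomial-logit selection correction:

```python
import numpy as np
from scipy.optimize import minimize

def quantile_reg(x, y, q):
    """Linear quantile regression by direct minimization of the
    pinball (check) loss.  A minimal sketch; production code would
    use a dedicated linear-programming solver."""
    def pinball(beta):
        r = y - (beta[0] + beta[1] * x)
        return np.mean(np.where(r >= 0, q * r, (q - 1) * r))
    return minimize(pinball, x0=[0.0, 0.0], method="Nelder-Mead").x

# Hypothetical log-earnings vs years of schooling; true slope 0.08
rng = np.random.default_rng(2)
educ = rng.uniform(4, 16, size=400)
wage = 1.0 + 0.08 * educ + rng.normal(0, 0.2, size=400)
b0, b1 = quantile_reg(educ, wage, q=0.5)   # median regression
```

Repeating the fit over a grid of quantiles `q`, then simulating earnings under counterfactual coefficient or covariate distributions, gives the decomposition of distributional change the paper performs.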

Findings

The results indicate that, in both cases, earnings differentials increased at the bottom of the earnings distribution due to segmentation, suggesting that the returns to formal labour grew relative to informal labour during the period. On the other hand, earnings differentials decrease as one moves up the earnings distribution due to the composition effect, which is stronger at the top of the distribution than at the bottom. Furthermore, there are compensating differentials for self-employed women above the 30th quantile, which helped to reduce inequality between this group and formal workers.

Originality/value

The paper contributes to a better understanding of the changes taking place in female labour, shedding some light on how they affect different points along the earnings distribution. Furthermore, the adopted approach proposes a new application of sample-selection correction in the context of quantile regression, employing a multinomial logit and using the Demographic Census data.

Details

International Journal of Social Economics, vol. 45 no. 4
Type: Research Article
ISSN: 0306-8293

Article
Publication date: 1 August 2016

Arjen van Witteloostuijn

Abstract

Purpose

Current publication practices in the scholarly (International) Business and Management community are overwhelmingly anti-Popperian, which fundamentally frustrates the production of scientific progress. This is the result of at least five related biases: the verification, novelty, normal science, evidence, and market biases. As a result, no one is really interested in replicating anything. In this essay, the author extensively argues what he believes is wrong, why that is so, and what we might do about this. The paper aims to discuss these issues.

Design/methodology/approach

This is an essay, combining a literature review with polemic argumentation.

Findings

Only a tiny fraction of published studies involve a replication effort. Moreover, journal authors, editors, reviewers and readers are not interested in seeing nulls and negatives in print. This replication crisis implies that Popper’s critical falsification principle is actually thrown into the scientific community’s dustbin. Behind the façade of all these so-called new discoveries, false positives abound, as do questionable research practices meant to produce all these allegedly cutting-edge and groundbreaking significant findings. If this dismal state of affairs does not change for the good, (International) Business and Management research will end up in a deadlock.

Research limitations/implications

A radical cultural change in the scientific community, including (International) Business and Management, is badly needed. It should be in the community’s DNA to engage in the quest for the “truth” – nothing more, nothing less. Such a change must involve all stakeholders: scholars, editors, reviewers, and students, but also funding agencies, research institutes, university presidents, faculty deans, department chairs, journalists, policymakers, and publishers. In the words of Ioannidis (2012, p. 647): “Safeguarding scientific principles is not something to be done once and for all. It is a challenge that needs to be met successfully on a daily basis both by single scientists and the whole scientific establishment.”

Practical implications

Publication practices have to change radically. For instance, editorial policies should dispose of their current overly dominant pro-novelty and pro-positives biases, and explicitly encourage the publication of replication studies, including failed and unsuccessful ones that report null and negative findings.

Originality/value

This is an explicit plea to change the way the scientific research community operates, offering a series of concrete recommendations on what to do before it is too late.

Details

Cross Cultural & Strategic Management, vol. 23 no. 3
Type: Research Article
ISSN: 2059-5794

Article
Publication date: 8 November 2019

Kamil Krasuski, Janusz Cwiklak and Marek Grzegorzewski

Abstract

Purpose

This paper aims to present the problem of the integration of the global positioning system (GPS)/global navigation satellite system (GLONASS) data for the processing of aircraft position determination.

Design/methodology/approach

The aircraft coordinates were obtained from GPS and GLONASS code observations using the single point positioning (SPP) method. The numerical computations were executed in the aircraft positioning software (APS) package. The observation equations of the SPP method were solved using least squares estimation in a stochastic process. In the research experiment, raw global navigation satellite system data from the Topcon HiperPro onboard receiver were used.
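The core of an SPP solution — least squares estimation of the receiver position and clock bias from code pseudoranges — can be sketched as a textbook Gauss-Newton iteration. This is a generic illustration with hypothetical satellite geometry and noiseless measurements, not the APS package.

```python
import numpy as np

def spp_solve(sat_pos, pseudoranges, x0=np.zeros(4), iters=10):
    """Gauss-Newton least-squares solution of the SPP observation
    equations: rho_i = |sat_i - pos| + c*dt.
    State vector: (x, y, z, c*dt).  A textbook sketch, not APS."""
    x = x0.astype(float)
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        pred = ranges + x[3]
        # Jacobian: negated unit line-of-sight vectors, plus clock column
        H = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(pseudoranges), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - pred, rcond=None)
        x += dx
    return x

# Hypothetical satellites ~20,000 km away; true receiver state below
sats = np.array([[2.0e7, 0.0, 1.0e7],
                 [0.0, 2.0e7, 1.2e7],
                 [-1.5e7, 1.0e7, 1.5e7],
                 [1.0e7, -1.8e7, 1.1e7],
                 [5.0e6, 5.0e6, 2.2e7]])
true_state = np.array([1000.0, 2000.0, 3000.0, 10.0])  # position (m), clock bias (m)
rho = np.linalg.norm(sats - true_state[:3], axis=1) + true_state[3]
est = spp_solve(sats, rho)
```

With at least four satellites the four unknowns are identifiable; real processing additionally weights the observations stochastically and corrects for atmospheric and clock errors, which is where the paper's accuracy and integrity figures come from.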

Findings

The mean errors of the aircraft position from APS were under 3 m, and the accuracy of aircraft positioning was better than 6 m. The integrity terms for the horizontal protection level and vertical protection level parameters in the flight test were below 16 m.

Research limitations/implications

The paper presents only the application of GPS/GLONASS observations in aviation, without satellite data from other navigation systems.

Practical implications

The presented research method can be used in an aircraft-based augmentation system in Polish aviation.

Social implications

The paper is addressed to persons who work in aviation and air transport.

Originality/value

The paper presents the SPP method as a satellite technique for the recovery of an aircraft position in an aviation test.

Details

Aircraft Engineering and Aerospace Technology, vol. 92 no. 2
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 4 February 2021

Dereje Mekonnen Bekele, Melkamu Teshome Ayana, Abdella Kemal Mohammed, Tarun Kumar Lohani and Mohammad Shabaz

Abstract

Purpose

To assess the impacts of climate change on streamflow and to evaluate reservoir performance, reliability, resilience and vulnerability (RRV) indices are considered. Precipitation, temperature (Tmax, Tmin), relative humidity and solar radiation are the hydrological and meteorological data used. Climate scenarios RCP2.6, RCP4.5 and RCP8.5 were evaluated for the base period 1976–2005 and for the future periods 2021–2050 and 2051–2080.

Design/methodology/approach

The Hydrologic Engineering Center's Hydrologic Modeling System (HEC-HMS) was used to simulate the current and future inflow volume into the reservoir. The model performance was 0.76 Nash-Sutcliffe efficiency (NSE), 0.78 R2 and −3.17 D, while during calibration the results were 0.8 NSE, 0.82 R2 and 2.1 D. The projected climate scenarios show an increasing trend for both maximum and minimum temperature, whereas a decreasing trend was documented for precipitation. The average time-based reliability of the reservoirs was less than 50% for the without-reservoir condition and greater than 50% for the other conditions, while volumetric reliability and resilience vary between 50% and 100% for all conditions. The vulnerability results indicate that the reservoirs may face flow shortages ranging from 5.7% to 33.8%.
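RRV indices of the kind used here can be computed from a simulated supply series against demand. The definitions below follow the common Hashimoto-style formulation and the numbers are invented, so this is only a sketch of the idea, not the paper's exact computation.

```python
import numpy as np

def rrv(supply, demand):
    """Reliability, resilience and vulnerability of a water system
    (Hashimoto-style definitions; a simplified sketch).
    - reliability: fraction of periods in which demand is met
    - resilience: probability of recovering from failure, estimated as
      transitions out of failure divided by the number of failure periods
    - vulnerability: mean shortfall during failures, as a fraction of demand
    """
    supply, demand = np.asarray(supply, float), np.asarray(demand, float)
    fail = supply < demand
    reliability = 1 - fail.mean()
    if fail.any():
        recoveries = np.sum(fail[:-1] & ~fail[1:])
        resilience = recoveries / fail.sum()
        vulnerability = np.mean((demand - supply)[fail] / demand[fail])
    else:
        resilience, vulnerability = 1.0, 0.0
    return reliability, resilience, vulnerability

# Hypothetical monthly reservoir releases against a constant demand of 100
supply = [110, 95, 90, 105, 120, 80, 100, 100, 70, 115]
rel, res, vul = rrv(supply, [100] * 10)
print(rel, res, vul)   # 0.6, 0.75, 0.1625
```

In a climate-impact study like this one, `supply` would be the simulated reservoir release under each RCP scenario, so the indices summarize how performance degrades as inflows change.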

Findings

Evaluating reservoir simulation and hydropower generation for different climate scenarios with the HEC-ResSim model, the energy generated by the upper dam ranges from 349.4 MWh to 331.2 MWh and from 4,045.82 MWh to 3,946.74 MWh for the short- and long-term future scenarios, respectively. Under the RCPs, Tmax and Tmin keep increasing, whereas precipitation and inflow to the reservoir decrease owing to an increase in evapotranspiration. Power production varies accordingly under these diverse climatic conditions.

Originality/value

This paper is original and all the references are properly cited.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284

Book part
Publication date: 26 October 2017

Matthew Lindsey and Robert Pavur

Abstract

Control charts are designed to be effective in detecting a shift in the distribution of a process. Typically, these charts assume that the data for these processes follow an approximately normal distribution or some known distribution. However, if a data-generating process has a large proportion of zeros, that is, the data is intermittent, then traditional control charts may not adequately monitor these processes. The purpose of this study is to examine proposed control chart methods designed for monitoring a process with intermittent data to determine if they have a sufficiently small percentage of false out-of-control signals. Forecasting techniques for slow-moving/intermittent product demand have been extensively explored as intermittent data is common to operational management applications (Syntetos & Boylan, 2001, 2005, 2011; Willemain, Smart, & Schwarz, 2004). Extensions and modifications of traditional forecasting models have been proposed to model intermittent or slow-moving demand, including the associated trends, correlated demand, seasonality and other characteristics (Altay, Litteral, & Rudisill, 2012). Croston’s (1972) method and its adaptations have been among the principal procedures used in these applications. This paper proposes adapting Croston’s methodology to design control charts, similar to Exponentially Weighted Moving Average (EWMA) control charts, to be effective in monitoring processes with intermittent data. A simulation study is conducted to assess the performance of these proposed control charts by evaluating their Average Run Lengths (ARLs), or equivalently, their percent of false positive signals.
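Croston's (1972) method, the basis for the proposed charts, smooths the nonzero demand sizes and the intervals between them separately and forecasts their ratio. A plain sketch of the classic method on an invented series — not the chapter's EWMA-style control-chart adaptation:

```python
import numpy as np

def croston(demand, alpha=0.1):
    """Croston's (1972) forecast for intermittent demand: exponentially
    smooth the nonzero demand size z and the interval p between nonzero
    demands; the per-period demand forecast is z / p.  Initialization of
    z and p at the first nonzero demand is one common convention."""
    z = p = None          # smoothed demand size and smoothed interval
    q = 1                 # periods since the last nonzero demand
    forecasts = []
    for d in demand:
        if d > 0:
            if z is None:
                z, p = d, q                 # initialize at first demand
            else:
                z = z + alpha * (d - z)     # update size estimate
                p = p + alpha * (q - p)     # update interval estimate
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return np.array(forecasts)

# Intermittent series: mostly zeros with occasional demand
demand = [0, 0, 4, 0, 0, 0, 5, 0, 6, 0]
f = croston(demand)
print(f)
```

A control chart in the spirit of the chapter would monitor the smoothed size and interval estimates (or their ratio) against limits calibrated, as in the chapter's simulation study, to keep the rate of false out-of-control signals acceptably low.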

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78743-069-3
