Search results

1 – 10 of over 9000
Book part
Publication date: 20 June 2003

Mark C. Berger, Dan A. Black, Amitabh Chandra and Frank A. Scott

In the spirit of Polachek (1975) and the later work of Becker (1985) on the role of specialization within the family, we examine the relationship between fringe benefits and the…

Abstract

In the spirit of Polachek (1975) and the later work of Becker (1985) on the role of specialization within the family, we examine the relationship between fringe benefits and the division of labor within a married household. The provision of fringe benefits is complicated by their non-additive nature within the household, as well as by IRS regulations stipulating that they be offered in a non-discriminatory manner in order to maintain their tax-exempt status. We model family decisions within a framework in which one spouse specializes in childcare and as a result experiences a reduction in market productive capacity. Our model predicts that the forces toward specialization become stronger as the number of children increases, so that the spouse specializing in childcare will have some combination of lower wages, hours worked, and fringe benefits. We demonstrate that, to the extent that labor markets are incomplete, the family is less likely to obtain health insurance from the employer of the spouse who specializes in childcare. Using data from the April 1993 CPS, we find evidence consistent with our model.

Details

Worker Well-Being and Public Policy
Type: Book
ISBN: 978-1-84950-213-9

Article
Publication date: 2 May 2017

Yugu Xiao, Ke Wang and Lysa Porth

While crop insurance ratemaking has been studied for many decades, it is still faced with many challenges. Crop insurance premium rates (PRs) are traditionally determined only by…

Abstract

Purpose

While crop insurance ratemaking has been studied for many decades, it still faces many challenges. Crop insurance premium rates (PRs) are traditionally determined only by point estimation, and this approach may lead to uncertainty because it is sensitive to the underwriter’s assumptions about the trend and yield distribution, as well as to issues such as data scarcity and credibility. Thus, the purpose of this paper is to obtain an interval estimate for the PR, which can provide additional information about the accuracy of the point estimate.

Design/methodology/approach

A bootstrap method based on the loss cost ratio ratemaking approach is proposed. Using Monte Carlo experiments, the performance of this method is tested against several popular methods. To measure the efficiency of the confidence interval (CI) estimators, the actual coverage probabilities and the average widths of these intervals are calculated.
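As a rough illustration of the general idea (not the authors' exact procedure), a percentile-bootstrap confidence interval for a premium rate based on historical loss cost ratios can be sketched as follows; the data and the 90% level are hypothetical:

```python
import numpy as np

def bootstrap_premium_rate_ci(loss_cost_ratios, n_boot=5000, alpha=0.10, seed=0):
    """Percentile-bootstrap CI for a premium rate taken as the mean of
    historical loss cost ratios (a simplification of LCR ratemaking)."""
    rng = np.random.default_rng(seed)
    lcr = np.asarray(loss_cost_ratios, dtype=float)
    point = lcr.mean()  # point-estimate premium rate
    # Resample years with replacement and recompute the rate each time
    boot = rng.choice(lcr, size=(n_boot, lcr.size), replace=True).mean(axis=1)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return point, (lo, hi)

# Hypothetical county-level loss cost ratios (indemnity / liability) by year
lcr_history = [0.04, 0.11, 0.02, 0.07, 0.15, 0.03, 0.05, 0.09, 0.06, 0.12]
rate, (lo, hi) = bootstrap_premium_rate_ci(lcr_history)
```

The width of the resulting interval is exactly the kind of accuracy signal the paper argues an underwriter can read off a CI.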

Findings

The proposed method is shown to be as efficient as the non-parametric kernel method; it is flexible and robust, and the width of the CI gives underwriters insight into the uncertainty of the estimated rate.

Originality/value

Comprehensive comparisons are conducted to show the advantage and the efficiency of the proposed method. In addition, a detailed empirical example is given to show how the CIs can be used to support ratemaking.

Details

China Agricultural Economic Review, vol. 9 no. 2
Type: Research Article
ISSN: 1756-137X

Article
Publication date: 29 June 2010

Jeh‐Nan Pan, Tzu‐Chun Kuo and Abraham Bretholt

The purpose of this research is to develop a new key performance index (KPI) and its interval estimation for measuring the service quality from customers' perceptions, since most…

Abstract

Purpose

The purpose of this research is to develop a new key performance index (KPI) and its interval estimation for measuring service quality from customers' perceptions, since most service quality data follow non‐normal distributions.

Design/methodology/approach

Based on the non‐normal process capability indices used in manufacturing industries, a new KPI suitable for measuring service quality is developed using Parasuraman's fifth gap, between customers' expectations and perceptions. Moreover, the confidence interval of the proposed KPI is established using the bootstrapping method.

Findings

The quantitative method for measuring the service quality through the new KPI and its interval estimation is illustrated by a realistic example. The results show that the new KPI allows practising managers to evaluate the actual service quality level delivered within each of the five SERVQUAL categories and to prioritize possible improvement projects from customers' perspectives. Moreover, compared with the traditional method of sample size determination, substantial cost savings can be expected by using the suggested sample sizes.

Practical implications

The paper presents a structured approach of opportunity assessment for improving service quality from a strategic alignment perspective, particularly in the five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. The new approach provides practising managers with a decision‐making tool for measuring service quality, detecting problematic situations and selecting the most urgent improvement project. Once existing service problems are identified and improvement projects prioritized, continuous improvement can proceed in any service industry.

Originality/value

Given a managerial target on any desired service level as well as customers' perceptions and expectations, the new KPI could be applied to any non‐normal service quality and other survey data. Thus, corporate performance in terms of key factors of business success can also be measured by the new KPI, which may help manage complexity and enhance sustainability in service industries.

Details

Industrial Management & Data Systems, vol. 110 no. 6
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 February 2006

Ahmed Hurairah, Noor Akma Ibrahim, Isa Bin Daud and Kassim Haron

Exact confidence interval estimation for the new extreme value model is often impractical. This paper seeks to evaluate the accuracy of approximate confidence intervals for the…

Abstract

Purpose

Exact confidence interval estimation for the new extreme value model is often impractical. This paper seeks to evaluate the accuracy of approximate confidence intervals for the two‐parameter new extreme value model.

Design/methodology/approach

The confidence intervals of the parameters of the new model based on the likelihood ratio, Wald and Rao statistics are evaluated and compared through a simulation study. The criteria used in evaluating the confidence intervals are the attainment of the nominal error probability and the symmetry of the lower and upper error probabilities.
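The coverage criterion can be illustrated with a simple stand-in model. The sketch below uses an exponential rate rather than the paper's new extreme value model, and estimates the Monte Carlo coverage of nominal 95% Wald and likelihood-ratio intervals; all settings are hypothetical:

```python
import numpy as np

def coverage_sim(lam=2.0, n=30, reps=4000, seed=1):
    """Monte Carlo coverage of nominal 95% Wald and likelihood-ratio
    intervals for the rate of an exponential model (a simple skewed
    stand-in, not the paper's new extreme value model)."""
    rng = np.random.default_rng(seed)
    z2, chi2 = 1.959964**2, 3.841459  # z_{0.975}^2 equals chi2_{1,0.95}
    wald_hits = lr_hits = 0
    for _ in range(reps):
        x = rng.exponential(1.0 / lam, size=n)
        lam_hat = n / x.sum()  # MLE of the rate
        # Wald: covered iff (lam_hat - lam)^2 / (lam_hat^2 / n) <= z^2
        wald_hits += (lam_hat - lam) ** 2 * n / lam_hat**2 <= z2
        # LR: covered iff the deviance at the true rate is below chi2_{1,0.95}
        dev = 2 * n * (np.log(lam_hat / lam) + lam / lam_hat - 1.0)
        lr_hits += dev <= chi2
    return wald_hits / reps, lr_hits / reps

wald_cov, lr_cov = coverage_sim()
```

The attained coverage of each interval type can then be compared against the nominal 0.95, which is exactly the evaluation the abstract describes.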

Findings

This study substantiates the merits of the likelihood ratio, the Wald and the Rao statistics. The results indicate that the likelihood ratio‐based intervals perform much better than the Wald and Rao intervals.

Originality/value

Exact interval estimates for the new model are difficult to obtain. Consequently, large sample intervals based on the asymptotic maximum likelihood estimators have gained widespread use. Intervals based on inverting likelihood ratio, Rao and Wald statistics are rarely used in commercial packages. This paper shows that the likelihood ratio intervals are superior to intervals based on the Wald and the Rao statistics.

Details

Engineering Computations, vol. 23 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 29 May 2020

Farnad Nasirzadeh, H.M. Dipu Kabir, Mahmood Akbari, Abbas Khosravi, Saeid Nahavandi and David G. Carmichael

This study aims to propose the adoption of artificial neural network (ANN)-based prediction intervals (PIs) to give a more reliable prediction of labour productivity using…

Abstract

Purpose

This study aims to propose the adoption of artificial neural network (ANN)-based prediction intervals (PIs) to give a more reliable prediction of labour productivity using historical data.

Design/methodology/approach

Using the proposed PI method, various sources of uncertainty affecting predictions can be accounted for, and a PI is produced instead of a less reliable single-point estimate. The proposed PI consists of a lower and an upper bound within which the realization of the predicted variable, namely labour productivity, is anticipated to fall with a defined probability, expressed as a confidence level (CL).
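A minimal sketch of this lower/upper-bound construction, assuming a split-conformal style interval built from held-out calibration residuals rather than the authors' ANN-specific method; the residuals and forecast below are hypothetical:

```python
import numpy as np

def prediction_interval(residuals, point_forecast, cl=0.90):
    """Form a prediction interval around a point forecast from held-out
    residuals (a split-conformal sketch; the paper trains ANNs, but any
    point forecaster's calibration residuals would do here)."""
    r = np.sort(np.abs(residuals))
    # Quantile of absolute residuals at the chosen confidence level
    k = int(np.ceil((len(r) + 1) * cl)) - 1
    half_width = r[min(k, len(r) - 1)]
    return point_forecast - half_width, point_forecast + half_width

# Hypothetical calibration residuals (observed minus predicted productivity)
resid = np.array([-0.8, 0.3, 1.1, -0.2, 0.6, -1.4, 0.9, 0.1, -0.5, 0.7])
low, up = prediction_interval(resid, point_forecast=5.0, cl=0.90)
```

The realized productivity is then expected to fall between `low` and `up` with roughly the stated confidence level.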

Findings

The proposed PI method is implemented on a case study project to predict labour productivity. The quality of the generated PIs for the labour productivity is investigated at three confidence levels. The results show that the proposed method can predict the value of labour productivity efficiently.

Practical implications

This study is the first attempt in construction management to undertake a shift from deterministic point predictions to interval forecasts to improve the reliability of predictions. The proposed PI method will help project managers obtain accurate and credible predictions of labour productivity using historical data. With a better understanding of future outcomes, project managers can adopt appropriate improvement strategies to enhance labour productivity before commencing a project.

Originality/value

Point predictions provided by traditional deterministic ANN-based forecasting methodologies may be unreliable due to the different sources of uncertainty affecting predictions. The current study proposes ANN-based PIs as an alternative and robust tool to give a more reliable prediction of labour productivity using historical data. Using the proposed method, various sources of uncertainty affecting the predictions are accounted for, and a PI is proposed instead of a less reliable single point estimate.

Details

Engineering, Construction and Architectural Management, vol. 27 no. 9
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 3 April 2017

David O. Baloye and Lobina Gertrude Palamuleni

The purpose of this paper is to map the cascade effects of emergencies on critical infrastructure in a fast-growing city of a developing country. The paper specifically seeks to…

Abstract

Purpose

The purpose of this paper is to map the cascade effects of emergencies on critical infrastructure in a fast-growing city of a developing country. The paper specifically seeks to refocus the attention of decision makers and emergency managers towards a more effective way of reducing risk and costs associated with contingencies.

Design/methodology/approach

The study was based on a 2D representation of the three initiating events of fire, flood and automobile crashes. Detailed analysis was undertaken of the effects on the critical infrastructure, based on the probability of occurrence, frequency, spatial extent and degree of damage for the emergencies studied. Subsequently, a cascade matrix was generated to analyse the level of interaction or interdependencies between the participating critical infrastructures in the study area. A model of the cascade effects under a typical emergency was also generated using a software model of network trace functions.

Findings

The results show that while different levels of probability of occurrence, frequency and extent of damage were observed for the evaluated critical infrastructure under different emergency events, damage to the electricity distribution components recorded the highest cascade effect for all emergency events.

Originality/value

This paper underlines the need to pay greater attention to protecting critical infrastructure in rapidly growing cities, especially in developing countries. Findings from this study in Abeokuta, Nigeria, underscore the need to expand the prevailing critical infrastructure protection beyond the current power and oil sectors in the national development plan. They also highlight the urgency of greater research attention to critical infrastructure inventories. More importantly, the results stress the need for concerted efforts towards proactive emergency management procedures, rather than maintaining the established “fire brigade, window dressing” approach to emergency management, at all levels of administration.

Details

Disaster Prevention and Management: An International Journal, vol. 26 no. 2
Type: Research Article
ISSN: 0965-3562

Article
Publication date: 16 August 2021

Jiwon Nam-Speers

The purpose of this study was to measure the bias in a binary option's effect estimate that appeared in the types of questions asked and in the placement changes of public service…

Abstract

Purpose

The purpose of this study was to measure the bias in a binary option's effect estimate that appeared in the types of questions asked and in the placement changes of public service users.

Design/methodology/approach

The author designed Monte Carlo simulations with the analytical strategy of latent trait theory, leveraging a probability of care-placement change. The author used the difference-in-differences (DID) method to estimate the effects of care settings.
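For the canonical two-group, two-period case, the DID estimator the abstract refers to can be sketched as follows; the group labels and outcome values are hypothetical:

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Two-group, two-period difference-in-differences estimator:
    (treated post - treated pre) - (control post - control pre).
    The control trend stands in for what would have happened to the
    treated group absent the placement change."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) - \
           (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

# Hypothetical outcome scores for children whose placement changed
# (treated) versus those who stayed in in-home care (control)
effect = did_estimate(
    y_treat_pre=[4.0, 5.0, 6.0], y_treat_post=[7.0, 8.0, 9.0],
    y_ctrl_pre=[4.0, 5.0, 6.0], y_ctrl_post=[5.0, 6.0, 7.0],
)
```

The study's point is that when placement is self-selected (here, via a caseworker's judgment), this estimator can be biased; the simulations quantify that bias.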

Findings

The author explained the extent of the discrepancy between the estimates and the true values of care service effects as they changed over time. The time trend of in-home care, representing the combined effect of in-home care, general maturity and other environmental factors, was estimated in a biased manner, while the bias in the estimate of the incremental effect of foster care could be negligible.

Research limitations/implications

This study was designed based on individual child-unit only. Therefore, higher-level units, such as care setting or cluster, county, and state, should be considered for the simulation model.

Social implications

This study contributed to illuminating an overlooked facet in causal inferences that embrace disproportionate selection biases that appear in categorical data scales in public management research.

Originality/value

To model the nuance of a disproportionate self-selection problem, the author constructed a scenario surrounding a caseworker's judgment of care placement in the child welfare system and investigated potential bias of the caseworker's discretion. The unfolding model has not been widely used in public management research, but it can be usefully leveraged for the estimation of a decision probability.

Details

International Journal of Public Sector Management, vol. 34 no. 6
Type: Research Article
ISSN: 0951-3558

Book part
Publication date: 12 July 2021

Shariffah Suhaila Syed Jamaludin

The objective of this study is to propose a functional framework for hydrological applications by treating flood hydrographs as functional data. Discrete flow data are transformed…

Abstract

The objective of this study is to propose a functional framework for hydrological applications by treating flood hydrographs as functional data. Discrete flow data are transformed into a smooth hydrograph curve, which can be analysed at any time interval. Functional data analysis treats the entire curve over time as a single observation. This chapter briefly discusses descriptive statistics, principal components and outliers in a functional framework. These methods are illustrated in a flood study at the Sungai Kelantan River Basin, Malaysia. The results showed that five main components accounted for almost 73.8% of the overall flow variance. Based on the factor scores, the hydrograph curves for the years 1988, 1993 and 2014 form a distinct cluster of their own, while the remaining years share a common pattern. Owing to their unusual shapes and magnitudes, the hydrograph curves of 1988 and 2014 are considered outliers. In conclusion, the functional framework is capable of representing a wide range of hydrographs and of extracting additional information from the hydrograph curve that cannot be captured using classical statistical methods.

Article
Publication date: 5 October 2012

Sangwon Park and Daniel R. Fesenmaier

The purpose of this study is to estimate the extent (mean and range) of non‐response bias in online travel advertising conversion studies for 24 destinations located throughout…

Abstract

Purpose

The purpose of this study is to estimate the extent (mean and range) of non‐response bias in online travel advertising conversion studies for 24 destinations located throughout the USA.

Design/methodology/approach

The method uses two weighting procedures (i.e. post‐stratification and propensity score weighting) to estimate the extent of non‐response bias by adjusting the estimates provided by respondents to more closely represent the total target sample.
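A minimal sketch of the post-stratification half of this weighting idea, with hypothetical strata shares and conversion outcomes (the propensity-score step is omitted):

```python
import numpy as np

def post_stratification_weights(strata, population_shares):
    """Weight each respondent so the sample's strata composition matches
    the target population's. Assumes every stratum in the population
    appears at least once in the sample."""
    strata = np.asarray(strata)
    n = len(strata)
    weights = np.empty(n, dtype=float)
    for s, pop_share in population_shares.items():
        mask = strata == s
        sample_share = mask.sum() / n
        weights[mask] = pop_share / sample_share  # up/down-weight stratum
    return weights

# Hypothetical strata: survey respondents skew toward repeat visitors
strata = ["repeat"] * 6 + ["first_time"] * 4   # 60% / 40% in the sample
pop = {"repeat": 0.4, "first_time": 0.6}        # 40% / 60% in the population
w = post_stratification_weights(strata, pop)
converted = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])  # 1 = visited destination
raw_rate = converted.mean()
weighted_rate = np.average(converted, weights=w)
```

In this toy example the unweighted conversion rate overstates the weighted one, the pattern the paper's findings describe.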

Findings

The results of this analysis clearly indicate that the use of unweighted data to estimate advertising effectiveness may lead to substantial overestimation of conversion rates, but there is limited “bias” in the estimates of median visitor expenditures. The analyses also indicate that different weighting systems have substantially different impacts on the estimates of conversion rates.

Research limitations/implications

First, the likelihood of answering a survey varies substantially depending on the degree of familiarity with the mode (i.e. paper, telephone versus internet). Second, competition‐related variables (i.e. the number and competitiveness of alternative nearby destinations) and various aspects of the campaign (i.e. the amount of investment in a location) should be considered.

Originality/value

This study of 24 different American tourism campaigns provides a useful understanding of the nature (mean and range) of the impact of non‐response bias in tourism advertising conversion studies. Additionally, where it is difficult to obtain a reference survey in an advertising study, the two weighting methods used here are shown to be useful for assessing errors in response data, especially propensity score weighting, for which the means of developing multivariate‐based weights is straightforward.

Details

International Journal of Culture, Tourism and Hospitality Research, vol. 6 no. 4
Type: Research Article
ISSN: 1750-6182

Article
Publication date: 7 November 2023

Mohammed Bouaddi, Omar Farooq and Catalina Hurwitz

The aim of this paper is to document the effect of analyst coverage on the ex ante probability of a stock price crash and the ex ante probability of a stock price jump.

Abstract

Purpose

The aim of this paper is to document the effect of analyst coverage on the ex ante probability of a stock price crash and the ex ante probability of a stock price jump.

Design/methodology/approach

This paper uses data on non-financial firms from France over the period 1997 to 2019 to test the arguments presented in the paper. The paper also uses flexible quadrant copulas to compute the ex ante probabilities of crashes and jumps.

Findings

The results show that the extent of analyst coverage is positively associated with the ex ante probability of crash and negatively associated with the ex ante probability of jump. The results remain qualitatively the same after several sensitivity checks. The results also show that the relationship between the extent of analyst coverage and the probability of crash and the probability of jump holds when the ex post probabilities of stock price crash and stock price jump are used.

Originality/value

Unlike most of the earlier papers on this topic, this paper uses the ex ante probability of crash and jump. This proxy is better suited than the ones used in the prior literature because it is a forward-looking measure.

Details

Review of Behavioral Finance, vol. 16 no. 3
Type: Research Article
ISSN: 1940-5979
