Search results

1–10 of over 16,000
Book part
Publication date: 24 April 2023

J. Isaac Miller

Abstract

Transient climate sensitivity relates total climate forcings from anthropogenic and other sources to surface temperature. Global transient climate sensitivity is well studied, as are the related concepts of equilibrium climate sensitivity (ECS) and transient climate response (TCR), but spatially disaggregated local climate sensitivity (LCS) is less so. An energy balance model (EBM) and an easily implemented semiparametric statistical approach are proposed to estimate LCS using the historical record and to assess its contribution to global transient climate sensitivity. Results suggest that areas dominated by ocean tend to import energy and are relatively more sensitive to forcings, but they warm more slowly than areas dominated by land. Economic implications are discussed.
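
As an illustration of the energy balance mechanics described above, the sketch below simulates a generic two-box energy balance model in which an ocean-like region with a large heat capacity imports energy from a land-like region, ending up more sensitive in equilibrium but warming more slowly in the transient. This is not the chapter's EBM or its semiparametric estimator; the box structure and all parameter names and values are illustrative assumptions.

```python
# Hedged sketch: a generic two-box energy balance model (not the chapter's EBM).
# A lower feedback parameter (lam) means higher equilibrium sensitivity; a larger
# heat capacity (C) means slower transient warming. All values are illustrative.
import numpy as np

years = 200
dt = 1.0
F = np.linspace(0.0, 4.0, years)          # ramped radiative forcing (W/m^2)

C_land, C_ocean = 2.0, 30.0               # heat capacities (W yr m^-2 K^-1)
lam_land, lam_ocean = 1.5, 0.9            # feedback parameters (W m^-2 K^-1)
k = 0.5                                   # inter-regional transport (W m^-2 K^-1)

T_land = np.zeros(years)
T_ocean = np.zeros(years)
for t in range(1, years):
    transport = k * (T_land[t - 1] - T_ocean[t - 1])   # ocean box imports energy
    T_land[t] = T_land[t - 1] + dt / C_land * (F[t] - lam_land * T_land[t - 1] - transport)
    T_ocean[t] = T_ocean[t - 1] + dt / C_ocean * (F[t] - lam_ocean * T_ocean[t - 1] + transport)

print(f"after {years} yr: land {T_land[-1]:.2f} K, ocean {T_ocean[-1]:.2f} K")
```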

Details

Essays in Honor of Joon Y. Park: Econometric Methodology in Empirical Applications
Type: Book
ISBN: 978-1-83753-212-4

Book part
Publication date: 23 October 2023

Glenn W. Harrison and J. Todd Swarthout

Abstract

We take Cumulative Prospect Theory (CPT) seriously by rigorously estimating structural models using the full set of CPT parameters. Much of the literature only estimates a subset of CPT parameters, or more simply assumes CPT parameter values from prior studies. Our data are from laboratory experiments with undergraduate students and MBA students facing substantial real incentives and losses. We also estimate structural models from Expected Utility Theory (EUT), Dual Theory (DT), Rank-Dependent Utility (RDU), and Disappointment Aversion (DA) for comparison. Our major finding is that a majority of individuals in our sample locally asset integrate. That is, they see a loss frame for what it is, a frame, and behave as if they evaluate the net payment rather than the gross loss when one is presented to them. This finding is devastating to the direct application of CPT to these data for those subjects. Support for CPT is greater when losses are covered out of an earned endowment rather than house money, but RDU is still the best single characterization of individual and pooled choices. Defenders of the CPT model claim, correctly, that the CPT model exists “because the data says it should.” In other words, the CPT model was born of a wide range of stylized facts culled from parts of the cognitive psychology literature. If one is to take the CPT model seriously and rigorously, then it needs to do a much better job of explaining the data than we see here.
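
For readers unfamiliar with what estimating a "full set of CPT parameters" involves, the sketch below evaluates lotteries under the canonical Tversky-Kahneman (1992) CPT functional forms: a power value function with loss aversion and an inverse-s probability weighting function applied to ranked outcomes. The parameter values and example lotteries are illustrative assumptions, not the chapter's estimates; the final two lines mimic the loss-frame versus net-payment comparison discussed above.

```python
# Hedged sketch of the Tversky-Kahneman (1992) CPT functional forms.
# Parameter values (alpha, beta, lam, gamma_*) and the example lotteries are
# illustrative, not estimates from this chapter.
import numpy as np

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function with loss aversion around a reference point of 0."""
    return x**alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-s probability weighting function of Tversky and Kahneman (1992)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(outcomes, probs, alpha=0.88, beta=0.88, lam=2.25,
              gamma_gain=0.61, gamma_loss=0.69):
    """CPT valuation of a finite lottery with rank-dependent decision weights."""
    x = np.asarray(outcomes, dtype=float)
    p = np.asarray(probs, dtype=float)
    order = np.argsort(x)                 # rank outcomes from worst to best
    x, p = x[order], p[order]
    total = 0.0
    for i, xi in enumerate(x):
        if xi >= 0:                       # decumulative weights for gains
            w_hi = weight(p[i:].sum(), gamma_gain)
            w_lo = weight(p[i + 1:].sum(), gamma_gain)
        else:                             # cumulative weights for losses
            w_hi = weight(p[:i + 1].sum(), gamma_loss)
            w_lo = weight(p[:i].sum(), gamma_loss)
        total += (w_hi - w_lo) * value(xi, alpha, beta, lam)
    return total

# A mixed lottery framed as a loss versus its net-payment equivalent.
print(cpt_value([-20, 50], [0.5, 0.5]))
print(cpt_value([10, 80], [0.5, 0.5]))    # same net payoffs after a 30 endowment
```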

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Book part
Publication date: 23 October 2023

Brian Albert Monroe

Abstract

Risk preferences play a critical role in almost every facet of economic activity. Experimental economists have sought to infer the risk preferences of subjects from choice behavior over lotteries. To help mitigate the influence of observable and unobservable heterogeneity in their samples, risk preferences have been estimated at the level of the individual subject. Recent work has detailed the lack of statistical power in descriptively classifying individual subjects as conforming to Expected Utility Theory (EUT) or Rank-Dependent Utility (RDU). I discuss the normative consequences of this lack of power and provide some suggestions to improve the accuracy of normative inferences about individual-level choice behavior.

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Book part
Publication date: 5 April 2024

Taining Wang and Daniel J. Henderson

Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation, to avoid the non-negligible estimation bias that the transformation would induce. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, for model identification, distributional assumptions on both the stochastic noise and the inefficiency are eschewed. Instead, only the conditional mean of the inefficiency is assumed, specified as a positive parametric function of a wide range of possible determinants. As a result, technical efficiency is constructed without relying on an assumed distribution for the composite error. The model provides flexible structures for both the production frontier and the inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves series-based nonlinear least squares estimation for the unknown parameters and kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries from 1970 to 2019.
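
The CES production frontier referred to above can indeed be fit in levels, without log-transformation, by nonlinear least squares. The sketch below shows only that parametric ingredient on simulated data; the chapter's semiparametric technology function, single-index structure, and inefficiency term are omitted, and all names and parameter values are illustrative.

```python
# Hedged sketch: a two-input CES production function fit in levels (no logs) by
# nonlinear least squares. This is only the CES ingredient, not the chapter's
# semiparametric frontier estimator; all values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def ces(X, A, delta, rho, nu):
    """CES output: A * (delta*K^-rho + (1-delta)*L^-rho)^(-nu/rho)."""
    K, L = X
    return A * (delta * K**(-rho) + (1 - delta) * L**(-rho)) ** (-nu / rho)

rng = np.random.default_rng(0)
K = rng.uniform(1, 10, 500)
L = rng.uniform(1, 10, 500)
y = ces((K, L), A=2.0, delta=0.4, rho=0.5, nu=1.0) + rng.normal(0, 0.1, 500)

params, _ = curve_fit(ces, (K, L), y, p0=[1.0, 0.5, 0.3, 1.0],
                      bounds=([0.1, 0.01, 0.01, 0.1], [10.0, 0.99, 5.0, 3.0]))
print(dict(zip(["A", "delta", "rho", "nu"], params.round(3))))
```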

Book part
Publication date: 23 October 2023

Nathaniel T. Wilcox

Abstract

The author presents new estimates of the probability weighting functions found in rank-dependent theories of choice under risk. These estimates are unusual in two senses. First, they are free of functional-form assumptions about both utility and weighting functions, and they are based entirely on binary discrete choices rather than on matching or valuation tasks, though they depend on assumptions concerning the nature of probabilistic choice under risk. Second, the estimated weighting functions contradict widely held priors of an inverse-s shape with a fixed point well in the interior of the (0,1) interval: instead, the author usually finds populations dominated by “optimists” who uniformly overweight best outcomes in risky options. The choice pairs used here mostly do not provoke similarity-based simplifications. In a third experiment, the author shows that the presence of choice pairs that provoke similarity-based computational shortcuts does indeed flatten estimated probability weighting functions.
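
To make the shape contrast concrete, the sketch below compares an inverse-s weighting function with an interior fixed point against an "optimistic" weighting function that lies above the diagonal everywhere on (0,1), so the best outcome in a risky option is uniformly overweighted. The functional forms and parameter values are illustrative conventions, not the author's functional-form-free estimates.

```python
# Hedged sketch contrasting two probability weighting shapes. Both forms and
# all parameter values are illustrative.
def inverse_s(p, gamma=0.61):            # Tversky-Kahneman (1992) form
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def optimistic(p, gamma=0.5):            # concave power weighting, w(p) > p on (0,1)
    return p**gamma

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p={p:.1f}  inverse-s={inverse_s(p):.3f}  optimistic={optimistic(p):.3f}")
```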

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Book part
Publication date: 23 October 2023

Morten I. Lau, Hong Il Yoo and Hongming Zhao

Abstract

We evaluate the hypothesis of temporal stability in risk preferences using two recent data sets from longitudinal lab experiments. Both experiments included a combination of decision tasks that allows one to identify a full set of structural parameters characterizing risk preferences under Cumulative Prospect Theory (CPT), including loss aversion. We consider temporal stability in those structural parameters at both population and individual levels. The population-level stability pertains to whether the distribution of risk preferences across individuals in the subject population remains stable over time. The individual-level stability pertains to within-individual correlation in risk preferences over time. We embed the CPT structure in a random coefficient model that allows us to evaluate temporal stability at both levels in a coherent manner, without having to switch between different sets of models to draw inferences at a specific level.

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Book part
Publication date: 5 April 2024

Zhichao Wang and Valentin Zelenyuk

Abstract

Estimation of (in)efficiency has become a popular practice, with applications in virtually every sector of the economy over the last few decades. Many different models have been deployed for such endeavors, with Stochastic Frontier Analysis (SFA) models dominating the econometric literature. Among the most popular variants of SFA are Aigner, Lovell, and Schmidt (1977), which launched the literature, and Kumbhakar, Ghosh, and McGuckin (1991), which pioneered the branch that accounts for the (in)efficiency term via so-called environmental variables, or determinants of inefficiency. Focusing on these two prominent approaches in SFA, the goal of this chapter is to understand the production inefficiency of public hospitals in Queensland. In doing so, a recognized yet often overlooked phenomenon emerges: dramatically different results (and consequently very different policy implications) can be derived from different models, even within one paradigm of SFA models. This emphasizes the importance of exploring many alternative models, and of scrutinizing their assumptions, before drawing policy implications, especially when such implications may substantially affect people's lives, as is the case in the hospital sector.
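
For orientation, the sketch below simulates and estimates the normal/half-normal composed-error model of Aigner, Lovell, and Schmidt (1977) by maximum likelihood. The Kumbhakar-Ghosh-McGuckin (1991) extension, in which the inefficiency term depends on environmental variables, is not implemented here, and all variable names and values are illustrative rather than taken from the hospital application.

```python
# Hedged sketch: ALS (1977) normal/half-normal stochastic frontier, estimated by
# maximum likelihood on simulated data. Values are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
v = rng.normal(0, 0.3, n)                  # symmetric noise
u = np.abs(rng.normal(0, 0.5, n))          # one-sided inefficiency
y = 1.0 + 0.8 * x + v - u                  # frontier output minus inefficiency

def neg_loglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)
    lam = su / sv
    eps = y - b0 - b1 * x
    # ALS log density: ln(2/sigma) + ln phi(eps/sigma) + ln Phi(-eps*lam/sigma)
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.0, np.log(0.2), np.log(0.2)], method="BFGS")
b0, b1, log_sv, log_su = res.x
print(f"b0={b0:.2f} b1={b1:.2f} sigma_v={np.exp(log_sv):.2f} sigma_u={np.exp(log_su):.2f}")
```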

Article
Publication date: 14 November 2023

Fong-Yao Chen and Michael Y. Mak

Abstract

Purpose

Valuers should independently assess market value. The purpose of this article is to analyze whether the valuation behavior remains independent when commissioned by publicly listed companies in Taiwan.

Design/methodology/approach

This study used both quantitative and qualitative methods. Quantitative data analysis was used to examine the estimated premium ratio and estimated divergent ratio with the independent-sample t-test and the Wilcoxon-Mann-Whitney test. To complement and validate the quantitative analysis, open-ended questionnaires were administered, providing additional insights into the research findings.
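
A minimal sketch of the two tests named above, applied to hypothetical premium ratios for buyer- and seller-commissioned valuations; the arrays are placeholders, not the study's data.

```python
# Hedged sketch: independent-sample t-test and Wilcoxon-Mann-Whitney test on
# placeholder premium ratios (not the study's data).
import numpy as np
from scipy import stats

buyer_ratios = np.array([0.12, 0.08, 0.15, 0.10, 0.09, 0.14, 0.11])
seller_ratios = np.array([0.05, 0.07, 0.04, 0.06, 0.08, 0.03, 0.05])

t_stat, t_p = stats.ttest_ind(buyer_ratios, seller_ratios, equal_var=False)
u_stat, u_p = stats.mannwhitneyu(buyer_ratios, seller_ratios, alternative="two-sided")
print(f"t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"Wilcoxon-Mann-Whitney: U={u_stat:.1f}, p={u_p:.3f}")
```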

Findings

The results showed a significant difference between estimated valuations commissioned by representatives of buyers and those commissioned by representatives of sellers, with the estimated premium ratios commissioned by buyers' representatives being higher than those commissioned by sellers' representatives. Furthermore, the open-ended questionnaire results indicate that, for less experienced appraisers, these findings may reflect client influence. Senior appraisers, however, view such interaction as an effort to better understand the purpose of the valuation, always within a reasonable price range. In addition, client influence is not a static factor; it may be absorbed into the valuer's own behavior as the appraiser's experience grows and deepens.

Practical implications

It is difficult to obtain valuation reports commissioned by representatives of both buyers and sellers for the same property transactions. In this study, data were obtained from the Market Observation Post-System (MOPS) in Taiwan. As valuation reports could not be obtained, estimated valuations and transaction prices are used to calculate the estimated premium ratio and the estimated divergent ratio.

Originality/value

Previous investigations of the client effect have been conducted using qualitative methods including questionnaire surveys, in-depth interviews and experimental design. However, these studies are subject to moral hazard. This study may be the first study that has access to data on valuations for both buyers and sellers in such a formal setting.

Details

Journal of Property Investment & Finance, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-578X

Article
Publication date: 24 August 2023

Banumathy Sundararaman and Neelakandan Ramalingam

Abstract

Purpose

This study was carried out to analyze the importance of consumer preference data in forecasting demand in apparel retailing.

Methodology

To collect preference data, 729 hypothetical stock-keeping units (SKUs) were derived using a full factorial design from a combination of six attributes with three levels each. From the hypothetical SKUs, 63 practical SKUs were selected for further analysis. Two hundred and two responses were collected from a store-intercept survey. Respondents' utility scores for all 63 SKUs were calculated using conjoint analysis. In estimating aggregate demand, to allow for consumer substitution and to keep an SKU available when a consumer wishes to buy more than one item of the same SKU, each individual's utility scores for their three most preferred SKUs were selected, classified using a decision tree, and aggregated. A choice rule was modeled to include substitution; by applying this choice rule, aggregate demand was estimated.
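
The full factorial step described above can be reproduced mechanically: six attributes with three levels each yield 3^6 = 729 hypothetical SKUs, and a respondent's conjoint part-worths can be summed to score and rank them. In the sketch below, the attribute names, levels, and part-worth values are illustrative assumptions, not the study's design or estimates.

```python
# Hedged sketch: full factorial generation of 3**6 = 729 SKUs and additive
# conjoint scoring for one respondent. Attributes, levels, and part-worths are
# illustrative placeholders.
from itertools import product

attributes = {
    "fabric": ["cotton", "blend", "synthetic"],
    "style":  ["regular", "slim", "loose"],
    "sleeve": ["half", "full", "sleeveless"],
    "price":  ["low", "medium", "high"],
    "brand":  ["private label", "national", "premium"],
    "colour": ["light", "dark", "bright"],
}

skus = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]
print(len(skus))   # 729

# Illustrative part-worth utilities for a single respondent.
part_worths = {
    ("fabric", "cotton"): 0.6, ("style", "regular"): 0.4, ("sleeve", "half"): 0.3,
    ("price", "medium"): 0.2, ("brand", "private label"): 0.1, ("colour", "light"): 0.1,
}

def utility(sku):
    return sum(part_worths.get((attr, level), 0.0) for attr, level in sku.items())

top3 = sorted(skus, key=utility, reverse=True)[:3]   # three most preferred SKUs
print([tuple(s.values()) for s in top3])
```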

Findings

The respondents' utility scores were calculated. The value of Kendall's tau is 0.88, the value of Pearson's R is 0.98 and the internal predictive validity measured by Kendall's tau is 1.00, which indicates the high quality of the data obtained. The proposed model was used to estimate the demand for the 63 SKUs. The demand was estimated at 6.04 per cent for the SKU cotton, regular style, half sleeve, medium priced, private label. The proposed model for estimating demand from consumer preference data gave estimates closer to actual sales than expert opinion data did. The Spearman's rank correlation between actual sales and the consumer-preference estimates is 0.338, significant at the 5 per cent level, whereas the Spearman's rank correlation between actual sales and expert opinion is −0.059, with no significant relation between expert opinion data and actual sales. Thus, the consumer preference model proves better at estimating demand than expert opinion data.
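
The fit and validity statistics quoted above (Kendall's tau, Pearson's R, Spearman's rank correlation) are standard and can be computed with scipy; the sketch below uses placeholder demand shares rather than the study's data.

```python
# Hedged sketch: the three correlation statistics on placeholder arrays.
import numpy as np
from scipy import stats

estimated_share = np.array([6.04, 4.10, 3.55, 2.90, 2.10])   # illustrative
actual_share    = np.array([5.80, 4.50, 3.20, 3.00, 1.90])   # illustrative

tau, _ = stats.kendalltau(estimated_share, actual_share)
r, _ = stats.pearsonr(estimated_share, actual_share)
rho, _ = stats.spearmanr(estimated_share, actual_share)
print(f"Kendall's tau={tau:.2f}  Pearson's R={r:.2f}  Spearman's rho={rho:.2f}")
```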

Research implications

There has been a considerable amount of work on choice-based models; there remains considerable scope for work on deterministic models.

Practical implication

The proposed consumer preference-based demand estimation model can benefit apparel retailers by increasing profit through fewer stock-out and overstocking situations. Though conjoint analysis is used for demand estimation in other industries, it has not been used for demand estimation in apparel, where it can be of great use even in its simplest form.

Originality/value

This research is the first to model consumer preference data to estimate demand in apparel, and the approach was practically tested in an apparel retail store.

Details

Journal of Fashion Marketing and Management: An International Journal, vol. 28 no. 2
Type: Research Article
ISSN: 1361-2026

Article
Publication date: 20 January 2023

Sakshi Soni, Ashish Kumar Shukla and Kapil Kumar

Abstract

Purpose

This article aims to develop procedures for estimation and prediction in the case of Type-I hybrid censored samples drawn from a two-parameter generalized half-logistic distribution (GHLD).

Design/methodology/approach

The GHLD is a versatile model that is useful in lifetime modelling, and hybrid censoring is a time- and cost-effective censoring scheme that is widely used in the literature. The authors derive the maximum likelihood estimates, the maximum product of spacing estimates and the Bayes estimates under the squared error loss function for the unknown parameters, the reliability function and the stress-strength reliability. The Bayesian estimation is performed under an informative prior set-up using the importance sampling technique. Afterwards, the Bayesian prediction problem is discussed under one- and two-sample frameworks, and the predictive estimates and intervals with corresponding average interval lengths are obtained. Applications of the developed theory are illustrated with the help of two real data sets.
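
For orientation, the sketch below illustrates the Type-I hybrid censoring scheme itself, independently of the GHLD: n units go on test and observation stops at T* = min(x_(r), T), i.e. at the r-th failure or at the pre-fixed time T, whichever comes first. The lifetime distribution and the values of n, r and T below are illustrative placeholders, not the article's estimation procedure.

```python
# Hedged sketch: generating a Type-I hybrid censored sample from an arbitrary
# lifetime model. Distribution and constants are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)
n, r, T = 30, 20, 2.5
lifetimes = np.sort(rng.weibull(1.5, n))       # any lifetime model would do here

t_star = min(lifetimes[r - 1], T)              # stopping time of the experiment
observed = lifetimes[lifetimes <= t_star]      # exact failure times recorded
n_censored = n - observed.size                 # units still running at t_star
print(f"stop at {t_star:.2f}: {observed.size} failures observed, {n_censored} censored")
```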

Findings

The performance of these estimates and prediction methods is examined under the Type-I hybrid censoring scheme with different combinations of sample sizes and time points using Monte Carlo simulation techniques. The simulation results show that the developed estimates are quite satisfactory, and the Bayes estimates and predictive intervals estimate the reliability characteristics efficiently.

Originality/value

The proposed methodology may be used to estimate future observations when the available data are Type-I hybrid censored. This study would help in estimating and predicting the mission time as well as stress-strength reliability when the data are censored.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 9
Type: Research Article
ISSN: 0265-671X
