Search results

1 – 10 of over 2000
Book part
Publication date: 3 June 2008

Nathaniel T. Wilcox

Abstract

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with “structural” theories of choice under risk. Stochastic models are substantive theoretical hypotheses that are frequently testable in and of themselves; they also serve as identifying restrictions for hypothesis tests, estimation, and prediction. Econometric comparisons suggest that for the purpose of prediction (as opposed to explanation), the choice of stochastic model may be far more consequential than the choice of structure, such as expected utility or rank-dependent utility.
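One standard way a stochastic model combines with a structural theory is the “strong utility” (Fechner/logit) specification: choice probabilities follow a logistic link over a structural utility difference. A minimal sketch, assuming expected utility as the structure and a hypothetical precision parameter `lam` (none of these names or values come from the chapter):

```python
import math

def eu(lottery, u):
    # Expected utility of a lottery given as [(probability, outcome), ...]
    return sum(p * u(x) for p, x in lottery)

def choice_prob_logit(A, B, u, lam):
    # Probability of choosing A over B: logistic link over the EU difference,
    # with lam controlling how deterministic the choice is
    return 1.0 / (1.0 + math.exp(-lam * (eu(A, u) - eu(B, u))))

u = lambda x: x ** 0.5          # illustrative concave (risk-averse) utility
A = [(0.5, 100.0), (0.5, 0.0)]  # risky lottery
B = [(1.0, 40.0)]               # safe option
p = choice_prob_logit(A, B, u, lam=1.0)
```

As `lam` grows the choice becomes deterministic; as `lam` approaches 0 it approaches a coin flip, which is the sense in which the choice of stochastic model can matter as much as the structure it wraps.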

Details

Risk Aversion in Experiments
Type: Book
ISBN: 978-1-84950-547-5

Details

Optimal Growth Economics: An Investigation of the Contemporary Issues and the Prospect for Sustainable Growth
Type: Book
ISBN: 978-0-44450-860-7

Book part
Publication date: 21 December 2010

Saleem Shaik and Ashok K. Mishra

Abstract

In this chapter, we utilize the residual concept of productivity measures, defined in the context of a normal-gamma stochastic frontier production model with heterogeneity, to differentiate productivity and inefficiency measures. In particular, three alternative two-way random-effects panel estimators of the normal-gamma stochastic frontier model are proposed using simulated maximum likelihood estimation techniques. For the three alternative panel estimators, we use a generalized least squares procedure involving the estimation of variance components in the first stage and the estimated variance–covariance matrix to transform the data. Empirical estimates indicate differences in the parameter coefficients of the gamma distribution, production function, and heterogeneity function variables between the pooled and the two alternative panel estimators. The difference between the pooled and panel models suggests the need to account for spatial, temporal, and within-residual variation, as in the Swamy–Arora estimator, and for within-residual variation, as in the Amemiya estimator, within a panel framework. Finally, results from this study indicate that short- and long-run variations in financial exposure (solvency, liquidity, and efficiency) play an important role in explaining the variance of inefficiency and productivity.
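The simulated maximum likelihood idea behind such estimators can be sketched for a single observation: with composed error e = v − u, v normal and u gamma, the likelihood contribution is an integral of the normal density over the gamma inefficiency distribution, and the integral is replaced by an average over simulated draws. A hypothetical, stdlib-only illustration (parameter values are made up, not taken from the chapter):

```python
import math
import random

def msl_density(e, k, theta, sigma_v, S=2000, seed=0):
    """Simulated likelihood contribution f(e) for a normal-gamma production
    frontier with composed error e = v - u, v ~ N(0, sigma_v^2) and
    u ~ Gamma(shape=k, scale=theta):
        f(e) = E_u[ phi((e + u) / sigma_v) / sigma_v ],
    approximated by averaging the normal density over S gamma draws."""
    rng = random.Random(seed)  # fixed seed: common random numbers across calls
    total = 0.0
    for _ in range(S):
        u = rng.gammavariate(k, theta)
        z = (e + u) / sigma_v
        total += math.exp(-0.5 * z * z) / (sigma_v * math.sqrt(2.0 * math.pi))
    return total / S

f = msl_density(-0.3, k=1.5, theta=0.4, sigma_v=0.5)
```

In actual estimation this simulated density would be evaluated at every observation and maximized over (k, theta, sigma_v), typically holding the draws fixed across parameter evaluations so the simulated likelihood is smooth.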

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. Existing methods are computationally costly, inefficient, or both. In this paper, we propose computationally simple estimators for the SV model which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) tests or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error, and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns of three major stocks (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
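Taylor's SV model, and the ARMA link that moment-based estimators of this kind exploit, can be sketched with a simulation (a hypothetical illustration with made-up parameter values, not the authors' code):

```python
import math
import random

def simulate_sv(n, phi, sigma_v, seed=0):
    """Simulate Taylor's SV model:
        h_t = phi * h_{t-1} + sigma_v * v_t   (latent log-volatility)
        r_t = exp(h_t / 2) * eps_t            (observed return)
    with v_t, eps_t iid N(0, 1). Note log(r_t^2) = h_t + log(eps_t^2),
    which follows an ARMA(1,1) process: this is the representation that
    ARMA-style SV estimators derive their moment equations from."""
    rng = random.Random(seed)
    h, returns = 0.0, []
    for _ in range(n):
        h = phi * h + sigma_v * rng.gauss(0.0, 1.0)
        returns.append(math.exp(0.5 * h) * rng.gauss(0.0, 1.0))
    return returns

r = simulate_sv(5000, phi=0.95, sigma_v=0.2)
```

With persistence `phi` near one, the simulated returns are serially uncorrelated but their squares are positively autocorrelated (volatility clustering), which is the stylized fact the SV model is built to capture.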

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Details

Optimal Growth Economics: An Investigation of the Contemporary Issues and the Prospect for Sustainable Growth
Type: Book
ISBN: 978-0-44450-860-7

Book part
Publication date: 5 July 2012

Jens Carsten Jackwerth and Mark Rubinstein

Abstract

How do stock prices evolve over time? The standard assumption of geometric Brownian motion, questionable as it has been right along, is even more doubtful in light of the recent stock market crash and the subsequent prices of U.S. index options. With the development of rich and deep markets in these options, it is now possible to use options prices to make inferences about the risk-neutral stochastic process governing the underlying index. We compare the ability of models including Black–Scholes, naïve volatility smile predictions of traders, constant elasticity of variance, displaced diffusion, jump diffusion, stochastic volatility, and implied binomial trees to explain otherwise identical observed option prices that differ by strike prices, times-to-expiration, or times. The latter amounts to examining predictions of future implied volatilities.

Certain naïve predictive models used by traders seem to perform best, although some academic models are not far behind. We find that the better-performing models all incorporate the negative correlation between index level and volatility. Further improvements to the models seem to require predicting the future at-the-money implied volatility. However, an “efficient markets result” makes these forecasts difficult, and improvements to the option-pricing models might then be limited.
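For orientation, the Black–Scholes benchmark that all of the compared alternatives relax admits a closed form. A self-contained sketch with illustrative parameter values (not the chapter's data):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price under constant volatility sigma,
    spot S, strike K, time to expiration T (years), risk-free rate r."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)  # ~10.45
```

The models compared in the chapter (stochastic volatility, jump diffusion, implied trees, and so on) can be read as different ways of letting the single constant `sigma` vary with strike, maturity, and time.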

Details

Derivative Securities Pricing and Modelling
Type: Book
ISBN: 978-1-78052-616-4

Book part
Publication date: 4 December 2020

K.S.S. Iyer and Madhavi Damle

Abstract

This chapter presents the seminal work of Dr K.S.S. Iyer, developed over the last 56 years. His method of advance predictive analytics grew out of several earlier applications of predictive modeling using the stochastic point process technique. In this chapter, Dr Iyer collects these approaches and generalizes them. Two stochastic point process techniques, known as product density and random point process, originally used to model problems in high-energy particles and cancer, are redefined to suit problems currently in demand in IoT and customer equity in marketing (Iyer, Patil, & Chetlapalli, 2014b). This formulation arises from the use of these techniques across fields as varied as the energy requirements of Internet of Things (IoT) devices, the growth of cancer cells, cosmic-ray studies, and customer equity.

Book part
Publication date: 4 September 2023

Stephen E. Spear and Warren Young

Details

Overlapping Generations: Methods, Models and Morphology
Type: Book
ISBN: 978-1-83753-052-6

Book part
Publication date: 18 October 2019

Eri Nakamura, Takuya Urakami and Kazuhiko Kakamu

Abstract

This chapter examines the effect of the division of labor from a Bayesian viewpoint. While organizational reforms are crucial for cost reduction in the Japanese water supply industry, the effect of labor division in intra-organizational units on total costs has, to the best of our knowledge, not been examined empirically. Fortunately, a one-time survey of 79 Japanese water suppliers conducted in 2010 enables us to examine the effect. To examine this problem, a cost stochastic frontier model with endogenous regressors is considered in a cross-sectional setting, because the cost and the division of labor are regarded as simultaneously determined factors. From the empirical analysis, we obtain the following results: (1) total costs rise when the level of labor division becomes high; (2) ignoring the endogeneity leads to the underestimation of the impact of labor division on total costs; and (3) the estimation bias on inefficiency can be mitigated for relatively efficient organizations by including the labor division variable in the model, while the bias for relatively inefficient organizations needs to be controlled by considering its endogeneity. In summary, our results indicate that integration of internal sections is better than specialization in terms of costs for Japanese water supply organizations.

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Book part
Publication date: 5 April 2024

Christine Amsler, Robert James, Artem Prokhorov and Peter Schmidt

Abstract

The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by using auxiliary information in the conditioning set. It considers two types of stochastic frontier models. The first type is a panel data model where composed errors from past and future time periods contain information about contemporaneous technical inefficiency. The second type is when the stochastic frontier model is augmented by input ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Besides numerous simulations, there is an illustrative empirical example.
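For the benchmark normal–half-normal case, the Jondrow et al. (1982) conditional-expectation predictor has a familiar closed form. A minimal sketch, assuming a production-frontier sign convention eps = v − u and illustrative parameter values (not the chapter's):

```python
import math

def jlms(eps, sigma_u, sigma_v):
    """JLMS predictor E[u | eps] for a normal-half-normal frontier with
    composed error eps = v - u, v ~ N(0, sigma_v^2), u ~ |N(0, sigma_u^2)|:
        mu* = -eps * sigma_u^2 / sigma^2,  sigma*^2 = sigma_u^2 sigma_v^2 / sigma^2,
        E[u | eps] = mu* + sigma* phi(mu*/sigma*) / Phi(mu*/sigma*)."""
    s2 = sigma_u ** 2 + sigma_v ** 2
    mu_star = -eps * sigma_u ** 2 / s2
    sigma_star = math.sqrt(sigma_u ** 2 * sigma_v ** 2 / s2)
    z = mu_star / sigma_star
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # normal cdf
    return mu_star + sigma_star * phi / Phi

u_hat = jlms(-0.5, sigma_u=0.3, sigma_v=0.2)
```

This predictor conditions only on the contemporaneous residual; the chapter's point is that enlarging the conditioning set (with past and future composed errors, or input-ratio equations) can improve on E[u | eps].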
