Search results

1 – 10 of 607
Book part
Publication date: 1 August 2004

Harry P. Bowen and Margarethe F. Wiersema

Abstract

Research on strategic choices available to the firm often models them as a limited number of possible decision outcomes, which leads to a discrete limited dependent variable. A limited dependent variable can also arise when values of a continuous dependent variable are partially or wholly unobserved. This chapter discusses the methodological issues associated with such phenomena and the appropriate statistical methods developed to allow for consistent and efficient estimation of models that involve a limited dependent variable. The chapter also provides a road map for selecting the appropriate statistical technique and offers guidelines for consistent interpretation and reporting of the statistical results.
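
To make the chapter's subject concrete: a binary strategic choice (say, whether a firm diversifies) is typically estimated with a probit or logit, and the results are interpreted through marginal effects rather than raw coefficients. A minimal sketch on simulated data, using statsmodels (all variable names and coefficients are hypothetical, not from the chapter):

```python
# Sketch: probit for a binary strategic choice on simulated data.
# Variable names and coefficients are hypothetical, not from the chapter.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
firm_size = rng.normal(size=n)
slack = rng.normal(size=n)
latent = 0.8 * firm_size - 0.5 * slack + rng.normal(size=n)
diversify = (latent > 0).astype(int)     # observed discrete outcome

X = sm.add_constant(np.column_stack([firm_size, slack]))
probit = sm.Probit(diversify, X).fit(disp=0)
# Interpret and report marginal effects, not raw probit coefficients.
print(probit.get_margeff().summary())
```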

Details

Research Methodology in Strategy and Management
Type: Book
ISBN: 978-1-84950-235-1

Book part
Publication date: 26 August 2019

Howard Bodenhorn, Timothy W. Guinnane and Thomas A. Mroz

Abstract

Long-run changes in living standards occupy an important place in development and growth economics, as well as in economic history. An extensive literature uses heights to study historical living standards. Most historical heights data, however, come from selected subpopulations such as volunteer soldiers, raising concerns about the role of selection bias in these results. Variations in sample mean heights can reflect selection rather than changes in population heights. A Roy-style model of the decision to join the military formalizes the selection problem. Simulations show that even modest differential rewards to the civilian sector produce a military heights sample that is significantly shorter than the cohort from which it is drawn. Monte Carlo experiments show that diagnostics based on departure from the normal distribution have little power to detect selection. To detect height-related selection, we develop a simple, robust diagnostic based on differential selection by age at recruitment. A companion paper (H. Bodenhorn, T. Guinnane, and T. Mroz, 2017) uses this diagnostic to show that selection problems affect important results in the historical heights literature.
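
The selection mechanism the authors formalize is easy to mimic numerically: if the civilian sector rewards height even modestly while the military does not, volunteers are drawn disproportionately from the shorter end of the distribution. A toy Roy-style simulation (all parameter values are illustrative assumptions, not taken from the paper):

```python
# Toy Roy-style selection: men enlist when military pay beats civilian pay.
# All parameter values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
height = rng.normal(170.0, 7.0, n)       # population heights, cm

# The civilian sector rewards height modestly; the military does not.
w_civ = 0.05 * (height - 170.0) + rng.normal(0.0, 1.0, n)
w_mil = rng.normal(0.2, 1.0, n)          # flat premium, no height reward
enlist = w_mil > w_civ

print(f"population mean height: {height.mean():.2f} cm")
print(f"military sample mean:   {height[enlist].mean():.2f} cm")
# Even this modest civilian height premium leaves the military sample
# visibly shorter than the cohort it is drawn from.
```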

Book part
Publication date: 23 November 2011

Myoung-jae Lee and Sanghyeok Lee

Abstract

Standard stratified sampling (SSS) is a popular non-random sampling scheme. The maximum likelihood estimator (MLE) is inconsistent if some sampled strata depend on the response variable Y (‘endogenous samples’) or if some Y-dependent strata are not sampled at all (‘truncated sample’, a missing data problem). Various versions of the MLE have appeared in the literature, and this paper reviews practical likelihood-based estimators for endogenous or truncated samples in SSS. A new estimator, ‘Estimated-EX MLE’, is also introduced; it uses an extra random sample on X (not on Y) to estimate E(X), the mean of X. Because information on Y may be hard to obtain, this estimator's data demand is weaker than that of other estimators requiring an extra random sample on Y. The estimator can greatly improve the efficiency of the ‘Fixed-X MLE’, which conditions on X, even if the extra sample size is small. In fact, Estimated-EX MLE does not estimate the full distribution F_X of X, as it needs only a sample average computed from the extra sample. Estimated-EX MLE can be almost as efficient as the ‘Known-F_X MLE’. A small-scale simulation study is provided to illustrate these points.
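
The inconsistency that motivates these estimators is easy to reproduce: when strata are defined on Y and sampled at unequal rates, the ordinary MLE that ignores the sampling scheme is biased. A small simulation sketch for the logit case (the sampling rates and coefficients are illustrative assumptions, and the intercept fix shown is the classic case-control correction, not one of the chapter's estimators):

```python
# Endogenous stratified sampling: the naive MLE ignores Y-based sampling.
# Sampling rates and true coefficients are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
N = 100_000
x = rng.normal(size=N)
p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * x)))   # true logit: b0 = -2, b1 = 1
y = rng.binomial(1, p)

# Strata defined on Y, sampled at unequal rates: all Y=1, 10% of Y=0.
keep = (y == 1) | (rng.random(N) < 0.10)
naive = sm.Logit(y[keep], sm.add_constant(x[keep])).fit(disp=0)
print("naive MLE (b0, b1):", naive.params)    # intercept biased upward

# For the logit, the classic case-control result says the intercept bias
# equals the log ratio of sampling rates, so it can be offset directly:
print("corrected b0:", naive.params[0] - np.log(1.0 / 0.10))
```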

Details

Missing Data Methods: Cross-sectional Methods and Applications
Type: Book
ISBN: 978-1-78052-525-9

Book part
Publication date: 26 November 2014

Emmanuel Kengni Ncheuguim, Seth Appiah-Kubi and Joseph Ofori-Dankwa

Abstract

Purpose

The Truncated Levy Flight (TLF) model has been successfully used to model the return distribution of stock markets in developed economies and a few developing economies such as India. Our primary purpose is to use the TLF to model returns on the S&P 500 and on firms listed on the Ghana Stock Exchange (GSE).

Methodology

We assess the predictive efficacy of the TLF model by comparing a simulation of the Standard and Poor's 500 (S&P 500) index and that of firms in the stock market in Ghana, using data from the same time period (June 2007–September 2013).

Findings

We find that the TLF model captures the return distribution of the S&P 500 relatively accurately but does not accurately model the return distributions of firms on the Ghana stock market.

Limitations/implications

A major limitation is that we examined stock market data only from Ghana, while there are over 29 other African stock markets. We suggest that doctoral students and faculty compare these stock markets by age or by the number of firms listed. For example, the oldest African stock market was established in Egypt in 1883, while the most recent were established in 2012 in the Seychelles and in Somalia.

Practical implications

Scholarly inquiry into Africa's stock markets represents a rich area of research that we encourage doctoral students and faculty to pursue.

Originality/value

Little research has applied the TLF model to African stock markets, so this study offers considerable utility and originality.
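
The simulation described in the Methodology section above can be sketched in miniature: draw increments from a heavy-tailed Lévy-stable distribution and discard jumps beyond a cutoff, which is the essence of a TLF. A minimal sketch with scipy (the stable parameters and truncation level are illustrative assumptions, not the chapter's calibration):

```python
# Truncated Levy Flight sketch: stable increments with extreme jumps
# discarded. Stable parameters and cutoff are illustrative only.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(3)
alpha, beta, scale = 1.5, 0.0, 0.005     # assumed stable parameters
cutoff = 0.05                            # assumed truncation level

draws = levy_stable.rvs(alpha, beta, scale=scale, size=50_000,
                        random_state=rng)
tlf_returns = draws[np.abs(draws) <= cutoff]

# Truncation preserves heavy tails at moderate scales while making
# higher moments finite, which is what lets the TLF fit index returns.
share = np.mean(np.abs(tlf_returns) > 3 * tlf_returns.std())
print(f"share of returns beyond 3 sd: {share:.4f} (normal: ~0.0027)")
```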

Details

Advancing Research Methodology in the African Context: Techniques, Methods, and Designs
Type: Book
ISBN: 978-1-78441-489-4

Book part
Publication date: 1 December 2016

Roman Liesenfeld, Jean-François Richard and Jan Vogler

Abstract

We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts, and limited dependent variables (truncation, censoring, and sample selection), among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit the typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
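
At its core, the likelihood being evaluated is an n-dimensional integral over the latent Gaussian field. EIS constructs an optimized importance density for that integral; the crude sketch below instead uses the latent prior itself as the proposal, purely to show the quantity being approximated (the model size and parameters are illustrative assumptions):

```python
# Crude simulated likelihood for a tiny spatial probit. EIS would build an
# optimized importance density; this sketch uses the latent Gaussian prior
# as the proposal, only to show the quantity being approximated.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, R = 20, 5_000                        # tiny lattice, R importance draws
W = np.eye(n, k=1) + np.eye(n, k=-1)    # neighbour matrix of a line graph
rho, beta = 0.3, 1.0                    # assumed spatial and slope parameters

Q = np.eye(n) - rho * W                 # sparse SAR-style precision matrix
L = np.linalg.cholesky(np.linalg.inv(Q))

x = rng.normal(size=n)
z = L @ rng.normal(size=n)              # latent Gaussian field
y = (x * beta + z + rng.normal(size=n) > 0).astype(int)

draws = L @ rng.normal(size=(n, R))     # z^(r) ~ N(0, Sigma), the proposal
signs = 2 * y - 1
# p(y | z) = prod_i Phi(sign_i * (x_i * beta + z_i)); average over draws.
logp = norm.logcdf(signs[:, None] * (x[:, None] * beta + draws)).sum(axis=0)
m = logp.max()                          # log-sum-exp for numerical stability
print("simulated log-likelihood:", m + np.log(np.mean(np.exp(logp - m))))
```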

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Book part
Publication date: 6 September 2019

Amitava Mitra

Abstract

The service industry is a major component of the economy. Raw material, components, assemblies, and finished products are shipped between suppliers, manufacturers, distributors, and retailers, so timely receipt of shipped goods is crucial to maintaining the efficiency and effectiveness of such service processes. A service provider offers an incentive to the customer by specifying a competitive target time for delivery of goods. If the delivery time deviates from the target value, the provider offers to reimburse the customer by an amount proportional to the value of the goods and the degree of deviation from the target. The service provider may set the price charged for this protection as a function of product value; this price is in addition to the operational costs of logistics, which are not considered in the formulated model. The reimbursement could be based on an asymmetric loss function influenced by the degree of deviation from the target due date as well as product value, with different penalties for early and late deliveries, since the customer may experience different impacts and consequences in each case. The chapter develops a model to determine the amount (price) that the provider should add to the cost estimate of the delivery contract for protection against delivery deviations. Such a cost estimate will include the operational costs (fixed and variable) of the shipment, to which an amount is added to cover the expected payout to customers when the delivery time deviates from the target value. The optimal price should be such that the expected revenue at least exceeds the expected payout.
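
The pricing logic can be made concrete: given a distribution for the deviation of delivery time from target, the add-on price must cover the expected reimbursement under the asymmetric loss. A numeric sketch with a normal deviation and linear, asymmetric penalty rates (all values and the distributional choice are illustrative assumptions, not the chapter's model):

```python
# Expected payout under an asymmetric linear loss on delivery deviation.
# The deviation distribution and penalty rates are illustrative assumptions.
from scipy import integrate
from scipy.stats import norm

value = 1_000.0                 # product value
k_early, k_late = 0.02, 0.05    # per-day penalty rates; late hurts more
mu, sigma = 0.0, 1.5            # deviation from target (days), assumed normal

def payout(d):
    # Reimbursement proportional to product value and deviation size,
    # with a higher rate for late (d > 0) than early (d < 0) delivery.
    rate = k_late if d > 0 else k_early
    return value * rate * abs(d)

expected_payout, _ = integrate.quad(
    lambda d: payout(d) * norm.pdf(d, mu, sigma), -10 * sigma, 10 * sigma)
print(f"expected payout per shipment: {expected_payout:.2f}")
# The add-on price must at least cover this expectation; fixed and
# variable shipment costs enter the contract price separately.
```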

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78754-290-7

Book part
Publication date: 24 March 2006

Ngai Hang Chan and Wilfredo Palma

Abstract

Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), estimation of long-memory time series models has received considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies, with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review aims to provide a succinct survey of these methodologies as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). The performances, asymptotic properties, inter-connections, and finite-sample behavior of these procedures are compared and examined. Finally, applications of these methodologies to financial time series are discussed.
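
Of the methods surveyed, the Whittle estimator is compact enough to sketch: it minimizes a frequency-domain approximation to the Gaussian likelihood built from the periodogram and the model spectral density. A minimal sketch for the fractional parameter d of an ARFIMA(0, d, 0) process (simulation settings are illustrative):

```python
# Whittle estimation of d for ARFIMA(0, d, 0), whose spectral density is
# proportional to |2 sin(w/2)|^(-2d). Simulation settings are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n, d_true = 2048, 0.3

# Simulate long memory: x = (1 - B)^(-d) e via the binomial expansion,
# whose coefficients satisfy psi_0 = 1, psi_k = psi_{k-1} (k - 1 + d) / k.
e = rng.normal(size=n)
psi = np.cumprod(np.r_[1.0, (d_true + np.arange(n - 1)) / np.arange(1, n)])
x = np.convolve(e, psi)[:n]

# Periodogram at the positive Fourier frequencies.
w = 2 * np.pi * np.arange(1, n // 2) / n
I = np.abs(np.fft.fft(x)[1 : n // 2]) ** 2 / (2 * np.pi * n)

def whittle(d):
    # Concentrated Whittle objective; the scale is profiled out.
    f = (2 * np.sin(w / 2)) ** (-2 * d)
    return np.log(np.mean(I / f)) + np.mean(np.log(f))

res = minimize_scalar(whittle, bounds=(-0.49, 0.49), method="bounded")
print("Whittle estimate of d:", res.x)
```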

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-1-84950-388-4

Book part
Publication date: 3 June 2008

Nathaniel T. Wilcox

Abstract

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with “structural” theories of choice under risk. Stochastic models are substantive theoretical hypotheses that are frequently testable in and of themselves; they also serve as identifying restrictions for hypothesis tests, estimation, and prediction. Econometric comparisons suggest that for the purpose of prediction (as opposed to explanation), the choice of stochastic model may be far more consequential than the choice of structure, such as expected utility or rank-dependent utility.
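
The simplest member of this family of stochastic models can be sketched directly: a Fechner-style model adds noise to the difference in expected utilities, giving a logistic choice probability that can be fit by maximum likelihood. A simulate-and-fit sketch under assumed CRRA expected utility (the lottery menu and all parameter values are illustrative, not the chapter's experimental design):

```python
# Fechner-style stochastic choice: P(choose A) = Logistic((EU_A - EU_B)/lam)
# layered on CRRA expected utility. Lotteries and parameter values are
# illustrative assumptions, not the chapter's experimental design.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def eu(prizes, probs, r):
    return probs @ (prizes ** (1 - r) / (1 - r))   # CRRA utility, r < 1

# A small menu of safe-vs-risky lottery pairs; 150 choices per pair.
pairs = [
    ((np.array([30.0, 25.0]), np.array([0.5, 0.5])),
     (np.array([60.0, 5.0]), np.array([0.5, 0.5]))),
    ((np.array([40.0, 10.0]), np.array([0.5, 0.5])),
     (np.array([80.0, 2.0]), np.array([0.4, 0.6]))),
    ((np.array([20.0, 18.0]), np.array([0.5, 0.5])),
     (np.array([50.0, 1.0]), np.array([0.3, 0.7]))),
]
r_true, lam_true, T = 0.4, 0.8, 150
data = []
for (pa, wa), (pb, wb) in pairs:
    d = eu(pa, wa, r_true) - eu(pb, wb, r_true)
    data.append(rng.random(T) < 1 / (1 + np.exp(-d / lam_true)))

def negll(theta):
    r, lam = theta
    ll = 0.0
    for ((pa, wa), (pb, wb)), ch in zip(pairs, data):
        p = 1 / (1 + np.exp(-(eu(pa, wa, r) - eu(pb, wb, r)) / lam))
        ll += ch.sum() * np.log(p) + (~ch).sum() * np.log(1 - p)
    return -ll

fit = minimize(negll, x0=[0.3, 1.0], bounds=[(0.01, 0.99), (0.05, 10.0)])
print("estimated (r, lambda):", fit.x)
```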

Details

Risk Aversion in Experiments
Type: Book
ISBN: 978-1-84950-547-5

Book part
Publication date: 27 June 2014

Xin Li and Hany A. Shawky

Abstract

Good market timing skills can be an important factor contributing to hedge funds’ outperformance. In this chapter, we use a unique semiparametric panel data model capable of providing consistent short-period estimates of the return correlations with three market factors for a sample of Long/Short equity hedge funds. We find evidence of significant market timing ability by fund managers around market crisis periods. Studying the behavior of individual fund managers, we show that at the 10% significance level, 17.12% of funds exhibit good linear timing skills and 21.32% of funds possess some level of good nonlinear market timing skills. Further, we find that the market timing strategies of hedge funds differ in good and bad markets, and that a significant number of managers behave more conservatively when the market return is expected to be far above average and more aggressively when it is expected to be far below average. We find that good market timers are also likely to possess good stock selection skills.
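
The authors' semiparametric panel model is beyond a short sketch, but the object it delivers, short-period estimates of a fund's exposure to market factors, can be illustrated with a rolling Treynor–Mazuy regression on one simulated fund; a positive coefficient on the squared market return signals timing. This standard parametric regression is a stand-in, not the chapter's method:

```python
# Rolling Treynor-Mazuy regression: r_fund = a + b r_m + g r_m^2 + e,
# where g > 0 suggests timing. A parametric stand-in for the chapter's
# semiparametric panel model; all data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T, window = 600, 120
r_m = rng.normal(0.0, 0.04, T)                  # market factor returns
gamma = np.where(np.arange(T) < 300, 0.0, 1.5)  # timing skill appears later
r_f = 0.002 + 0.8 * r_m + gamma * r_m ** 2 + rng.normal(0.0, 0.01, T)

timing = []
for t in range(window, T):
    X = sm.add_constant(
        np.column_stack([r_m[t - window:t], r_m[t - window:t] ** 2]))
    timing.append(sm.OLS(r_f[t - window:t], X).fit().params[2])

half = len(timing) // 2
print("mean timing coefficient, early windows:", np.mean(timing[:half]))
print("mean timing coefficient, late windows: ", np.mean(timing[half:]))
```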

Details

Signs that Markets are Coming Back
Type: Book
ISBN: 978-1-78350-931-7

Book part
Publication date: 18 April 2018

Dominique Lord and Srinivas Reddy Geedipally

Abstract

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.

Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.

Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.

Research implications – Multi-parameter models are expected to become the next generation of standard distributions and models. Research on these models is still ongoing.

Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.
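
The model comparison described in the Methodology above can be reproduced in miniature: fit a negative binomial and a zero-inflated alternative to counts with excess zeros and compare the fit. A sketch with statsmodels on simulated crash-like counts (the data-generating process is an illustrative assumption, and the zero-inflated NB stands in for the chapter's multi-parameter distributions, which statsmodels does not ship):

```python
# Excess-zero counts: negative binomial vs. zero-inflated NB on simulated
# crash-like data. The data-generating process is illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2_000
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.5 * x)
counts = rng.poisson(mu * rng.gamma(0.8, 1 / 0.8, n))  # NB-style counts
counts[rng.random(n) < 0.3] = 0                        # inject excess zeros

X = sm.add_constant(x)
nb = sm.NegativeBinomial(counts, X).fit(disp=0)
zinb = sm.ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X).fit(disp=0)

# A lower AIC favours the specification that accommodates the extra zeros.
print("NB   loglik, AIC:", nb.llf, nb.aic)
print("ZINB loglik, AIC:", zinb.llf, zinb.aic)
```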

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1
