Search results

Results 11–20 of over 4,000
Book part
Publication date: 18 October 2019

Mohammad Arshad Rahman and Shubham Karnawat

Abstract

This article is motivated by the lack of flexibility in Bayesian quantile regression for ordinal models where the error follows an asymmetric Laplace (AL) distribution. The inflexibility arises because the skewness of the distribution is completely specified once a quantile is chosen. To overcome this shortcoming, we derive the cumulative distribution function (and the moment-generating function) of the generalized asymmetric Laplace (GAL) distribution – a generalization of the AL distribution that separates the skewness from the quantile parameter – and construct a working likelihood for the ordinal quantile model. The resulting framework is termed flexible Bayesian quantile regression for ordinal (FBQROR) models. Its estimation, however, is not straightforward. We address the estimation issues and propose an efficient Markov chain Monte Carlo (MCMC) procedure based on Gibbs sampling and a joint Metropolis–Hastings algorithm. The advantages of the proposed model are demonstrated in multiple simulation studies and in an application analyzing public opinion on homeownership as the best long-term investment in the United States following the Great Recession.
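
For context, the source of that inflexibility is visible in the standard AL working likelihood from the quantile regression literature (scale normalized to one; the chapter's exact parameterization may differ): fixing the quantile p also fixes the skewness.

```latex
% Standard AL density at quantile p (scale = 1):
f(\varepsilon \mid p) = p\,(1-p)\,\exp\{-\rho_p(\varepsilon)\},
\qquad
\rho_p(u) = u\,\bigl(p - \mathbf{1}\{u < 0\}\bigr).
```

The GAL distribution introduces an additional shape parameter, so skewness can vary while the quantile restriction is maintained.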

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Book part
Publication date: 30 December 2004

Leslie W. Hepple

Abstract

Within spatial econometrics a whole family of different spatial specifications has been developed, with associated estimators and tests. This leads to issues of model comparison and model choice: measuring the relative merits of alternative specifications and then using appropriate criteria to choose the “best” model or compute relative model probabilities. Bayesian theory provides a comprehensive and coherent framework for such model choice, accommodating both nested and non-nested models within the choice set. The paper reviews the potential application of this Bayesian theory to spatial econometric models, examining the conditions and assumptions under which application is possible. Problems of prior distributions are outlined, and Bayes factors and marginal likelihoods are derived for a particular subset of spatial econometric specifications. These are then applied to two well-known spatial data sets to illustrate the methods. Future possibilities, and comparisons with other Bayesian and non-Bayesian approaches to model choice, are discussed.
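
For reference, the machinery being reviewed takes the standard textbook form (our notation): for models M_1 and M_2 with priors on their parameters,

```latex
m(y \mid M_k) = \int f(y \mid \theta_k, M_k)\,\pi(\theta_k \mid M_k)\,d\theta_k,
\qquad
B_{12} = \frac{m(y \mid M_1)}{m(y \mid M_2)},
\qquad
p(M_k \mid y) \propto p(M_k)\, m(y \mid M_k).
```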

Details

Spatial and Spatiotemporal Econometrics
Type: Book
ISBN: 978-0-76231-148-4

Article
Publication date: 11 September 2017

Arvind Shrivastava, Nitin Kumar and Purnendu Kumar

Abstract

Purpose

Working capital management plays a pivotal role in firms’ short-term financial decisions. The purpose of this paper is to examine the impact of working capital on profitability for Indian corporate entities.

Design/methodology/approach

Both classical panel analysis and Bayesian techniques are employed, which not only enables a comparative analysis but also allows flexibility in prior distribution assumptions.

Findings

It is found that a longer cash conversion period has a detrimental influence on profitability. Financial soundness indicators play a significant role in determining firm profitability. Larger firms appear to be more profitable, an effect that is significant under the Bayesian approach. The Bayesian approach also yields a considerable gain in estimation fit.

Practical implications

Given the highly skewed distribution of the dependent variable, a multivariate Student’s t-distribution is considered alongside the normal distribution to model the stochastic term, and the Bayesian methodology is applied accordingly.
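
A minimal sketch of this kind of heavy-tailed Bayesian regression, written with the PyMC library; the regressors, priors and data below are illustrative assumptions on our part, not the paper's actual model or dataset.

```python
# Sketch: Bayesian regression with Student-t errors for a skewed
# dependent variable (illustrative priors and data; not the paper's model).
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-in regressors (e.g. cash cycle, size, leverage)
y = X @ np.array([0.5, -1.0, 0.2]) + rng.standard_t(df=4, size=200)

with pm.Model():
    beta = pm.Normal("beta", mu=0.0, sigma=10.0, shape=3)  # diffuse slope priors
    sigma = pm.HalfNormal("sigma", sigma=1.0)              # error scale
    nu = pm.Exponential("nu", 1.0 / 10.0)                  # dof; small nu = heavy tails
    mu = pm.math.dot(X, beta)
    pm.StudentT("y_obs", nu=nu, mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)
```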

Originality/value

Working capital analysis is performed for firms in the Indian context. The Bayesian methodology is applied to a balanced panel spanning 2003 to 2012. To the authors’ knowledge, this is the first study to apply a Bayesian approach with panel data to the analysis of working capital management for Indian firms.

Details

Journal of Economic Studies, vol. 44 no. 4
Type: Research Article
ISSN: 0144-3585

Article
Publication date: 1 May 1990

B.D. Bunday and I.D. Al‐Ayoubi

Abstract

The contents and function of a computer package to fit reliability models for computer software are outlined. Parameters in the models are, in the first place, estimated by maximum likelihood estimation procedures. Bayesian estimation methods are also used and are shown to give estimates with a smaller variance than their MLE counterparts. An example of the application to a particular set of failure times is given.
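
The abstract does not name the reliability models, so as an illustrative stand-in: for exponential inter-failure times with a conjugate gamma prior on the failure rate, the MLE and the Bayesian posterior mean can be compared directly, and the Bayesian estimate carries a posterior variance that shrinks as the prior information is added. All numbers below are hypothetical.

```python
# Illustrative stand-in: exponential inter-failure times with a
# conjugate Gamma(a, b) prior on the failure rate lambda.
import numpy as np

times = np.array([12.0, 7.5, 30.2, 18.4, 9.9, 25.1])  # hypothetical failure times
n, total = len(times), times.sum()

lam_mle = n / total                     # MLE of the rate
a, b = 2.0, 20.0                        # assumed prior hyperparameters
lam_bayes = (a + n) / (b + total)       # posterior mean of Gamma(a + n, b + total)
post_var = (a + n) / (b + total) ** 2   # posterior variance

print(f"MLE: {lam_mle:.4f}  Bayes: {lam_bayes:.4f}  posterior var: {post_var:.6f}")
```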

Details

International Journal of Quality & Reliability Management, vol. 7 no. 5
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 January 1986

ROGER N. CONWAY and RON C. MITTELHAMMER

Abstract

In the last two decades, considerable progress has been made in the development of alternative estimation techniques to ordinary least squares (OLS) regression. The search for alternative estimators has no doubt been motivated by the observation of erratic OLS estimator behavior in cases where there are too few observations, multicollinearity problems, or simply “information-poor” data sets. Imprecise and unreliable OLS coefficient estimates have been the result.
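
One widely used member of that family of alternatives (not necessarily the one this article studies) is ridge regression, which stabilizes coefficients under multicollinearity by shrinking them toward zero.

```python
# Ridge regression: one common alternative to OLS under multicollinearity
# (illustrative; not necessarily the article's estimator).
import numpy as np

def ridge(X, y, lam):
    """Solve (X'X + lam*I) beta = X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])  # nearly collinear columns
y = X @ np.array([1.0, 1.0]) + rng.normal(size=100)

print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0])  # erratic under collinearity
print("ridge:", ridge(X, y, lam=1.0))                  # shrunk, more stable
```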

Details

Studies in Economics and Finance, vol. 10 no. 1
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 6 September 2011

Manoj Kumar Rastogi and Yogesh Mani Tripathi

Abstract

Purpose

The Burr distribution has proved to be a useful failure model. It can assume different shapes, which allows it to fit a wide variety of lifetime data. Hybrid censoring is an important scheme for generating lifetime data. The purpose of this paper is to estimate an unknown parameter of the Burr type XII distribution when data are hybrid censored.

Design/methodology/approach

The problem is treated from both the classical and the Bayesian points of view. Specifically, the methods of estimation used are maximum likelihood and Bayesian estimation; an empirical Bayes approach is also considered. The performance of all estimates is compared through their mean square error values, which the paper evaluates by Monte Carlo simulation.
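
For reference, in a common parameterization (the paper's notation may differ), the Burr XII density and survival function, and the resulting Type-I hybrid-censored likelihood with stopping time T* = min{x_(r), T} and d observed failures, are:

```latex
f(x; c, k) = c\,k\,x^{c-1}\,(1 + x^{c})^{-(k+1)},
\qquad
S(x; c, k) = (1 + x^{c})^{-k}, \qquad x > 0;
\qquad
L(c, k) \propto \Bigl[\prod_{i=1}^{d} f\bigl(x_{(i)}; c, k\bigr)\Bigr]
\bigl[S(T^{*}; c, k)\bigr]^{\,n-d}.
```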

Findings

The key finding of the paper is that the Bayesian estimates are superior to the maximum likelihood estimates (MLEs).

Practical implications

This work has practical importance: the proposed methods are applied to real-life data.

Originality/value

The paper is original and is directly applicable in lifetime data analysis.

Details

International Journal of Quality & Reliability Management, vol. 28 no. 8
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 22 June 2020

Siju K C, Mahesh Kumar and Michael Beer

Abstract

Purpose

This article presents the multi-state stress-strength reliability computation of a component having three states, namely working, deteriorating and failed.

Design/methodology/approach

A probabilistic approach is used to obtain the reliability expression by considering the difference between the stress and strength values of a component; for example, the stress (load) and strength of a power-generating unit are both measured in megawatts. The range of values taken by this difference variable determines the various states of the component. Maximum likelihood and Bayesian estimation are used to obtain estimators of the parameters and of system reliability.
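
A minimal Monte Carlo sketch of the state construction: the sign and size of the difference D = strength − stress determine the component's state. The exponential distributions and the threshold d below are illustrative assumptions; the article derives closed-form expressions rather than simulating.

```python
# Monte Carlo sketch of the three-state construction (illustrative
# distributions and threshold; the article works with closed forms).
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
strength = rng.exponential(scale=10.0, size=n)  # hypothetical strength draws
stress = rng.exponential(scale=6.0, size=n)     # hypothetical stress (load) draws
D = strength - stress
d = 2.0                                         # assumed deterioration threshold

p_working = np.mean(D > d)
p_deteriorating = np.mean((D > 0) & (D <= d))
p_failed = np.mean(D <= 0)
print(p_working, p_deteriorating, p_failed)     # the three probabilities sum to 1
```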

Findings

The maximum likelihood and Bayesian estimates of the reliability approach the actual reliability as the sample size increases.

Originality/value

A new expression is obtained for the multi-state stress-strength reliability of a component, and the findings are supported by the general trend of estimated reliability values approaching the actual reliability.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 2
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 18 April 2018

Simon Washington, Amir Pooyan Afghari and Mohammed Mazharul Haque

Abstract

Purpose – The purpose of this chapter is to review the methodological and empirical underpinnings of transport network screening, or management, as it relates to improving road safety. As jurisdictions around the world are charged with transport network management in order to reduce externalities associated with road crashes, identifying potential blackspots or hotspots is an important if not critical function and responsibility of transport agencies.

Methodology – Key references from the literature are summarised and discussed, along with the evolution of thinking around hotspot identification and management. The theoretical developments that correspond with this evolution in thinking are provided, sprinkled with examples along the way.

Findings – Hotspot identification methodologies have evolved considerably over the past 30 or so years, correcting for methodological deficiencies along the way. Despite vast and significant advancements, identifying hotspots remains a reactive approach to managing road safety, relying on crashes to accrue in order to mitigate their occurrence. The most fruitful directions for future research lie in establishing reliable relationships between surrogate measures of road safety, such as ‘near misses’, and actual crashes, so that safety can be proactively managed without the need for crashes to accrue.

Research implications – Research in hotspot identification will continue; however, it is likely to shift over time both towards ‘real-time’ crash risk detection and towards evaluating safety improvements using surrogate measures of road safety, which are described in Chapter 17.

Practical implications – There are two types of errors made in hotspot detection – identifying a ‘risky’ site as ‘safe’ and identifying a ‘safe’ site as ‘risky’. In the former case no investments will be made to improve safety, while in the latter case ineffective or inefficient safety improvements could be made. To minimise these errors, transport network safety managers should be applying the current state of the practice methods for hotspot detection. Moreover, transport network safety managers should be eager to transition to proactive methods of network safety management to avoid the need for crashes to occur. While in its infancy, the use of surrogate measures of safety holds significant promise for the future.
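
The chapter does not single out one method, but a standard "state of the practice" approach in this literature is empirical Bayes (EB) screening with a negative binomial safety performance function (SPF); the formulation and numbers below are our assumptions, sketched for illustration.

```python
# Empirical Bayes (EB) hotspot screening sketch: blend each site's
# observed crash count with an SPF prediction. The negative binomial
# dispersion parameter k and all counts are hypothetical.
import numpy as np

observed = np.array([12, 3, 7, 0, 9])           # crashes observed at 5 sites
spf_pred = np.array([5.0, 4.0, 6.5, 2.0, 3.0])  # SPF-predicted mean crashes
k = 2.5                                         # NB dispersion parameter

w = 1.0 / (1.0 + spf_pred / k)                  # weight on the SPF prediction
eb = w * spf_pred + (1.0 - w) * observed        # EB-expected crash frequency

ranking = np.argsort(eb)[::-1]                  # screen sites by EB estimate
print(eb.round(2), ranking)
```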

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Article
Publication date: 5 April 2021

Byron J. Idrovo-Aguirre and Javier E. Contreras-Reyes

Abstract

Purpose

This paper combines the objective information of six mixed-frequency partial-activity indicators with assumptions or beliefs (called priors) regarding the distribution of the parameters that approximate the state of the construction activity cycle. Thus, this paper uses Bayesian inference with Gibbs sampling and the Kalman filter to estimate the parameters of the state-space model used to design the Imacon.

Design/methodology/approach

Unlike other economic sectors of similar importance in aggregate gross domestic product, such as mining and industry, the construction sector has lacked a short-term measure that helps identify its most recent performance.

Findings

Indeed, because these priors are susceptible to change, they provide flexibility to the original Imacon model, allowing for the assessment of risk scenarios and adaptation to the greater relative volatility that characterizes the sector’s activity.

Originality/value

The classic maximum likelihood method of estimating the monthly construction activity index (Imacon) is rigid with respect to incorporating new measures of uncertainty, expectations or different volatility (risk) levels in the state of construction activity. In this context, this paper uses Bayesian inference with 10,000 Gibbs draws and the Kalman filter to estimate the parameters of the state-space model used to design the Imacon, inspired by the original works of Mariano and Murasawa (2003) and Kim and Nelson (1998). Thus, this paper is a natural extension of the classic method used by Tejada (2006) in the estimation of the old Imacon.
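
For readers unfamiliar with the filtering step, here is a minimal univariate local-level Kalman filter in the spirit of the state-space machinery described; the actual Imacon model, with six mixed-frequency indicators, is considerably richer, and the variances and data below are illustrative.

```python
# Minimal local-level Kalman filter sketch (the Imacon state-space
# model itself is far richer; q, r and the data are illustrative).
import numpy as np

def kalman_filter(y, q, r, a0=0.0, p0=1e6):
    """Local level model: alpha_t = alpha_{t-1} + eta_t, y_t = alpha_t + eps_t."""
    a, p = a0, p0
    filtered = []
    for obs in y:
        p = p + q                 # predict: state variance grows by q
        gain = p / (p + r)        # Kalman gain
        a = a + gain * (obs - a)  # update with the observation
        p = (1.0 - gain) * p
        filtered.append(a)
    return np.array(filtered)

y = np.array([1.1, 0.9, 1.4, 1.8, 1.6, 2.1])
print(kalman_filter(y, q=0.1, r=0.5))
```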

Details

Journal of Economic Studies, vol. 49 no. 3
Type: Research Article
ISSN: 0144-3585

Book part
Publication date: 21 December 2010

Ivan Jeliazkov and Esther Hee Lee

Abstract

A major stumbling block in multivariate discrete data analysis is the problem of evaluating the outcome probabilities that enter the likelihood function. Calculation of these probabilities involves high-dimensional integration, making simulation methods indispensable in both Bayesian and frequentist estimation and model choice. We review several existing probability estimators and then show that a broader perspective on the simulation problem can be afforded by interpreting the outcome probabilities through Bayes’ theorem, leading to the recognition that estimation can alternatively be handled by methods for marginal likelihood computation based on the output of Markov chain Monte Carlo (MCMC) algorithms. These techniques offer stand-alone approaches to simulated likelihood estimation but can also be integrated with traditional estimators. Building on both branches in the literature, we develop new methods for estimating response probabilities and propose an adaptive sampler for producing high-quality draws from multivariate truncated normal distributions. A simulation study illustrates the practical benefits and costs associated with each approach. The methods are employed to estimate the likelihood function of a correlated random effects panel data model of women's labor force participation.
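
The Bayes' theorem reading the abstract mentions can be stated compactly (in our notation, not necessarily the chapter's): if the discrete outcome y corresponds to the latent vector z falling in a region B_y with latent density f(z | θ), then for any z* in B_y,

```latex
\Pr(y \mid \theta) = \int_{B_y} f(z \mid \theta)\,dz
= \frac{f(z^{*} \mid \theta)}{\pi(z^{*} \mid y, \theta)},
\qquad z^{*} \in B_y,
```

so the outcome probability can be recovered from an estimate of the posterior ordinate in the denominator, which is exactly the kind of quantity that marginal likelihood methods estimate from MCMC output.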

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4
