Search results

1 – 10 of 385
Book part
Publication date: 18 April 2018

Mohammed Quddus

Abstract

Purpose – Time-series regression models are applied to analyse transport safety data for three purposes: (1) to develop a relationship between transport accidents (or incidents) and various time-varying factors, with the aim of identifying the most important factors; (2) to develop a time-series accident model for forecasting future accidents from given values of future time-varying factors; and (3) to evaluate the impact of a system-wide policy, education or engineering intervention on accident counts. Regression models for analysing transport safety data are well established, especially for cross-sectional and panel datasets. There is, however, a dearth of research relating to time-series regression models in the transport safety literature. The purpose of this chapter is to examine the existing literature, both within safety analysis and in wider applications, to identify time-series regression models that are applicable to analysing disaggregated accident counts.

Methodology/Approach – There are two main issues in modelling time-series accident counts: (1) flexibly addressing the serial autocorrelation inherent in time-series processes of accident counts and (2) the fact that the conditional distribution (conditioned on past observations and covariates) of accident counts follows a Poisson-type distribution. Various time-series regression models are explored to identify those most suitable for analysing disaggregated time-series accident datasets. A recently developed time-series regression model – the generalised linear autoregressive moving average (GLARMA) model – has been identified as the best model for analysing safety data.

Findings – The GLARMA model was applied to a time-series dataset of airproxes (aircraft proximity events) that indicate airspace safety in the United Kingdom. The aim was to evaluate the impact of an airspace intervention (the introduction of reduced vertical separation minima, RVSM) on airspace safety while controlling for other factors, such as air transport movements (ATMs) and seasonality. The results indicate that the GLARMA model is more appropriate than a generalised linear model (e.g., Poisson or Poisson-Gamma), and the introduction of RVSM is found to have reduced airprox events by 15%. In addition, a 1% increase in ATMs within UK airspace would lead to a 1.83% increase in monthly airproxes.

Practical applications – The methodology developed in this chapter is applicable to many time-series processes of accident counts. The models recommended in this chapter could be used to identify different time-varying factors and to evaluate the effectiveness of various policy and engineering interventions on transport safety or similar data (e.g., crimes).

Originality/value of paper – The GLARMA model has not been properly explored in modelling time-series safety data. This new class of model has been applied to a dataset in evaluating the effectiveness of an intervention. The model recommended in this chapter would greatly benefit researchers and analysts working with time-series data.
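
As a concrete illustration of the benchmark comparison described in this abstract, the sketch below fits only the Poisson GLM baseline that GLARMA is judged against; GLARMA itself has no mainstream Python implementation (R's glarma package is the usual tool), and the file name and column names (airprox, atm, rvsm, month) are hypothetical assumptions.

```python
# Minimal sketch of the Poisson GLM benchmark (not the GLARMA model
# itself). File name and columns are assumed for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("uk_airprox_monthly.csv")  # hypothetical monthly data

# With a log link, the coefficient on log(ATM) is an elasticity: the
# chapter reports 1.83, i.e. a 1% rise in ATMs -> ~1.83% more airproxes.
X = pd.DataFrame({
    "log_atm": np.log(df["atm"]),
    "rvsm": df["rvsm"].astype(float),  # 0/1 intervention indicator
})
seas = pd.get_dummies(df["month"], prefix="m", drop_first=True).astype(float)
X = sm.add_constant(pd.concat([X, seas], axis=1))

fit = sm.GLM(df["airprox"], X, family=sm.families.Poisson()).fit()
print(fit.summary())
# Serial autocorrelation left in these residuals is what motivates the
# GLARMA extension recommended in the chapter.
```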

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Book part
Publication date: 12 November 2014

Matthew Lindsey and Robert Pavur

Abstract

A Bayesian approach to demand forecasting for spare-parts inventory requiring periodic replenishment is examined relative to a non-Bayesian approach when the demand rate is unknown; that is, optimal inventory levels are set using each approach at consecutive time intervals. Simulations were conducted to compare the total inventory cost of the Bayesian and non-Bayesian approaches against a theoretical minimum cost over a variety of demand rate conditions, including the challenging slow-moving or intermittent type of spare parts. Although Bayesian approaches are often recommended, this study’s results reveal that under conditions of large variability across the demand rates of spare parts, the inventory cost using the Bayes model was not superior to that using the non-Bayesian approach. For spare parts with homogeneous demand rates, the inventory cost using the Bayes model for forecasting was generally lower than that of the non-Bayesian model. Practitioners may still opt for the non-Bayesian model, since it does not require identifying a prior distribution for demand.
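
To make the contrast concrete, here is a minimal sketch of the two estimators for an unknown Poisson demand rate: the non-Bayesian sample mean (the MLE) and the conjugate Gamma-Poisson posterior mean. The prior parameters and simulated data are illustrative assumptions, not the authors' simulation design.

```python
# Bayesian vs non-Bayesian estimate of an unknown demand rate.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.4                            # slow-moving: < 1 demand/period
demand = rng.poisson(true_rate, size=12)   # one year of monthly demand

# Non-Bayesian estimate: sample mean (MLE for a Poisson rate).
rate_mle = demand.mean()

# Bayesian estimate: a Gamma(a, b) prior is conjugate to the Poisson,
# so the posterior after n periods is Gamma(a + sum(x), b + n).
a, b = 1.0, 2.0                            # illustrative weak prior
a_post = a + demand.sum()
b_post = b + len(demand)
rate_bayes = a_post / b_post               # posterior mean

print(f"MLE: {rate_mle:.3f}  Bayes posterior mean: {rate_bayes:.3f}")
```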

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78441-209-8

Book part
Publication date: 10 December 2018

George Levy

Details

Energy Power Risk
Type: Book
ISBN: 978-1-78743-527-8

Details

Fundamentals of Transportation and Traffic Operations
Type: Book
ISBN: 978-0-08-042785-0

Book part
Publication date: 17 January 2009

Virginia M. Miori

Abstract

The challenge of truckload routing is increased in complexity by the introduction of stochastic demand. Typically, this demand is assumed to follow a Poisson distribution. In this chapter, we cluster the demand data using data mining techniques to establish a more suitable distribution for predicting demand. We then examine this stochastic truckload demand using an econometric discrete choice model known as a count data model. Using actual truckload demand data and data from the Bureau of Transportation Statistics, we perform count data regressions. Every regression run produces two outcomes: the predicted demand between each origin and destination, and the likelihood that this demand will occur. Together, these allow us to generate an expected-value forecast of truckload demand as input to a truckload routing formulation. The negative binomial distribution produces an improved forecast over the Poisson distribution.
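
A minimal sketch of the Poisson-versus-negative-binomial count regression this abstract describes, using statsmodels; the file name and covariates are hypothetical stand-ins for the lane-level demand data.

```python
# Compare Poisson and negative binomial count regressions on toy
# origin-destination demand. Columns are assumed for illustration.
import statsmodels.api as sm
import pandas as pd

df = pd.read_csv("od_demand.csv")   # hypothetical lane-level counts

X = sm.add_constant(df[["distance", "origin_pop", "dest_pop"]])
y = df["loads"]

poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin = sm.NegativeBinomial(y, X).fit(disp=False)  # estimates alpha too

# A lower AIC for the NB fit signals overdispersion the Poisson model
# cannot capture, matching the chapter's finding.
print("Poisson AIC:", poisson.aic, " NegBin AIC:", negbin.aic)

# Expected-value forecast per lane, usable as routing-formulation input:
df["forecast"] = negbin.predict(X)
```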

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-84855-548-8

Book part
Publication date: 1 December 2016

Roman Liesenfeld, Jean-François Richard and Jan Vogler

Abstract

We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
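
The core idea can be shown in one dimension: the likelihood of a count observation under a latent Gaussian process is an integral, which importance sampling estimates. The toy sketch below uses a hand-picked Gaussian proposal; the authors' EIS algorithm instead constructs a near-optimal proposal and exploits the sparsity of high-dimensional precision matrices, which this sketch does not attempt.

```python
# Toy 1-D illustration of likelihood evaluation by importance sampling
# for a latent-Gaussian count model: y ~ Poisson(exp(z)), z ~ N(mu, s^2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y, mu, sigma = 3, 0.5, 1.0

# Proposal roughly centred where the integrand is large.
prop_mean, prop_sd = np.log(y + 0.5), 0.7
z = rng.normal(prop_mean, prop_sd, size=50_000)

# E_q[ p(y|z) * p(z)/q(z) ] = integral p(y|z) p(z) dz = p(y)
weights = stats.norm.pdf(z, mu, sigma) / stats.norm.pdf(z, prop_mean, prop_sd)
like = np.mean(stats.poisson.pmf(y, np.exp(z)) * weights)
print(f"IS estimate of p(y={y}): {like:.5f}")
```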

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Book part
Publication date: 7 June 2013

Nhuong Tran, Norbert Wilson and Diane Hite

Abstract

The purpose of the chapter is to test the hypothesis that food safety (chemical) standards act as barriers to international seafood imports, using zero-accounting gravity models. The chemical standards on which we focus include the chloramphenicol required performance limit, the oxytetracycline maximum residue limit, the fluoroquinolones maximum residue limit, and the dichlorodiphenyltrichloroethane (DDT) pesticide residue limit. The study focuses on the three most important seafood markets: the European Union’s 15 members, Japan, and North America. Our empirical results confirm the hypothesis and are robust to OLS as well as alternative zero-accounting gravity models such as the Heckman estimation and the Poisson family regressions. Formal statistical tests are inconclusive for choosing the best model specification to account for zero-trade and heteroskedasticity issues; however, the Heckman sample selection and zero-inflated negative binomial (ZINB) models provide the most reliable parameter estimates based on the statistical tests, the magnitude of the coefficients, the economic implications, and the findings in the literature. Our findings suggest that the continual tightening of seafood safety standards has had a negative impact on exporting countries. Increasing the stringency of regulations by reducing analytical limits or maximum residue limits on seafood in developed countries has negative impacts on their bilateral seafood imports. The chapter furthers the literature on food safety standards in international trade. We show competing gravity model specifications and provide additional evidence that no single gravity model is superior.
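
For readers who want to experiment with the preferred specification, below is a hedged sketch of a zero-inflated negative binomial gravity regression in statsmodels; the file name, the covariates and the log_mrl stringency variable are hypothetical stand-ins for the chapter's data, not its actual specification.

```python
# Sketch of a ZINB gravity regression on toy bilateral seafood flows.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

df = pd.read_csv("seafood_trade.csv")   # hypothetical bilateral flows

X = sm.add_constant(np.log(df[["gdp_exporter", "gdp_importer", "distance"]]))
X["log_mrl"] = np.log(df["mrl"])        # maximum residue limit (stringency)

# The inflation equation models structural zeros (pairs that never trade);
# the count equation models trade volumes among potential traders.
zinb = ZeroInflatedNegativeBinomialP(df["imports"], X, exog_infl=X, p=2)
res = zinb.fit(method="bfgs", maxiter=500, disp=False)
print(res.summary())
# The coefficient on log_mrl in the count equation measures how imports
# respond to the stringency of the residue limit.
```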

Details

Nontariff Measures with Market Imperfections: Trade and Welfare Implications
Type: Book
ISBN: 978-1-78190-754-2

Book part
Publication date: 18 April 2018

Dominique Lord and Srinivas Reddy Geedipally

Abstract

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.

Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.

Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.

Research implications – Multi-parameter models are expected to become the next generation of models for crash data, succeeding the traditional distributions and models. Research on these models is still ongoing.

Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.
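
A quick way to see whether a crash dataset exhibits the excess zeros or heavy tail discussed above is to compare observed frequencies against a fitted negative binomial. The sketch below does this with simulated data and a method-of-moments fit, a deliberate simplification of the formal goodness-of-fit comparisons in the chapter.

```python
# Diagnostic: do crash counts show more zeros or a heavier tail than a
# fitted negative binomial predicts? Sample here is simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
crashes = rng.negative_binomial(n=0.5, p=0.3, size=2000)  # toy data

# Method-of-moments NB fit: mean m, variance v -> p = m/v, n = m*p/(1-p)
m, v = crashes.mean(), crashes.var()
p_hat = m / v
n_hat = m * p_hat / (1 - p_hat)

obs_zero = np.mean(crashes == 0)
fit_zero = stats.nbinom.pmf(0, n_hat, p_hat)
obs_tail = np.mean(crashes >= 10)
fit_tail = stats.nbinom.sf(9, n_hat, p_hat)

print(f"zeros  observed {obs_zero:.3f} vs NB {fit_zero:.3f}")
print(f"tail   observed {obs_tail:.3f} vs NB {fit_tail:.3f}")
# Large gaps in either comparison are the symptom that motivates the
# multi-parameter distributions surveyed in the chapter.
```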

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Details

Understanding Financial Risk Management, Second Edition
Type: Book
ISBN: 978-1-78973-794-3

Book part
Publication date: 16 September 2022

Vasileios Ouranos and Alexandra Livada

Abstract

Probability of Default (PD) is a crucial credit risk parameter. International accords have motivated banks and credit institutions to adopt objective systems for evaluating and monitoring the PD. This study examines retail unsecured loans of a major Greek bank during the period of the financial crisis, focusing on the stochastic behaviour of the financial states of the loans. It is tested whether a first-order Markov chain (MC) model sufficiently describes the transitions from one state to another. Moreover, Poisson regression models are estimated in order to calculate the limiting transition matrix, the limiting state probabilities and the PD. The MC of the financial states of loans is shown to be non-homogeneous, meaning that the transition probabilities from one financial state to another are not constant across time. From the Poisson regression models, the transition probability matrix from one state to another is estimated for alternative time periods. The limiting transition matrix shows that if a loan is delinquent, it is very likely to move to the next worse state. The findings of this research could be useful for bank management.
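
A minimal sketch of the Markov-chain machinery this abstract describes: estimating a transition matrix from loan-state paths and extracting limiting state probabilities. The states and sequences are illustrative, and the stationary calculation assumes a homogeneous chain, an assumption the study rejects for the real data.

```python
# Estimate a transition matrix from loan-state paths, then compute the
# limiting state probabilities. States and sequences are illustrative.
import numpy as np

states = ["current", "30dpd", "60dpd", "default"]  # hypothetical states
# Each row: one loan's month-to-month state indices.
paths = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 2, 3, 3],
    [0, 0, 0, 1, 2],
])

k = len(states)
counts = np.zeros((k, k))
for path in paths:
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1
# Row-stochastic MLE (each state needs at least one observed exit).
P = counts / counts.sum(axis=1, keepdims=True)

# Limiting distribution: left eigenvector of P for eigenvalue 1. With an
# absorbing default state, the mass concentrates there, mirroring the
# finding that delinquent loans tend to slide to the next worse state.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(dict(zip(states, np.round(pi, 3))))
```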

Details

The New Digital Era: Other Emerging Risks and Opportunities
Type: Book
ISBN: 978-1-80382-983-8
