Search results
Abstract
Purpose – Time-series regression models are applied to analyse transport safety data for three purposes: (1) to develop a relationship between transport accidents (or incidents) and various time-varying factors, with the aim of identifying the most important factors; (2) to develop a time-series accident model for forecasting future accidents given the values of future time-varying factors; and (3) to evaluate the impact of a system-wide policy, education or engineering intervention on accident counts. Regression models for analysing transport safety data are well established, especially for cross-sectional and panel datasets. There is, however, a dearth of research on time-series regression models in the transport safety literature. The purpose of this chapter is to examine the existing literature, in safety analysis and in wider applications, with the aim of identifying time-series regression models that are suitable for analysing disaggregated accident counts.
Methodology/Approach – There are two main issues in modelling time-series accident counts: (1) flexibly addressing the serial autocorrelation inherent in time-series processes of accident counts and (2) the fact that the conditional distribution of accident counts (conditioned on past observations and covariates) follows a Poisson-type distribution. Various time-series regression models are explored to identify those most suitable for analysing disaggregated time-series accident datasets. A recently developed time-series regression model – the generalised linear autoregressive moving average (GLARMA) model – is identified as the best suited to analysing such safety data.
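The GLARMA mechanism described above can be sketched in a few lines: a toy Poisson GLARMA(0,1) simulation in which past Pearson residuals feed back into the log-linear predictor, which is how the model captures serial autocorrelation in counts. All parameter values below are illustrative, not taken from the chapter.

```python
import math
import random

# Toy Poisson GLARMA(0,1) simulation (all parameters illustrative): the
# log-linear predictor carries a moving-average term built from the
# Pearson residuals of past counts.
random.seed(1)

def poisson_draw(lam):
    """Draw from Poisson(lam) via Knuth's product-of-uniforms method."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

beta0, theta = 1.0, 0.3          # intercept and MA(1) coefficient (assumed)
e_prev = 0.0                     # lagged Pearson residual
counts, means = [], []
for t in range(200):
    z_t = theta * e_prev                     # MA(1) component
    mu_t = math.exp(beta0 + z_t)             # log link
    y_t = poisson_draw(mu_t)
    e_prev = (y_t - mu_t) / math.sqrt(mu_t)  # residual feeds the next step
    counts.append(y_t)
    means.append(mu_t)
```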
Findings – The GLARMA model was applied to a time-series dataset of airproxes (aircraft proximity events), an indicator of airspace safety in the United Kingdom. The aim was to evaluate the impact of an airspace intervention (the introduction of reduced vertical separation minima, RVSM) on airspace safety while controlling for other factors, such as air transport movements (ATMs) and seasonality. The results indicate that the GLARMA model is more appropriate than a generalised linear model (e.g., Poisson or Poisson-gamma), and that the introduction of RVSM reduced airprox events by 15%. In addition, a 1% increase in ATMs within UK airspace was found to lead to a 1.83% increase in monthly airproxes.
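The reported elasticity has a direct reading under a log-link count model with log(ATM) as a covariate: the coefficient on log(ATM) is, to first order, the percentage change in expected airproxes per 1% change in ATMs. A minimal sketch, assuming the coefficient equals the reported 1.83:

```python
# Reading an elasticity off a log-link count model: with log(ATM) as a
# covariate, a coefficient beta on log(ATM) means a p% rise in ATMs
# multiplies expected airproxes by (1 + p/100)**beta. The value 1.83
# below is the elasticity reported in the chapter.
beta_log_atm = 1.83

def pct_change_in_mean(pct_change_in_atm, beta):
    """Percentage change in expected counts for a given % change in ATMs."""
    factor = (1.0 + pct_change_in_atm / 100.0) ** beta
    return (factor - 1.0) * 100.0

print(round(pct_change_in_mean(1.0, beta_log_atm), 2))  # close to 1.83
```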
Practical applications – The methodology developed in this chapter is applicable to many time-series processes of accident counts. The models recommended in this chapter could be used to identify different time-varying factors and to evaluate the effectiveness of various policy and engineering interventions on transport safety or similar data (e.g., crimes).
Originality/value of paper – The GLARMA model has not been properly explored in modelling time-series safety data. This new class of model has been applied to a dataset in evaluating the effectiveness of an intervention. The model recommended in this chapter would greatly benefit researchers and analysts working with time-series data.
Farouk Metiri, Halim Zeghdoudi and Ahmed Saadoun
Abstract
Purpose
This paper generalizes the quadratic framework introduced by Le Courtois (2016) and Sumpf (2018) to obtain new credibility premiums in the balanced case, i.e. under the balanced squared error loss function. More precisely, the authors construct a quadratic credibility framework under the net quadratic loss function in which premiums are estimated from the values of past observations and of past squared observations, under both the parametric and the non-parametric approaches. This framework is useful for the practitioner who wants to take explicit account of higher-order (cross) moments of past data.
Design/methodology/approach
In the actuarial field, credibility theory is an empirical model used to calculate premiums. One of the crucial tasks of the actuary in an insurance company is to design a tariff structure that will fairly distribute the burden of claims among insureds. In this work, the authors use the weighted balanced loss function (henceforth WBLF) to obtain new credibility premiums; the WBLF is a generalized loss function introduced by Zellner (1994) (see Gupta and Berger (1994), pp. 371-390) which also appears in Dey et al. (1999) and Farsipour and Asgharzadhe (2004).
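For context, the classical linear Bühlmann credibility premium that such quadratic frameworks generalize can be sketched as follows; all claim values and parameters are hypothetical:

```python
# Classical Buhlmann credibility premium (the linear case that quadratic
# credibility frameworks generalize). Data and parameters are illustrative.
past_claims = [120.0, 95.0, 130.0, 110.0]   # hypothetical past observations
collective_mean = 100.0                      # portfolio-wide mean premium
k = 2.0                                      # ratio of variance components

n = len(past_claims)
z = n / (n + k)                              # credibility factor in [0, 1)
individual_mean = sum(past_claims) / n
premium = z * individual_mean + (1 - z) * collective_mean
print(round(premium, 2))
```

The premium is a weighted average: the more (or less variable) the past data, the more weight falls on the insured's own experience rather than the collective mean.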
Research limitations/implications
This work is motivated by the following: the quadratic credibility premium under the balanced loss function is useful for the practitioner who wants to take explicit account of higher-order (cross) moments and of new effects, such as the clustering effect, to find a premium that is more credible and more precise, which benefits both parties: the insurer and the insured. It is also easy to apply under both parametric and non-parametric approaches. In addition, the formulas for the parametric (Poisson–gamma) case and the non-parametric approach are simple in form and may be used to find a more flexible premium in many special cases. On the other hand, this work does not treat the semi-parametric approach because it is rarely used by practitioners.
Practical implications
Several illustrative examples from actuarial science (credibility) are provided.
Originality/value
In this paper, the authors use the WBLF and a quadratic adjustment to obtain new credibility premiums. More precisely, they construct a quadratic credibility framework under the net quadratic loss function in which premiums are estimated from the values of past observations and of past squared observations, under both the parametric and the non-parametric approaches. This framework is useful for the practitioner who wants to take explicit account of higher-order (cross) moments of past data.
Luiz Paulo Lopes Fávero, Marco Aurélio dos Santos and Ricardo Goulart Serra
Abstract
Purpose
Branching is not the only way for foreign banks to enter a national market, and it is impractical when there are informational and cultural barriers and asymmetries among countries. The purpose of this paper is to analyze the determinants of cross-border branching in the Latin American banking sector, a region with regulatory disparity and political and economic instability, offering elements for a grounded strategic decision.
Design/methodology/approach
This study uses data from six Latin American countries. To account for the preponderance of zero counts, classes of zero-inflated models are applied (Poisson, negative binomial, and mixed). Model fit indicators obtained from differences between observed and estimated counts are used for comparisons, considering branches in each region established by banks from every other foreign region of the sample.
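A zero-inflated Poisson mixes a point mass at zero with an ordinary Poisson count, which is what lets these models absorb the preponderance of zeros. A minimal sketch of its probability mass function, with assumed parameter values:

```python
import math

# Zero-inflated Poisson pmf (illustrative parameters): with probability pi
# the count is a "structural" zero; otherwise it is drawn from Poisson(lam).
pi, lam = 0.4, 2.5   # assumed inflation probability and Poisson mean

def zip_pmf(k, pi, lam):
    poisson_k = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson_k   # inflated mass at zero
    return (1 - pi) * poisson_k

print(round(zip_pmf(0, pi, lam), 4))  # well above the plain Poisson P(0)
```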
Findings
Branching by foreign banks is positively correlated with the population, GDP per capita, household disposable income, and economic freedom score of the host country. The opposite holds for the unemployment rate and entry regulations of the host country.
Originality/value
Few papers address cross-border banking in emerging economies. This paper analyzes cross-border branching in Latin America in the context of the current financial integration and bank strategy. Econometrically, its pioneering design allows modeling of inflation of zeros, over-dispersion, and the multilevel data structure. This design allowed testing of a novel country-level variable: the host country’s economic freedom score.
Dominique Lord and Srinivas Reddy Geedipally
Abstract
Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.
Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.
Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.
Research implications – Multi-parameter models are expected to become the next generation of commonly used distributions and models. Research on these models is still ongoing.
Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.
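The longer tail that motivates these models is easy to see by comparing a Poisson with a negative binomial of the same mean: the extra dispersion parameter shifts probability mass outward. The parameter values below are illustrative:

```python
import math

# Negative binomial as an over-dispersed Poisson (illustrative values):
# same mean mu, but variance mu + mu**2 / r, producing a longer tail.
mu, r = 2.0, 0.8   # mean and NB shape (small r = heavier tail), assumed

def poisson_pmf(k, mu):
    return math.exp(-mu) * mu ** k / math.factorial(k)

def nb_pmf(k, mu, r):
    p = r / (r + mu)
    coef = math.gamma(k + r) / (math.gamma(r) * math.factorial(k))
    return coef * p ** r * (1 - p) ** k

tail_pois = sum(poisson_pmf(k, mu) for k in range(10, 60))
tail_nb = sum(nb_pmf(k, mu, r) for k in range(10, 60))
print(tail_nb > tail_pois)  # NB places more mass in the tail
```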
Robson Braga, Luiz Paulo Lopes Fávero and Renata Turola Takamatsu
Abstract
Purpose
The purpose of this paper is to evaluate investor behaviour related to the timing of selling financial assets based on an intuitive evaluation of the current market trend and growth expectation.
Design/methodology/approach
The experiment involved 1,052 volunteer participants who made decisions about stock sales in an environment simulating a home broker platform for trading stocks. Zero-inflated regression models were used.
Findings
The results show that investors’ attitudes, or beliefs, determine whether they will buy or keep risky assets in their investment portfolios; they may decide to sell such assets even when the market shows an upward trend. These results make a new contribution to behavioural finance within the context of prospect theory and the disposition effect.
Originality/value
The originality of this paper lies in the use of innovative techniques (zero-inflated Poisson and negative binomial regression models) applied to real data obtained experimentally.
Abstract
Successful strategies for maintenance require good decisions, and we commonly use stochastic reliability models to help this process. These models involve unknown parameters, so we gather data to learn about these parameters. However, such data are often difficult to collect for maintenance applications, leading to poor parameter estimates and incorrect decisions. A subjective modelling approach can resolve this problem, but requires us to specify suitable prior distributions for the unknown parameters. This paper considers which priors to adopt for common maintenance models and describes the method of predictive elicitation for determining unknown hyperparameters associated with these prior distributions. We discuss the computational difficulties of this approach and consider numerical methods for resolving this problem. Finally, we present practical demonstrations to illustrate the potential benefits of predictive elicitation and subjective analysis. This work provides a major step forward in making the methods of subjective Bayesian inference available to maintenance decision makers in practice.
Practical implications
This paper recommends powerful strategies for expressing subjective knowledge about unknown model parameters, in the context of maintenance applications that involve making decisions.
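A minimal example of the kind of prior-posterior computation involved: a conjugate Gamma prior on the failure rate of an exponential lifetime model, with hypothetical hyperparameters and data. Predictive elicitation is concerned precisely with choosing hyperparameters like `a` and `b` below.

```python
# Conjugate Bayesian update for a constant failure rate (illustrative
# numbers): a Gamma(a, b) prior on the rate of an exponential lifetime
# model, updated with observed failure times.
a, b = 2.0, 100.0                    # assumed prior: mean rate a/b = 0.02
failure_times = [45.0, 80.0, 62.0]   # hypothetical observed lifetimes (hours)

a_post = a + len(failure_times)      # shape gains one per observed failure
b_post = b + sum(failure_times)      # rate gains the total exposure time
posterior_mean_rate = a_post / b_post
print(round(posterior_mean_rate, 4))
```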
Abstract
With the rise of employer-promulgated mandatory employment arbitration, scholars have become concerned that these policies may reduce the economic viability of lower value employment claims. Of particular worry are claims made under the Fair Labor Standards Act, since the FLSA does not include punitive damages. This study empirically tests the relationship between 368 Fortune 1000 companies’ employment arbitration policies and their wage and hour violations discovered during Department of Labor inspections. Surprisingly, firms that used arbitration were found to have fewer violations and lower back wages for those violations compared with firms that did not use arbitration. This suggests that viewing arbitration merely as a cost-reduction tool may cast the practice too narrowly; instead, it may be part of a larger conflict management system that seeks to address conflict at the earliest possible stage.
Simon Washington, Amir Pooyan Afghari and Mohammed Mazharul Haque
Abstract
Purpose – The purpose of this chapter is to review the methodological and empirical underpinnings of transport network screening, or management, as it relates to improving road safety. As jurisdictions around the world are charged with transport network management in order to reduce externalities associated with road crashes, identifying potential blackspots or hotspots is an important if not critical function and responsibility of transport agencies.
Methodology – Key references from the literature are summarised, along with a discussion of the evolution of thinking around hotspot identification and management. The theoretical developments that correspond with this evolution in thinking are provided, sprinkled with examples along the way.
Findings – Hotspot identification methodologies have evolved considerably over the past 30 or so years, correcting for methodological deficiencies along the way. Despite vast and significant advancements, identifying hotspots remains a reactive approach to managing road safety – relying on crashes to accrue in order to mitigate their occurrence. The most fruitful directions for future research will be in the establishment of reliable relationships between surrogate measures of road safety (such as ‘near misses’) and actual crashes, so that safety can be proactively managed without the need for crashes to accrue.
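The empirical Bayes (EB) shrinkage estimate that underpins much state-of-practice hotspot screening can be sketched as follows; the counts and dispersion value below are hypothetical:

```python
# Empirical Bayes (EB) safety estimate, a standard hotspot-screening tool
# (illustrative numbers): shrink a site's observed crash count toward the
# count predicted by a safety performance function (SPF), which corrects
# the regression-to-the-mean bias of ranking sites on raw counts.
observed = 9.0        # crashes recorded at the site (hypothetical)
predicted = 4.0       # SPF prediction for similar sites (hypothetical)
phi = 2.0             # NB overdispersion parameter of the SPF (assumed)

w = 1.0 / (1.0 + predicted / phi)          # weight on the SPF prediction
eb_estimate = w * predicted + (1 - w) * observed
print(round(eb_estimate, 2))  # lies between predicted and observed
```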
Research implications – Research in hotspot identification will continue; however, it is likely to shift over time to both closer to ‘real-time’ crash risk detection and considering safety improvements using surrogate measures of road safety – described in Chapter 17.
Practical implications – There are two types of errors made in hotspot detection – identifying a ‘risky’ site as ‘safe’ and identifying a ‘safe’ site as ‘risky’. In the former case no investments will be made to improve safety, while in the latter case ineffective or inefficient safety improvements could be made. To minimise these errors, transport network safety managers should be applying the current state of the practice methods for hotspot detection. Moreover, transport network safety managers should be eager to transition to proactive methods of network safety management to avoid the need for crashes to occur. While in its infancy, the use of surrogate measures of safety holds significant promise for the future.