Search results

1 – 10 of over 1000
Article

Younès Mouatassim

Abstract

Purpose

The purpose of this paper is to introduce the zero‐modified distributions in the calculation of operational value‐at‐risk.

Design/methodology/approach

This kind of distribution is preferred when an excess of zeroes is observed. In operational risk, this phenomenon may be due to the scarcity of data, the existence of extreme values and/or the threshold above which banks start to collect losses. The paper focuses on the analysis of damage to physical assets.

Findings

The results show that the basic Poisson distribution underestimates the dispersion, which in turn leads to an underestimation of the capital charge. Zero‐modified Poisson distributions, however, model the frequency well. The basic negative binomial and its related zero‐modified distributions likewise offer a good prediction of count events. To choose the distribution that best fits the frequency, the paper uses Vuong's test. Its results indicate that the zero‐modified Poisson distributions, the basic negative binomial and its related zero‐modified distributions are equivalent. This conclusion is confirmed by the calculated capital charges, since the differences among the six aggregations are not significant except for that of the basic Poisson distribution.
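
A rough sketch of the kind of comparison described: fitting a basic Poisson and a zero-inflated Poisson (one common zero-modified form) to loss counts, then computing a Vuong-style statistic from pointwise log-likelihoods. The data and all parameter values below are simulated for illustration, not taken from the paper.

```python
# A rough sketch, assuming simulated monthly loss counts with excess zeros;
# none of the numbers come from the paper.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
counts = np.where(rng.random(120) < 0.4, 0, rng.poisson(3.0, 120))

# Basic Poisson: the MLE of the rate is the sample mean.
lam_hat = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam_hat)

# Zero-inflated Poisson: point mass at zero (weight pi) mixed with Poisson(lam).
def zip_negll(params):
    pi, lam = params
    p_zero = pi + (1 - pi) * np.exp(-lam)              # P(X = 0)
    p_pos = (1 - pi) * stats.poisson.pmf(counts, lam)  # P(X = k) at the data
    return -np.log(np.where(counts == 0, p_zero, p_pos)).sum()

res = minimize(zip_negll, x0=[0.3, 2.0],
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
pi_hat, lam_zip = res.x

# Pointwise ZIP log-likelihoods, then a Vuong-style statistic for the
# Poisson-vs-ZIP comparison.
p_zero = pi_hat + (1 - pi_hat) * np.exp(-lam_zip)
ll_zip = np.log(np.where(counts == 0, p_zero,
                         (1 - pi_hat) * stats.poisson.pmf(counts, lam_zip)))
m = ll_zip - ll_pois
vuong = np.sqrt(len(m)) * m.mean() / m.std(ddof=1)
print(f"loglik Poisson {ll_pois.sum():.1f}, ZIP {ll_zip.sum():.1f}, Vuong z = {vuong:.2f}")
```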

Originality/value

Zero‐modified formulations have recently come into wide use in many fields because of the low frequency of the events studied. This article aims to describe the frequency of operational risk using zero‐modified distributions.

Book part

Virginia M. Miori

Abstract

The challenge of truckload routing is increased in complexity by the introduction of stochastic demand. Typically, this demand is generalized to follow a Poisson distribution. In this chapter, we cluster the demand data using data mining techniques to establish the more acceptable distribution to predict demand. We then examine this stochastic truckload demand using an econometric discrete choice model known as a count data model. Using actual truckload demand data and data from the Bureau of Transportation Statistics, we perform count data regressions. Each regression run produces two outcomes: the predicted demand between every origin and destination, and the likelihood that that demand will occur. Together these allow us to generate an expected-value forecast of truckload demand as input to a truckload routing formulation. The negative binomial distribution produces an improved forecast over the Poisson distribution.
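
The count-data regression step can be sketched as follows. The predictors (distance, population) and the simulated lane demand are hypothetical stand-ins, not the chapter's actual variables or data.

```python
# A minimal sketch, assuming simulated origin-destination data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "distance": rng.uniform(50, 800, n),     # lane length, miles (hypothetical)
    "population": rng.uniform(0.1, 5.0, n),  # destination market size, millions
})
mu = np.exp(0.5 + 0.4 * df["population"].to_numpy()
            - 0.001 * df["distance"].to_numpy())
df["loads"] = rng.negative_binomial(2, 2 / (2 + mu))  # overdispersed counts

X = sm.add_constant(df[["distance", "population"]])
pois = sm.Poisson(df["loads"], X).fit(disp=False)
nb = sm.NegativeBinomial(df["loads"], X).fit(disp=False)

# The fitted means are the expected lane-by-lane demand that would feed a
# routing formulation; AIC shows whether the NB improves on the Poisson.
print(f"Poisson AIC {pois.aic:.1f}, NegBin AIC {nb.aic:.1f}")
expected_demand = nb.predict(X)
```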

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-84855-548-8

Article

JEAN TAGUE and ISOLA AJIFERUKE

Abstract

Two dynamic models of library circulation, the Markov model originally proposed by Morse and the mixed Poisson model proposed by Burrell and Cane, are applied to a large eleven‐year university circulation data set. Goodness of fit tests indicate that neither model fits the data. In both cases, the set of non‐circulating items is larger than that predicted by the model.
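
A minimal sketch of the kind of goodness-of-fit check reported here, testing a simple Poisson model against a circulation frequency table. The table below is invented for illustration; the actual study used eleven years of university circulation data.

```python
# A minimal sketch, assuming a hypothetical table of items borrowed k times.
import numpy as np
from scipy import stats

ks = np.arange(6)                                   # 0..4 circulations, then "5+"
observed = np.array([1400, 500, 300, 180, 120, 100])
total = observed.sum()

# Crude rate estimate that treats the open "5+" class as exactly 5.
lam = (ks * observed).sum() / total

expected = total * stats.poisson.pmf(ks, lam)
expected[-1] += total * stats.poisson.sf(ks[-1], lam)  # fold the tail into "5+"

# One parameter estimated from the data, hence ddof=1.
chi2, p = stats.chisquare(observed, expected, ddof=1)
print(f"lambda = {lam:.2f}, chi2 = {chi2:.1f}, p = {p:.4f}")
print(f"zero class: observed {observed[0]}, Poisson expects {expected[0]:.0f}")
```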

Details

Journal of Documentation, vol. 43 no. 3
Type: Research Article
ISSN: 0022-0418

Book part

Dominique Lord and Srinivas Reddy Geedipally

Abstract

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.

Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.

Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.

Research implications – Multi-parameter models are expected to become the next series of traditional distributions and models. The research on these models is still ongoing.

Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.
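
The general workflow of comparing a standard negative binomial against a model built for excess zeros can be sketched as below. statsmodels does not implement the specific multi-parameter distributions the chapter reviews, so its zero-inflated negative binomial is used as a stand-in, and "aadt" is a hypothetical exposure variable; the crash counts are simulated.

```python
# A minimal sketch, assuming simulated crash counts with excess zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(2)
n = 1000
aadt = rng.uniform(1, 30, n)                 # traffic volume, 1000s of vehicles/day
mu = np.exp(-1.0 + 0.08 * aadt)
crashes = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))
crashes[rng.random(n) < 0.3] = 0             # inject excess zeros

X = sm.add_constant(aadt)
nb = sm.NegativeBinomial(crashes, X).fit(disp=False)
zinb = ZeroInflatedNegativeBinomialP(
    crashes, X, exog_infl=np.ones((n, 1))    # constant-only inflation part
).fit(disp=False, maxiter=500)

print(f"NB AIC {nb.aic:.1f}, ZINB AIC {zinb.aic:.1f}")
```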

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Article

Stephen J. Bensman

Abstract

Purpose

The purpose of this article is to analyze the historical significance of Donald J. Urquhart, who established the National Lending Library for Science and Technology (NLL) that later was merged into the British Library Lending Division (BLLD), now called the British Library Document Supply Centre (BLDSC).

Design/methodology/approach

The paper presents a short history of the probabilistic revolution, particularly as it developed in the UK in the form of biometric statistics due to Darwin's theory of evolution. It focuses on the overthrow of the normal paradigm, according to which frequency distributions in nature and society conform to the normal law of error. The paper discusses the importance of the Poisson distribution and its utilization in the construction of stochastic models that better describe reality. Here the focus is on the compound Poisson distribution in the form of the negative binomial distribution (NBD). The paper then shows how Urquhart extended the probabilistic revolution to librarianship by using the Poisson as the probabilistic model in his analyses of the 1956 external loans made by the Science Museum Library (SML) as well as in his management of the scientific and technical (sci/tech) journal collection of the NLL. Thanks to this, Urquhart can be considered as playing a pivotal role in the creation of bibliometrics or the statistical bases of modern library and information science. The paper relates how Urquhart's son and daughter‐in‐law, John A. and Norma C. Urquhart, completed Urquhart's probabilistic breakthrough by advancing for the first time the NBD as the model for library use in a study executed at the University of Newcastle upon Tyne, connecting bibliometrics with biometrics. It concludes with a discussion of Urquhart's Law and its probabilistic implications for the use of sci/tech journals in a library system.

Findings

By being the first librarian to apply probability to the analysis of sci/tech journal use, Urquhart was instrumental in the creation of modern library and information science. His findings force a probabilistic re‐conceptualization of sci/tech journal use in a library system that has great implications for the transition of sci/tech journals from locally held paper copies to shared electronic databases.

Originality/value

Urquhart's significance is considered from the perspective of the development of science as a whole as well as library and information science in particular.

Details

Interlending & Document Supply, vol. 35 no. 2
Type: Research Article
ISSN: 0264-1615

Article

A. BAGUST

Abstract

Starting from a basis laid by Burrell, this paper develops a stochastic model of library borrowing using the Negative Binomial distribution. This shows an improvement over previous characterizations for academic libraries and accords well with new data obtained at Huddersfield Public Library. Evidence concerning the process of issue decay is presented and employed to obtain an optimum stock turnover rate for any collection in its ‘steady state’. A method is then given by which simple relegation tests can be constructed to maintain such an optimum turnover. Although not the ‘final word’ in circulation modelling, the negative binomial distribution extends the range of model applicability into the area of high volume, rapid movement collections with some success.
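
A minimal sketch of fitting a negative binomial distribution to per-item borrowing counts by the method of moments, in the spirit of the model described above. The counts are simulated; the Huddersfield data are not reproduced here.

```python
# A minimal sketch, assuming simulated issue counts per item per year.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
issues = rng.negative_binomial(1.2, 0.35, 5000)

mean, var = issues.mean(), issues.var(ddof=1)
# For NB(r, p): mean = r(1-p)/p and var = r(1-p)/p^2, so
# p = mean/var and r = mean^2/(var - mean), valid when var > mean.
p_hat = mean / var
r_hat = mean**2 / (var - mean)

print("k  observed  fitted")
for k in range(8):
    print(f"{k}  {(issues == k).mean():.3f}     {stats.nbinom.pmf(k, r_hat, p_hat):.3f}")
```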

Details

Journal of Documentation, vol. 39 no. 1
Type: Research Article
ISSN: 0022-0418

Article

P.R. BIRD

Abstract

Most documentation systems allocate a variable number of descriptors to their documents. From a consideration of indexing as a stochastic process it is suggested that the distribution of indexing depth in such a system might represent samples of a (truncated) mixed Poisson process. Examination of five different systems showed that indexing depth does appear to be distributed in this manner, since a reasonable fit to negative binomial distributions can be made statistically. Factors in the art of indexing which influence the distribution are discussed. As a first approximation the distribution of indexing depth, i, of a system, or of any subset of descriptors in it, is simple Poisson, p(i) = e^{-m} m^i / i!, where m is the average depth of indexing. The results contradict previous reports that a log‐normal distribution of indexing depth is to be expected.
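
The quoted first approximation can be evaluated directly. The value m = 4.2 below is a hypothetical average indexing depth, not a figure from the study.

```python
# Evaluating the simple Poisson model for indexing depth quoted above.
import math

def p_depth(i: int, m: float) -> float:
    """P(depth = i) = e^(-m) * m^i / i! for average indexing depth m."""
    return math.exp(-m) * m**i / math.factorial(i)

m = 4.2   # hypothetical average number of descriptors per document
for i in range(9):
    print(f"i = {i}: p = {p_depth(i, m):.3f}")
```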

Details

Journal of Documentation, vol. 30 no. 4
Type: Research Article
ISSN: 0022-0418

Article

M. Xie and T.N. Goh

Abstract

Control charts based on the geometric distribution have been shown to be useful when it is a better approximation of the underlying distribution than the Poisson distribution. The traditional c‐chart, if used, will cause too many false alarms. It is noted that for the geometric distribution, control limits set at k times the standard deviation, as used previously, will cause frequent false alarms and cannot yield any reasonable lower control limit. Studies the use of probability limits to resolve these problems. Also discusses the use of the geometric distribution for process control of high‐yield processes.
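
The contrast between k-sigma limits and probability limits can be illustrated numerically. The value of p below is a hypothetical in-control defect probability, not a figure from the article.

```python
# k-sigma versus probability limits for a geometric count (e.g. items
# inspected up to and including a defect in a high-yield process).
from scipy import stats

p = 0.001                        # chance any one item is nonconforming
geom = stats.geom(p)             # support 1, 2, 3, ...
mean, sd = geom.mean(), geom.std()

# k-sigma limits: the lower limit is negative, hence no usable LCL.
print(f"3-sigma limits: ({mean - 3 * sd:.0f}, {mean + 3 * sd:.0f})")

# Probability limits: cut equal tail areas for a chosen false-alarm rate.
alpha = 0.0027                   # nominal rate matching 3-sigma on a normal
print(f"probability limits: ({geom.ppf(alpha / 2):.0f}, {geom.ppf(1 - alpha / 2):.0f})")
```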

Details

International Journal of Quality & Reliability Management, vol. 14 no. 1
Type: Research Article
ISSN: 0265-671X

Article

Reza Ghazal and Muhamed Zulkhibri

Abstract

Purpose

The purpose of this paper is to examine the determinants of innovation outputs proxied by number of patent applications, trademarks and industrial designs in developing countries.

Design/methodology/approach

The paper employs a panel data and Negative Binomial method to analyse the main determinants affecting the innovation outputs.
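
A minimal sketch of a negative binomial regression on panel-style count data, with country dummies as a crude fixed-effects device. The variables "fdi" and "governance" are hypothetical stand-ins for the paper's determinants, and the data are simulated.

```python
# A minimal sketch, assuming simulated panel-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_countries, n_years = 30, 10
n = n_countries * n_years
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "fdi": rng.uniform(0, 10, n),          # FDI inflows, % of GDP (hypothetical)
    "governance": rng.normal(0, 1, n),     # composite governance index
})
mu = np.exp(1.0 + 0.10 * df["fdi"] + 0.30 * df["governance"]).to_numpy()
df["patents"] = rng.negative_binomial(2, 2 / (2 + mu))

result = smf.negativebinomial(
    "patents ~ fdi + governance + C(country)", data=df
).fit(disp=False, maxiter=500)
print(result.params[["fdi", "governance"]])
```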

Findings

The results implicitly suggest that providing a fertile ground to attract more foreign direct investment (FDI) can lead to much better innovation outputs. The study also strongly supports the role of institutions and governance for increasing innovation activities in developing economies as indicated by positive impacts of governance factors in the model. However, the impact of economic freedom indicators on improving innovation outputs is mixed.

Originality/value

This paper contributes to the existing literature in two ways: it examines the effect of FDI and research and development on innovation in selected developing countries; and it uses a panel data approach to increase the accuracy of the results by exploiting the significant variation of innovation outputs across countries, while controlling for a larger number of innovation output and product determinants. To the authors' knowledge, this is the first empirical study of the behaviour of innovation outputs for developing countries.

Details

Journal of Economic Studies, vol. 42 no. 2
Type: Research Article
ISSN: 0144-3585

Article

Wael Hemrit and Mounira Ben Arab

Abstract

Purpose

The purpose of this paper is to examine the determinants of operational losses in insurance companies.

Design/methodology/approach

By using the most common estimates of the frequency and severity of losses that affected business lines during 2009, the paper integrates a quantitative aspect that reflects the mode of organization of the insurance company. The focus is on the frequency and severity of losses, as estimated by insurers, for each category of operational risk event that took place in 2009.

Findings

The paper finds that the frequency of operational losses is positively related to the Market Share (MARKSHARE) and the Rate of Geographic Location (RAGELOC). However, the occurrence of loss is negatively related to the Variety of Insurance Activities (VARIACT). The paper also found a decrease in the frequency of losses associated with a large number of employees. Therefore, there is a significant relationship between the Human Factor (HF) and the occurrence of operational losses. In terms of severity, the empirical study has shown that the probability of zero intensity of operational losses is negatively influenced by the Market Share (MARKSHARE) and the Rate of Geographic Location (RAGELOC). In the same framework, the Variety of Insurance Activities (VARIACT) has a negative effect on the probability of high operational loss severity.

Originality/value

Despite the absence of quantitative operational risk data, this article opens a new research perspective for estimating the frequency and severity of operational losses in the insurance sector in Tunisia.

Details

The Journal of Risk Finance, vol. 13 no. 5
Type: Research Article
ISSN: 1526-5943
