Search results

1 – 10 of 499
Article
Publication date: 2 November 2012

Younès Mouatassim

The purpose of this paper is to introduce the zero‐modified distributions in the calculation of operational value‐at‐risk.

Abstract

Purpose

The purpose of this paper is to introduce the zero‐modified distributions in the calculation of operational value‐at‐risk.

Design/methodology/approach

This kind of distribution is preferred when an excess of zeroes is observed. In operational risk, this phenomenon may be due to the scarcity of data, the existence of extreme values and/or the threshold from which banks start to collect losses. The paper focuses on the analysis of damage to physical assets.

Findings

The results show that the basic Poisson distribution underestimates the dispersion and therefore leads to an underestimated capital charge. Zero‐modified Poisson distributions, by contrast, model the frequency well. In addition, the basic negative binomial and its related zero‐modified distributions, in their turn, offer a good prediction of count events. To choose the distribution that best fits the frequency, the paper uses Vuong's test. Its results indicate that the zero‐modified Poisson distributions, the basic negative binomial and its related zero‐modified distributions are equivalent. This conclusion is confirmed by the calculated capital charges, since the differences between the six aggregations are not significant, except for that of the basic Poisson distribution.
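
As an illustration of this kind of comparison, here is a minimal sketch (not the authors' code) that fits a basic Poisson and a zero-inflated Poisson to loss-frequency counts by maximum likelihood and compares them by AIC; the counts are synthetic stand-ins for operational loss frequencies.

```python
# Minimal sketch (not the authors' code): fit a basic Poisson and a
# zero-inflated Poisson (ZIP) to loss-frequency counts by maximum
# likelihood, then compare AICs. The counts are synthetic stand-ins.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
# Synthetic frequencies with excess zeros: structural zeros mixed with Poisson(3)
counts = np.where(rng.random(500) < 0.4, 0, rng.poisson(3.0, 500))

# Basic Poisson: the MLE of the rate is the sample mean
lam_hat = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam_hat).sum()

def zip_negll(params):
    pi, lam = params                       # pi = probability of a structural zero
    p_zero = pi + (1 - pi) * np.exp(-lam)  # total P(X = 0) under the ZIP
    logp = np.where(counts == 0,
                    np.log(p_zero),
                    np.log(1 - pi) + stats.poisson.logpmf(counts, lam))
    return -logp.sum()

res = optimize.minimize(zip_negll, x0=[0.3, 2.0],
                        bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
print("AIC Poisson:", 2 * 1 - 2 * ll_pois)
print("AIC ZIP:    ", 2 * 2 + 2 * res.fun)  # ZIP should win on zero-heavy data
```

The same per-observation log-likelihood contributions could feed a Vuong-style test of the kind the paper applies.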

Originality/value

Zero‐modified formulations have recently become widely used in many fields because of the low frequency of the events studied. This article describes the frequency of operational risk losses using zero‐modified distributions.

Article
Publication date: 5 June 2007

Stephen J. Bensman

The purpose of this article is to analyze the historical significance of Donald J. Urquhart, who established the National Lending Library for Science and Technology (NLL) that…

Abstract

Purpose

The purpose of this article is to analyze the historical significance of Donald J. Urquhart, who established the National Lending Library for Science and Technology (NLL) that later was merged into the British Library Lending Division (BLLD), now called the British Library Document Supply Centre (BLDSC).

Design/methodology/approach

The paper presents a short history of the probabilistic revolution, particularly as it developed in the UK in the form of biometric statistics arising from Darwin's theory of evolution. It focuses on the overthrow of the normal paradigm, according to which frequency distributions in nature and society conform to the normal law of error. The paper discusses the importance of the Poisson distribution and its utilization in the construction of stochastic models that better describe reality. Here the focus is on the compound Poisson distribution in the form of the negative binomial distribution (NBD). The paper then shows how Urquhart extended the probabilistic revolution to librarianship by using the Poisson as the probabilistic model in his analyses of the 1956 external loans made by the Science Museum Library (SML) as well as in his management of the scientific and technical (sci/tech) journal collection of the NLL. Thanks to this, Urquhart can be considered to have played a pivotal role in the creation of bibliometrics, or the statistical bases of modern library and information science. The paper relates how Urquhart's son and daughter‐in‐law, John A. and Norma C. Urquhart, completed Urquhart's probabilistic breakthrough by advancing for the first time the NBD as the model for library use in a study executed at the University of Newcastle upon Tyne, connecting bibliometrics with biometrics. It concludes with a discussion of Urquhart's Law and its probabilistic implications for the use of sci/tech journals in a library system.
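
The compound model behind the NBD can be checked numerically: if each user's loan rate is gamma-distributed and loans given the rate are Poisson, the marginal counts are negative binomial. The sketch below is a synthetic demonstration of that identity, not Urquhart's data.

```python
# Synthetic check of the compound model behind the NBD (not Urquhart's data):
# if each user's rate lambda is Gamma-distributed and counts given lambda are
# Poisson, the marginal count distribution is negative binomial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
r, p = 2.0, 0.4                                  # target NB parameters
lam = rng.gamma(shape=r, scale=(1 - p) / p, size=200_000)
counts = rng.poisson(lam)                        # gamma-mixed Poisson draws

ks = np.arange(6)
emp = np.array([(counts == k).mean() for k in ks])
nb = stats.nbinom.pmf(ks, r, p)
for k, e, t in zip(ks, emp, nb):
    print(f"k={k}: empirical {e:.4f} vs negative binomial {t:.4f}")
```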

Findings

By being the first librarian to apply probability to the analysis of sci/tech journal use, Urquhart was instrumental in the creation of modern library and information science. His findings force a probabilistic re‐conceptualization of sci/tech journal use in a library system that has great implications for the transition of sci/tech journals from locally held paper copies to shared electronic databases.

Originality/value

Urquhart's significance is considered from the perspective of the development of science as a whole as well as library and information science in particular.

Details

Interlending & Document Supply, vol. 35 no. 2
Type: Research Article
ISSN: 0264-1615

Book part
Publication date: 17 January 2009

Virginia M. Miori

The challenge of truckload routing is increased in complexity by the introduction of stochastic demand. Typically, this demand is generalized to follow a Poisson distribution. In…

Abstract

The challenge of truckload routing is increased in complexity by the introduction of stochastic demand. Typically, this demand is assumed to follow a Poisson distribution. In this chapter, we cluster the demand data using data mining techniques to establish a more suitable distribution for predicting demand. We then examine this stochastic truckload demand using an econometric discrete choice model known as a count data model. Using actual truckload demand data and data from the Bureau of Transportation Statistics, we perform count data regressions. Every regression run produces two outcomes: the predicted demand between every origin and destination, and the likelihood that this demand will occur. Together, these allow us to generate an expected-value forecast of truckload demand as input to a truckload routing formulation. The negative binomial distribution produces an improved forecast over the Poisson distribution.
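
A hedged sketch of the count-data regressions described above: the covariate (lane distance) and all numbers are invented stand-ins for the chapter's actual truckload demand data.

```python
# Hedged sketch of the count-data regressions described above. The covariate
# (lane distance) and all numbers are invented; the chapter uses actual
# truckload demand data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
distance = rng.uniform(50, 500, n) / 100.0        # hypothetical lane covariate
X = sm.add_constant(distance)
mu = np.exp(0.5 + 0.3 * distance)                  # true mean demand per lane
demand = rng.negative_binomial(n=2, p=2 / (2 + mu))  # overdispersed counts

pois = sm.Poisson(demand, X).fit(disp=False)
nb = sm.NegativeBinomial(demand, X).fit(disp=False)
print("Poisson AIC:", pois.aic, " NB AIC:", nb.aic)  # NB should fit better
```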

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-84855-548-8

Article
Publication date: 1 April 1974

P.R. BIRD

Most documentation systems allocate a variable number of descriptors to their documents. From a consideration of indexing as a stochastic process it is suggested that the…

Abstract

Most documentation systems allocate a variable number of descriptors to their documents. From a consideration of indexing as a stochastic process it is suggested that the distribution of indexing depth in such a system might represent samples of a (truncated) mixed Poisson process. Examination of five different systems showed that indexing depth does appear to be distributed in this manner, since a reasonable fit to negative binomial distributions can be made statistically. Factors in the art of indexing which influence the distribution are discussed. As a first approximation the distribution of indexing depth, i, of a system, or of any subset of descriptors in it, is simple Poisson, p(i) = e^{-m} m^i / i!, where m is the average depth of indexing. The results contradict previous reports that a log‐normal distribution of indexing depth is to be expected.
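
As a concrete illustration of this first approximation, the sketch below evaluates the simple Poisson p(i) against a method-of-moments negative binomial fit on an invented frequency table (not Bird's data); truncation is ignored for simplicity.

```python
# Illustrative evaluation of the first approximation above: simple Poisson
# p(i) = e^{-m} m^i / i! versus a method-of-moments negative binomial fit.
# The frequency table is invented (not Bird's data); truncation is ignored.
import numpy as np
from scipy import stats

depths = np.repeat(np.arange(1, 11),
                   [120, 80, 55, 38, 27, 19, 14, 10, 7, 5])
m = depths.mean()                        # average depth of indexing

ks = np.arange(1, 11)
pois = stats.poisson.pmf(ks, m)
# Method-of-moments NB fit (valid here because variance exceeds the mean)
p_hat = m / depths.var()
r_hat = m * p_hat / (1 - p_hat)
nb = stats.nbinom.pmf(ks, r_hat, p_hat)
print("depth  Poisson  NegBin")
for k, a, b in zip(ks, pois, nb):
    print(f"{k:5d}  {a:.4f}  {b:.4f}")
```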

Details

Journal of Documentation, vol. 30 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 1 March 1987

JEAN TAGUE and ISOLA AJIFERUKE

Two dynamic models of library circulation, the Markov model originally proposed by Morse and the mixed Poisson model proposed by Burrell and Cane, are applied to a large…

Abstract

Two dynamic models of library circulation, the Markov model originally proposed by Morse and the mixed Poisson model proposed by Burrell and Cane, are applied to a large eleven‐year university circulation data set. Goodness of fit tests indicate that neither model fits the data. In both cases, the set of non‐circulating items is larger than that predicted by the model.
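
The zero-class check behind this finding can be sketched as follows, with synthetic circulation counts in place of the authors' eleven-year data set: compare the observed share of non-circulating items with what a fitted simple Poisson predicts.

```python
# Sketch of the zero-class check behind this finding (synthetic circulation
# counts, not the authors' data): compare the observed share of
# non-circulating items with what a fitted simple Poisson predicts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Many never-circulating items plus a Poisson-active core
circ = np.concatenate([np.zeros(600, dtype=int), rng.poisson(2.5, 400)])

lam = circ.mean()
pred_zero = stats.poisson.pmf(0, lam)    # model-predicted P(no circulation)
obs_zero = (circ == 0).mean()
print(f"observed zero share {obs_zero:.3f} vs Poisson prediction {pred_zero:.3f}")
# The gap mirrors the paper: the zero class exceeds the model's prediction.
```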

Details

Journal of Documentation, vol. 43 no. 3
Type: Research Article
ISSN: 0022-0418

Book part
Publication date: 18 April 2018

Dominique Lord and Srinivas Reddy Geedipally

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems…

Abstract

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.

Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.

Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.

Research implications – Multi-parameter models are expected to become the next generation of standard distributions and models. Research on these models is still ongoing.

Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.
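
As an illustration of the kind of model this chapter motivates, here is a hedged sketch fitting a zero-inflated negative binomial against a plain negative binomial on synthetic crash-like counts; the covariate and parameter values are invented, not the chapter's examples.

```python
# Hedged sketch (not the chapter's code): fit a zero-inflated negative
# binomial (ZINB) against a plain NB on synthetic crash-like counts with
# structural zeros. The covariate and parameter values are invented.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)                     # hypothetical site covariate
X = sm.add_constant(x)
mu = np.exp(0.2 + 0.5 * x)
base = rng.negative_binomial(n=1.5, p=1.5 / (1.5 + mu))
crashes = np.where(rng.random(n) < 0.35, 0, base)   # add structural zeros

nb = sm.NegativeBinomial(crashes, X).fit(disp=False)
zinb = ZeroInflatedNegativeBinomialP(
    crashes, X, exog_infl=np.ones((n, 1))).fit(disp=False)
print("NB AIC:", nb.aic, " ZINB AIC:", zinb.aic)    # ZINB should win here
```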

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Article
Publication date: 11 February 2019

Nataliya Chukhrova and Arne Johannssen

In acceptance sampling, the hypergeometric operating characteristic (OC) function (the so-called type-A OC) is usually approximated by the binomial or Poisson OC function, which…

Abstract

Purpose

In acceptance sampling, the hypergeometric operating characteristic (OC) function (the so-called type-A OC) is usually approximated by the binomial or Poisson OC function, which reduces computational effort but does not provide sufficiently accurate approximation results. The purpose of this paper is to examine binomial- and Poisson-type approximations to the hypergeometric distribution, in order to find a simple but accurate approximation that can be successfully applied in acceptance sampling.

Design/methodology/approach

The authors present a new binomial-type approximation for the type-A OC function, and derive its properties. Further, the authors compare this approximation via an extensive numerical study with other common approximations in terms of variation distance and relative efficiency under various conditions on the parameters including limiting cases.
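
The baseline of this comparison (the exact type-A OC versus the textbook binomial and Poisson approximations, measured by total variation distance) can be sketched as follows; the authors' new approximation itself is not reproduced, and the lot parameters below are arbitrary choices.

```python
# Baseline comparison only (the paper's new approximation is not reproduced):
# exact type-A OC from the hypergeometric law versus textbook binomial and
# Poisson approximations. Lot parameters N, n, c below are arbitrary choices.
import numpy as np
from scipy import stats

N, n, c = 1000, 80, 2              # lot size, sample size, acceptance number
for p in (0.01, 0.02, 0.05):       # incoming fractions defective
    D = int(round(N * p))          # defectives in the lot
    oc_hyper = stats.hypergeom.cdf(c, N, D, n)   # exact type-A OC
    oc_binom = stats.binom.cdf(c, n, p)
    oc_pois = stats.poisson.cdf(c, n * p)
    ks = np.arange(n + 1)
    tv = 0.5 * np.abs(stats.hypergeom.pmf(ks, N, D, n)
                      - stats.binom.pmf(ks, n, p)).sum()
    print(f"p={p:.2f}: hyper {oc_hyper:.4f}  binom {oc_binom:.4f}  "
          f"Poisson {oc_pois:.4f}  TV-dist {tv:.4f}")
```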

Findings

The introduced approximation generates the best numerical results over a wide range of parameter values; it retains the arithmetic simplicity of the binomial distribution while providing the accuracy required by acceptance sampling problems. Additionally, it can considerably reduce the computational effort relative to the type-A OC function and is therefore strongly recommended for calculating sampling plans.

Originality/value

The newly presented approximation provides a remarkably close fit to the type-A OC function, is discrete and needs no continuity correction, and is skewed in the same direction and by roughly the same amount as the exact OC. Because it involves fewer factorials, it generally involves lower powers than the type-A OC function. Moreover, the binomial-type approximation is easy to implement in conventional statistical computing packages.

Details

International Journal of Quality & Reliability Management, vol. 36 no. 4
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 2 November 2012

Wael Hemrit and Mounira Ben Arab

The purpose of this paper is to examine the determinants of operational losses in insurance companies.

Abstract

Purpose

The purpose of this paper is to examine the determinants of operational losses in insurance companies.

Design/methodology/approach

Using the most common estimates of the frequency and severity of losses that affected business lines during 2009, the paper integrates a quantitative aspect that reflects the mode of organization in the insurance company. The analysis focuses on the frequency and severity of losses, as estimated by insurers, for each category of operational risk event that took place in 2009.

Findings

The paper finds that the frequency of operational losses is positively related to the Market Share (MARKSHARE) and the Rate of Geographic Location (RAGELOC), whereas the occurrence of loss is negatively related to the Variety of Insurance Activities (VARIACT). The paper also finds a decrease in the frequency of losses associated with a large number of employees; there is therefore a significant relationship between the Human Factor (HF) and the occurrence of operational losses. In terms of severity, the empirical study shows that the probability of zero intensity of operational losses is negatively influenced by the Market Share (MARKSHARE) and the Rate of Geographic Location (RAGELOC). In the same framework, the Variety of Insurance Activities (VARIACT) has a negative effect on the probability of high operational loss severity.
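
A hedged sketch of a frequency regression in the spirit of these findings, reusing the paper's covariate names (MARKSHARE, RAGELOC, VARIACT, HF) on synthetic data; the coefficients and data below are invented, not the authors' estimates.

```python
# Hedged sketch of a frequency regression in the spirit of these findings,
# reusing the paper's covariate names on synthetic data; the coefficients
# below are invented, not the authors' estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "MARKSHARE": rng.uniform(0, 0.3, n),   # market share
    "RAGELOC": rng.uniform(0, 1, n),       # rate of geographic location
    "VARIACT": rng.integers(1, 6, n),      # variety of insurance activities
    "HF": rng.integers(50, 2000, n),       # human factor (number of employees)
})
mu = np.exp(0.5 + 3.0 * df.MARKSHARE + 1.0 * df.RAGELOC
            - 0.2 * df.VARIACT - 0.0003 * df.HF)
df["LOSSES"] = rng.poisson(mu)

model = smf.negativebinomial(
    "LOSSES ~ MARKSHARE + RAGELOC + VARIACT + HF", data=df).fit(disp=False)
print(model.summary().tables[1])   # signs should mirror the reported findings
```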

Originality/value

Despite the absence of quantitative operational risk data, this article opens a new research perspective for estimating the frequency and severity of operational losses in the insurance sector in Tunisia.

Details

The Journal of Risk Finance, vol. 13 no. 5
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 16 May 2016

Xinzhong Li and Seung-Rok Park

The purpose of this paper is to indicate trade characteristics of foreign direct investment (FDI) inflows in China and examine the dynamic interaction between FDI inflows and…

Abstract

Purpose

The purpose of this paper is to indicate trade characteristics of foreign direct investment (FDI) inflows in China and examine the dynamic interaction between FDI inflows and China's international trade through empirical analysis.

Design/methodology/approach

First, this paper builds probability distribution models (Poisson and negative binomial (NB)) to capture the characteristics of the spatial distribution of all kinds of FDI firms in Chinese cities and provinces based on count data, so as to indicate the potential for further FDI inflows in China. Second, it investigates the effects of trade on FDI firm inflows using probability regression models (binary logit, Tobit, NB, Poisson, zero-inflated negative binomial) and shows how international trade accelerates the agglomeration of different kinds of FDI firms in the Eastern, Middle and Western regions according to their factor endowments. Third, it empirically examines the magnitude and characteristics of the trade effects generated by FDI inflows by building a dynamic panel model based on continuous data.
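
The first step, choosing between a Poisson and an NB distribution for firm counts, can be sketched as follows on synthetic city counts (the paper uses actual Chinese city and province data).

```python
# Minimal sketch of the first step above: decide between Poisson and NB for
# counts of FDI firms per city by maximum likelihood and AIC. City counts
# here are synthetic; the paper uses actual Chinese city/province data.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(6)
firms = rng.negative_binomial(n=1.2, p=0.15, size=300)   # clustered counts

ll_pois = stats.poisson.logpmf(firms, firms.mean()).sum()

def nb_negll(params):
    r, p = params
    return -stats.nbinom.logpmf(firms, r, p).sum()

res = optimize.minimize(nb_negll, x0=[1.0, 0.2],
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
print("AIC Poisson:", 2 * 1 - 2 * ll_pois)
print("AIC NB:     ", 2 * 2 + 2 * res.fun)   # NB should capture the clustering
```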

Findings

First, statistical tests of the probability distribution models based on count data show spatial agglomeration of FDI firms (manufacturing firms, R&D firms, managing and marketing firms, and all sectors combined), which as a whole obey the NB distribution. Second, the study indicates that FDI inflows have strong positive effects on international trade in China's provinces and on China's regional trade, and that most foreign firms in China are export oriented and strongly characterized as labor-intensive industries; in particular, the contributions of FDI to imports are greater than its contributions to exports in the trade of China's Middle and Western regions, and the growth of FDI-related trade within China's trade volume has been strong over the past years. Third, the empirical results of the models based on count data and continuous data indicate that FDI inflows have a significantly positive relationship with international trade; that is, the relationship between FDI and international trade in the case of China is characterized by complementarity together with import substitution.

Research limitations/implications

Because the data set mixes FDI inflows for processing and assembling trade with production-oriented, efficiency-seeking, and knowledge- or technology-intensive FDI inflows over the past 36 years, the paper only investigates the characteristics of FDI inflows in China before the turning point of the financial crisis; this is nonetheless important for capturing the whole picture of the trade characteristics of FDI inflows in China.

Practical implications

The derived quantitative results imply that there is still great potential for further FDI inflows in China. Decision-makers should frame policies for introducing FDI inflows that support innovative activities and economic agglomeration, and should preferably encourage efficiency-seeking and export-oriented FDI inflows so as to enhance the quality and efficiency of economic growth; such policies would also help accelerate the upgrading of Chinese industry and gradually narrow the growth gap among the Eastern, Middle and Western regions.

Social implications

FDI inflows in China not only stimulate the remarkable growth of bilateral trade between host and home countries, but also promote the growth of international trade between China and the rest of the world. Thus, policies supporting bilateral or multilateral free-trade and investment areas should be encouraged, as they will also be favorable to growth and welfare in all regions.

Originality/value

This paper demonstrates that the spatial distributions of FDI firms in Chinese cities and provinces follow the NB probability distribution pattern, and puts forward a methodology combining models based on count data and continuous data. Besides, it quantitatively characterizes the trade features of FDI inflows in China as well as the dynamic interaction between FDI inflows and China's international trade.

Details

China Finance Review International, vol. 6 no. 2
Type: Research Article
ISSN: 2044-1398

Article
Publication date: 1 March 1989

MICHAEL J. NELSON

Distributions of index terms have been used in modelling information retrieval systems and databases. Most previous models used some form of the Zipf distribution. This work uses…

Abstract

Distributions of index terms have been used in modelling information retrieval systems and databases. Most previous models used some form of the Zipf distribution. This work uses a probability model of the occurrence of index terms to derive discrete distributions which are mixtures of Poisson and negative binomial distributions. These distributions, the generalised inverse Gaussian‐Poisson and the generalised Waring, give better fits than the simpler Zipf distribution, particularly in the tails of the distribution where the high-frequency terms are found. They have the advantage of being more explanatory and can incorporate a time parameter if necessary.
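
The paper's actual fits use the generalised inverse Gaussian-Poisson and generalised Waring distributions, which are more involved; as a simpler hedged illustration of the mixed-Poisson idea, the sketch below fits a negative binomial, the simplest Poisson mixture, to synthetic term-occurrence counts and inspects the high-frequency tail.

```python
# Simpler hedged illustration of the mixed-Poisson idea (the paper's actual
# fits use the generalised inverse Gaussian-Poisson and generalised Waring):
# fit an NB, the simplest Poisson mixture, to synthetic term-occurrence
# counts and inspect the high-frequency tail.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
# Gamma-mixed Poisson occurrences: a long right tail of frequent terms
occ = rng.poisson(rng.gamma(shape=0.6, scale=8.0, size=5000))

def nb_negll(params):
    r, p = params
    return -stats.nbinom.logpmf(occ, r, p).sum()

res = optimize.minimize(nb_negll, x0=[0.5, 0.1],
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
r_hat, p_hat = res.x
for k in (10, 20, 40):
    print(f"P(count > {k}): observed {(occ > k).mean():.4f} "
          f"vs NB fit {stats.nbinom.sf(k, r_hat, p_hat):.4f}")
```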

Details

Journal of Documentation, vol. 45 no. 3
Type: Research Article
ISSN: 0022-0418

1 – 10 of 499