Search results
1 – 10 of over 1000
Abstract
Purpose
The purpose of this article is to analyze the historical significance of Donald J. Urquhart, who established the National Lending Library for Science and Technology (NLL) that later was merged into the British Library Lending Division (BLLD), now called the British Library Document Supply Centre (BLDSC).
Design/methodology/approach
The paper presents a short history of the probabilistic revolution, particularly as it developed in the UK in the form of biometric statistics due to Darwin's theory of evolution. It focuses on the overthrow of the normal paradigm, according to which frequency distributions in nature and society conform to the normal law of error. The paper discusses the importance of the Poisson distribution and its utilization in the construction of stochastic models that better describe reality. Here the focus is on the compound Poisson distribution in the form of the negative binomial distribution (NBD). The paper then shows how Urquhart extended the probabilistic revolution to librarianship by using the Poisson as the probabilistic model in his analyses of the 1956 external loans made by the Science Museum Library (SML) as well as in his management of the scientific and technical (sci/tech) journal collection of the NLL. Thanks to this, Urquhart can be considered as playing a pivotal role in the creation of bibliometrics or the statistical bases of modern library and information science. The paper relates how Urquhart's son and daughter‐in‐law, John A. and Norma C. Urquhart, completed Urquhart's probabilistic breakthrough by advancing for the first time the NBD as the model for library use in a study executed at the University of Newcastle upon Tyne, connecting bibliometrics with biometrics. It concludes with a discussion of Urquhart's Law and its probabilistic implications for the use of sci/tech journals in a library system.
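The kind of Poisson reasoning described above can be illustrated with a short sketch (hypothetical figures, not Urquhart's SML loan data): a single mean loan rate per title predicts the fraction of titles borrowed exactly k times in a year.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability that a title is borrowed exactly k times in a year,
    if loans arrive at a constant mean rate lam (Poisson model)."""
    return lam**k * exp(-lam) / factorial(k)

# Hypothetical collection: 10,000 titles, mean 0.2 loans per title per year.
lam, n_titles = 0.2, 10_000
expected = {k: round(n_titles * poisson_pmf(k, lam)) for k in range(4)}
# Under a pure Poisson model most titles are never borrowed at all --
# the skewed-use pattern Urquhart exploited to rank journals by demand.
print(expected)
```

The heavy mass at zero is what makes the model managerially useful: it separates the small borrowed core from the large unborrowed remainder.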
Findings
By being the first librarian to apply probability to the analysis of sci/tech journal use, Urquhart was instrumental in the creation of modern library and information science. His findings force a probabilistic re‐conceptualization of sci/tech journal use in a library system that has great implications for the transition of sci/tech journals from locally held paper copies to shared electronic databases.
Originality/value
Urquhart's significance is considered from the perspective of the development of science as a whole as well as library and information science in particular.
Details
Keywords
Nataliya Chukhrova and Arne Johannssen
Abstract
Purpose
In acceptance sampling, the hypergeometric operating characteristic (OC) function (the so-called type-A OC) is usually approximated by the binomial or Poisson OC function; these approximations reduce the computational effort but do not provide sufficiently accurate results. The purpose of this paper is to examine binomial- and Poisson-type approximations to the hypergeometric distribution, in order to find a simple but accurate approximation that can be successfully applied in acceptance sampling.
Design/methodology/approach
The authors present a new binomial-type approximation for the type-A OC function, and derive its properties. Further, the authors compare this approximation via an extensive numerical study with other common approximations in terms of variation distance and relative efficiency under various conditions on the parameters including limiting cases.
Findings
The introduced approximation generates the best numerical results over a wide range of parameter values, combining the arithmetic simplicity of the binomial distribution with accuracy high enough to meet the requirements of acceptance sampling problems. Additionally, it can considerably reduce the computational effort in relation to the type-A OC function and is therefore strongly recommended for calculating sampling plans.
Originality/value
The newly presented approximation provides a remarkably close fit to the type-A OC function, is discrete and needs no correction for continuity, and is skewed in the same direction by roughly the same amount as the exact OC. Because it involves fewer factorials, this OC in general involves lower powers than the type-A OC function. Moreover, the binomial-type approximation is easy to implement in conventional statistical computing packages.
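For reference, the exact type-A OC and the classical binomial approximation that the paper improves on can be sketched as follows (a generic single-sampling setup with illustrative parameters; the authors' new approximation itself is not reproduced here):

```python
from math import comb

def oc_type_a(N, D, n, c):
    """Exact type-A OC: P(accept) = P(at most c defectives in the sample),
    drawing n items without replacement from a lot of N with D defectives."""
    return sum(comb(D, d) * comb(N - D, n - d) for d in range(c + 1)) / comb(N, n)

def oc_binomial(N, D, n, c):
    """Classical binomial approximation: sample with replacement at p = D/N."""
    p = D / N
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

# Lot of 1000 with 20 defectives, sample 50, accept on at most 1 defective.
N, D, n, c = 1000, 20, 50, 1
print(oc_type_a(N, D, n, c), oc_binomial(N, D, n, c))
```

With a small sampling fraction the two curves are close; the gap widens as n/N grows, which is the regime where better approximations pay off.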
Details
Keywords
Abstract
Purpose
The purpose of this paper is to introduce the zero‐modified distributions in the calculation of operational value‐at‐risk.
Design/methodology/approach
This kind of distribution is preferred when an excess of zeroes is observed. In operational risk, this phenomenon may be due to the scarcity of data, the existence of extreme values and/or the threshold from which banks start to collect losses. The paper focuses on the analysis of damage to physical assets.
Findings
The results show that the basic Poisson distribution underestimates the dispersion, and thus leads to an underestimation of the capital charge. Zero-modified Poisson distributions, however, model the loss frequency well. In addition, the basic negative binomial distribution and its related zero-modified distributions in turn offer a good prediction of count events. To choose the distribution that best fits the frequency, the paper uses Vuong's test. Its results indicate that the zero-modified Poisson distributions, the basic negative binomial and its related zero-modified distributions are equivalent. This conclusion is confirmed by the capital charges calculated, since the differences between the six aggregations are not significant except for that of the basic Poisson distribution.
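A zero-modified Poisson of the kind described can be sketched as follows (illustrative parameters, not the paper's loss data): the probability of zero losses is set directly, and the remaining mass follows a zero-truncated Poisson.

```python
from math import exp, factorial

def zm_poisson_pmf(k, lam, p0):
    """Zero-modified Poisson pmf: P(0) is fixed at p0, and the mass for
    k >= 1 is a rescaled zero-truncated Poisson(lam)."""
    pois = lambda j: lam**j * exp(-lam) / factorial(j)
    if k == 0:
        return p0
    return (1 - p0) * pois(k) / (1 - pois(0))

# Loss-frequency sketch: 70% of periods show zero losses, far more than a
# plain Poisson with the same conditional rate would predict.
lam, p0 = 2.0, 0.7
total = sum(zm_poisson_pmf(k, lam, p0) for k in range(60))
print(round(total, 6))  # probability mass sums to 1
```

Setting p0 above the plain-Poisson value exp(-lam) inflates zeros; setting it below deflates them, which is why the zero-modified family covers both cases.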
Originality/value
Zero-modified formulations have recently become widely used in many fields because of the low frequency of the events studied. This article aims to describe the frequency of operational risk using zero-modified distributions.
Details
Keywords
Thankappan Vasanthi and Ganapathy Arulmozhi
Abstract
Purpose
The purpose of this paper is to use Bayesian probability theory to analyze the software reliability model with multiple types of faults. The probability that all faults are detected and corrected after a series of independent software tests and correction cycles is presented. This in turn has a number of applications, such as deciding how long to test software and estimating the cost of testing.
Design/methodology/approach
The use of Bayesian probabilistic models, when compared to traditional point-forecast estimation models, provides tools for risk estimation and allows decision makers to combine historical data with subjective expert estimates. Probability evaluation is done both prior to and after observing the number of faults detected in each cycle. The conditions under which these two measures, the conditional and unconditional probabilities, are the same are also shown. Expressions are derived to evaluate the probability that, after a series of sequential independent reviews has been completed, no class of fault remains in the software system, assuming a Poisson or binomial prior distribution.
Findings
From the results in Sections 4 and 5 it can be observed that the conditional and unconditional probabilities are the same if the prior probability distribution is Poisson or binomial. In these cases the confidence that all faults have been removed is not a function of the number of faults observed during the successive reviews but of the number of reviews, the detection probabilities and the mean of the prior distribution. This is a remarkable result because it gives a circumstance in which the statistical confidence from a Bayesian analysis is actually independent of all observed data. From the result in Section 4 it can be seen that an exponential formula can be used to evaluate the probability that no fault remains when a Poisson prior distribution is combined with a multinomial detection process in each review cycle.
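An exponential formula of the kind referred to can be sketched as follows (illustrative parameters, not the paper's derivation): if the initial fault count is Poisson with mean lam and each review independently detects each surviving fault with probability p, the survivors after r reviews are Poisson with mean lam*(1-p)**r, so the chance that no fault remains is exp(-lam*(1-p)**r), independent of the observed counts. A Monte Carlo run checks the closed form.

```python
import random
from math import exp

def p_no_faults(lam, p, r):
    """P(no fault remains after r reviews) under a Poisson(lam) prior and
    per-review, per-fault detection probability p."""
    return exp(-lam * (1 - p) ** r)

def sample_poisson(lam, rng):
    """Draw from Poisson(lam) by CDF inversion (adequate for small lam)."""
    u, k, term = rng.random(), 0, exp(-lam)
    cum = term
    while u > cum:
        k += 1
        term *= lam / k
        cum += term
    return k

# Monte Carlo check with illustrative parameters.
rng = random.Random(1)
lam, p, r, trials = 5.0, 0.6, 4, 100_000
survive = (1 - p) ** r                    # per-fault survival probability
clean = sum(
    all(rng.random() >= survive for _ in range(sample_poisson(lam, rng)))
    for _ in range(trials)
)
print(clean / trials, p_no_faults(lam, p, r))
```

The simulated clean-system fraction matches the closed form, illustrating why the Bayesian confidence here does not depend on the fault counts actually seen.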
Originality/value
The paper is part of research work for a PhD degree.
Details
Keywords
Abstract
Starting from a basis laid by Burrell, this paper develops a stochastic model of library borrowing using the Negative Binomial distribution. This shows an improvement over previous characterizations for academic libraries and accords well with new data obtained at Huddersfield Public Library. Evidence concerning the process of issue decay is presented and employed to obtain an optimum stock turnover rate for any collection in its ‘steady state’. A method is then given by which simple relegation tests can be constructed to maintain such an optimum turnover. Although not the ‘final word’ in circulation modelling, the negative binomial distribution extends the range of model applicability into the area of high-volume, rapid-movement collections with some success.
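The moment-matching step in such a fit can be sketched as follows (hypothetical issue counts, not the Huddersfield data); one common negative binomial parameterisation sets p = mean/variance and r = mean²/(variance − mean):

```python
def nbd_moment_fit(counts):
    """Method-of-moments fit of the negative binomial distribution (NBD)
    to borrowing counts: p = mean/var, r = mean^2/(var - mean).
    Requires var > mean (overdispersion), the usual case for library loans."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    if var <= mean:
        raise ValueError("counts are not overdispersed; NBD is inappropriate")
    p = mean / var
    r = mean * mean / (var - mean)
    return r, p

# Hypothetical annual issue counts for a 100-item collection.
counts = [0] * 60 + [1] * 20 + [2] * 10 + [3] * 5 + [5] * 3 + [9] * 2
r, p = nbd_moment_fit(counts)
print(round(r, 3), round(p, 3))
```

The overdispersion check matters: when variance barely exceeds the mean, the plain Poisson suffices and the NBD shape parameter blows up.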
Michael Yaccino and James Maynard
Abstract
Examines a way to dramatically reduce the number of tests it takes to qualify a vision inspection system [VIS] which carries out attribute inspections by applying a statistical operation called a binomial distribution. Describes a binomial distribution and looks at its employment in manufacturing terms. Outlines how a conventional inspection system works and compares the two techniques. Concludes that the advantages of using a binomial distribution include a reduced number of tests, a reduction in equipment cost and the technique is simple to understand and perform.
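One common binomial calculation of this kind (a hypothetical sketch, since the article's exact qualification procedure is not reproduced here) finds the smallest number of pass/fail trials needed to expose a given miss rate with a chosen confidence:

```python
from math import ceil, log

def qualification_trials(p_bad, confidence):
    """Smallest number of pass/fail trials n such that, if the inspection
    system missed defects at rate p_bad, at least one miss would appear
    with the given confidence: solve (1 - p_bad)**n <= 1 - confidence."""
    return ceil(log(1 - confidence) / log(1 - p_bad))

# E.g. to be 95% confident of exposing a system with a 1% miss rate:
print(qualification_trials(0.01, 0.95))
```

The count drops sharply as the tolerable miss rate grows, which is the source of the test-reduction advantage the article describes.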
Details
Keywords
Abstract
Distributions of index terms have been used in modelling information retrieval systems and databases. Most previous models used some form of the Zipf distribution. This work uses a probability model of the occurrence of index terms to derive discrete distributions which are mixtures of Poisson and negative binomial distributions. These distributions, the generalised inverse Gaussian-Poisson and the Generalised Waring, give better fits than the simpler Zipf distribution, particularly in the tails of the distribution where the high-frequency terms are found. They have the advantage of being more explanatory and can incorporate a time parameter if necessary.
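The simplest such mixture, a gamma-mixed Poisson, yields the negative binomial and can be checked by simulation (illustrative parameters; the generalised inverse Gaussian-Poisson and Generalised Waring models themselves are not reproduced here):

```python
import random
from math import comb, exp

def nbd_pmf(k, r, p):
    """Negative binomial pmf for integer shape r: C(k+r-1, k) p^r (1-p)^k."""
    return comb(k + r - 1, k) * p**r * (1 - p) ** k

def sample_poisson(lam, rng):
    """Poisson draw by CDF inversion (adequate for moderate lam)."""
    u, k, term = rng.random(), 0, exp(-lam)
    cum = term
    while u > cum:
        k += 1
        term *= lam / k
        cum += term
    return k

rng = random.Random(7)
r, theta = 2, 1.5              # gamma shape and scale (hypothetical)
p = 1 / (1 + theta)            # implied negative binomial parameter
# Each term gets its own occurrence rate lam ~ Gamma(r, theta); counts
# drawn from Poisson(lam) are then negative binomial overall.
draws = [sample_poisson(rng.gammavariate(r, theta), rng) for _ in range(100_000)]
empirical_p0 = draws.count(0) / len(draws)
print(round(empirical_p0, 3), round(nbd_pmf(0, r, p), 3))
```

Letting the Poisson rate itself vary across terms is what produces the heavy tail that a single-rate Poisson (or a plain Zipf fit) misses.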
Abstract
Purpose
To discuss subcopula estimation for discrete models.
Design/methodology/approach
The convergence of estimators is considered under the weak convergence of distribution functions and its equivalent properties known in prior works.
Findings
The domain of the true subcopula associated with discrete random variables is found to be discrete on the interior of the unit hypercube. An estimator whose domain has the same form as that of the true subcopula is constructed for the case in which the marginal distributions are binomial.
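The discreteness of the subcopula's domain can be illustrated with a small sketch (independent binomial marginals chosen for simplicity; under independence the subcopula is just uv restricted to the grid of attainable marginal CDF values):

```python
from math import comb

def binom_cdf(x, n, p):
    """CDF of the binomial(n, p) distribution at integer x."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(x + 1))

# Two independent binomial marginals (illustrative parameters).
n1, p1, n2, p2 = 2, 0.5, 3, 0.4
# The subcopula is defined only on Ran F1 x Ran F2, a finite grid of
# points in the unit square; under independence C(u, v) = u * v there.
subcopula = {}
for x in range(n1 + 1):
    for y in range(n2 + 1):
        u, v = binom_cdf(x, n1, p1), binom_cdf(y, n2, p2)
        subcopula[(round(u, 10), round(v, 10))] = u * v
print(len(subcopula))  # finite domain: a 3 x 4 grid of points
```

Because the domain is this finite grid rather than the whole unit square, any estimator matching the true subcopula's form must likewise be defined on a discrete set.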
Originality/value
To the best of our knowledge, this is the first time such an estimator has been defined and proved to converge to the true subcopula.
Details
Keywords
Loganathan Appaia, Padmanaban Muthu Krishnan and Sankaran Kalaiselvi
Abstract
Purpose
The purpose of this paper is the determination of reliability sampling plans in the Bayesian approach assuming that the lifetime distribution is exponential.
Design/methodology/approach
Sampling plans are used in manufacturing companies as a tool for carrying out sampling inspections, in order to make decisions about the disposition of many finished products. If the quality characteristic is considered as the lifetime of the products, the plan is known as a reliability sampling plan. In life testing, censoring schemes are adopted in order to save time and cost of life test. The inverted gamma distribution is employed as the natural conjugate prior to the average lifetime of the products. The sampling plans are developed assuming various probability distributions to the lifetime of the products.
Findings
The optimum plan parameters n and c are obtained for some sets of values of (p1, α, p2, β). The selection of sampling plans is illustrated through numerical examples.
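For illustration, the classical (non-Bayesian) two-point condition behind such plans can be sketched with binomial sampling (the paper's Bayesian, exponential-lifetime computation is not reproduced): find the smallest n and c such that lots at quality p1 are accepted with probability at least 1 − α and lots at p2 with probability at most β.

```python
from math import comb

def p_accept(n, c, p):
    """OC value of single-sampling plan (n, c): P(at most c defectives)."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

def find_plan(p1, alpha, p2, beta, n_max=2000):
    """Smallest-n single-sampling plan meeting both risk points:
    P_accept(p1) >= 1 - alpha (producer) and P_accept(p2) <= beta (consumer)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if p_accept(n, c, p1) >= 1 - alpha:
                # smallest c protecting the producer; larger c only raises
                # acceptance at p2, so only this c can satisfy the consumer
                if p_accept(n, c, p2) <= beta:
                    return n, c
                break
    return None

print(find_plan(p1=0.01, alpha=0.05, p2=0.05, beta=0.10))
```

The Bayesian plans in the paper replace these fixed-quality risk points with an inverted gamma prior on the mean lifetime, but the search structure over (n, c) is analogous.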
Originality/value
Results obtained in this paper are original and the study has been done for the first time in this regard. Reliability sampling plans are essential for making decisions either to accept or reject based on the inspection of the sample.
Details
Keywords
Abstract
Alternative forms of the desirability distribution for library materials, as defined by the author in an earlier work, are discussed. It is demonstrated that, while several different distributions may adequately describe an observed circulation frequency distribution over a fairly short time period (one year, say), the long‐term implications may be quite different. Some of the statistical aspects are discussed with an eye to ensuring that the most appropriate model is used.