Search results

1 – 10 of 546
Book part
Publication date: 2 December 2021

Edwin Fourrier-Nicolaï and Michel Lubrano

Abstract

The growth incidence curve of Ravallion and Chen (2003) is based on the quantile function. Its distribution-free estimator behaves erratically with usual sample sizes, leading to problems in the tails. The authors propose a series of parametric models in a Bayesian framework. A first solution consists in modeling the underlying income distribution using simple densities for which the quantile function has a closed analytical form. This solution is extended by considering a mixture model for the underlying income distribution; in this case, however, the quantile function is only semi-explicit and has to be evaluated numerically. The last solution consists in directly fitting a functional form to the Lorenz curve and deriving its first-order derivative to find the corresponding quantile function. The authors compare these models by Monte Carlo simulations and using UK data from the Family Expenditure Survey, devoting particular attention to the analysis of subgroups.
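
As a rough illustration of the quantile-based construction, here is a minimal Python sketch of a growth incidence curve, g(p) = Q_t(p)/Q_{t-1}(p) - 1, computed from closed-form quantile functions. The log-normal incomes and their parameters are made-up assumptions; the paper's Bayesian estimation, mixture models and Lorenz-curve variants are not reproduced.

```python
# Sketch: growth incidence curve (GIC) from closed-form quantile functions.
# Hypothetical log-normal parameters for two periods; NOT fitted values.
import numpy as np
from scipy.stats import lognorm

q_before = lambda p: lognorm.ppf(p, s=0.60, scale=np.exp(9.8))  # period t-1
q_after  = lambda p: lognorm.ppf(p, s=0.55, scale=np.exp(9.9))  # period t

p = np.linspace(0.01, 0.99, 99)       # percentiles, avoiding the extreme tails
gic = q_after(p) / q_before(p) - 1.0  # g(p) = Q_t(p) / Q_{t-1}(p) - 1

for pct in (0.1, 0.5, 0.9):
    i = np.argmin(np.abs(p - pct))
    print(f"p = {pct:.1f}: quantile growth = {gic[i]:+.2%}")
```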

Details

Research on Economic Inequality: Poverty, Inequality and Shocks
Type: Book
ISBN: 978-1-80071-558-5

Article
Publication date: 8 May 2009

V. Riihimäki

Abstract

Purpose

The purpose of this paper is to analyze the suitability of real option methods for the valuation of WiMAX networks. In particular, the shapes of the probability distributions for the investment costs and net present values (NPVs) are examined.

Design/methodology/approach

The study analyzes the cost and NPV distributions by simulating an investment project in a rural area. The paper examines the influence of different uncertainty models on the shapes of the resulting investment cost, NPV and NPV-ratio distributions. The simulated option values are compared with results from different analytical equations.

Findings

The analysis in this study shows that the shape of the uncertainty (or error) in the parameters does not affect the shapes of the investment cost or NPV distributions. Instead, what matters is the subject of the uncertainty, i.e. which parameters the uncertainty is modeled for.

Practical implications

The study shows that the uncertainties and opportunities in network investments may increase the value of the projects dramatically and should therefore be taken into account. The shape of the NPV distribution varies depending on the technology and construction strategy of the network. This makes real option valuation challenging, since the assumptions of the valuation models must be satisfied for reliable results. Analytical option valuation formulas give the same results as simulation only if the assumptions are sufficiently fulfilled and the parameters are properly estimated.

Originality/value

The uncertainty in the service rate growth or population growth parameter influences the resulting distributions. The investment costs are positively skewed and can be approximated by a log-normal distribution. This makes the NPV negatively skewed, which fits poorly with existing analytical option valuation methods that assume log-normally distributed assets. Also, the NPV ratio is correlated with the investment costs.
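
The skewness mechanism described above is easy to reproduce. Below is a minimal Monte Carlo sketch, with purely illustrative figures rather than the paper's WiMAX inputs, showing how positively skewed (log-normal) investment costs produce a negatively skewed NPV distribution.

```python
# Sketch: why positively skewed investment costs make NPV negatively skewed.
# All figures are illustrative placeholders, not the paper's WiMAX inputs.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
n = 100_000

pv_revenues = 10.0e6                                        # fixed PV of cash inflows
costs = rng.lognormal(mean=np.log(8e6), sigma=0.4, size=n)  # positively skewed costs
npv = pv_revenues - costs                                   # NPV = PV(inflows) - cost

print(f"cost skewness: {skew(costs):+.2f}")   # > 0: long right tail
print(f"NPV  skewness: {skew(npv):+.2f}")     # < 0: mirror image, long left tail
print(f"P(NPV < 0)   : {np.mean(npv < 0):.2%}")
```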

Details

info, vol. 11 no. 3
Type: Research Article
ISSN: 1463-6697

Open Access
Article
Publication date: 5 December 2016

Sang Sup Cho

Abstract

Purpose

This study aims to estimate the firm size distributions of the service and manufacturing sectors in Korea.

Design/methodology/approach

When estimating the firm size distribution, the author considers two major factors. First, the firm size distribution may follow a gamma distribution rather than the traditionally accepted distributions such as the Pareto or log-normal distribution. In particular, firms in different industries can have gamma-type size distributions with different parameters. Second, the distribution applied to this study's data set should reflect its composition: because the data cover small businesses as well as medium-sized and large companies, a mixture of gamma distributions should be estimated and compared.

Findings

Using data on 8,230 firms in 2013, the author estimates a mixture of gamma distributions for firm size.
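
The paper does not publish its estimation code; the following sketch shows one plausible way to fit a three-component gamma mixture by direct maximum likelihood, on synthetic data standing in for the 8,230-firm data set.

```python
# Sketch: fitting a 3-component gamma mixture to firm sizes by maximum
# likelihood. Synthetic "small/medium/large" data, illustrative only.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
x = np.concatenate([rng.gamma(2.0, 5.0, 5000),    # small firms
                    rng.gamma(5.0, 20.0, 2500),   # medium firms
                    rng.gamma(8.0, 80.0, 730)])   # large firms

def neg_loglik(theta):
    # theta packs (unnormalized log-weights, log shapes, log scales)
    w = np.exp(theta[:3]); w /= w.sum()
    shape, scale = np.exp(theta[3:6]), np.exp(theta[6:9])
    pdf = sum(w[k] * stats.gamma.pdf(x, a=shape[k], scale=scale[k])
              for k in range(3))
    return -np.sum(np.log(pdf + 1e-300))

theta0 = np.array([0.0, 0.0, 0.0, np.log(2), np.log(5), np.log(8),
                   np.log(5), np.log(20), np.log(80)])
res = optimize.minimize(neg_loglik, theta0, method="Nelder-Mead",
                        options={"maxiter": 20000, "xatol": 1e-6})
w = np.exp(res.x[:3]); w /= w.sum()
print("mixture weights:", np.round(w, 3))  # one component often dominates
```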

Originality/value

From the comparison, the following characteristics of the firm size distributions emerge: first, the firm size distribution of the manufacturing sector has a longer tail than that of the service sector. Second, the manufacturing firm size distribution dominates the country-wide firm size distribution. Third, one of the three components that make up the mixed gamma firm size distribution accounts for 99 per cent of the distribution. From the estimated firm size distributions of the service and manufacturing sectors in Korea, the author draws strategy and policy implications for start-up firms.

Details

Asia Pacific Journal of Innovation and Entrepreneurship, vol. 10 no. 1
Type: Research Article
ISSN: 2071-1395

Article
Publication date: 4 January 2022

Xiang Li, Ming Yang, Hongguang Ma and Kaitao (Stella) Yu

Abstract

Purpose

Inter-stop travel times are important parameters in bus timetabling and are usually assumed in the literature to be normal (or log-normal) random variables. With the development of digital technology and big data analytics capabilities in the bus industry, practitioners prefer to generate deterministic travel times from on-board GPS data under the maximum probability rule or the mean value rule, which simplifies the optimization procedure but performs poorly in timetabling practice because the uncertain nature of travel time is lost. The purpose of this study is to propose a GPS-data-driven bus timetabling approach that accounts for the spatial-temporal characteristics of travel time.

Design/methodology/approach

The authors show that real-life on-board GPS data do not support the hypothesis of normally (log-normally) distributed travel times at inter-stops. They therefore formulate travel time as a scenario-based spatial-temporal matrix, where a K-means clustering approach is used to identify scenarios of spatial-temporal travel time from daily observation data. A scenario-based robust timetabling model is then proposed to maximize the expected profit of the bus carrier. The authors introduce a set of binary variables to transform the robust model into an integer linear programming model and speed up the solving process by solution-space compression, so that the optimal timetable can be solved by CPLEX.
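
As a toy illustration of the scenario-identification step, the sketch below clusters synthetic daily spatial-temporal travel-time matrices with K-means. The dimensions, data and cluster count are assumptions, not the Beijing line 628 setup.

```python
# Sketch: clustering daily spatial-temporal travel-time observations into
# scenarios with K-means. Synthetic data, not the line-628 GPS records.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_days, n_stops, n_periods = 90, 20, 6   # 20 inter-stop links, 6 time-of-day bins

# Each day: an (n_stops x n_periods) travel-time matrix, flattened to a vector.
base = 3.0 + rng.random((n_stops, n_periods)) * 2.0
days = np.stack([base * rng.uniform(0.8, 1.4) +
                 rng.normal(0, 0.3, (n_stops, n_periods))
                 for _ in range(n_days)]).reshape(n_days, -1)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(days)
for k in range(4):
    share = np.mean(km.labels_ == k)      # empirical scenario probability
    centre = km.cluster_centers_[k].reshape(n_stops, n_periods)
    print(f"scenario {k}: prob={share:.2f}, mean travel time={centre.mean():.2f} min")
```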

Findings

Case studies based on Beijing bus line 628 demonstrate the efficiency of the proposed methodology. The results illustrate that: (1) the scenario-based robust model could increase the expected profit by 15.8% compared with the maximum probability model; (2) the scenario-based robust model could increase the expected profit by 30.74% compared with the mean value model; and (3) the solution-space compression approach could effectively shorten the computing time by 97%.

Originality/value

This study proposes a scenario-based robust bus timetabling approach driven by GPS data, which significantly improves the practicality and optimality of the timetable and shows the importance of big data analytics in improving public transport operations management.

Details

Industrial Management & Data Systems, vol. 122 no. 10
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 April 1974

P.R. BIRD

Abstract

Most documentation systems allocate a variable number of descriptors to their documents. From a consideration of indexing as a stochastic process it is suggested that the distribution of indexing depth in such a system might represent samples of a (truncated) mixed Poisson process. Examination of five different systems showed that indexing depth does appear to be distributed in this manner, since a reasonable fit to negative binomial distributions can be made statistically. Factors in the art of indexing which influence the distribution are discussed. As a first approximation, the distribution of indexing depth i of a system, or of any subset of descriptors in it, is simple Poisson, p(i) = e^{-m} m^i / i!, where m is the average depth of indexing. The results contradict previous reports that a log-normal distribution of indexing depth is to be expected.
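
A small sketch of the comparison the paper performs, on synthetic counts (the five real systems' data are not reproduced): fit the simple Poisson approximation and a method-of-moments negative binomial, then compare the fitted probabilities with observed frequencies.

```python
# Sketch: Poisson vs negative binomial fits to indexing depth.
# Synthetic counts stand in for the five systems examined in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
depth = rng.negative_binomial(n=5, p=0.4, size=2000)   # simulated indexing depths

m = depth.mean()              # Poisson: single parameter, the average depth
var = depth.var()             # negative binomial by method of moments
r = m**2 / (var - m)          # (requires var > mean, i.e. over-dispersed counts)
p_nb = r / (r + m)

i = np.arange(0, depth.max() + 1)
obs = np.bincount(depth) / len(depth)
pois = stats.poisson.pmf(i, m)          # p(i) = e^{-m} m^i / i!
nb = stats.nbinom.pmf(i, r, p_nb)

for k in range(6):
    print(f"i={k}: observed={obs[k]:.3f}  Poisson={pois[k]:.3f}  NegBin={nb[k]:.3f}")
```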

Details

Journal of Documentation, vol. 30 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 25 June 2019

Doraid Dalalah

Abstract

Purpose

The purpose of this paper is to assess and benchmark Six Sigma strategies in the services sector, namely the telecom field, by establishing tables of fallouts of non-conforming services and their associated costs, along with a custom data envelopment model for benchmarking the different strategic alternatives.

Design/methodology/approach

Under the normality assumption, process fallout in Six Sigma is around 0.002 parts per million for a centered process and 3.4 parts per million for a shifted one. When Six Sigma is introduced to applications in the services sector, the normality assumption may no longer be valid; hence, fallouts of non-normal attributes are computed for different one-sided quality levels. The associated costs of strategy deployment, fallout and transaction completion are all considered. A data envelopment analysis model is also established to benchmark the Six Sigma strategic plans. The strategies are detailed down to processes and to quality characteristics, which constitute the decision-making units. The efficiency of each service unit is computed using both the CCR and super-efficiency models.
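
The fallout computation generalizes directly to non-normal attributes. The sketch below, with illustrative parameters only, computes one-sided fallout in parts per million for an upper specification limit placed k standard deviations above the mean of each distribution family named in the paper.

```python
# Sketch: one-sided fallout (parts per million) beyond an upper spec placed
# k standard deviations above the mean, for different distributions.
# Illustrative only; the paper tabulates these jointly with deployment costs.
import numpy as np
from scipy import stats

def fallout_ppm(dist, k):
    usl = dist.mean() + k * dist.std()   # upper spec limit at k sigma
    return dist.sf(usl) * 1e6            # survival function -> ppm

dists = {
    "Normal":      stats.norm(loc=100, scale=10),
    "Log-normal":  stats.lognorm(s=0.5, scale=100),
    "Exponential": stats.expon(scale=100),
    "Gamma":       stats.gamma(a=4, scale=25),
    "Weibull":     stats.weibull_min(c=1.5, scale=100),
}
for k in (3.0, 4.5, 6.0):
    row = "  ".join(f"{name}: {fallout_ppm(d, k):10.3f}" for name, d in dists.items())
    print(f"k={k}: {row}")
```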

Findings

The amount of effort and cost needed to reduce the variation in a service may differ according to the targeted quality level. For the same Six Sigma quality level, services demonstrate different performance and efficiencies, and hence different returns. In some scenarios, moderate quality levels can show higher efficiencies than services at higher levels. It was also found that the required improvement is smaller for log-normal than for normal distributions at some quality levels. This observation is also noted across the distributions presented in this study (normal, log-normal, exponential, gamma and Weibull).

Social implications

The deployment of Six Sigma in services is mostly found in time-related concepts such as timeliness of billing, lifetimes in reliability engineering, queueing theory, healthcare and telecommunication.

Originality/value

The paper contributes to the existing research by presenting an assessment model of Six Sigma strategies in services with non-normal distributions. Strategies at different quality levels present diverse efficiencies; hence, higher quality levels may not be the best alternatives in terms of return on investment. The computed fallout rates of the different distributions can serve as guidelines for further deployment of Six Sigma in services. Besides, the combination of optimization and Six Sigma analysis provides an additional benchmarking tool for strategic plans in both the manufacturing and services sectors.

Details

Benchmarking: An International Journal, vol. 26 no. 6
Type: Research Article
ISSN: 1463-5771

Book part
Publication date: 12 December 2007

Lisa M. Berry

Abstract

To date, many environmental policy discussions consider inequalities between groups (typically by comparing the average or aggregate resource use of one group to another group), but most ignore disproportionalities within groups. Disproportionality, as discussed in a small but growing body of work, refers to resource use that is highly unequal among members of the same group, and is characterized by a positively skewed distribution, where a small number of resource users create far more environmental harm than “typical” group members. Focusing on aggregated or average impacts effectively treats all members of a group as interchangeable, missing the few “outliers” that actually tend to be responsible for a large fraction of overall resource use. This chapter offers reasons why we should or should not expect disproportional production of environmental impacts (from both mathematical and sociological perspectives), looks at empirical evidence of disproportionality, and offers a framework for detecting disproportionality and assessing just how much difference the outliers make. I find that in cases where the within-group distribution of resource use is highly disproportionate (characterized by extreme outliers), targeting reduction efforts at the disproportionate polluters can offer opportunities to decrease environmental degradation substantially, at a relatively low cost.
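
The mathematical side of the argument can be illustrated in a few lines: under a positively skewed within-group distribution (here log-normal, with assumed parameters rather than real emissions data), a small share of members accounts for a large share of total impact.

```python
# Sketch: how a positively skewed within-group distribution concentrates
# impacts in a few outliers. Synthetic log-normal "emissions", not real data.
import numpy as np

rng = np.random.default_rng(3)
emissions = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)

top = np.sort(emissions)[-len(emissions) // 100:]   # top 1% of emitters
print(f"top 1% share of total: {top.sum() / emissions.sum():.1%}")
print(f"mean / median ratio  : {emissions.mean() / np.median(emissions):.1f}")
```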

Details

Equity and the Environment
Type: Book
ISBN: 978-0-7623-1417-1

Article
Publication date: 1 October 2003

K. Sadananda Upadhya and N.K. Srinivasan

Abstract

Maintaining a high level of availability of weapon systems during battles is important for winning the battle. Due to attrition factors (failure due to battle damage and unreliability) and logistic delays in the repair process, maintaining the required level of availability is difficult. In this paper, we develop a simulation model for the availability of fighter aircraft, considering multiple failures causing system failure and logistic delays in the repair process. The methodology is based on discrete event simulation using Monte Carlo techniques. The failure time distribution (Weibull) and repair time distribution (exponential) for the considered subsystems of the aircraft, and the logistic delay time distribution (log-normal) for the logistic factors (spares, crew and equipment), were chosen with suitable parameters. The results indicate a pronounced decrease in availability (to less than 10 per cent in some cases) due to multiple failures and logistic delays. The results are, however, highly sensitive to the combination of reliability, maintainability and logistic delay parameters.
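
A compressed sketch of the simulation ingredients: one subsystem with Weibull failures, exponential repairs and log-normal logistic delays, with invented parameters (the paper's discrete event model covers multiple subsystems and failure causes per aircraft).

```python
# Sketch: Monte Carlo availability with Weibull failures, exponential
# repairs and log-normal logistic delays. One subsystem, illustrative
# parameters only.
import numpy as np

rng = np.random.default_rng(11)
horizon, n_runs = 1000.0, 2000        # hours per run, replications

def one_run():
    t, up_time = 0.0, 0.0
    while t < horizon:
        ttf = rng.weibull(1.8) * 120.0            # time to failure (Weibull)
        up = min(ttf, horizon - t)
        up_time += up
        t += up
        if t >= horizon:
            break
        delay = rng.lognormal(np.log(8.0), 0.6)   # logistic delay (log-normal)
        repair = rng.exponential(5.0)             # repair time (exponential)
        t += delay + repair
    return up_time / horizon

avail = np.mean([one_run() for _ in range(n_runs)])
print(f"estimated availability: {avail:.3f}")
```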

Details

International Journal of Quality & Reliability Management, vol. 20 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 March 2004

Giovanni Bifulco, Sebastiano Capozzi, Sergio Fortuna, Tiziana Mormile and Alfredo Testa

Abstract

Distributing train traction power over the cars of modern high-speed trains, which represent one of the main loads on European electrical power systems, is considered, and its effects on dependability are analyzed with reference to the daily duty-cycle. Two possible solutions for the traction system, the former based on four converters and eight motors, the latter on six converters and 12 motors, are compared in terms of service dependability, immobilizing risks and expected failure entity per day. Simplified Markov models are obtained by means of a proper selection of the most likely states. The models are also extended to represent log-normal distributions for repair times, and are solved separately for mission and idle times by tuning the transition rates to the different duty-cycle stages. Numerical applications give the opportunity to verify the suitability of the proposed approach and to make quantitative comparisons between the two considered trains.
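
For intuition about the Markov approach, here is a minimal birth-death availability model for a redundant motor set, solved at steady state. The failure and repair rates and the k-out-of-n thresholds are assumptions chosen for illustration; the paper's duty-cycle tuning and log-normal repair extension are beyond this sketch.

```python
# Sketch: steady-state availability of a k-out-of-n redundant traction
# configuration via a birth-death Markov chain. Illustrative rates only.
import numpy as np

def availability(n_motors, k_needed, lam=1e-4, mu=1e-2):
    # State i = number of failed motors, i = 0..n_motors.
    m = n_motors + 1
    Q = np.zeros((m, m))
    for i in range(m):
        if i < n_motors:
            Q[i, i + 1] = (n_motors - i) * lam   # failure of one working motor
        if i > 0:
            Q[i, i - 1] = mu                     # single repair crew
        Q[i, i] = -Q[i].sum()
    # Solve pi Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi[: n_motors - k_needed + 1].sum()   # up if at least k motors work

print(f"8 motors, 6 needed : {availability(8, 6):.6f}")
print(f"12 motors, 9 needed: {availability(12, 9):.6f}")
```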

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 23 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 June 1997

Akella S.R. Murty and V.N. Achutha Naikan

Abstract

Reliability of a product is highly dependent on the process capability index of the manufacturing process. The paper discusses a mathematical modelling technique for deriving the relationship between product reliability strength and the process capability required to meet it, for different types of external stress/load distributions that the product undergoes in the actual working environment. Four cases of external stress distributions are considered: normal, log-normal, exponential and Weibull. These techniques can be applied effectively in industrial production plants when selecting machines with the process capability required to meet the product reliability strength demand.
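
The underlying stress-strength calculation can be sketched compactly: R = P(strength > stress) = ∫ f_stress(x) · P(strength > x) dx, evaluated for each of the four stress distributions. The parameters below are illustrative placeholders, not values from the paper.

```python
# Sketch: stress-strength reliability R = P(strength > stress) for the four
# stress distributions considered. Parameters are illustrative placeholders.
import numpy as np
from scipy import stats, integrate

strength = stats.norm(loc=500, scale=20)   # product strength (process output)

stresses = {
    "normal":      stats.norm(loc=420, scale=30),
    "log-normal":  stats.lognorm(s=0.1, scale=420),
    "exponential": stats.expon(scale=300),
    "weibull":     stats.weibull_min(c=3.0, scale=430),
}

for name, stress in stresses.items():
    # R = integral of f_stress(x) * P(strength > x) over x
    integrand = lambda x: stress.pdf(x) * strength.sf(x)
    r, _ = integrate.quad(integrand, 0, np.inf)
    print(f"{name:12s} stress: R = {r:.4f}")
```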

Details

International Journal of Quality & Reliability Management, vol. 14 no. 4
Type: Research Article
ISSN: 0265-671X
