Search results

1 – 10 of over 55000
Article
Publication date: 1 November 1994

Jozef Dopke

Consideration of the reliability of products can be frequently described by gamma distribution. The Johnson estimation method for any data and simplified maximum likelihood…


Abstract

The reliability of products can frequently be described by the gamma distribution. The Johnson estimation method, applicable to any data, and a simplified maximum-likelihood estimation method for complete samples can be used to assess the parameters of this distribution. Describes the application of IBM PC programs to determine the parameters of the gamma distribution according to this method.
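The maximum-likelihood step described above can be sketched in modern terms with SciPy's gamma fit; the lifetimes below are simulated stand-ins, not the paper's data (which was processed with IBM PC programs):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated complete sample of product lifetimes (hypothetical data)
sample = rng.gamma(shape=2.5, scale=100.0, size=500)

# Maximum-likelihood fit of a two-parameter gamma distribution
# (location fixed at 0, as is usual for lifetime data)
shape_hat, loc, scale_hat = stats.gamma.fit(sample, floc=0)
```

With a complete sample of this size, the fitted shape and scale land close to the generating values.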

Details

International Journal of Quality & Reliability Management, vol. 11 no. 8
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 17 May 2013

Dorothea Diers, Martin Eling and Marc Linde

The purpose of this paper is to illustrate the importance of modeling parameter risk in premium risk, especially when data are scarce and a multi‐year projection horizon is…

Abstract

Purpose

The purpose of this paper is to illustrate the importance of modeling parameter risk in premium risk, especially when data are scarce and a multi‐year projection horizon is considered. Internal risk models often integrate both process and parameter risks in modeling reserve risk, whereas parameter risk is typically omitted in premium risk, the modeling of which considers only process risk.

Design/methodology/approach

The authors present a variety of methods for modeling parameter risk (asymptotic normality, bootstrap, Bayesian) with different statistical properties. They then integrate these different modeling approaches in an internal risk model and compare their results with those from modeling approaches that measure only process risk in premium risk.
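Of the three approaches named, the bootstrap is the simplest to illustrate. A minimal sketch, assuming a hypothetical short loss-ratio history (not the authors' internal model): refit the parameter on each resample, and read parameter risk off the spread of the refits.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical short loss-ratio history (scarce data, as in the paper)
history = np.array([0.72, 0.81, 0.65, 0.90, 0.78, 0.70, 0.85, 0.76])

# Nonparametric bootstrap: resample the history and refit the parameter
# (here simply the mean loss ratio) on each resample
n_boot = 5000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(history, size=history.size, replace=True)
    boot_means[b] = resample.mean()

# The spread of the refitted parameter quantifies parameter risk
param_risk_sd = boot_means.std(ddof=1)
```

A process-risk-only model would treat the historical mean as known; the bootstrap spread measures the uncertainty that assumption ignores.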

Findings

The authors show that parameter risk is substantial, especially when a multi‐year projection horizon is considered and when there is only limited historical data available for parameterization (as is often the case in practice). The authors' results also demonstrate that parameter risk substantially influences risk‐based capital and strategic management decisions, such as reinsurance.

Practical implications

The authors' findings emphasize that it is necessary to integrate parameter risk in risk modeling. Their findings are thus not only of interest to academics, but of high relevance to practitioners and regulators working toward appropriate risk modeling in an enterprise risk management and solvency context.

Originality/value

To the authors' knowledge, there are no model approaches or studies on parameter uncertainty for projection periods of not just one, but several, accident years; however, consideration of multiple years is crucial when thinking strategically about enterprise risk management.

Details

The Journal of Risk Finance, vol. 14 no. 3
Type: Research Article
ISSN: 1526-5943

Keywords

Open Access
Article
Publication date: 27 July 2022

Ruilin Yu, Yuxin Zhang, Luyao Wang and Xinyi Du

Time headway (THW) is an essential parameter in traffic safety and is used as a typical control variable by many vehicle control algorithms, especially in safety-critical ADAS and…


Abstract

Purpose

Time headway (THW) is an essential parameter in traffic safety and is used as a typical control variable by many vehicle control algorithms, especially in safety-critical ADAS and automated driving systems. However, owing to the randomness of human drivers, THW cannot be represented accurately, which hinders deeper research.

Design/methodology/approach

In this work, two data sets are used as the experimental data to calculate the goodness-of-fit of 18 commonly used distribution models of THW to select the best distribution model. Subsequently, the characteristic parameters of traffic flow are extracted from the data set, and three variables with higher importance are extracted using the random forest model. Combining the best distribution model parameters of the data set, this study obtained a distribution model with adaptive parameters, and its performance and applicability are verified.
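The model-selection step can be sketched with SciPy: fit each candidate by maximum likelihood and rank by log-likelihood. This toy version uses simulated headways and three candidates instead of the paper's two data sets and 18 models:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated time-headway sample (seconds); lognormal is a common THW shape
thw = rng.lognormal(mean=0.8, sigma=0.6, size=1000)

candidates = {
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
    "weibull_min": stats.weibull_min,
}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(thw, floc=0)                     # MLE fit, location pinned at 0
    scores[name] = np.sum(dist.logpdf(thw, *params))   # log-likelihood of the fit

best = max(scores, key=scores.get)
```

The paper goes further by making the winning model's parameters functions of traffic-flow variables; this sketch only covers the ranking step.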


Originality/value

The results show that the proposed model has a 62.7% performance improvement over the distribution model with fixed parameters. Moreover, the parameter function of the distribution model can be regarded as a quantitative analysis of the degree of influence of the traffic flow state on THW.

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 3
Type: Research Article
ISSN: 2399-9802

Keywords

Book part
Publication date: 18 April 2018

Dominique Lord and Srinivas Reddy Geedipally

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems…

Abstract

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.

Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.
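As a rough illustration of why the negative binomial baseline struggles with excess zeros, one can fit it by the method of moments to simulated zero-inflated counts and compare predicted and observed zero shares (a simplification; the chapter's examples use full regression models):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated crash counts with excess zeros (hypothetical data):
# a zero-inflated Poisson, the kind of dataset the chapter warns about
counts = np.where(rng.random(2000) < 0.4, 0, rng.poisson(3.0, size=2000))

m, v = counts.mean(), counts.var()
# Method-of-moments negative binomial fit (valid only when var > mean)
p_hat = m / v
n_hat = m * m / (v - m)

# Compare observed vs NB-predicted share of zeros
zero_obs = np.mean(counts == 0)
zero_nb = stats.nbinom.pmf(0, n_hat, p_hat)
```

The fitted negative binomial underpredicts the zero share here, which is the gap multi-parameter distributions are designed to close.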

Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.

Research implications – Multi-parameter models are expected to become the next series of traditional distributions and models. The research on these models is still ongoing.

Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Keywords

Article
Publication date: 12 February 2021

Ahmet Esat Suzer and Aziz Kaba

The purpose of this study is to describe precisely the wind speed regime and characteristics of a runway at an international airport in the north-western part of Turkey.

Abstract

Purpose

The purpose of this study is to describe precisely the wind speed regime and characteristics of a runway at an international airport in the north-western part of Turkey.

Design/methodology/approach

Three probability distributions, namely the inverse Gaussian (IG) and the two-parameter Weibull and Rayleigh distributions widely used in the literature, are used to represent the wind regime and characteristics of the runway. The parameters of each distribution are estimated by a pattern search (PS)-based heuristic algorithm. The results are compared with three numerical estimation methods: the maximum-likelihood method, the method of moments (MoM) and the power density method. To evaluate the fitting performance of the proposed method, several statistical goodness-of-fit tests are conducted, including the widely used root mean square error (RMSE) and chi-squared (χ2) tests.
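The RMSE criterion can be sketched as follows, with an MLE fit standing in for the authors' pattern-search estimator and simulated wind speeds standing in for the runway data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated wind-speed sample (m/s); two-parameter Weibull is the classic model
wind = rng.weibull(2.0, size=2000) * 6.0    # shape k = 2, scale c = 6

# MLE fit of the two-parameter Weibull (location fixed at 0)
k_hat, loc, c_hat = stats.weibull_min.fit(wind, floc=0)

# RMSE between the empirical histogram density and the fitted PDF
density, edges = np.histogram(wind, bins=20, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
fitted = stats.weibull_min.pdf(centers, k_hat, loc, c_hat)
rmse = np.sqrt(np.mean((density - fitted) ** 2))
```

The same RMSE computation applies unchanged to Rayleigh or inverse Gaussian fits, which is what makes it usable as a common yardstick across candidate distributions.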

Findings

In the light of the statistical goodness-of-fit tests, the IG-based PS method attains better performance than the classical Weibull and Rayleigh functions. Both the RMSE and χ2 values achieved by the IG-based PS method are lower than those of the Weibull and Rayleigh distributions. It exhibits a better fitting performance for the probability density function (PDF), with an RMSE of 0.0074 and a χ2 of 0.58 × 10−4 in 2012 and an RMSE of 0.0084 and a χ2 of 0.74 × 10−4 in 2013. As regards the cumulative distribution function of the measured wind data, the best results in 2012 are obtained by the Weibull-based PS method, with an RMSE of 0.0175 and a χ2 of 3.25 × 10−4; however, the Weibull-based MoM performs better in 2013, with an RMSE of 0.0166 and a χ2 of 2.94 × 10−4. Consequently, the results of this study confirm that the IG-based PS method, with the lowest error values, can be a good choice to model and characterize the wind speed profile of the airport more accurately.

Practical implications

This paper presents a realistic point of view regarding the wind regime and characteristics of an airport. It may shed light for researchers, policymakers, policy analysts and airport designers intending to investigate the wind profile of a runway at any airport, and it also provides a significant pathway for determining the wind distribution of a runway.

Originality/value

Instead of the well-known Weibull distribution commonly used in the literature to represent wind distributions, this paper uses the IG distribution. Furthermore, the suitability of the IG distribution for representing the wind distribution is validated by comparison with the two-parameter Weibull and Rayleigh distributions. In addition, the performance and efficiency of PS are evaluated by comparing it with the other methods.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 2
Type: Research Article
ISSN: 1748-8842

Keywords

Article
Publication date: 8 May 2018

Przemyslaw Markiewicz, Roman Sikora and Wieslawa Pabjanczyk

The purpose of this paper is to determine whether the start-up current parameters are stochastic. Electronic equipment in luminaires significantly improves their luminous…

Abstract

Purpose

The purpose of this paper is to determine whether the start-up current parameters are stochastic. Electronic equipment in luminaires significantly improves their luminous efficiency, thereby increasing the energy efficiency of lighting installations. However, the use of electronics [e.g. electronic ballasts for discharge lamps or power supply units for light-emitting diode (LED) luminaires] may also cause some negative effects in lighting installations. One such effect is a large inrush current, which can greatly exceed the admissible line load and trigger the overcurrent protective devices.

Design/methodology/approach

The paper presents the results of laboratory tests, together with their statistical analysis, of the inrush currents of lighting luminaires. Three road luminaires of similar power, built with different technologies, were selected for the study. Theoretical distributions described by analytical formulas were matched to the empirical distributions using the MATLAB Statistics Toolbox.

Findings

The parameters that characterize the short-time overcurrent at start-up are the maximum value of the overcurrent amplitude at the start-up moment (IPIC), the duration of the overcurrent at the start-up moment (tPIC) and the melting integral MI. The aim of the statistical analysis of the selected parameters is to provide a mathematical description of the overcurrent that allows the probability of occurrence of its values to be estimated. For luminaires fitted with magnetic ballasts, the analyzed parameters vary randomly with the moment of switch-on. For electronic ballasts, the occurrence of this phenomenon depends on the adopted construction solution.

Practical implications

This allows, for example, the probability of activation of a protection device to be estimated by comparing the value of the inrush current Joule integral MI with its value for the analyzed protection device. The proposed method may be useful for checking the selectivity of the protection devices in a lighting system.
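The Joule (melting) integral MI referred to here is ∫ i²(t) dt over the inrush event. A numerical sketch on a synthetic waveform (all values hypothetical, not the measured luminaires):

```python
import numpy as np

# Synthetic inrush-current waveform: a decaying 50 Hz burst
# (hypothetical values; the paper analyzes measured luminaire currents)
t = np.linspace(0.0, 0.02, 2001)                               # 20 ms window, s
i = 40.0 * np.exp(-t / 0.003) * np.sin(2 * np.pi * 50.0 * t)   # A

# Joule (melting) integral MI = integral of i^2 dt, via the trapezoid rule
mi = float(np.sum(0.5 * (i[1:] ** 2 + i[:-1] ** 2) * np.diff(t)))

# A protective device is at risk of tripping when MI exceeds its
# let-through I^2*t rating (rating value here is hypothetical)
device_i2t = 0.3                                               # A^2*s
trips = mi > device_i2t
```

Comparing MI against a device's I²t rating is the selectivity check described above; the probabilistic model turns this pointwise comparison into a probability of activation.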

Originality/value

The study enables the application of a probabilistic model for the analysis of inrush currents of lighting luminaires and the prediction of the possible consequences of their occurrence.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 3
Type: Research Article
ISSN: 0332-1649

Keywords

Book part
Publication date: 23 October 2023

Morten I. Lau, Hong Il Yoo and Hongming Zhao

We evaluate the hypothesis of temporal stability in risk preferences using two recent data sets from longitudinal lab experiments. Both experiments included a combination of…

Abstract

We evaluate the hypothesis of temporal stability in risk preferences using two recent data sets from longitudinal lab experiments. Both experiments included a combination of decision tasks that allows one to identify a full set of structural parameters characterizing risk preferences under Cumulative Prospect Theory (CPT), including loss aversion. We consider temporal stability in those structural parameters at both population and individual levels. The population-level stability pertains to whether the distribution of risk preferences across individuals in the subject population remains stable over time. The individual-level stability pertains to within-individual correlation in risk preferences over time. We embed the CPT structure in a random coefficient model that allows us to evaluate temporal stability at both levels in a coherent manner, without having to switch between different sets of models to draw inferences at a specific level.
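Among the CPT structural parameters mentioned is loss aversion; in the standard Tversky–Kahneman (1992) parameterization (a textbook form, not necessarily the exact specification used in this chapter), the value function reads:

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: concave over gains,
    convex over losses, with loss-aversion coefficient lam."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: a loss looms larger than an equal-sized gain
gain = cpt_value(100.0)
loss = cpt_value(-100.0)
```

Temporal stability of risk preferences then amounts to asking whether refitted values of alpha, beta and lam (plus the probability-weighting parameters) stay put across experimental waves.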

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Keywords

Details

Applying Maximum Entropy to Econometric Problems
Type: Book
ISBN: 978-0-76230-187-4

Article
Publication date: 12 September 2020

Niveditha A and Ravichandran Joghee

While Six Sigma metrics have been studied by researchers in detail for normal distribution-based data, in this paper, we have attempted to study the Six Sigma metrics for two…

Abstract

Purpose

While Six Sigma metrics have been studied by researchers in detail for normal distribution-based data, in this paper, we have attempted to study the Six Sigma metrics for the two-parameter Weibull distribution, which is useful in many life test data analyses.

Design/methodology/approach

In the theory of Six Sigma, most of the processes are assumed normal and Six Sigma metrics are determined for such a process of interest. In reliability studies non-normal distributions are more appropriate for life tests. In this paper, a theoretical procedure is developed for determining Six Sigma metrics when the underlying process follows two-parameter Weibull distribution. Numerical evaluations are also considered to study the proposed method.

Findings

In this paper, by matching the probabilities under different normal-process-based sigma quality levels (SQLs), we first determined the Six Sigma specification limits (the lower and upper Six Sigma limits, LSSL and USSL) for the two-parameter Weibull distribution by setting different values for the shape and scale parameters. Then, the lower SQL (LSQL) and upper SQL (USQL) values were obtained for the Weibull distribution in both the centered and shifted cases. We present numerical results for the Six Sigma metrics of the Weibull distribution with different parameter settings. We also simulated a set of 1,000 values from this Weibull distribution for both the centered and shifted cases to evaluate the Six Sigma performance metrics. It is found that the SQLs under the two-parameter Weibull distribution are slightly lower than those obtained when the process is assumed normal.
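The probability-matching idea can be sketched one-sided: take the upper-tail probability of a centered normal process at a given sigma level and invert the Weibull CDF at that probability to obtain the corresponding USSL (the parameter values below are arbitrary illustrations, not the paper's settings):

```python
from scipy import stats

# Upper-tail probability of a centered normal process at a k-sigma quality level
def normal_tail(k_sigma):
    return stats.norm.sf(k_sigma)

# Matching USSL for a two-parameter Weibull(shape, scale) process:
# the quantile leaving the same upper-tail probability
def weibull_ussl(shape, scale, k_sigma):
    return stats.weibull_min.ppf(1.0 - normal_tail(k_sigma), shape, scale=scale)

ussl_3s = weibull_ussl(2.0, 1.0, 3.0)   # 3-sigma match
ussl_6s = weibull_ussl(2.0, 1.0, 6.0)   # 6-sigma match
```

The same inversion at the lower tail gives the LSSL, after which DPMO-type metrics follow from the Weibull tail probabilities beyond those limits.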

Originality/value

The theoretical approach proposed for determining Six Sigma metrics for Weibull distribution is new to the Six Sigma Quality practitioners who commonly deal with normal process or normal approximation to non-normal processes. The procedure developed here is, in fact, used to first determine LSSL and USSL followed by which LSQL and USQL are obtained. This in turn has helped to compute the Six Sigma metrics such as defects per million opportunities (DPMOs) and the parts that are extremely good per million opportunities (EGPMOs) under two-parameter Weibull distribution for lower-the-better (LTB) and higher-the-better (HTB) quality characteristics. We believe that this approach is quite new to the practitioners, and it is not only useful to the practitioners but will also serve to motivate the researchers to do more work in this field of research.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 4
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 22 February 2021

Carmen Patino-Rodriguez, Diana M. Pérez and Olga Usuga Manco

The purpose of this paper is to evaluate the performance of a modified EWMA control chart (γEWMA control chart), which considers the data distribution and incorporates its correlation…

Abstract

Purpose

The purpose of this paper is to evaluate the performance of a modified EWMA control chart (the γEWMA control chart), which considers the data distribution and incorporates its correlation structure, by simulating in-control and out-of-control processes, and to select an adequate value for the smoothing parameter under these conditions.

Design/methodology/approach

This paper is based on a simulation approach using the methodology for evaluating statistical methods proposed by Morris et al. (2019). Data were generated from a simulation considering two factors associated with the data: (1) the skewness of the quality variable distribution, as an indicator of that distribution; and (2) the autocorrelation structure between observations, modeled by an AR(1) process. In addition, one factor associated with the process was considered: the shift in the process mean. In the next step, when the control chart is modeled, a fourth factor intervenes: the smoothing parameter. Finally, three indicators defined from the run length are used to evaluate γEWMA control chart performance under these factors and their interactions.
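The plain EWMA recursion underlying such charts, applied to AR(1)-correlated data, can be sketched as follows (a standard EWMA with i.i.d.-theory limits, not the authors' modified γEWMA); the inflated spread of the EWMA statistic relative to the i.i.d. formula is exactly why modified limits are needed:

```python
import numpy as np

rng = np.random.default_rng(5)

# AR(1) observations: x_t = phi * x_{t-1} + e_t (in-control, mean 0)
phi, n = 0.5, 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# EWMA statistic: z_t = lam * x_t + (1 - lam) * z_{t-1}
lam = 0.2
z = np.zeros(n)
for t in range(1, n):
    z[t] = lam * x[t] + (1 - lam) * z[t - 1]

# Asymptotic control limit assuming i.i.d. data; with autocorrelated
# observations this limit is too tight, inflating false alarms
sigma = x.std(ddof=1)
ucl = 3.0 * sigma * np.sqrt(lam / (2 - lam))
```

Comparing the empirical standard deviation of z against the i.i.d. formula shows the mismatch the γEWMA chart is designed to correct.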

Findings

The interaction analysis of the four factors shows that the modeling and selection of parameters differ for out-of-control and in-control processes; therefore, the considerations and parameters selected for each case must be carefully analyzed. For out-of-control processes, it is better to preserve the original features of the distribution (mean and variance) when calculating the control limits. It makes sense that highly autocorrelated observations require a smaller smoothing parameter, since the correlation structure enables the preservation of relevant information in past data.

Originality/value

The γEWMA control chart has advantages because it gathers, in a single control chart, the process and modeling characteristics and the correlation structure of the data. Although there are other proposals for modified EWMA charts, none of them simultaneously analyzes these four factors or their interactions. The proposed γEWMA chart allows the appropriate smoothing parameter to be set when these factors are considered.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 9
Type: Research Article
ISSN: 0265-671X

Keywords
