Search results

1 – 10 of over 1000
Article
Publication date: 3 January 2020

Mayank Kumar Jha, Sanku Dey and Yogesh Mani Tripathi

Abstract

Purpose

The purpose of this paper is to estimate the multicomponent reliability by assuming the unit-Gompertz (UG) distribution. Both stress and strength are assumed to have a UG distribution with a common scale parameter.

Design/methodology/approach

The reliability of a multicomponent stress–strength system is obtained by maximum likelihood estimation (MLE) and Bayesian methods. Bayes estimates of system reliability are obtained using Lindley's approximation and the Metropolis–Hastings (M–H) algorithm when all the parameters are unknown. The highest posterior density (HPD) credible interval is obtained using the M–H algorithm. Besides, the uniformly minimum variance unbiased estimator (UMVUE) and exact Bayes estimates of system reliability are obtained when the common scale parameter is known, and the results are compared for both small and large samples.
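
A minimal sketch of the Metropolis–Hastings step mentioned above, shown on a toy standard normal target rather than the paper's unit-Gompertz posterior (the target density, step size and iteration count are all illustrative assumptions):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_iter=20000, step=1.0, seed=1):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2)
    and accept with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept the proposal
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: a standard normal log-density standing in for a real
# log-posterior of the reliability parameters.
log_target = lambda x: -0.5 * x * x

draws = metropolis_hastings(log_target, x0=0.0)
burned = draws[2000:]  # discard burn-in
mean = sum(burned) / len(burned)
```

The same loop works for any log-posterior: in the paper's setting, log_target would be the unit-Gompertz log-likelihood plus log-prior, and the retained draws would feed both point estimates and HPD intervals.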

Findings

Based on the simulation results, the authors observe that the Bayes method provides better estimates than MLE. The proposed asymptotic and HPD intervals show satisfactory coverage probabilities, and the average length of the HPD intervals tends to remain shorter than that of the corresponding asymptotic intervals. Overall, the authors observe that better estimates of the reliability may be achieved when the common scale parameter is known.

Originality/value

Most of the lifetime distributions used in reliability analysis, such as the exponential, Lindley, gamma, lognormal, Weibull and Chen, exhibit only constant, monotonically increasing, decreasing or bathtub-shaped hazard rates. However, in many applications in reliability and survival analysis, the most realistic hazard rates are upside-down bathtub-shaped, as found in the unit-Gompertz distribution. Furthermore, when reliability is measured as a percentage or ratio, it is important to have models defined on the unit interval in order to obtain plausible results. Therefore, the authors study multicomponent stress–strength reliability under the unit-Gompertz distribution, comparing the MLEs, Bayes estimators and UMVUEs.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 20 January 2023

Sakshi Soni, Ashish Kumar Shukla and Kapil Kumar

Abstract

Purpose

This article aims to develop procedures for estimation and prediction in case of Type-I hybrid censored samples drawn from a two-parameter generalized half-logistic distribution (GHLD).

Design/methodology/approach

The GHLD is a versatile model that is useful in lifetime modelling, and hybrid censoring is a time- and cost-effective censoring scheme that is widely used in the literature. The authors derive the maximum likelihood estimates, maximum product of spacings estimates and Bayes estimates under the squared error loss function for the unknown parameters, the reliability function and the stress-strength reliability. The Bayesian estimation is performed under an informative prior set-up using the importance sampling technique. Afterwards, the authors discuss the Bayesian prediction problem under one- and two-sample frameworks and obtain the predictive estimates and intervals with their corresponding average interval lengths. Applications of the developed theory are illustrated with the help of two real data sets.
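
The maximum product of spacings idea used above can be sketched in a few lines. For brevity, the sketch fits a one-parameter exponential model by grid search rather than the two-parameter GHLD with a numerical optimiser; both simplifications are assumptions, not the article's method:

```python
import math
import random

def mps_exponential(data, grid):
    """Maximum product of spacings for Exp(rate): maximize the sum of
    log spacings F(x_(i)) - F(x_(i-1)), with F(x) = 1 - exp(-rate * x)
    and the conventions F(x_(0)) = 0, F(x_(n+1)) = 1."""
    xs = sorted(data)
    best_rate, best_obj = None, -math.inf
    for rate in grid:
        F = [0.0] + [1.0 - math.exp(-rate * x) for x in xs] + [1.0]
        spacings = [max(F[i + 1] - F[i], 1e-300) for i in range(len(F) - 1)]
        obj = sum(math.log(s) for s in spacings)
        if obj > best_obj:
            best_rate, best_obj = rate, obj
    return best_rate

rng = random.Random(7)
sample = [rng.expovariate(2.0) for _ in range(100)]  # true rate = 2
grid = [0.01 * k for k in range(1, 1001)]            # candidate rates 0.01 .. 10.0
rate_hat = mps_exponential(sample, grid)
```

Unlike maximum likelihood, the spacings objective stays well behaved even when the density is unbounded at the support boundary, which is one reason the method is popular for censored lifetime models.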

Findings

The performances of these estimates and prediction methods are examined under Type-I hybrid censoring scheme with different combinations of sample sizes and time points using Monte Carlo simulation techniques. The simulation results show that the developed estimates are quite satisfactory. Bayes estimates and predictive intervals estimate the reliability characteristics efficiently.

Originality/value

The proposed methodology may be used to estimate future observations when the available data are Type-I hybrid censored. This study would help in estimating and predicting the mission time as well as stress-strength reliability when the data are censored.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 9
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 5 March 2021

Mayank Kumar Jha, Yogesh Mani Tripathi and Sanku Dey

Abstract

Purpose

The purpose of this article is to derive inference for multicomponent reliability where stress-strength variables follow unit generalized Rayleigh (GR) distributions with common scale parameter.

Design/methodology/approach

The authors derive inference for the unknown parametric function using classical and Bayesian approaches. Subsequently, (weighted) least squares (LS) and maximum product of spacings methods are used to estimate the reliability, and bootstrapping is also considered for this purpose. Bayesian inference is derived under gamma prior distributions, and the corresponding credible intervals are constructed. For a known common scale parameter, an unbiased estimator is obtained and compared with the corresponding exact Bayes estimate.
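
The target quantity in this line of work is the s-out-of-k reliability, the probability that at least s of the k strength variables exceed the common stress. A Monte Carlo sketch with exponential stand-ins for the unit GR variables of the article (the distributions, parameters and system size are illustrative assumptions):

```python
import random

def multicomponent_reliability(draw_stress, draw_strength, s, k,
                               n_sim=100000, seed=3):
    """Monte Carlo estimate of R_{s,k}: the probability that at least
    s of k independent strength draws exceed one common stress draw."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        stress = draw_stress(rng)
        survivors = sum(draw_strength(rng) > stress for _ in range(k))
        if survivors >= s:
            hits += 1
    return hits / n_sim

# Exponential stand-ins: stress ~ Exp(rate 2), strength ~ Exp(rate 1)
# (strength stochastically larger), in a 2-out-of-3 system.
r_hat = multicomponent_reliability(lambda rng: rng.expovariate(2.0),
                                   lambda rng: rng.expovariate(1.0),
                                   s=2, k=3)
```

For this particular choice the exact value can be computed in closed form and equals 0.7, so the simulated estimate should land very close to it; plugging in estimated parameters instead of known ones is what turns this into the estimation problem studied above.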

Findings

Different point and interval estimators of the reliability are examined using Monte Carlo simulations for different sample sizes. In summary, the authors observe that Bayes estimators obtained under gamma prior distributions perform well compared to the other studied estimators. The average length (AL) of the highest posterior density (HPD) interval remains shorter than that of the other proposed intervals, and the coverage probabilities of all the intervals are reasonably satisfactory. A data analysis is also presented in support of the studied estimation methods. It is noted that the proposed methods work well for the considered estimation problem.

Originality/value

In the literature, the probability distributions commonly analyzed in life-test studies are mostly unbounded in nature; that is, their support lies on an infinite interval. This class includes the generalized exponential, Burr family, gamma, lognormal and Weibull models, among others. In many situations, however, one needs to analyze data that lie in a bounded interval, such as the average height of an individual, the survival time from a disease or per-capita income. The use of probability models with support on finite intervals thus becomes inevitable. The authors investigate stress-strength reliability based on the unit GR distribution, and useful observations are drawn from the numerical study.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 10
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 25 February 2014

D.R. Barot and M.N. Patel

Abstract

Purpose

This paper aims to deal with the estimation of the empirical Bayesian exact confidence limits of reliability indexes of a cold standby series system with (n+k−1) units under the general progressive Type II censoring scheme.

Design/methodology/approach

Assuming that the lifetime of each unit in the system is an independent and identically distributed exponential random variable, the exact confidence limits of the reliability indexes are derived using an empirical Bayes approach with an exponential prior distribution on the failure rate parameter. The accuracy of these confidence limits is examined in terms of their coverage probabilities by means of Monte Carlo simulations.

Findings

The simulation results show that the exact confidence limits of the reliability indexes of the cold standby series system are accurate, so the approach is suitable for use by reliability practitioners seeking to improve system reliability.

Practical implications

When items are costly, the general progressive Type II censoring scheme is used to reduce the total test time and the associated cost of an experiment. The proposed method provides the means to estimate the exact confidence limits of reliability indexes of the proposed cold standby series system under this scheme.

Originality/value

The application of the proposed technique will help reliability engineers, managers and system engineers in the various industrial and other settings where cold standby series systems are widely used.

Details

International Journal of Quality & Reliability Management, vol. 31 no. 3
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 15 January 2010

Thomas J. Adler, Colin Smith and Jeffrey Dumont

Abstract

Discrete choice models are widely used for estimating the effects of changes in attributes on a given product's likely market share. These models can be applied directly to situations in which the choice set is constant across the market of interest or in which the choice set varies systematically across the market. In both of these applications, the models are used to determine the effects of different attribute levels on market shares among the available alternatives, given predetermined choice sets, or of varying the choice set in a straightforward way.

Discrete choice models can also be used to identify the “optimal” configuration of a product or service in a given market. This can be computationally challenging when preferences vary with respect to the ordering of levels within an attribute as well as the strengths of preferences across attributes. However, this type of optimization can be a relatively straightforward extension of the typical discrete choice model application.

In this paper, we describe two applications that use discrete choice methods to provide a more robust metric for use in Total Unduplicated Reach and Frequency (TURF) applications: apparel and food products. Both applications involve products for which there is a high degree of heterogeneity in preferences among consumers.

We further discuss a significant challenge in using TURF — that with multi-attributed products the method can become computationally intractable — and describe a heuristic approach to support food and apparel applications. We conclude with a summary of the challenges in these applications, which are yet to be addressed.
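
Heuristics for this kind of TURF problem are typically greedy: repeatedly add the product that reaches the most not-yet-reached consumers. A minimal sketch on invented preference data (the data and the greedy rule are illustrative assumptions; the authors' actual heuristic may differ):

```python
def greedy_turf(prefs, portfolio_size):
    """Greedy TURF heuristic: prefs maps each product to the set of
    consumers who would choose it; repeatedly pick the product that
    adds the most not-yet-reached consumers."""
    chosen, reached = [], set()
    candidates = set(prefs)
    for _ in range(portfolio_size):
        best = max(candidates, key=lambda p: len(prefs[p] - reached))
        chosen.append(best)
        reached |= prefs[best]
        candidates.remove(best)
    return chosen, len(reached)

# Invented data: which of consumers 1..8 would buy each flavour.
prefs = {
    "vanilla":    {1, 2, 3, 4, 5},
    "chocolate":  {1, 2, 3},
    "strawberry": {6, 7},
    "pistachio":  {5, 8},
}
portfolio, reach = greedy_turf(prefs, portfolio_size=2)
```

Here the greedy pass picks vanilla first (5 consumers) and then strawberry (2 new consumers), reaching 7 of 8; note that chocolate, the second most popular item on its own, adds no unduplicated reach, which is precisely the effect TURF is designed to expose.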

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8

Article
Publication date: 1 March 1974

S.E. ROBERTSON and D. TEATHER

Abstract

A model is proposed to explain the retrieval characteristics of an IR system. It is assumed that, for a given system and a given question, there are probabilities of retrieving relevant or non‐relevant documents, but that these probabilities are not necessarily the same for different questions. A Bayesian method is outlined for estimating these probabilities, on the basis of a model relating them. The method is applied successfully to some Cranfield data. Potentialities of the method are discussed.
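
One simple modern way to realise "a Bayesian method for estimating these probabilities" is conjugate beta-binomial shrinkage: place a Beta prior on a question's probability of retrieving relevant documents and update it with that question's counts. This is a sketch of the general idea, not Robertson and Teather's exact model; the uniform prior is an assumption:

```python
def posterior_relevance(retrieved, relevant, a=1.0, b=1.0):
    """Beta(a, b) prior on the probability that a retrieved document is
    relevant; after `relevant` successes in `retrieved` retrievals the
    posterior is Beta(a + relevant, b + retrieved - relevant).
    Returns the posterior mean."""
    a_post = a + relevant
    b_post = b + retrieved - relevant
    return a_post / (a_post + b_post)

# A question that retrieved 10 documents, 3 of them relevant:
est = posterior_relevance(retrieved=10, relevant=3)  # (1 + 3) / (2 + 10) = 1/3
mle = 3 / 10
```

The posterior mean (1/3) sits between the raw proportion (0.3) and the uniform prior's mean (0.5), shrinking noisy per-question estimates toward the prior; allowing the probabilities to differ across questions while pooling information in this way is the core of the abstract's proposal.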

Details

Journal of Documentation, vol. 30 no. 3
Type: Research Article
ISSN: 0022-0418

Book part
Publication date: 18 April 2018

Dominique Lord and Srinivas Reddy Geedipally

Abstract

Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.

Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.

Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.

Research implications – Multi-parameter distributions and models are expected to become the next generation of standard tools for crash data analysis, although research on them is still ongoing.

Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Article
Publication date: 16 January 2023

Intekhab Alam, Ahteshamul Haq, Lalit Kumar Sharma, Sumit Sharma and Ritika

Abstract

Purpose

In this paper, the authors design an accelerated life test and provide its application in the field of warranty policy. The maximum likelihood estimation method is used for parameter estimation.

Design/methodology/approach

The authors design an accelerated life test under Type-I censoring, assuming that the lifetime of the test items follows the PID, and use the maximum likelihood estimation method to estimate the model parameters.

Findings

In this study, the authors design an accelerated life test under Type-I censoring when the lifetime of the test items follows the PID, and also provide its application in the field of warranty policy. The following conclusions are drawn from this study:

(1) The shape parameter is inversely related to the expected total cost and the expected cycle time, and directly related to the expected cost rate (see Table 5).

(2) The scale parameter is directly related to the expected total cost and the expected cycle time, and inversely related to the expected cost rate (see Table 5).

(3) The replacement age is inversely related to the expected cost rate, and directly related to the expected total cost and the expected cycle time (see Table 5).

Originality/value

This paper has been neither published nor accepted elsewhere.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 8
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 11 March 2014

Jared Charles Allen, Alasdair M. Goodwill, Kyle Watters and Eric Beauregard

Abstract

Purpose

The purpose of this paper is to discuss and demonstrate “best practices” for creating quantitative behavioural investigative advice (i.e. statements to assist police with psychological and behavioural aspects of investigations) where complex statistical modelling is not available.

Design/methodology/approach

Using a sample of 361 serial stranger sexual offenses and a cross-validation approach, the paper demonstrates prediction of offender characteristics using base rates and using Bayes’ Theorem. The paper predicts four dichotomous offender characteristic variables, first using simple base rates, then using Bayes’ Theorem with 16 categorical crime scene variable predictors.
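
The two schemes being contrasted can be sketched on invented data: a base-rate predictor that always outputs the more common value of the characteristic, and a naïve application of Bayes' Theorem that multiplies the base rate by per-predictor likelihoods. The cases below are synthetic and do not come from the study's data:

```python
from collections import Counter

def base_rate_predict(labels):
    """Predict the most common value of the characteristic,
    ignoring the crime-scene variables entirely."""
    return Counter(labels).most_common(1)[0][0]

def bayes_predict(cases, labels, new_case):
    """Naive use of Bayes' Theorem over categorical crime-scene variables:
    score(label) = P(label) * prod_j P(x_j | label), with add-one
    smoothing on each conditional probability."""
    scores = {}
    for c in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        score = len(idx) / len(labels)               # base rate P(label)
        for j, value in enumerate(new_case):
            matches = sum(1 for i in idx if cases[i][j] == value)
            score *= (matches + 1) / (len(idx) + 2)  # smoothed P(x_j | label)
        scores[c] = score
    return max(scores, key=scores.get)

# Invented crime-scene variables (location, weapon use) and an invented
# offender characteristic (prior record) for five toy cases:
cases = [("indoor", "weapon"), ("indoor", "weapon"), ("outdoor", "none"),
         ("outdoor", "none"), ("outdoor", "weapon")]
labels = ["record", "record", "no_record", "no_record", "no_record"]
new_case = ("indoor", "weapon")
```

On this toy data the base rate alone predicts "no_record" for every case (3 of 5), while Bayes' Theorem, conditioning on an indoor attack with a weapon, flips the prediction to "record", illustrating how incorporating crime-scene information can improve on base rates.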

Findings

Both methods consistently predict better than chance. By incorporating more information, analyses based on Bayes’ Theorem (74.6 per cent accurate) predict with 11.1 per cent more accuracy overall than analyses based on base rates (63.5 per cent accurate), and provide improved advising estimates in line with best practices.

Originality/value

The study demonstrates how useful predictions of offender characteristics can be acquired from crime information without large (i.e. >500 cases) data sets or “trained” statistical models. Advising statements are constructed for discussion, and results are discussed in terms of the pragmatic usefulness of the methods for police investigations.

Details

Policing: An International Journal of Police Strategies & Management, vol. 37 no. 1
Type: Research Article
ISSN: 1363-951X

Article
Publication date: 30 March 2012

Marcelo Mendoza

Abstract

Purpose

Automatic text categorization has applications in several domains, for example e-mail spam detection, sexual content filtering, directory maintenance and focused crawling, among others. Most information retrieval systems contain several components that use text categorization methods. One of the first text categorization methods was designed using a naïve Bayes representation of the text, and a number of variations of naïve Bayes have since been proposed. The purpose of this paper is to evaluate naïve Bayes approaches to text categorization, introducing new competitive extensions to previous approaches.

Design/methodology/approach

The paper focuses on introducing a new Bayesian text categorization method based on an extension of the naïve Bayes approach. Some modifications to the document representations are introduced, based on the well-known BM25 text information retrieval method. The performance of the method is compared to that of several extensions of naïve Bayes using benchmark datasets designed for this purpose, and also to training-based methods such as support vector machines and logistic regression.
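
One plausible way to combine BM25 representations with naïve Bayes, shown here as an assumption about the construction rather than the paper's exact formulation, is to accumulate BM25 term weights instead of raw counts in the class-conditional statistics of a multinomial classifier (the toy corpus and the parameters k1 and b are invented):

```python
import math
from collections import defaultdict

def bm25_weights(doc, corpus, k1=1.2, b=0.75):
    """BM25 weight of each distinct term in `doc`, computed against `corpus`."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    tf = defaultdict(int)
    for t in doc:
        tf[t] += 1
    weights = {}
    for t, f in tf.items():
        df = sum(1 for d in corpus if t in d)           # document frequency
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        norm = f + k1 * (1 - b + b * len(doc) / avgdl)  # length normalisation
        weights[t] = idf * f * (k1 + 1) / norm
    return weights

def train_nb(docs, labels):
    """Multinomial naive Bayes where each term contributes its BM25
    weight instead of its raw count."""
    stats = defaultdict(lambda: defaultdict(float))
    priors = defaultdict(int)
    vocab = {t for d in docs for t in d}
    for doc, c in zip(docs, labels):
        priors[c] += 1
        for t, w in bm25_weights(doc, docs).items():
            stats[c][t] += w
    return stats, priors, vocab

def predict(doc, stats, priors, vocab):
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for c in priors:
        mass = sum(stats[c].values())
        lp = math.log(priors[c] / total)
        for t in doc:
            # Laplace-smoothed class-conditional term probability
            lp += math.log((stats[c].get(t, 0.0) + 1.0) / (mass + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [["goal", "match", "team"], ["team", "coach", "goal"],
        ["stock", "market", "shares"], ["market", "profit", "stock"]]
labels = ["sport", "sport", "finance", "finance"]
stats, priors, vocab = train_nb(docs, labels)
pred = predict(["goal", "team", "match"], stats, priors, vocab)
```

The classifier's structure is untouched; only the sufficient statistics change, which is why this kind of representation swap adds essentially no computational cost over plain multinomial naïve Bayes.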

Findings

The proposed text categorizer outperforms state-of-the-art methods without introducing new computational costs. It also achieves performance very similar to more complex methods based on criterion-function optimization, such as support vector machines and logistic regression.

Practical implications

The proposed method scales well regarding the size of the collection involved. The presented results demonstrate the efficiency and effectiveness of the approach.

Originality/value

The paper introduces a novel naïve Bayes text categorization approach based on the well‐known BM25 information retrieval model, which offers a set of good properties for this problem.

Details

International Journal of Web Information Systems, vol. 8 no. 1
Type: Research Article
ISSN: 1744-0084
