Search results

1 – 10 of over 1000
Article
Publication date: 7 August 2017

Soumya Roy, Biswabrata Pradhan and E.V. Gijo

Abstract

Purpose

The purpose of this paper is to compare various methods of estimation of P(X<Y) based on Type-II censored data, where X and Y represent a quality characteristic of interest for two groups.

Design/methodology/approach

This paper assumes that both X and Y are independently distributed generalized half logistic random variables. The maximum likelihood estimator and the uniformly minimum variance unbiased estimator of R are obtained based on Type-II censored data. An exact 95 percent confidence interval for R based on the maximum likelihood estimate is also provided. Next, various Bayesian point and interval estimators are obtained using both subjective and non-informative priors. A real-life data set is analyzed for illustration.

Findings

The performance of the various point and interval estimators is judged through a detailed simulation study. The finite-sample properties of the estimators are found to be satisfactory. It is observed that the posterior mean marginally outperforms the other estimators with respect to mean squared error, even under the non-informative prior.

Originality/value

The proposed methodology can be used for comparing two groups with respect to a suitable quality characteristic of interest. It can also be applied for estimation of the stress-strength reliability, which is of particular interest to the reliability engineers.
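
As a hedged illustration of the quantity being estimated (not the paper's Type-II censored estimators), the sketch below draws complete samples from a one-parameter generalized half-logistic distribution, assuming the common form with survival function S(x) = (2e^{-x}/(1+e^{-x}))^k, and estimates R = P(X &lt; Y) by Monte Carlo. Under this assumed form R has the closed form k_x/(k_x + k_y), which the simulation can be checked against:

```python
import numpy as np

rng = np.random.default_rng(7)

def rvs_ghl(k, size, rng):
    # Inverse-survival sampling for the generalized half-logistic:
    # S(x) = (2*exp(-x) / (1 + exp(-x)))**k for x > 0
    v = rng.uniform(size=size) ** (1.0 / k)
    return np.log((2.0 - v) / v)

k_x, k_y, n = 2.0, 1.0, 200_000
x = rvs_ghl(k_x, n, rng)
y = rvs_ghl(k_y, n, rng)

r_hat = np.mean(x < y)          # Monte Carlo estimate of R = P(X < Y)
r_closed = k_x / (k_x + k_y)    # closed form under this parameterisation
```

The closed form follows because, with a common baseline survival G(x), both survival functions are powers G^k, so the probability integral reduces to k_x/(k_x + k_y).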

Details

International Journal of Quality & Reliability Management, vol. 34 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 20 January 2023

Sakshi Soni, Ashish Kumar Shukla and Kapil Kumar

Abstract

Purpose

This article aims to develop procedures for estimation and prediction in case of Type-I hybrid censored samples drawn from a two-parameter generalized half-logistic distribution (GHLD).

Design/methodology/approach

The GHLD is a versatile model for lifetime data, and hybrid censoring is a time- and cost-effective censoring scheme that is widely used in the literature. The authors derive the maximum likelihood estimates, the maximum product of spacings estimates and the Bayes estimates under squared error loss for the unknown parameters, the reliability function and the stress-strength reliability. The Bayesian estimation is performed under an informative prior set-up using the importance sampling technique. The authors then discuss the Bayesian prediction problem under one- and two-sample frameworks and obtain the predictive estimates and intervals with the corresponding average interval lengths. Applications of the developed theory are illustrated with the help of two real data sets.

Findings

The performances of these estimates and prediction methods are examined under Type-I hybrid censoring scheme with different combinations of sample sizes and time points using Monte Carlo simulation techniques. The simulation results show that the developed estimates are quite satisfactory. Bayes estimates and predictive intervals estimate the reliability characteristics efficiently.

Originality/value

The proposed methodology may be used to estimate future observations when the available data are Type-I hybrid censored. This study would help in estimating and predicting the mission time as well as stress-strength reliability when the data are censored.
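
As a minimal sketch of the censoring scheme itself (the exponential lifetimes and all numbers are illustrative assumptions, not the paper's data), Type-I hybrid censoring stops the life test at whichever comes first: the r-th failure or a fixed time T:

```python
import numpy as np

rng = np.random.default_rng(42)

def type1_hybrid_censor(lifetimes, r, T):
    # Experiment stops at min(r-th failure time, fixed time T);
    # only failures observed up to that point are recorded.
    x = np.sort(lifetimes)
    stop = min(x[r - 1], T)
    observed = x[x <= stop]
    return observed, stop

# n = 20 exponential lifetimes; stop at the 12th failure or time 1.5
sample = rng.exponential(scale=1.0, size=20)
obs, stop_time = type1_hybrid_censor(sample, r=12, T=1.5)
```

By construction the number of observed failures never exceeds r, and the test duration never exceeds T, which is what makes the scheme time- and cost-effective.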

Details

International Journal of Quality & Reliability Management, vol. 40 no. 9
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 2 March 2020

Ronald Nojosa and Pushpa Narayan Rathie

Abstract

Purpose

This paper deals with the estimation of the stress–strength reliability R = P(X < Y) when X and Y follow (1) independent generalized gamma (GG) distributions with only a common shape parameter and (2) independent Weibull random variables with arbitrary scale and shape parameters, generalizing the proposals of Kundu and Gupta (2006), Kundu and Raqab (2009) and Ali et al. (2012).

Design/methodology/approach

First, a closed-form expression for R is derived under conditions (1) and (2). Next, sufficient conditions are given for the convergence of the infinite series expansions used to calculate the value of R in case (2). The GG and Weibull models are fitted by maximum likelihood using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton method. Confidence intervals and standard errors are calculated using the bootstrap. For illustration purposes, two real data sets are analyzed and the results are compared with recent results available in the literature.

Findings

The proposed approaches improve the estimation of R by avoiding transformations of the data, and they allow more flexible modeling through Weibull distributions with arbitrary scale and shape parameters.

Originality/value

The proposals of the paper eliminate the misestimation of R caused by subtracting a constant value from the data (Kundu and Raqab, 2009) and treat the estimation of R more adequately by using Weibull distributions without restrictions on the parameters. The two cases covered generalize a number of distributions and unify a number of stress–strength probability P(X < Y) results available in the literature.
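
For the unrestricted Weibull case (2), R can also be checked numerically without any series expansion. The sketch below (parameter values are invented for illustration) integrates R = P(X &lt; Y) = ∫ F_X(y) f_Y(y) dy directly with SciPy:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# X ~ Weibull(shape a1, scale b1), Y ~ Weibull(shape a2, scale b2),
# with no common-parameter restriction
a1, b1 = 1.8, 1.0
a2, b2 = 2.5, 1.4

X = stats.weibull_min(a1, scale=b1)
Y = stats.weibull_min(a2, scale=b2)

# R = P(X < Y) = integral over (0, inf) of F_X(y) * f_Y(y) dy
R, _ = quad(lambda y: X.cdf(y) * Y.pdf(y), 0, np.inf)
```

A quick sanity check: when X and Y share the same distribution the integral equals exactly 1/2.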

Details

International Journal of Quality & Reliability Management, vol. 37 no. 4
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 10 November 2023

Chenchen Yang, Lu Chen and Qiong Xia

Abstract

Purpose

The development of digital technology has provided technical support to various industries. Specifically, Internet-based freight platforms can support the high-quality development of the logistics industry. Online freight platforms can use cargo transportation insurance to improve their service capabilities, promote their differentiated development, create products with platform characteristics and increase their core competitiveness.

Design/methodology/approach

This study uses a generalised linear model to fit the claim probability and claim intensity data and analyses freight insurance pricing based on the freight insurance claim data of a freight platform in China.

Findings

In addition to traditional pricing risk factors, this study adds two risk factors to the claim probability model, namely the purchase behaviour of freight insurance customers and road density. Both variables significantly influence the claim probability, and the model fit obtained with the logit link function is good. The study also compares the claim intensity model under several distribution types; as measured by the Akaike information criterion, the fit under a gamma distribution is superior to that under the other distributions.

Originality/value

With actual data from an online freight platform in China, this study empirically proves that a generalised linear model is superior to traditional pricing methods for freight insurance. This study constructs a generalised linear pricing model considering the unique features of the freight industry and determines that the transportation distance, cargo weight and road density have a significant influence on the claim probability and claim intensity.

Details

Industrial Management & Data Systems, vol. 123 no. 11
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 4 September 2017

Rosaiah K., Srinivasa Rao Gadde, Kalyani K. and Sivakumar D.C.U.

Abstract

Purpose

The purpose of this paper is to develop a group acceptance sampling plan (GASP) for a resubmitted lot when the lifetime of a product follows the odds exponential log-logistic distribution introduced by Rao and Rao (2014). The parameters of the proposed plan, such as the minimum group size and the acceptance number, are determined for a pre-specified consumer's risk, number of testers and test termination time. The authors compare the proposed plan with the ordinary GASP, and the results are illustrated with a live data example.

Design/methodology/approach

The parameters of the proposed plan such as minimum group size and acceptance number are determined for a pre-specified consumer’s risk, number of testers and the test termination time.

Findings

The authors determined the group size and acceptance number.

Research limitations/implications

No specific limitations.

Practical implications

This methodology can be applicable in industry to study quality control.

Social implications

This methodology can be applicable in health study.

Originality/value

The parameters of the proposed plan such as minimum group size and acceptance number are determined for a pre-specified consumer’s risk, number of testers and the test termination time.
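
One common GASP calculation can be sketched as follows (illustrative numbers and a generic binomial formulation, not the paper's odds exponential log-logistic values): a lot is accepted when every one of the g groups of r testers shows at most c failures by the termination time, and the minimum group size is the smallest g that pushes the acceptance probability at the unacceptable quality level below the consumer's risk β:

```python
from math import comb

def group_accept_prob(p, r, c, g):
    # P(accept lot): each of the g groups (r testers per group)
    # must show at most c failures by the test termination time
    per_group = sum(comb(r, i) * p**i * (1 - p)**(r - i)
                    for i in range(c + 1))
    return per_group ** g

def min_group_size(p, r, c, beta):
    # Smallest g with acceptance probability <= consumer's risk beta
    g = 1
    while group_accept_prob(p, r, c, g) > beta:
        g += 1
    return g

# p: P(item fails before the termination time) at the unacceptable
# quality level; beta: consumer's risk
g_min = min_group_size(p=0.25, r=5, c=1, beta=0.10)
```

Here p would come from the assumed lifetime distribution evaluated at the termination time, which is where the odds exponential log-logistic model enters the paper's plan.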

Details

International Journal of Quality & Reliability Management, vol. 34 no. 8
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 December 2004

Olga W. Lemoine and Tage Skjoett‐Larsen

Abstract

A large number of firms have reconfigured their supply chains. The general trends include, among others, the reduction, centralization and re‐location of plants and distribution centers, the design of new distribution systems, and the reduction of the supplier base. The implications of such reconfiguration for freight transport have received comparatively little attention, and most analyses have focused on theoretical models of how changes in logistics structures and decisions could affect transport demand. Using empirical data from Denmark, this paper sheds some light on the transport implications of reconfiguring supply chains. Mail surveys among Danish firms as well as an in‐depth case study were performed. The consequences of the reconfiguration process for the present and future demand for transport are measured and analyzed in terms of the quantity of transport units used (trucks/containers) and the transport work (ton/km).

Details

International Journal of Physical Distribution & Logistics Management, vol. 34 no. 10
Type: Research Article
ISSN: 0960-0035

Book part
Publication date: 30 November 2011

Massimo Guidolin

Abstract

I review the burgeoning literature on applications of Markov regime switching models in empirical finance. In particular, distinct attention is devoted to the ability of Markov switching models to fit the data, to filter unknown regimes and states on the basis of the data, to provide a powerful tool for testing hypotheses formulated in light of financial theories, and to their forecasting performance with reference to both point and density predictions. The review covers papers concerning a multiplicity of sub-fields in financial economics, ranging from empirical analyses of stock returns, to the term structure of default-free interest rates, to the dynamics of exchange rates, as well as the joint process of stock and bond returns.
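
A minimal sketch of the data-generating process such models describe (all parameter values are invented for illustration): returns switch between a persistent calm regime and a turbulent one according to an unobserved two-state Markov chain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov switching model for returns
P = np.array([[0.98, 0.02],   # transition matrix; rows sum to 1
              [0.05, 0.95]])
mu = np.array([0.05, -0.10])     # regime-specific means
sigma = np.array([0.10, 0.30])   # regime-specific volatilities

T = 10_000
states = np.empty(T, dtype=int)
states[0] = 0
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])

returns = mu[states] + sigma[states] * rng.normal(size=T)
# Long-run fraction of time in the turbulent regime is
# P[0, 1] / (P[0, 1] + P[1, 0]) = 0.02 / 0.07
```

Fitting such a model reverses the simulation: the Hamilton filter infers the regime probabilities from the observed returns alone.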

Details

Missing Data Methods: Time-Series Methods and Applications
Type: Book
ISBN: 978-1-78052-526-6

Book part
Publication date: 18 October 2019

Mohammad Arshad Rahman and Shubham Karnawat

Abstract

This article is motivated by the lack of flexibility in Bayesian quantile regression for ordinal models, where the error follows an asymmetric Laplace (AL) distribution. The inflexibility arises because the skewness of the distribution is completely specified once a quantile is chosen. To overcome this shortcoming, we derive the cumulative distribution function (and the moment-generating function) of the generalized asymmetric Laplace (GAL) distribution – a generalization of the AL distribution that separates the skewness from the quantile parameter – and construct a working likelihood for the ordinal quantile model. The resulting framework is termed flexible Bayesian quantile regression for ordinal (FBQROR) models. However, its estimation is not straightforward. We address the estimation issues and propose an efficient Markov chain Monte Carlo (MCMC) procedure based on Gibbs sampling and a joint Metropolis–Hastings algorithm. The advantages of the proposed model are demonstrated in multiple simulation studies, and the model is applied to analyze public opinion on homeownership as the best long-term investment in the United States following the Great Recession.
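
The inflexibility being removed can be seen numerically in a short sketch: under the standard asymmetric Laplace working likelihood, fixing the quantile p also fixes the probability mass below the mode at exactly p, so the skewness cannot be tuned separately. The grid-based integration below is purely illustrative:

```python
import numpy as np

def al_pdf(u, p):
    # Standard asymmetric Laplace density built from the check loss
    # rho_p(u) = u * (p - 1[u < 0]):  f(u) = p * (1 - p) * exp(-rho_p(u))
    return p * (1 - p) * np.exp(-u * (p - (u < 0)))

grid = np.linspace(-60.0, 60.0, 1_200_001)
dx = grid[1] - grid[0]

# Probability mass below zero for several choices of the quantile p;
# in each case it comes out (approximately) equal to p itself
mass_below = {p: al_pdf(grid[grid <= 0], p).sum() * dx
              for p in (0.25, 0.5, 0.9)}
```

The GAL distribution breaks exactly this link by adding a separate skewness parameter on top of the quantile parameter.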

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Article
Publication date: 1 October 2018

Aitin Saadatmeli, Mohamad Bameni Moghadam, Asghar Seif and Alireza Faraz

Abstract

Purpose

The purpose of this paper is to develop a cost model with variable sampling intervals and to optimize the average cost per unit of time. The paper considers an economic–statistical design of the X̅ control chart under the Burr shock model with multiple assignable causes, and compares three types of prior distribution for the mean shift parameter.

Design/methodology/approach

The design of the modified X̅ chart is based on the two new concepts of adjusted average time to signal and average number of false alarms for X̅ control chart under Burr XII shock model with multiple assignable causes.

Findings

The cost model was examined through a numerical example with the same cost and time parameters, and the optimal design parameters were obtained under both uniform and non-uniform sampling schemes. Furthermore, a sensitivity analysis was conducted to evaluate how the loss cost and the design parameters vary with changes in the cost, time and Burr XII distribution parameters.

Research limitations/implications

The economic–statistical design of the X̅ chart was developed for the Burr XII shock distribution with multiple assignable causes. The assumption of independent observations remains to be examined; the optimal schemes for the economic–statistical chart could be extended to correlated observations and continuous processes.

Practical implications

The economic–statistical design of control charts depends on the distribution of the process shock model. Given the difficulties this raises from both theoretical and practical standpoints, one proper alternative is the Burr XII distribution, which is quite flexible. Yet in the Burr distribution context only single-assignable-cause models had been considered, whereas a more realistic approach is to consider multiple assignable causes.

Originality/value

This study presents an advanced theoretical cost model that improves on the shock models presented in the literature, and it provides evidence to justify the implementation of such cost models in real-life industry.
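
As a hedged sketch of the shock model only (parameter values invented; this is not the paper's full cost optimization), the Burr XII failure-time distribution and inverse-transform sampling of assignable-cause arrival times look like:

```python
import numpy as np

rng = np.random.default_rng(3)

def burr12_sf(t, c, k):
    # Burr XII survival function S(t) = (1 + t**c)**(-k)
    return (1.0 + t**c) ** (-k)

def burr12_rvs(c, k, size, rng):
    # Shock (assignable-cause) arrival times by inverting S(t)
    u = rng.uniform(size=size)
    return (u ** (-1.0 / k) - 1.0) ** (1.0 / c)

c, k = 2.0, 1.5
shocks = burr12_rvs(c, k, 100_000, rng)

# Empirical survival at t = 1 versus the closed form
emp = np.mean(shocks > 1.0)
theo = burr12_sf(1.0, c, k)
```

The flexibility the abstract refers to comes from the two shape parameters c and k, which let the Burr XII mimic a wide range of failure-time behaviors.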

Details

International Journal of Quality & Reliability Management, vol. 35 no. 9
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 3 June 2008

Nathaniel T. Wilcox

Abstract

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with “structural” theories of choice under risk. Stochastic models are substantive theoretical hypotheses that are frequently testable in and of themselves, and also identifying restrictions for hypothesis tests, estimation and prediction. Econometric comparisons suggest that for the purpose of prediction (as opposed to explanation), choices of stochastic models may be far more consequential than choices of structures such as expected utility or rank-dependent utility.
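
One member of this family of stochastic models, a strong-utility (Fechner/logit) rule layered on an expected-utility structure, can be sketched as follows (the CRRA utility and all numbers are illustrative assumptions, not the chapter's specifications or estimates):

```python
import math

def expected_utility(lottery, crra):
    # EU of [(prob, outcome), ...] under CRRA utility
    # u(x) = x**(1 - r) / (1 - r), with u(x) = log(x) when r = 1
    u = (lambda x: math.log(x)) if crra == 1 else \
        (lambda x: x ** (1 - crra) / (1 - crra))
    return sum(p * u(x) for p, x in lottery)

def logit_choice_prob(lottery_a, lottery_b, crra, precision):
    # Strong-utility (logit) probability of choosing A:
    # P(A) = 1 / (1 + exp(-precision * (EU_A - EU_B)))
    d = expected_utility(lottery_a, crra) - expected_utility(lottery_b, crra)
    return 1.0 / (1.0 + math.exp(-precision * d))

safe = [(1.0, 30.0)]               # 30 for sure
risky = [(0.5, 70.0), (0.5, 2.0)]  # 50-50 between 70 and 2
p_safe = logit_choice_prob(safe, risky, crra=0.5, precision=2.0)
```

Swapping the logit layer for a different stochastic model while keeping the same structural utility is exactly the kind of comparison the chapter shows can matter more for prediction than the choice of structure itself.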

Details

Risk Aversion in Experiments
Type: Book
ISBN: 978-1-84950-547-5
