Search results

1 – 10 of 427
Article
Publication date: 16 February 2024

Neeraj Joshi, Sudeep R. Bapat and Raghu Nandan Sengupta

Abstract

Purpose

The purpose of this paper is to develop optimal estimation procedures for the stress-strength reliability (SSR) parameter R = P(X > Y) of an inverse Pareto distribution (IPD).

Design/methodology/approach

We estimate the SSR parameter R = P(X > Y) of the IPD under the minimum risk and bounded risk point estimation problems, where X and Y are strength and stress variables, respectively. The total loss function is a combination of estimation error (squared error) and sampling cost; we minimize the associated risk in order to estimate the reliability parameter. As no fixed-sample technique can solve the proposed point estimation problems, we propose some "cost and time efficient" adaptive sampling techniques (two-stage and purely sequential sampling methods) to tackle them.

Findings

We state important results based on the proposed sampling methodologies. These include estimations of the expected sample size, standard deviation (SD) and mean square error (MSE) of the terminal estimator of reliability parameters. The theoretical values of reliability parameters and the associated sample size and risk functions are well supported by exhaustive simulation analyses. The applicability of our suggested methodology is further corroborated by a real dataset based on insurance claims.

Originality/value

This study will be useful for scenarios where various logistical concerns are involved in the reliability analysis. The methodologies proposed in this study can reduce the number of sampling operations substantially and save time and cost to a great extent.
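To give a feel for the SSR parameter the abstract describes, here is a minimal Monte Carlo sketch. It assumes the inverse Pareto is parameterised as the reciprocal of a unit-scale Pareto (one of several conventions in the literature), under which R = P(X > Y) has the closed form α_x/(α_x + α_y). This is an illustration only, not the authors' sequential estimation procedure.

```python
import random

def sample_inverse_pareto(alpha, n, rng):
    # If P ~ Pareto(alpha) with unit scale, then 1/P has CDF x**alpha on (0, 1],
    # so it can be sampled by inverse transform as U**(1/alpha).
    return [rng.random() ** (1.0 / alpha) for _ in range(n)]

def estimate_ssr(alpha_x, alpha_y, n=100_000, seed=42):
    """Monte Carlo estimate of R = P(X > Y) for independent
    inverse Pareto strength X and stress Y."""
    rng = random.Random(seed)
    xs = sample_inverse_pareto(alpha_x, n, rng)
    ys = sample_inverse_pareto(alpha_y, n, rng)
    return sum(x > y for x, y in zip(xs, ys)) / n

# Under this parameterisation R = alpha_x / (alpha_x + alpha_y),
# which makes the simulation easy to sanity-check.
r_hat = estimate_ssr(2.0, 1.0)  # analytic value: 2/3
```

The closed form makes a convenient check: equal shape parameters give R = 1/2, and increasing the strength shape pushes R toward 1.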

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 11 September 2017

Unjung Whang

Abstract

Purpose

The purpose of this paper is to emphasize the importance of product quality in differentiated-products markets in determining the structure of competition among firms.

Design/methodology/approach

First, two distinct models of firm heterogeneity are considered as two possible structures for firms’ competition: “price competition” and “quality competition.” Then, the author exploits the bilateral trade data of the world’s 83 largest countries in order to examine a link between the empirical findings and the theoretical models.

Findings

The empirical findings support a model of “quality competition” rather than “price competition,” in which firms in a country with a comparative advantage in a given product tend to improve their product quality as opposed to lowering production costs, so they compete on the quality-adjusted price.

Research limitations/implications

This paper used product-level data to examine the spatial pattern of the average export unit value of a product, which is able to answer the question of whether an industry is involved with quality competition. The product-level data used in this study, however, are not ideally suitable for exploring the predictions of a heterogeneous firms’ trade model.

Originality/value

To the best of the author’s knowledge, this is the first paper that investigates a relationship between the country-product pair of comparative advantages and firms’ self-selection behavior in the product-level data to shed light on the role of product quality in determining the structure of firms’ competition.

Details

Journal of Korea Trade, vol. 21 no. 3
Type: Research Article
ISSN: 1229-828X

Article
Publication date: 6 September 2011

Muhammad Aslam, Abdur Razzaque Mughal and Munir Ahmad

Abstract

Purpose

The purpose of this paper is to propose group acceptance sampling plans for the case in which the lifetime of the submitted product follows the Pareto distribution.

Design/methodology/approach

The single‐point approach (only consumer's risk) is used to find the plan parameter of the proposed plan for specified values of consumer's risk, producer's risk, acceptance number, number of testers and experiment time.

Findings

Tables are constructed using the Poisson and the weighted Poisson distributions. Extensive tables are provided for practical use.

Research limitations/implications

The tables in this paper can be used only when the lifetime of a product follows the Pareto distribution of the second kind.

Practical implications

The results can be used to test products while saving experimental cost and time. The weighted Poisson distribution yields a smaller group size (sample size) than the plans available in the literature.

Social implications

By implementing the proposed plan, the experiment cost can be minimized.

Originality/value

The novelty of this paper is that the Poisson and weighted Poisson distributions, rather than the binomial distribution, are used to find the plan parameter of the proposed plan when the lifetime of the submitted product follows the Pareto distribution of the second kind.
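As a rough illustration of the single-point search the abstract mentions, the sketch below finds the smallest number of groups satisfying the consumer's-risk constraint under a Poisson approximation for the number of failures. The parameter names (p, c, r, beta) are illustrative assumptions; the paper's actual tables, including the weighted Poisson case, are not reproduced here.

```python
import math

def poisson_cdf(c, mean):
    # P(D <= c) for D ~ Poisson(mean)
    return sum(math.exp(-mean) * mean**k / math.factorial(k)
               for k in range(c + 1))

def min_groups(p, c, r, beta):
    """Smallest number of groups g (each holding r testers) such that the
    lot-acceptance probability P(D <= c), with the number of failures D
    approximated as Poisson(g * r * p) at defect rate p, does not exceed
    the consumer's risk beta."""
    g = 1
    while poisson_cdf(c, g * r * p) > beta:
        g += 1
    return g
```

For example, with defect rate p = 0.05, acceptance number c = 2, r = 5 testers per group and consumer's risk beta = 0.10, the search returns the first g whose Poisson acceptance probability drops to 0.10 or below.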

Details

International Journal of Quality & Reliability Management, vol. 28 no. 8
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 30 September 2014

Abdoul Aziz Ndoye and Michel Lubrano

Abstract

We provide a Bayesian inference for a mixture of two Pareto distributions which is then used to approximate the upper tail of a wage distribution. The model is applied to the data from the CPS Outgoing Rotation Group to analyze the recent structure of top wages in the United States from 1992 through 2009. We find an enormous earnings inequality between the very highest wage earners (the “superstars”), and the other high wage earners. These findings are largely in accordance with the alternative explanations combining the model of superstars and the model of tournaments in hierarchical organization structure. The approach can be used to analyze the recent pay gaps among top executives in large firms so as to exhibit the “superstar” effect.
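The chapter's Bayesian mixture of two Paretos is beyond a short snippet, but the underlying single-Pareto tail fit can be sketched with the classical maximum-likelihood (Hill-type) estimator. The synthetic data and the threshold x_min = 1 below are assumptions for illustration only, not the chapter's method or data.

```python
import math
import random

def pareto_tail_mle(data, x_min):
    """Maximum-likelihood estimate of the Pareto tail index alpha for the
    observations at or above x_min (the classic Hill-type estimator)."""
    tail = [x for x in data if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic check: draw from a unit-scale Pareto(alpha = 3)
# via the inverse transform (1 - U) ** (-1 / alpha).
rng = random.Random(0)
sample = [(1.0 - rng.random()) ** (-1.0 / 3.0) for _ in range(50_000)]
alpha_hat = pareto_tail_mle(sample, 1.0)  # should be close to 3
```

A mixture model like the chapter's would fit two such components with weights, letting the far tail (the "superstars") take its own index.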

Details

Economic Well-Being and Inequality: Papers from the Fifth ECINEQ Meeting
Type: Book
ISBN: 978-1-78350-556-2

Article
Publication date: 1 April 2003

Sergio M. Focardi and Frank J. Fabozzi

Abstract

Fat‐tailed distributions have been found in many financial and economic variables ranging from forecasting returns on financial assets to modeling recovery distributions in bankruptcies. They have also been found in numerous insurance applications such as catastrophic insurance claims and in value‐at‐risk measures employed by risk managers. Financial applications include:

Details

The Journal of Risk Finance, vol. 5 no. 1
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 28 October 2014

Paolo Di Barba, Michele Forzan and Elisabetta Sieni

Abstract

Purpose

The purpose of this paper is to investigate a bi-objective optimization problem characterized by coupled field analysis. The optimal design of a pancake inductor for the controlled heating of a graphite disk is considered as the benchmark problem. The case study relates to industrial applications of the induction heating of graphite disks.

Design/methodology/approach

The expected goal of the optimization process is twofold: to improve temperature uniformity in the disk and also electrical efficiency of the inductor. The solution of the relevant bi-objective optimization problem is based on multiphysics field analysis. Specifically, the direct problem is solved as a magnetic and thermal coupled problem by means of finite elements; a mesh-inspired definition of thermal uniformity is proposed. In turn, the Pareto front trading off electrical efficiency and thermal uniformity is identified exploiting evolutionary computing.

Findings

By varying the problem targets, different Pareto fronts are identified trading off thermal uniformity and electrical efficiency of the induction-heating device.

Practical implications

These results suggest how to improve the design of this kind of device for the epitaxial growth of silicon wafer; the advantage of using a magnetic concentrator placed close to the inductor axis is pointed out.

Originality/value

The coupling of a multiphysics direct problem with a multiobjective inverse problem is presented as a benchmark problem and accordingly solved. The benchmark provides a simple analysis problem that allows testing various optimization algorithms in a comparative way.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 6
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 11 November 2013

Giovanni Petrone, John Axerio-Cilies, Domenico Quagliarella and Gianluca Iaccarino

Abstract

Purpose

A probabilistic non-dominated sorting genetic algorithm (P-NSGA) for multi-objective optimization under uncertainty is presented. The purpose of this algorithm is to create a tight coupling between the optimization and uncertainty procedures, use all of the possible probabilistic information to drive the optimizer, and leverage high-performance parallel computing.

Design/methodology/approach

This algorithm is a generalization of a classical genetic algorithm for multi-objective optimization (NSGA-II) by Deb et al. The proposed algorithm relies on the use of all possible information in the probabilistic domain summarized by the cumulative distribution functions (CDFs) of the objective functions. Several analytic test functions are used to benchmark this algorithm, but only the results of the Fonseca-Fleming test function are shown. An industrial application is presented to show that P-NSGA can be used for multi-objective shape optimization of a Formula 1 tire brake duct, taking into account the geometrical uncertainties associated with the rotating rubber tire and uncertain inflow conditions.

Findings

This algorithm is shown to have deterministic consistency (i.e. it turns back to the original NSGA-II) when the objective functions are deterministic. When the quality of the CDF is increased (either using more points or higher fidelity resolution), the convergence behavior improves. Since all the information regarding uncertainty quantification is preserved, all the different types of Pareto fronts that exist in the probabilistic framework (e.g. mean value Pareto, mean value penalty Pareto, etc.) are shown to be generated a posteriori. An adaptive sampling approach and parallel computing (in both the uncertainty and optimization algorithms) are shown to have several fold speed-up in selecting optimal solutions under uncertainty.

Originality/value

There are no existing algorithms that use the full probabilistic distribution to guide the optimizer. The method presented herein bases its sorting on real function evaluations, not merely measures (i.e. mean of the probabilistic distribution) that potentially do not exist.
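The dominance relation that P-NSGA generalizes can be shown in a few lines. The sketch below is the plain deterministic non-dominated filter (all objectives minimised) at the core of NSGA-II-style sorting, i.e. the deterministic special case; it does not implement the probabilistic, CDF-based sorting the paper proposes.

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`, where each point is a
    tuple of objective values and every objective is minimised."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective
        # and strictly better in at least one.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: in a two-objective minimisation, (2, 2) removes (2, 3) and (4, 4)
# from the front, while (1, 5) and (3, 1) survive as extreme trade-offs.
front = pareto_front([(1, 5), (2, 3), (3, 1), (4, 4), (2, 2)])
```

NSGA-II repeats this filter to rank a whole population into successive fronts; P-NSGA replaces the scalar comparisons with comparisons of objective CDFs.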

Details

Engineering Computations, vol. 30 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 June 2000

P. Di Barba

Abstract

Introduces papers from this area of expertise from the ISEF 1999 Proceedings. States the goal herein is one of identifying devices or systems able to provide prescribed performance. Notes that 18 papers from the Symposium are grouped in the area of automated optimal design. Describes the main challenges that condition computational electromagnetism's future development. Concludes by itemizing the range of applications, from small actuators to the optimization of induction heating systems, in this third chapter.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 29 September 2022

Rani Kumari, Chandrakant Lodhi, Yogesh Mani Tripathi and Rajesh Kumar Sinha

Abstract

Purpose

Inferences for multicomponent reliability are derived for a family of inverted exponentiated densities having a common scale and different shape parameters.

Design/methodology/approach

Different estimates of multicomponent reliability are derived from a frequentist viewpoint. Two bootstrap confidence intervals for this parametric function are also constructed.

Findings

From a Monte Carlo simulation study, the authors find that the maximum product spacing and right-tail Anderson–Darling procedures provide better point and interval estimates of the reliability. The maximum likelihood estimate also competes well with these estimates.

Originality/value

In the literature, several distributions have been introduced and studied in lifetime analysis. Among others, exponentiated distributions have found wide application in such studies. In this regard, the authors obtain various frequentist estimates of the multicomponent reliability by considering inverted exponentiated distributions.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 4
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 23 April 2020

Xing Xie, Zhenlin Li, Baoshan Zhu and Hong Wang

Abstract

Purpose

The purpose of this study is to suppress secondary flows and improve aerodynamic performance of a centrifugal impeller.

Design/methodology/approach

A multi-objective optimisation design system was described, composed of a three-dimensional (3D) inverse design, multi-objective optimisation and computational fluid dynamics (CFD) analysis. First, the control parameter ΔCp for the secondary flows was derived and selected as the optimisation objective. Then, aimed at minimising ΔCp, a 3D inverse design for impellers with different blade loading distributions and blade lean angles was completed and multi-objective optimisation was conducted. Lastly, the improvement in the distribution of secondary flows and aerodynamic performance of the optimal impeller was demonstrated by CFD analysis.

Findings

The study derived the control parameter ΔCp for the secondary flows. ΔCp can indicate the distribution of secondary flows both near the blade pressure and suction surfaces. As ΔCp decreased, secondary flows decreased. The blade loading distribution with fore maximum blade loading at the shroud and aft maximum blade loading at the hub, coupled with a small negative blade lean angle, could help suppress secondary flows and improve aerodynamic efficiency.

Originality/value

A direct control method for an internal flow-field characteristic, secondary flows, was proposed via optimisation design for a centrifugal impeller. The impeller optimisation design process saves time by avoiding substantial CFD sample calculations.

Details

Engineering Computations, vol. 37 no. 9
Type: Research Article
ISSN: 0264-4401
