Search results

1–10 of over 49,000
Article
Publication date: 12 December 2022

Mohit Goswami, M. Ramkumar and Yash Daultani

Abstract

Purpose

This research aims to help product development managers estimate the expected cost of developing cost-intensive physical prototypes, considering transitions among the pertinent quality states of the prototype and the corresponding decision policies in a Markovian setting.

Design/methodology/approach

The authors develop two types of optimization-based mathematical models, under deterministic and randomized policies. Under the deterministic policy, product development managers take fixed decisions such as “Do nothing,” “Overhaul,” or “Replace” corresponding to prototype quality states such as “Good as new,” “Functional with minor deterioration,” “Functional with major deterioration” and “Non-functional.” Under the randomized policy, the managers instead ascertain a probability distribution over these decisions for each quality state. In both settings, deterministic and randomized, minimizing the expected cost of the prototype is the objective function.
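The randomized-policy variant of such a model can be illustrated with the classical linear programming formulation of an average-cost Markov decision process, in which the decision variables are stationary state-action frequencies and the randomized policy is recovered by normalizing them per state. The sketch below is only a toy instance: the quality states and decisions mirror those named above, but all costs and transition probabilities are hypothetical, not the paper's data.

```python
# Minimal sketch: average-cost MDP solved as a linear program.
# All costs and transition matrices are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

states = ["good as new", "minor det.", "major det.", "non-functional"]
actions = ["do nothing", "overhaul", "replace"]
nS, nA = len(states), len(actions)

# cost[s, a]: assumed per-period cost of taking action a in state s
cost = np.array([[0.0, 4.0, 10.0],
                 [1.0, 4.0, 10.0],
                 [3.0, 5.0, 10.0],
                 [8.0, 7.0, 10.0]])

# P[a][s, s']: assumed transition probabilities under each action
P = np.empty((nA, nS, nS))
P[0] = [[.7, .2, .1, .0], [.0, .6, .3, .1], [.0, .0, .5, .5], [.0, .0, .0, 1.]]
P[1] = [[.9, .1, .0, .0], [.6, .3, .1, .0], [.4, .4, .2, .0], [.2, .4, .4, .0]]
P[2] = [[1., .0, .0, .0]] * nS          # replacement renews the prototype

# Variables y[s, a]: long-run state-action frequencies. Minimize expected
# cost subject to flow balance and normalization, with y >= 0.
c = cost.flatten()                       # variable index = s * nA + a
A_eq = np.zeros((nS + 1, nS * nA))
for j in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[j, s * nA + a] = (j == s) - P[a][s, j]
A_eq[nS, :] = 1.0
b_eq = np.append(np.zeros(nS), 1.0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA))
y = res.x.reshape(nS, nA)
freq = y.sum(axis=1, keepdims=True)      # steady-state state probabilities
policy = np.divide(y, freq, out=np.zeros_like(y), where=freq > 0)
print("expected cost per period:", round(res.fun, 3))
print("randomized policy (rows: states, cols: actions):\n", policy.round(3))
```

A deterministic policy corresponds to the special case in which each row of the recovered policy is a unit vector.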

Findings

Employing an illustrative case of the operator cabin from the construction equipment domain, the authors ascertain that the randomized policy provides better decision interventions, in that the expected cost of the prototype remains lower than under the deterministic policy. The authors also ascertain the steady-state probabilities of the prototype remaining in a particular quality state. These findings have implications for the product development budget, time to market, product quality, etc.

Originality/value

The authors’ work contributes toward optimization-driven mathematical models that encapsulate the nuances of uncertain transitions between prototype quality states and the decision policies at each quality state, while considering such facets for all constituent subsystems of the prototype. As opposed to a typical prescriptive study, their study captures the inherent uncertainties associated with quality states in the context of prototype testing.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 12 February 2021

Abroon Qazi and Mecit Can Emre Simsekler

Abstract

Purpose

This paper aims to develop a process for prioritizing project risks that integrates the decision-maker's risk attitude, uncertainty about risks in terms of both the associated probability and impact ratings, and correlations across risk assessments.

Design/methodology/approach

This paper adopts a Monte Carlo Simulation-based approach to capture the uncertainty associated with project risks. Risks are prioritized based on their relative expected utility values. The proposed process is operationalized through a real application in the construction industry.
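A minimal sketch of this kind of simulation, assuming Beta distributions for the uncertain probability and impact ratings and an illustrative exponential disutility standing in for the decision-maker's risk attitude; the risks, distributions and thresholds below are invented for illustration and are not the paper's data.

```python
# Minimal sketch: Monte Carlo prioritization of risks with uncertain
# probability/impact ratings. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Uncertain probability (p) and impact (i) ratings on a 0-1 scale.
risks = {
    "design change":   dict(p=rng.beta(2, 8, N), i=rng.beta(8, 2, N)),
    "labour shortage": dict(p=rng.beta(6, 3, N), i=rng.beta(4, 4, N)),
    "material delay":  dict(p=rng.beta(5, 5, N), i=rng.beta(3, 6, N)),
}

def disutility(loss, risk_aversion=2.0):
    # Exponential disutility encodes risk attitude: higher risk_aversion
    # penalizes high-impact outcomes disproportionately. Scaled to [0, 1].
    return np.expm1(risk_aversion * loss) / np.expm1(risk_aversion)

for name, r in risks.items():
    exposure = r["p"] * r["i"]                    # per-draw risk exposure
    edu = np.mean(r["p"] * disutility(r["i"]))    # expected disutility
    print(f"{name:15s}  mean exposure={exposure.mean():.3f}  "
          f"P(exposure>0.25)={np.mean(exposure > 0.25):.3f}  "
          f"expected disutility={edu:.3f}")
```

Ranking by expected disutility rather than by mean exposure alone is what lets low-probability, high-impact risks surface, and the exceedance-probability column mimics the simulation-based check described in the findings below.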

Findings

The proposed process helped in identifying low-probability, high-impact risks that were overlooked in the conventional risk matrix-based prioritization scheme. When considering the expected risk exposure of individual risks, none of the risks were located in the high-risk exposure zone; however, the proposed Monte Carlo Simulation-based approach revealed risks with a high probability of occurrence in the high-risk exposure zone. Using the expected utility-based approach alone in prioritizing risks may lead to ignoring a few critical risks, which can only be captured through a rigorous simulation-based approach.

Originality/value

Monte Carlo Simulation has been used to aggregate the risk matrix-based data and disaggregate and map the resulting risk profiles with underlying distributions. The proposed process supported risk prioritization based on the decision-maker's risk attitude and identified low-probability, high-impact risks and high-probability, high-impact risks.

Details

International Journal of Managing Projects in Business, vol. 14 no. 5
Type: Research Article
ISSN: 1753-8378

Article
Publication date: 7 September 2015

Rimante Andrasiunaite Cox, Susanne Balslev Nielsen and Carsten Rode

Abstract

Purpose

The purpose of this paper is to consider how to couple and quantify resilience and sustainability, where sustainability refers to not only environmental impact, but also economic and social impacts. The way a particular function of a building is provisioned may have significant repercussions beyond just resilience. The goal is to develop a decision support tool for facilities managers.

Design/methodology/approach

A risk framework is used to quantify both resilience and sustainability in monetary terms. The risk framework couples resilience and sustainability, so that the provisioning of a particular building can be investigated with consideration of functional, environmental, economic and, possibly, social dimensions.
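As a rough illustration of coupling the two notions in monetary terms, one can compare provisioning options by an annualized expected cost that adds running costs to probability-weighted disruption consequences across the dimensions listed above. Everything in the sketch below (options, scenario probabilities, cost figures) is hypothetical.

```python
# Minimal sketch: expected annual cost of provisioning options, coupling
# resilience (functional loss under disruption) with sustainability
# (environmental, economic, social consequences). All figures hypothetical.
options = {
    # running_cost, scenarios as (probability, functional, env, econ, social)
    "grid power only":         (10_000, [(0.02, 80_000, 1_000, 20_000, 5_000)]),
    "grid + backup generator": (14_000, [(0.02, 5_000, 4_000, 2_000, 500)]),
}

for name, (running, scenarios) in options.items():
    expected_risk = sum(p * (f + env + econ + soc)
                        for p, f, env, econ, soc in scenarios)
    print(f"{name:25s} total expected annual cost = {running + expected_risk:,.0f}")
```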

Findings

The method of coupling and quantifying resilience and sustainability (CQRS) is illustrated with a simple example that highlights how very different conclusions can be drawn when considering only resilience or resilience and sustainability.

Research limitations/implications

The paper is based on a hypothetical example. The example also illustrates the difficulty in deriving the costs and probabilities associated with particular indicators.

Practical implications

The method is generic and can be customized for different user communities. Further research is needed to translate this theoretical framework into a practical tool for practitioners and to evaluate the CQRS method in practice.

Originality/value

The intention of this research is to fill the gap between the need for increasing sustainability and resilience of the built environment and the current practices in property maintenance and operation.

Details

Journal of Facilities Management, vol. 13 no. 4
Type: Research Article
ISSN: 1472-5967

Article
Publication date: 8 June 2010

Ole‐Christoffer Granmo

Abstract

Purpose

The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem in which an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting in either a reward or a penalty. The reward probabilities of the arms are unknown, and thus one must balance exploiting existing knowledge about the arms against obtaining new information. The purpose of this paper is to report research into a completely new family of solution schemes for the TABB problem: the Bayesian learning automaton (BLA) family.

Design/methodology/approach

Although computationally intractable in many cases, Bayesian methods provide a standard for optimal decision making. BLA avoids the problem of computational intractability by not explicitly performing the Bayesian computations. Rather, it is based upon merely counting rewards/penalties, combined with random sampling from a pair of twin Beta distributions. This is intuitively appealing since the Bayesian conjugate prior for a binomial parameter is the Beta distribution.
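The counting-plus-sampling scheme described here is, in modern terminology, Thompson sampling for Bernoulli bandits. A minimal sketch with hypothetical reward probabilities:

```python
# Minimal sketch of the BLA scheme as described: keep reward/penalty counts
# per arm and pull the arm whose Beta(wins+1, losses+1) sample is largest.
# The true reward probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_p = [0.45, 0.60]                 # unknown Bernoulli reward probabilities
wins = [0, 0]; losses = [0, 0]        # reward/penalty counts (the only state)

pulls_of_best = 0
for t in range(10_000):
    # Sample once from each arm's Beta posterior; pick the larger draw.
    theta = [rng.beta(wins[a] + 1, losses[a] + 1) for a in (0, 1)]
    a = int(theta[1] > theta[0])
    if rng.random() < true_p[a]:
        wins[a] += 1
    else:
        losses[a] += 1
    pulls_of_best += (a == 1)

print("fraction of pulls on the optimal arm:", pulls_of_best / 10_000)
```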

Findings

BLA is proven to be instantaneously self-correcting, and it converges to pulling only the optimal arm with probability as close to unity as desired. Extensive experiments demonstrate that the BLA does not rely on external learning speed/accuracy control. It also outperforms established non-Bayesian top performers for the TABB problem. Finally, the BLA provides superior performance in a distributed application, namely, the Goore game (GG).

Originality/value

The value of this paper is threefold. First, the reported BLA takes advantage of the Bayesian perspective for tackling TABBs, yet avoids the computational complexity inherent in Bayesian approaches. Second, the improved performance offered by the BLA opens the way for increased accuracy in a number of TABB-related applications, such as the GG. Third, the reported results form the basis for a new avenue of research, even for cases where the reward/penalty distribution is not Bernoulli. Indeed, the paper advocates a Bayesian methodology used in conjunction with the corresponding appropriate conjugate prior.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 August 2019

Mohit Goswami, Gopal Kumar and Abhijeet Ghadge

Abstract

Purpose

Typically, budgeting for a supplier's process quality improvement program is done in unstructured ways, with quality improvement managers relying purely on their previous experience and pertinent historical information. Against this backdrop, the purpose of this paper is to ascertain the expected cost of carrying out suppliers' process quality improvement programs that are driven by original equipment manufacturers (OEMs).

Design/methodology/approach

Using inputs from experts with prior experience of executing suppliers' quality improvement programs, and employing Bayesian theory, transition probabilities from an initial quality level to various quality levels are ascertained. Thereafter, the Markov chain concept enables the authors to determine steady-state probabilities. These steady-state probabilities, in conjunction with quality-level cost coefficients, yield the expected cost of quality improvement programs.
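A minimal sketch of the steady-state step, assuming a three-level quality scale; the transition matrix (standing in for the expert-elicited, Bayesian-updated one) and the cost coefficients are hypothetical.

```python
# Minimal sketch: steady-state probabilities of a quality-level Markov
# chain and the implied expected cost. All numbers are hypothetical.
import numpy as np

# P[i, j]: probability of moving from quality level i to level j per period.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
cost = np.array([1_000.0, 4_000.0, 9_000.0])   # cost coefficient per level

# Solve pi = pi @ P together with sum(pi) = 1 as a linear system.
n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.append(np.zeros(n - 1), 1.0)
pi = np.linalg.solve(A, b)

print("steady-state probabilities:", pi.round(4))
print("expected cost of the programme:", float(pi @ cost))
```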

Findings

The novel method devised in this research is a key contribution of the work. Furthermore, various implications related to the experts' inputs, the dynamics of the Markov chain, etc., are discussed. The method is illustrated using a real-life case from the automotive industry in India.

Originality/value

The research contributes to the extant literature in that a new method of determining the expected cost of quality improvement is proposed. Furthermore, the method would be of value to OEMs and suppliers wherein the quality levels at a given time are the function of quality levels in preceding period(s).

Details

International Journal of Quality & Reliability Management, vol. 36 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 11 March 2021

Abroon Qazi and Mecit Can Emre Simsekler

Abstract

Purpose

The purpose of this paper is to develop and operationalize a process for prioritizing supply chain risks that is capable of capturing the value at risk (VaR), i.e. the maximum loss expected at a given confidence level over a specified timeframe, associated with risks within a network setting.

Design/methodology/approach

The proposed “Worst Expected Best” method is theoretically grounded in the framework of Bayesian Belief Networks (BBNs), which is considered an effective technique for modeling interdependency across uncertain variables. An algorithm is developed to operationalize the proposed method, which is demonstrated using a simulation model.
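A rough sketch of the confidence-level idea, using a two-risk dependency as a stand-in for a full BBN; the probabilities, losses and the 90% level are invented, and this is not the authors' “Worst Expected Best” algorithm itself.

```python
# Minimal sketch: network-wide value at risk (VaR) by simulation over a
# tiny two-risk dependency (a stand-in for a Bayesian Belief Network).
# All probabilities and loss figures are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

supplier_fails = rng.random(N) < 0.10
# Child risk depends on the parent, as in a BBN conditional probability table.
p_delay = np.where(supplier_fails, 0.70, 0.05)
production_delay = rng.random(N) < p_delay

loss = supplier_fails * 50_000 + production_delay * 120_000
print("expected network loss:", loss.mean())
print("VaR at 90% confidence:", np.percentile(loss, 90))
```

The gap between the mean loss and the VaR figure is precisely what a point-estimate aggregation would miss.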

Findings

Point estimate-based methods for aggregating the network expected loss of a given supply chain risk network are unable to project the realistic risk exposure of a supply chain. The proposed method helps establish the expected network-wide loss for a given confidence level. The vulnerability- and resilience-based risk prioritization schemes for the model considered in this paper have a very weak correlation.

Originality/value

This paper introduces a new “Worst Expected Best” method to the literature on supply chain risk management that helps in assessing the probabilistic network expected VaR for a given supply chain risk network. Further, new risk metrics are proposed to prioritize risks relative to a specific VaR that reflects the decision-maker's risk appetite.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 1
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 May 2004

John Maleyeff, Laura B. Newell and Frank C. Kaminsky

Abstract

A practical model based on basic probability theory is developed to evaluate the operational and financial performance of mammography systems. The model is intended to be used by decision makers to evaluate overall sensitivity, overall specificity, positive and negative predictive values, and expected cost. As an illustration, computer aided detection (CAD) systems that support a radiologist's diagnosis are compared with standard mammography to determine conditions that would support their use. The model's input parameters include the operational performance of mammography (with and without CAD), the age of the patient, the cost of administering the mammogram and the expected costs associated with false positive and false negative outcomes. Sensitivity analyses are presented that show the CAD system projecting financial benefit over ranges of uncertainty associated with each model parameter.
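A minimal sketch of the kind of calculation such a model performs, using Bayes' rule for the predictive values and probability-weighted costs for the expected cost per exam; the prevalence, accuracy and cost figures are hypothetical, not the paper's inputs.

```python
# Minimal sketch: predictive values and expected cost of a screening test
# from basic probability theory. All input numbers are hypothetical.
prevalence = 0.005                      # P(cancer) for the age group
sensitivity, specificity = 0.90, 0.93   # P(+|cancer), P(-|no cancer)
c_exam, c_fp, c_fn = 100.0, 1_000.0, 50_000.0   # unit costs

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos                   # P(cancer | +)
npv = specificity * (1 - prevalence) / (1 - p_pos)       # P(no cancer | -)

# Expected cost per exam = exam cost + probability-weighted error costs.
p_fp = (1 - specificity) * (1 - prevalence)
p_fn = (1 - sensitivity) * prevalence
expected_cost = c_exam + p_fp * c_fp + p_fn * c_fn
print(f"PPV={ppv:.3f}  NPV={npv:.5f}  expected cost per exam={expected_cost:.2f}")
```

Comparing two parameter sets (e.g., with and without CAD) then reduces to comparing the resulting expected costs over ranges of each input.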

Details

International Journal of Health Care Quality Assurance, vol. 17 no. 3
Type: Research Article
ISSN: 0952-6862

Article
Publication date: 5 May 2002

Richard L. Gallagher

Abstract

A simulation methodology is applied to the loan loss reserve process of an agricultural lender. Weaknesses of the point-estimate approach to estimating loan loss reserves are addressed with a “bottom-up” model that considers both the producer's and the lender's diversification efforts. Implementing this model will give the lender a better understanding of the institution's portfolio risk, as well as the credit risk associated with each loan. This study compares the lender's loan loss estimates to a distribution of losses with associated probabilities. The comparative results could provide the lender with a basis for setting probability levels for determining the regulatory required level of loan loss reserve.
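A minimal sketch of a bottom-up loss simulation: defaults and losses-given-default are drawn per loan, with a common sector shock inducing correlation, and the resulting portfolio loss distribution replaces a single point estimate. The five-loan portfolio and all parameters are hypothetical.

```python
# Minimal sketch: bottom-up simulation of a portfolio loss distribution.
# Portfolio data and all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
N = 50_000

balances = np.array([120_000, 80_000, 250_000, 60_000, 150_000], dtype=float)
pd = np.array([0.02, 0.05, 0.01, 0.08, 0.03])   # per-loan default probability

# A common farm-sector shock correlates defaults across the portfolio.
shock = rng.normal(0, 1, N)
pd_stressed = np.clip(pd * np.exp(0.5 * shock[:, None]), 0, 1)
defaults = rng.random((N, len(balances))) < pd_stressed
lgd = rng.beta(2, 3, (N, len(balances)))        # loss given default in [0, 1]
portfolio_loss = (defaults * lgd * balances).sum(axis=1)

for q in (50, 90, 99):
    print(f"{q}th percentile portfolio loss: {np.percentile(portfolio_loss, q):,.0f}")
```

Reading the reserve off a chosen percentile of this distribution is the kind of probability-level choice the study discusses.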

Details

Agricultural Finance Review, vol. 62 no. 1
Type: Research Article
ISSN: 0002-1466

Article
Publication date: 10 August 2010

Alain Billionnet

Abstract

Purpose

Negative effects of habitat isolation that arise from landscape fragmentation can be mitigated by connecting natural areas through a network of habitat corridors. To increase the permeability of a given network, i.e. to decrease its resistance to animal movements, many developments can often be made. Because the available financial resources are limited, the most effective developments must be chosen. This optimization problem, suggested in Finke and Sonnenschein, can be treated by heuristics and simulation approaches, but these methods are cumbersome and the solutions obtained are sub-optimal. The aim of the paper is to show that the problem can be solved to optimality efficiently by mathematical programming.

Design/methodology/approach

The moves of an individual through the network are modeled by an absorbing Markov chain, and the development problem is formulated as a mixed-integer quadratic program. This program is then linearized, and the best developments to make are determined by mixed-integer linear programming.
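The absorbing-chain building block can be sketched as follows: transient states are nodes of the habitat network, absorbing states are the target patch and death, and the fundamental matrix gives expected visit counts and absorption probabilities. A development would then modify selected transition entries, with the mixed-integer program choosing which, subject to the budget; the numbers below are hypothetical.

```python
# Minimal sketch: absorbing Markov chain over a habitat network.
# Transition numbers are hypothetical.
import numpy as np

# Q: transient-to-transient moves; R: transient-to-absorbing moves.
Q = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.4],
              [0.1, 0.3, 0.0]])
R = np.array([[0.2, 0.1],     # columns: [reach target patch, die]
              [0.2, 0.1],
              [0.5, 0.1]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities per start node

print("P(reach target | start node):", B[:, 0].round(3))
print("expected steps before absorption:", N.sum(axis=1).round(2))
```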

Findings

First, the approach allows the development problem to be solved to optimality, contrary to other methods. Second, the definition of the mathematical program is relatively simple, and its implementation is immediate using standard, commercially available software. Third, as is well known for mixed-integer linear programming formulations, new constraints can easily be added if they are linear (or can be linearized).

Research limitations/implications

With a view to proposing a simple and efficient tool for solving a difficult combinatorial optimization problem arising in the improvement of permeability across habitat networks, the approach has been tested on simulated habitat networks. The research does not include the study of the movements of particular species in a real network.

Practical implications

The results provide a simple and efficient decision‐aid tool to try to improve the permeability of habitat networks.

Originality/value

The joint use of mathematical programming techniques and Markov chain theory is used to try to lessen the negative effects of landscape fragmentation.

Details

Management of Environmental Quality: An International Journal, vol. 21 no. 5
Type: Research Article
ISSN: 1477-7835

Article
Publication date: 1 August 2002

Liliane Bonnal, Sylvie Mendes and Catherine Sofer

Abstract

There has recently been a strong drive to develop apprenticeship in France as one means of decreasing youth unemployment. Our aim in this paper is to try to measure the “pure” within-firm training effect on the school-to-work transition. We address the transition to the first job using simultaneous maximum likelihood estimation of several probabilities and of the parameters of the probability density function governing exit from unemployment. We conclude that apprentices have a distinct advantage over those who attended vocational school; this effect is stronger when we correct for the negative selection bias associated with the choice of apprenticeship.
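As a heavily simplified stand-in for the simultaneous estimation described, the sketch below fits a single exponential duration equation for exit from unemployment by maximum likelihood, with an apprenticeship covariate, on simulated data; it ignores the selection correction and the joint estimation of several probabilities.

```python
# Minimal sketch: ML estimation of an exponential duration model for exit
# from unemployment, with an apprenticeship covariate. Data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 2_000
apprentice = rng.integers(0, 2, n)               # 1 = trained as apprentice
true_rate = np.exp(-1.0 + 0.5 * apprentice)      # faster exit if apprentice
duration = rng.exponential(1 / true_rate)

def neg_log_lik(theta):
    # Log-likelihood of exponential durations with rate exp(b0 + b1 * x).
    rate = np.exp(theta[0] + theta[1] * apprentice)
    return -np.sum(np.log(rate) - rate * duration)

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print("estimated (b0, b1):", fit.x.round(3))     # b1 > 0: faster first-job exit
```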

Details

International Journal of Manpower, vol. 23 no. 5
Type: Research Article
ISSN: 0143-7720
