Search results

1 – 10 of over 66,000
Article
Publication date: 14 July 2020

Banu Priya and Rajendran P.

Abstract

Purpose

The authors consider a parallel four-state tandem open queueing network with infinite queue capacity. Passenger arrivals follow a Poisson process and service times are exponentially distributed. The network is constructed as tandem queues, and every queue in a tandem is a single-server (M/M/1) queue. In a tandem queue, passengers leave the system once they have received service at both states. The purpose of this paper is to provide a performance analysis of the four-state tandem open queueing network; a governing equation is formulated with the help of a transition diagram. Using Burke's theorem, the authors formulate equations for the average number of passengers in the system, the average waiting time of a passenger in the system, the average number of passengers in the queue and the average waiting time of a passenger in the queue.
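
The per-stage measures follow directly from Burke's theorem: the departure process of a stationary M/M/1 queue is Poisson with the arrival rate, so every stage of the tandem is itself an M/M/1 queue and can be analysed independently. A minimal Python sketch of these textbook formulas (illustrative rates, not the paper's numerical example):

```python
# Performance measures of a tandem of M/M/1 stages via Burke's theorem.
# Burke's theorem: the departure stream of a stationary M/M/1 queue is
# Poisson with the arrival rate, so each stage is M/M/1 in isolation.

def mm1_measures(lam, mu):
    """Steady-state measures of one M/M/1 stage (requires lam < mu)."""
    assert lam < mu, "stability requires lam < mu"
    rho = lam / mu
    L = rho / (1 - rho)          # average number in the system
    W = 1 / (mu - lam)           # average time in the system
    Lq = rho ** 2 / (1 - rho)    # average number in the queue
    Wq = rho / (mu - lam)        # average waiting time in the queue
    return L, W, Lq, Wq

lam = 2.0          # Poisson arrival rate (illustrative)
mus = [3.0, 4.0]   # service rates of the two stages of one tandem line

# Totals for one tandem line are sums of the per-stage measures.
L = sum(mm1_measures(lam, mu)[0] for mu in mus)
W = sum(mm1_measures(lam, mu)[1] for mu in mus)
print(f"L = {L:.3f} passengers, W = {W:.3f} time units")
```

For the parallel arrangement, note that splitting a Poisson(λ) stream between two tandem lines with probabilities q and 1 − q yields independent Poisson streams of rates qλ and (1 − q)λ, so the same formulas apply line by line.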

Design/methodology/approach

This paper uses Burke's theorem.

Findings

In this paper, a performance analysis is carried out for a parallel four-state tandem open queueing network, and the performance measures are solved using Burke's theorem. K. Sreekanth et al. performed a performance analysis for a single tandem queue with three states; here the authors analyse two parallel tandem queues with four states. This four-state tandem open queueing network is suitable for real-world applications. The work can be extended to more service states and to multi-server states according to the application, in which case the results have to be proved and explained with numerical examples. The analysis is particularly useful for applications such as airports, railway stations, bus stands and banks.

Originality/value

In this paper, a parallel four-state tandem open queueing network is analysed and its performance measures are solved using Burke's theorem.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 4 January 2013

Thankappan Vasanthi and Ganapathy Arulmozhi

Abstract

Purpose

The purpose of this paper is to use Bayesian probability theory to analyze a software reliability model with multiple types of faults. The probability that all faults are detected and corrected after a series of independent software tests and correction cycles is presented. This in turn has a number of applications, such as deciding how long to test software and estimating the cost of testing.

Design/methodology/approach

The use of Bayesian probabilistic models, when compared to traditional point-forecast estimation models, provides tools for risk estimation and allows decision makers to combine historical data with subjective expert estimates. Probability evaluation is done both prior to and after observing the number of faults detected in each cycle. The conditions under which these two measures, the conditional and unconditional probabilities, coincide are also shown. Expressions are derived to evaluate the probability that, after a series of sequential independent reviews has been completed, no class of fault remains in the software system, assuming Poisson and binomial prior distributions.

Findings

From the results in Sections 4 and 5 it can be observed that the conditional and unconditional probabilities are the same if the prior probability distribution is Poisson or binomial. In these cases the confidence that all faults have been removed is not a function of the number of faults observed during the successive reviews, but of the number of reviews, the detection probabilities and the mean of the prior distribution. This is a remarkable result because it gives a circumstance in which the statistical confidence from a Bayesian analysis is actually independent of all observed data. From the result in Section 4 it can be seen that an exponential formula can be used to evaluate the probability that no fault remains when a Poisson prior distribution is combined with a multinomial detection process in each review cycle.
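
The "exponential formula" of Section 4 is the standard Poisson-thinning result; a hedged reconstruction follows (the notation here is assumed, not necessarily the paper's):

```latex
% If the initial fault count is N ~ Poisson(\lambda) and each fault is detected
% (and corrected) independently with probability p in each review cycle, the
% faults surviving n reviews are again Poisson by the thinning property:
N_n \sim \mathrm{Poisson}\!\bigl(\lambda (1 - p)^n\bigr),
\qquad
\Pr(N_n = 0) = \exp\!\bigl(-\lambda (1 - p)^n\bigr).
```

As the Findings state, this probability depends only on the number of reviews, the detection probability and the prior mean, never on the observed detection counts; with several fault classes detected multinomially, the class-wise thinned counts are independent and the no-fault probabilities multiply.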

Originality/value

The paper is part of research work for a PhD degree.

Details

International Journal of Quality & Reliability Management, vol. 30 no. 1
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 18 November 2011

Amarjit Singh

Abstract

Purpose

The purpose of this paper is to inform facility managers of the type of failure affecting certain pipe types more than others. This is useful in asset management as preventive maintenance can be undertaken for those pipe types that experience high probabilities of failure.

Design/methodology/approach

The probability of a specific pipe type failing given the cause of break, age at failure, pipe diameter, and type of soil at the location of the break was found using inventory and main break data from the Honolulu Board of Water Supply (HBWS). Bayes’ theorem was then applied to find the posterior probabilities of failure starting from the prior probabilities of failure.
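
A minimal sketch of the Bayes'-theorem step (the probabilities below are hypothetical placeholders, not HBWS data, which is what the study actually estimates from):

```python
# Posterior probability that a failed pipe is of a given type, given the
# observed break cause, via Bayes' theorem:
#   P(type | cause)  ∝  P(cause | type) * P(type)
# All numbers are hypothetical illustrations, not HBWS estimates.

priors = {"cast_iron": 0.40, "pvc": 0.35, "ductile_iron": 0.25}  # P(type)
likelihood_corrosion = {                                         # P(corrosion | type)
    "cast_iron": 0.50, "pvc": 0.05, "ductile_iron": 0.20,
}

unnormalized = {t: likelihood_corrosion[t] * priors[t] for t in priors}
total = sum(unnormalized.values())
posterior = {t: v / total for t, v in unnormalized.items()}

for pipe_type, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({pipe_type} | corrosion) = {prob:.3f}")
```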

Findings

It was observed that the greatest probabilities of failure involved corrosion, pipes aged 20 to 30 years, 8″ pipes, and pipes in fill material. The pipe types were ranked and scored based on their probability of failing due to break cause, age, diameter, and soil type. Cast iron pipes were shown to have the highest probability of failing; as such, attention should be given to replacing segments of cast iron pipes as they reach the end of their service lives.

Practical implications

This study serves to address a major query in asset management at a public utility, that of which pipes should be selected for replacement when they reach the end of their service life. In addition, this study helps to understand the causes of failure for the various types of pipe.

Social implications

The importance of having reliable water supply at low cost has immense social implications in modern communities. To deliver such service, water pipe assets have to be managed efficiently.

Originality/value

This paper addresses the probability of failure in a straightforward manner that the water utility can easily apply to its own data, both in its design and asset management.

Details

Built Environment Project and Asset Management, vol. 1 no. 2
Type: Research Article
ISSN: 2044-124X

Article
Publication date: 16 February 2024

Qing Wang, Xiaoli Zhang, Jiafu Su and Na Zhang

Abstract

Purpose

Platform-based enterprises, as micro-entities in the platform economy, have the potential to effectively promote the low-carbon development of both supply and demand sides in the supply chain. Therefore, this paper aims to provide a multi-criteria decision-making method in a probabilistic hesitant fuzzy environment to assist platform-based companies in selecting cooperative suppliers for carbon reduction in green supply chains.

Design/methodology/approach

This paper combines the advantages of probabilistic hesitant fuzzy sets (PHFS) to address uncertainty issues and proposes an improved multi-criteria decision-making method called PHFS-DNMEREC-MABAC for aiding platform-based enterprises in selecting carbon emission reduction collaboration suppliers in green supply chains. Within this decision-making method, we enhance the standardization process of both the DNMEREC and MABAC methods by directly standardizing probabilistic hesitant fuzzy elements. Additionally, a probability splitting algorithm is introduced to handle probabilistic hesitant fuzzy elements of varying lengths, mitigating information bias that traditional approaches tend to introduce when adding values based on risk preferences.
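
As a sketch of what such a probability splitting step can look like (one plausible reading of the idea, not the paper's exact algorithm): two probabilistic hesitant fuzzy elements, each a list of (membership value, probability) pairs, are aligned by splitting probability masses at their merged cumulative breakpoints, so both end up the same length without inventing new membership values.

```python
# Align two probabilistic hesitant fuzzy elements (PHFEs) of different
# lengths by splitting probability masses, not by adding values.
# A PHFE is modeled as a list of (membership_value, probability) pairs
# whose probabilities sum to 1.

def split_align(a, b, eps=1e-12):
    """Return copies of a and b split so their probabilities match pairwise."""
    cuts = sorted({round(c, 12) for seq in (a, b) for c in _cumulative(seq)})
    return _resplit(a, cuts, eps), _resplit(b, cuts, eps)

def _cumulative(seq):
    total, out = 0.0, []
    for _, p in seq:
        total += p
        out.append(total)
    return out

def _resplit(seq, cuts, eps):
    out, prev, i, used = [], 0.0, 0, 0.0
    for c in cuts:
        width = c - prev
        if width > eps:
            value, prob = seq[i]
            out.append((value, width))   # same value, a slice of its mass
            used += width
            if used >= prob - eps:       # this pair's mass is exhausted
                i, used = i + 1, 0.0
        prev = c
    return out

a = [(0.6, 0.5), (0.8, 0.5)]
b = [(0.5, 0.2), (0.7, 0.3), (0.9, 0.5)]
print(split_align(a, b))
# a -> [(0.6, 0.2), (0.6, 0.3), (0.8, 0.5)]
# b -> [(0.5, 0.2), (0.7, 0.3), (0.9, 0.5)]
```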

Findings

In this paper, we apply the proposed method to a case study involving the selection of carbon emission reduction collaboration suppliers for Tmall Mart and compare it with the latest existing decision-making methods. The results demonstrate the applicability of the proposed method and the effectiveness of the introduced probability splitting algorithm in avoiding information bias.

Originality/value

Firstly, this paper proposes a new multi-criteria decision-making method for aiding platform-based enterprises in selecting carbon emission reduction collaboration suppliers in green supply chains. Secondly, within this method, a new standardization procedure is provided to process probabilistic hesitant fuzzy decision-making information. Finally, the probability splitting algorithm is introduced to avoid information bias when dealing with probabilistic hesitant fuzzy elements of inconsistent lengths.

Details

Asia Pacific Journal of Marketing and Logistics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-5855

Book part
Publication date: 6 July 2007

Paul D. Thistle

Abstract

For over 60 years, Lerner's (1944) probabilistic approach to the welfare evaluation of income distributions has aroused controversy. Lerner's famous theorem is that, under ignorance regarding who has which utility function, the optimal distribution of income is completely equal. However, Lerner's probabilistic approach can only be applied to compare distributions with equal means when the number of possible utility functions equals the number of individuals in the population. Lerner's most controversial assumption is that each assignment of utility functions to individuals is equally likely. This paper generalizes Lerner's probabilistic approach to the welfare analysis of income distributions by weakening the restrictions of utilitarian welfare, equal means, equal numbers, equal probabilities and a homogeneous population. We show there is a tradeoff between invariance (measurability and comparability) and the information about the assignment of utility functions to individuals required to evaluate expected social welfare.

Details

Equity
Type: Book
ISBN: 978-0-7623-1450-8

Book part
Publication date: 24 April 2023

Saraswata Chaudhuri, Eric Renault and Oscar Wahlstrom

Abstract

The authors discuss the econometric underpinnings of Barro (2006)'s defense of the rare disaster model as a way to bring back an asset pricing model "into the right ballpark for explaining the equity-premium and related asset-market puzzles." Arbitrarily low-probability economic disasters can restore the validity of model-implied moment conditions only if the amplitude of disasters may be arbitrarily large in due proportion. The authors prove an impossibility theorem: in the case of potentially unbounded disasters, there is no such thing as a population empirical likelihood (EL)-based model-implied probability distribution. That is, one cannot identify belief distortions for which the EL-based implied probabilities in sample, as computed by Julliard and Ghosh (2012), could be a consistent estimator. This may lead one to consider alternative statistical discrepancy measures to avoid the problem with EL. Indeed, the authors prove that, under sufficient integrability conditions, power-divergence Cressie-Read measures with positive power coefficients properly define a unique population model-implied probability measure. However, when this computation is useful because the reference asset pricing model is misspecified, each power divergence will deliver a different model-implied beliefs distortion. One way to provide economic underpinnings for the choice of a particular belief distortion is to see it as the endogenous result of the investor's choice when optimizing a recursive multiple-priors utility à la Chen and Epstein (2002). Jeong et al. (2015)'s econometric study confirms that this way of accommodating ambiguity aversion may help to address the equity premium puzzle.
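
For reference, the Cressie-Read power-divergence family the authors invoke, in one standard normalization (the chapter's exact form may differ), with π the candidate model-implied probabilities and p a reference (e.g. empirical) distribution:

```latex
D_{\gamma}(\pi \,\|\, p)
  = \frac{1}{\gamma(\gamma + 1)}
    \sum_{i=1}^{n} p_i \left[ \left( \frac{\pi_i}{p_i} \right)^{\gamma + 1} - 1 \right].
```

The limit γ → −1 recovers the empirical-likelihood criterion KL(p ‖ π), the case the impossibility theorem rules out under unbounded disasters, whereas the positive power coefficients the authors study penalize extreme ratios π_i/p_i more heavily, consistent with their result that such measures define a unique population model-implied probability under the stated integrability conditions.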

Details

Essays in Honor of Joon Y. Park: Econometric Methodology in Empirical Applications
Type: Book
ISBN: 978-1-83753-212-4

Book part
Publication date: 11 November 1994

E. Eide

Details

Economics of Crime: Deterrence and the Rational Offender
Type: Book
ISBN: 978-0-44482-072-3

Book part
Publication date: 3 June 2008

Nathaniel T. Wilcox

Abstract

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with "structural" theories of choice under risk. Stochastic models are substantive theoretical hypotheses that are frequently testable in and of themselves, and they also serve as identifying restrictions for hypothesis tests, estimation and prediction. Econometric comparisons suggest that for the purpose of prediction (as opposed to explanation), choices of stochastic models may be far more consequential than choices of structures such as expected utility or rank-dependent utility.
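
As one concrete illustration of how a stochastic model combines with a structural theory, here is a sketch of the familiar "strong utility" (Fechner-logit) specification wrapped around expected utility; this is not the chapter's code, and all parameter values are illustrative:

```python
# Logit ("strong utility") stochastic choice wrapped around a CRRA
# expected-utility structure. A lottery is a list of (probability, outcome).
import math

def crra(x, r):
    """CRRA utility with relative risk aversion r (r != 1 for simplicity)."""
    return x ** (1 - r) / (1 - r)

def expected_utility(lottery, r):
    return sum(p * crra(x, r) for p, x in lottery)

def prob_choose_a(lottery_a, lottery_b, r, noise):
    """Logit choice probability; larger `noise` means noisier choices."""
    diff = (expected_utility(lottery_a, r)
            - expected_utility(lottery_b, r)) / noise
    return 1 / (1 + math.exp(-diff))

safe = [(1.0, 30.0)]                  # a sure 30
risky = [(0.5, 70.0), (0.5, 5.0)]     # 50/50 between 70 and 5
print(prob_choose_a(safe, risky, r=0.5, noise=0.1))
```

Swapping the structural kernel (say, rank-dependent utility for expected utility) or the stochastic wrapper (a tremble or random-preference model for the logit) changes the likelihood, which is exactly the kind of comparison the chapter pursues.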

Details

Risk Aversion in Experiments
Type: Book
ISBN: 978-1-84950-547-5

Book part
Publication date: 23 October 2023

Glenn W. Harrison and J. Todd Swarthout

Abstract

We take Cumulative Prospect Theory (CPT) seriously by rigorously estimating structural models using the full set of CPT parameters. Much of the literature only estimates a subset of CPT parameters, or more simply assumes CPT parameter values from prior studies. Our data are from laboratory experiments with undergraduate students and MBA students facing substantial real incentives and losses. We also estimate structural models from Expected Utility Theory (EUT), Dual Theory (DT), Rank-Dependent Utility (RDU), and Disappointment Aversion (DA) for comparison. Our major finding is that a majority of individuals in our sample locally asset integrate. That is, they see a loss frame for what it is, a frame, and behave as if they evaluate the net payment rather than the gross loss when one is presented to them. This finding is devastating to the direct application of CPT to these data for those subjects. Support for CPT is greater when losses are covered out of an earned endowment rather than house money, but RDU is still the best single characterization of individual and pooled choices. Defenders of the CPT model claim, correctly, that the CPT model exists “because the data says it should.” In other words, the CPT model was borne from a wide range of stylized facts culled from parts of the cognitive psychology literature. If one is to take the CPT model seriously and rigorously then it needs to do a much better job of explaining the data than we see here.
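
For concreteness, the full parameter set referred to above is commonly written in the Tversky-Kahneman (1992) functional forms; a minimal sketch using the standard textbook forms and parameter values, not the authors' estimates:

```python
# Cumulative Prospect Theory building blocks (Tversky & Kahneman, 1992):
# value curvature alpha/beta, loss aversion lam, and probability-weighting
# curvature gamma (gains) / delta (losses).

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function: concave for gains, convex and scaled for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, c):
    """Inverse-S probability weighting function."""
    return p ** c / (p ** c + (1 - p) ** c) ** (1 / c)

# With one gain and one loss outcome, the CPT decision weights reduce to
# w+(p_gain) and w-(p_loss) in this two-outcome case.
p_gain, gamma, delta = 0.5, 0.61, 0.69
cpt_value = (weight(p_gain, gamma) * value(100.0)
             + weight(1 - p_gain, delta) * value(-50.0))
print(cpt_value)
```

Estimating all of alpha, beta, lam, gamma and delta jointly, rather than fixing some of them from prior studies, is what the abstract means by estimating the full set of CPT parameters.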

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Book part
Publication date: 14 July 2010

Hugh Pforsich, Susan Gill and Debra Sanders

Abstract

This study examines contextual influences on taxpayers’ perceptions of a vague “low” probability of detection and the relationship between taxpayers’ perceptions and their likelihood to take questionable tax deductions. As such, we tie psychological theories that explain differential interpretations of qualitative probability phrases (base rate and support theories) to the taxpayer perception literature. Consistent with our hypotheses, taxpayers’ interpretations of “low” differ both between and within subjects, depending on the context in which deductions are presented. On average, our taxpayer subjects are less likely to take questionable deductions perceived to have a higher probability of detection than those perceived to have a lower detection probability. Our results contribute to existing literature by demonstrating that knowledge of subjects’ assessments of an event's probability is integral to designing experiments and drawing conclusions regarding observed behavior. This appears necessary even when researchers provide assessments of detection probabilities and/or employ scenarios for which systematic differences in probability perceptions are not inherently obvious.

Details

Advances in Taxation
Type: Book
ISBN: 978-0-85724-140-5
