Search results

1 – 10 of over 8000
Article
Publication date: 4 September 2019

S. Khodaygan and A. Ghaderi

Abstract

Purpose

The purpose of this paper is to present a new efficient method for the tolerance–reliability analysis and quality control of complex nonlinear assemblies where explicit assembly functions are difficult or impossible to extract based on Bayesian modeling.

Design/methodology/approach

In the proposed method, tolerances are first modeled as random uncertain variables. Then, based on the assembly data, the explicit assembly function is expressed by a Bayesian model in terms of the manufacturing and assembly tolerances. From the obtained assembly tolerance, the reliability of the mechanical assembly in meeting the assembly requirement can be estimated by a suitable first-order reliability method.
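
As an illustration of this workflow, the sketch below fits a simple surrogate assembly function to simulated clearance data and applies a mean-value first-order approximation to estimate the probability that an assumed assembly requirement is met. The dimensions, tolerances and requirement are invented, and a least-squares fit stands in for the paper's Bayesian model.

    # Illustrative sketch only: a linear surrogate stands in for the paper's
    # Bayesian assembly-function model, and a mean-value first-order
    # approximation stands in for the full FORM algorithm.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Hypothetical tolerance variables: nominal dimensions and standard deviations
    nominal = np.array([10.0, 25.0, 5.0])        # mm
    sigma   = np.array([0.02, 0.05, 0.01])       # mm, from assumed tolerance specs

    # Hypothetical "assembly data": simulated measurements of a clearance
    X = nominal + sigma * rng.standard_normal((200, 3))
    gap = X[:, 1] - X[:, 0] - X[:, 2] + 0.005 * rng.standard_normal(200)

    # Surrogate assembly function (least-squares stand-in for the Bayesian model)
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, gap, rcond=None)
    b0, b = coef[0], coef[1:]

    # Mean-value first-order reliability: gap must exceed a required clearance
    required  = 9.90                              # assumed assembly requirement (mm)
    mu_gap    = b0 + b @ nominal
    sigma_gap = np.sqrt(np.sum((b * sigma) ** 2))
    beta = (mu_gap - required) / sigma_gap        # reliability index
    print("P(requirement met) ~", norm.cdf(beta))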

Findings

The Bayesian modeling leads to an appropriate assembly function for the tolerance and reliability analysis of mechanical assemblies and for assessment of assembly quality, by evaluating the assembly requirement(s) at the key characteristics in the assembly process. The efficiency of the proposed method is illustrated through a case study and validated by comparison with Monte Carlo simulations.

Practical implications

The method can readily be automated within CAD/CAM software for assembly quality control in industrial applications.

Originality/value

Bayesian modeling for tolerance–reliability analysis of mechanical assemblies, which has not previously been considered in the literature, is a potentially interesting concept that can be extended to related fields of tolerance design and quality control.

Details

Assembly Automation, vol. 39 no. 5
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 7 June 2021

Carol K.H. Hon, Chenjunyan Sun, Bo Xia, Nerina L. Jimmieson, Kïrsten A. Way and Paul Pao-Yen Wu

Abstract

Purpose

Bayesian approaches have been widely applied in construction management (CM) research due to their capacity to deal with uncertain and complicated problems. However, to date, there has been no systematic review of applications of Bayesian approaches in existing CM studies. This paper systematically reviews applications of Bayesian approaches in CM research and provides insights into potential benefits of this technique for driving innovation and productivity in the construction industry.

Design/methodology/approach

A total of 148 articles were retrieved for systematic review through two literature selection rounds.

Findings

Bayesian approaches have been widely applied to safety management and risk management. The Bayesian network (BN) was the most frequently employed Bayesian method. Elicitation from expert knowledge and case studies were the primary methods for BN development and validation, respectively. Prediction was the most popular type of reasoning with BNs. Limitations of existing studies mainly relate to not fully realizing the potential of Bayesian approaches in CM functional areas, over-reliance on expert knowledge for BN model development and a lack of guidance on BN model validation; pertinent recommendations for future research are provided.
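
The pattern the review identifies most often, a BN with expert-elicited conditional probabilities used for prediction, can be illustrated with a toy model. The variables and probabilities below are invented for illustration and are not drawn from any reviewed study.

    # Toy Bayesian network for illustration only; the structure and probabilities
    # are invented. Structure: Weather -> Delay, SubcontractorLoad -> Delay;
    # query P(Delay = yes | evidence) by enumeration.
    import itertools

    p_weather = {"bad": 0.3, "good": 0.7}        # expert-elicited priors
    p_load    = {"high": 0.4, "low": 0.6}
    p_delay = {  # P(Delay = yes | Weather, Load), elicited from experts
        ("bad", "high"): 0.80, ("bad", "low"): 0.55,
        ("good", "high"): 0.35, ("good", "low"): 0.10,
    }

    def p_delay_given(evidence):
        """Predict P(Delay = yes | evidence) by summing out unobserved parents."""
        num = den = 0.0
        for w, l in itertools.product(p_weather, p_load):
            if evidence.get("weather", w) != w or evidence.get("load", l) != l:
                continue
            joint = p_weather[w] * p_load[l]
            num += joint * p_delay[(w, l)]
            den += joint
        return num / den

    print(p_delay_given({"weather": "bad"}))     # prediction given partial evidence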

Originality/value

This systematic review contributes to providing a comprehensive understanding of the application of Bayesian approaches in CM research and highlights implications for future research and practice.

Details

Engineering, Construction and Architectural Management, vol. 29 no. 5
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 29 November 2019

A. George Assaf and Mike G. Tsionas

Abstract

Purpose

This paper aims to present several Bayesian specification tests for both in- and out-of-sample situations.

Design/methodology/approach

The authors focus on the Bayesian equivalents of the frequentist approach for testing heteroskedasticity, autocorrelation and functional form specification. For out-of-sample diagnostics, the authors consider several tests to evaluate the predictive ability of the model.
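
One common Bayesian counterpart to a frequentist heteroskedasticity test is a posterior predictive check; the sketch below applies one to a simulated price-occupancy regression. It is a generic illustration under a flat-prior normal linear model, not the authors' exact procedure.

    # Posterior predictive check for heteroskedasticity in a normal linear
    # regression; a stand-in for the paper's Bayesian specification tests.
    # Data are simulated.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    price = rng.uniform(50, 300, n)
    occupancy = 40 + 0.1 * price + rng.normal(0, 1 + 0.02 * price)   # heteroskedastic
    X = np.column_stack([np.ones(n), price])

    def bp_stat(y, X, beta, sigma2):
        """Breusch-Pagan-style discrepancy: how strongly squared residuals track X."""
        resid2 = (y - X @ beta) ** 2 / sigma2
        gamma, *_ = np.linalg.lstsq(X, resid2, rcond=None)
        fitted = X @ gamma
        return np.sum((fitted - resid2.mean()) ** 2)

    # Approximate posterior draws (flat prior gives a normal centered on the OLS fit)
    beta_hat, *_ = np.linalg.lstsq(X, occupancy, rcond=None)
    resid = occupancy - X @ beta_hat
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)

    exceed, draws = 0, 500
    for _ in range(draws):
        beta = rng.multivariate_normal(beta_hat, cov)
        y_rep = X @ beta + rng.normal(0, np.sqrt(s2), n)     # replicated data
        if bp_stat(y_rep, X, beta, s2) >= bp_stat(occupancy, X, beta, s2):
            exceed += 1
    print("posterior predictive p-value:", exceed / draws)   # small value signals misfit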

Findings

The authors demonstrate the performance of these tests using an application to the relationship between price and occupancy rate in the hotel industry. For purposes of comparison, the authors also provide evidence from traditional frequentist tests.

Research limitations/implications

There certainly exist other issues and diagnostic tests that are not covered in this paper. The issues that are addressed, however, are critically important and can be applied to most modeling situations.

Originality/value

With the increased use of the Bayesian approach in various modeling contexts, this paper serves as an important guide for diagnostic testing in Bayesian analysis. Diagnostic analysis is essential and should always accompany the estimation of regression models.

Details

International Journal of Contemporary Hospitality Management, vol. 32 no. 4
Type: Research Article
ISSN: 0959-6119

Article
Publication date: 29 April 2014

Ahmed Abou-Elyazied Abdallh and Luc Dupré

Abstract

Purpose

Magnetic material properties of an electromagnetic device (EMD) can be recovered by solving a coupled experimental-numerical inverse problem. In order to ensure the highest possible accuracy of the inverse problem solution, all physics of the EMD need to be perfectly modeled using a complex numerical model. However, these fine models demand high computational time. Alternatively, less accurate coarse models can be used, at the cost of higher expected recovery errors. The purpose of this paper is to present an efficient methodology to reduce the effect of stochastic modeling errors in the inverse problem solution.

Design/methodology/approach

The recovery error in the electromagnetic inverse problem solution is reduced using the Bayesian approximation error approach coupled with an adaptive Kriging-based model. The accuracy of the forward model is assessed and adapted a priori using the cross-validation technique.
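
A rough sketch of the Kriging-plus-cross-validation idea is given below, using scikit-learn's Gaussian process regressor as the Kriging surrogate and a cheap analytical stand-in for the fine numerical model; none of the numbers correspond to the paper's electromagnetic device.

    # Kriging (Gaussian-process) surrogate for an expensive forward model, with
    # cross-validation to check surrogate accuracy before using it in inversion.
    # The "forward model" below is a cheap stand-in, not an actual FE solver.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    def forward_model(mu_r):
        """Placeholder for the fine numerical model: flux for a given permeability."""
        return np.log(mu_r) + 0.05 * np.sin(mu_r / 200.0)

    # Design of experiments over the assumed material-parameter range
    mu_r_train = rng.uniform(500, 5000, 40).reshape(-1, 1)
    flux_train = forward_model(mu_r_train).ravel()

    gp = GaussianProcessRegressor(
        kernel=ConstantKernel(1.0) * RBF(length_scale=500.0),
        normalize_y=True,
    )

    # Cross-validation assesses surrogate accuracy a priori; in adaptive Kriging,
    # training points would be added where the score is poor.
    scores = cross_val_score(gp, mu_r_train, flux_train, cv=5, scoring="r2")
    print("CV R^2 per fold:", np.round(scores, 3))

    gp.fit(mu_r_train, flux_train)
    pred, std = gp.predict(np.array([[1500.0]]), return_std=True)
    print("surrogate prediction:", pred[0], "+/-", std[0])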

Findings

The adaptive Kriging-based model seems to be an efficient technique for modeling EMDs used in inverse problems. Moreover, using the proposed methodology, the recovery error in the electromagnetic inverse problem solution is largely reduced in a relatively small computational time and memory storage.

Originality/value

The proposed methodology not only improves the accuracy of the inverse problem solution but also reduces the computational time and memory storage. Furthermore, to the best of the authors' knowledge, this is the first time the adaptive Kriging-based model has been combined with the Bayesian approximation error approach for stochastic modeling error reduction.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 13 March 2017

Lei Xue, Changyin Sun and Fang Yu

Abstract

Purpose

The paper aims to build connections between game theory and the resource allocation problem under general uncertainty. It proposes modeling the distributed resource allocation problem as a Bayesian game, and three basic kinds of uncertainty are discussed.

Design/methodology/approach

In this paper, a Bayesian game is proposed for modeling the resource allocation problem with uncertainty. The basic game-theoretical model contains three parts: agents, a utility function and a decision-making process. The probabilistic weighted Shapley value (WSV) is applied to design the utility function of the agents, and the rational learning method is introduced to optimize the agents' decision-making process so that a Bayesian Nash equilibrium can be reached.
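
For illustration, the sketch below computes an ordinary Shapley value for a three-agent toy allocation game; the paper's probabilistic weighted variant would additionally reweight the join orders, which is omitted here, and the coalition values are invented.

    # Toy Shapley-value-based utility split for a resource allocation game.
    # Coalition values are invented; the weighted, probabilistic variant used in
    # the paper is not reproduced here.
    from itertools import permutations

    players = ["A", "B", "C"]
    value = {  # characteristic function v(S): invented coalition payoffs
        frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 2,
        frozenset("AB"): 9, frozenset("AC"): 8, frozenset("BC"): 6,
        frozenset("ABC"): 14,
    }

    def shapley(players, value):
        """Average marginal contribution of each player over all join orders."""
        phi = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            coalition = frozenset()
            for p in order:
                phi[p] += value[coalition | {p}] - value[coalition]
                coalition = coalition | {p}
        return {p: phi[p] / len(orders) for p in players}

    print(shapley(players, value))   # e.g. split of a shared resource budget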

Findings

The paper provides empirical insights into how the game-theoretical model deals with uncertainty in the resource allocation problem. A probabilistic WSV function was proposed to design the utility function of the agents, and rational learning was used to optimize the agents' decision-making process to reach the Bayesian Nash equilibrium point. Comparison with full-information models in simulation illustrates the effectiveness of the Bayesian game-theoretical methods for the resource allocation problem under uncertainty.

Originality/value

This paper designs a Bayesian theoretical model for the resource allocation problem under uncertainty. The relationships between the Bayesian game and the resource allocation problem are discussed.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 16 March 2010

Leonidas A. Zampetakis and Vassilis S. Moustakis

Abstract

Purpose

The purpose of this paper is to present an inductive methodology that supports the ranking of entities. The methodology is based on Bayesian latent variable measurement modeling and uses assessment across composite indicators to assess internal and external model validity (uncertainty is used in lieu of validity). The proposed methodology is generic and is demonstrated on a well-known data set related to a country's relative position in “doing business.”

Design/methodology/approach

The methodology is demonstrated using data from the World Bank's “Doing Business 2008” project. A Bayesian latent variable measurement model is developed, and both internal and external model uncertainties are considered.

Findings

The methodology enables the quantification of model structure uncertainty through comparisons among competing models, nested or non-nested, using both an information-theoretic approach and a Bayesian approach. Furthermore, it estimates the degree of uncertainty in the rankings of alternatives.
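
The ranking-uncertainty idea can be illustrated as follows: given posterior draws of a latent score for each entity, the distribution of each entity's rank is tabulated directly from the draws. The scores below are simulated and unrelated to the Doing Business data.

    # Quantifying ranking uncertainty from posterior draws of a latent score;
    # the posterior means and standard deviations are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    countries = ["P", "Q", "R", "S"]
    post_mean = np.array([0.80, 0.75, 0.60, 0.58])   # hypothetical latent scores
    post_sd   = np.array([0.05, 0.05, 0.08, 0.03])

    draws = rng.normal(post_mean, post_sd, size=(4000, 4))      # posterior samples
    ranks = draws.argsort(axis=1)[:, ::-1].argsort(axis=1) + 1  # rank 1 = best

    for i, c in enumerate(countries):
        probs = np.bincount(ranks[:, i], minlength=5)[1:] / len(ranks)
        print(c, "P(rank = 1..4):", np.round(probs, 2))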

Research limitations/implications

Analyses are restricted to first‐order Bayesian measurement models.

Originality/value

Overall, the presented methodology contributes to a better understanding of ranking efforts, providing a useful tool for those who publish rankings to gain greater insight into the nature of the distinctions they disseminate.

Details

Journal of Modelling in Management, vol. 5 no. 1
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 1 January 2008

Arnold Zellner

Abstract

After briefly reviewing the past history of Bayesian econometrics and Alan Greenspan's (2004) recent description of his use of Bayesian methods in managing policy-making risk, some of the issues and needs that he mentions are discussed and linked to past and present Bayesian econometric research. Then a review of some recent Bayesian econometric research and needs is presented. Finally, some thoughts are presented that relate to the future of Bayesian econometrics.

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Article
Publication date: 20 June 2024

Carl Hope Korkpoe, Ferdinand Ahiakpor and Edward Nii Amar Amarteifio

Abstract

Purpose

The purpose of this paper is to emphasize the risks involved in modeling inflation volatility in the context of macroeconomic policy. For countries like Ghana that are always battling economic problems, accurate models are necessary in any modeling endeavor. We estimate volatility taking into account the heteroscedasticity of the model parameters.

Design/methodology/approach

The estimation considers quasi-maximum likelihood-based GARCH, stochastic volatility and Bayesian inference models for estimating the parameters of inflation volatility.
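
For concreteness, the sketch below writes out the GARCH(1,1) variance recursion and its quasi-maximum-likelihood objective, one of the model families the paper compares; the monthly series is simulated rather than Ghanaian inflation data.

    # GARCH(1,1) variance recursion and quasi-maximum-likelihood estimation.
    # The "inflation" series is simulated for illustration only.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    T = 240                                   # e.g. 20 years of monthly observations
    true_omega, true_alpha, true_beta = 0.2, 0.15, 0.75
    h = np.empty(T)
    r = np.empty(T)
    h[0] = true_omega / (1 - true_alpha - true_beta)
    r[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, T):
        h[t] = true_omega + true_alpha * r[t-1] ** 2 + true_beta * h[t-1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()   # de-meaned month-on-month inflation

    def neg_quasi_loglik(params, r):
        omega, alpha, beta = params
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return np.inf                              # enforce stationarity/positivity
        h = np.empty_like(r)
        h[0] = r.var()
        for t in range(1, len(r)):
            h[t] = omega + alpha * r[t-1] ** 2 + beta * h[t-1]
        return 0.5 * np.sum(np.log(h) + r ** 2 / h)

    fit = minimize(neg_quasi_loglik, x0=[0.1, 0.1, 0.8], args=(r,), method="Nelder-Mead")
    print("QML estimates (omega, alpha, beta):", np.round(fit.x, 3))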

Findings

A comparison of the stochastic volatility and Bayesian inference models reveals that the latter is better at tracking the evolution of month-on-month inflation volatility, thus following closely the data during the period under review.

Research limitations/implications

The paper looks at the effect of parameter uncertainty on inflation volatility alone, without considering the effects of other key variables, such as interest and exchange rates, that affect inflation.

Practical implications

Economists have battled with accurate modeling and tracking of inflation volatility in Ghana. Where the data is not well-behaved, for example, in developing economies, the stochastic nature of the parameter estimates should be incorporated in the model estimation.

Social implications

Estimating the parameters of inflation volatility models is not enough in a perpetually gyrating economy. The risks of these parameters are needed to completely describe the evolution of volatility especially in developing economies like Ghana.

Originality/value

This work is one of the first to draw the attention of policymakers in Ghana towards the nature of inflation data generated in the economy and the appropriate model for capturing the uncertainty of the model parameters.

Details

African Journal of Economic and Management Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2040-0705

Book part
Publication date: 30 August 2019

Percy K. Mistry and Michael D. Lee

Abstract

Jeliazkov and Poirier (2008) analyze the daily incidence of violence during the Second Intifada in a statistical way using an analytical Bayesian implementation of a second-order discrete Markov process. We tackle the same data and modeling problem from our perspective as cognitive scientists. First, we propose a psychological model of violence, based on a latent psychological construct we call “build up” that controls the retaliatory and repetitive violent behavior by both sides in the conflict. Build up is based on a social memory of recent violence and generates the probability and intensity of current violence. Our psychological model is implemented as a generative probabilistic graphical model, which allows for fully Bayesian inference using computational methods. We show that our model is both descriptively adequate, based on posterior predictive checks, and has good predictive performance. We then present a series of results that show how inferences based on the model can provide insight into the nature of the conflict. These inferences consider the base rates of violence in different periods of the Second Intifada, the nature of the social memory for recent violence, and the way repetitive versus retaliatory violent behavior affects each side in the conflict. Finally, we discuss possible extensions of our model and draw conclusions about the potential theoretical and methodological advantages of treating societal conflict as a cognitive modeling problem.
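
A minimal generative sketch of the "build up" idea follows: an exponentially decaying social memory of recent violence drives the probability and intensity of new violence. The functional forms and parameters are invented for illustration and are not the authors' model.

    # Generative sketch of a "build up" latent construct: decaying memory of
    # recent incidents raises the chance and intensity of further incidents.
    # Parameters and functional forms are invented, not the authors' model.
    import numpy as np

    rng = np.random.default_rng(5)
    T, decay, gain, base = 365, 0.9, 0.8, -3.0

    build_up = 0.0
    incidents = np.zeros(T, dtype=int)
    for t in range(1, T):
        build_up = decay * build_up + incidents[t-1]          # social memory
        p_violence = 1.0 / (1.0 + np.exp(-(base + gain * build_up)))
        if rng.random() < p_violence:                         # any violence today?
            incidents[t] = rng.poisson(1.0 + build_up)        # intensity grows with build up

    print("days with violence:", int((incidents > 0).sum()), "of", T)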

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Article
Publication date: 3 September 2024

Biplab Bhattacharjee, Kavya Unni and Maheshwar Pratap

Abstract

Purpose

Product returns are a major challenge for e-businesses as they involve huge logistical and operational costs. Therefore, it becomes crucial to predict returns in advance. This study aims to evaluate different genres of classifiers for product return chance prediction, and further optimizes the best performing model.

Design/methodology/approach

An e-commerce data set with categorical attributes is used for this study. Chi-square-based feature selection provides a reduced feature set that serves as input for model building. Predictive models are built using individual classifiers, ensemble models and deep neural networks. For performance evaluation, a 75:25 train/test split and 10-fold cross-validation are used. To improve the predictability of the best-performing classifier, hyperparameter tuning is performed using different optimization methods such as random search, grid search, a Bayesian approach and evolutionary methods (genetic algorithm, differential evolution and particle swarm optimization).
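
A sketch of the Bayesian tuning step is shown below, using Optuna's TPE sampler as a stand-in Bayesian optimizer over an XGBoost classifier and a synthetic data set in place of the (unavailable) e-commerce returns data; the search space is illustrative only.

    # Bayesian-style hyperparameter tuning of an XGBoost classifier, with
    # Optuna's TPE sampler as the optimizer and synthetic data standing in for
    # the returns data set.
    import optuna
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    def objective(trial):
        params = {
            "n_estimators": trial.suggest_int("n_estimators", 100, 500),
            "max_depth": trial.suggest_int("max_depth", 2, 8),
            "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
            "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        }
        model = XGBClassifier(**params, eval_metric="logloss")
        return cross_val_score(model, X, y, cv=5, scoring="f1").mean()   # maximize F1

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=30)
    print("best F1:", round(study.best_value, 3), "params:", study.best_params)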

Findings

A comparison of F1-scores revealed that the Bayesian approach outperformed all other optimization approaches in terms of accuracy. The predictability of the Bayesian-optimized model is further compared with that of other classifiers using experimental analysis. The Bayesian-optimized XGBoost model possessed superior performance, with accuracies of 77.80% and 70.35% for holdout and 10-fold cross-validation methods, respectively.

Research limitations/implications

Given the anonymized data, the effects of individual attributes on outcomes could not be investigated in detail. The Bayesian-optimized predictive model may be used in decision support systems, enabling real-time prediction of returns and the implementation of preventive measures.

Originality/value

There are very few reported studies on predicting the chance of order return in e-businesses. To the best of the authors’ knowledge, this study is the first to compare different optimization methods and classifiers, demonstrating the superiority of the Bayesian-optimized XGBoost classification model for returns prediction.

Details

Journal of Systems and Information Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1328-7265
