Search results
1 – 10 of over 4,000
Abstract
Purpose
The purpose of this paper is to present a new efficient method for the tolerance–reliability analysis and quality control of complex nonlinear assemblies where explicit assembly functions are difficult or impossible to extract based on Bayesian modeling.
Design/methodology/approach
In the proposed method, tolerances are first modelled as random uncertain variables. Then, based on the assembly data, an explicit assembly function is expressed by the Bayesian model in terms of manufacturing and assembly tolerances. From the resulting assembly tolerance, the reliability of the mechanical assembly in meeting the assembly requirement is estimated by a suitable first-order reliability method.
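The two-stage procedure above (tolerances as random variables, then reliability against an assembly requirement) can be sketched with a crude Monte Carlo check; the linear gap function, part dimensions, and requirement limits below are illustrative assumptions, not the paper's Bayesian model.

```python
import random

def assembly_gap(dims):
    # Hypothetical explicit assembly function: clearance left after
    # stacking three toleranced parts into a 30.5 mm housing.
    return 30.5 - sum(dims)

def assembly_reliability(n_samples=50_000, seed=42):
    """Fraction of simulated assemblies meeting the gap requirement."""
    rng = random.Random(seed)
    nominal = [10.0, 10.0, 10.0]   # nominal part dimensions (mm)
    sigma = [0.02, 0.03, 0.02]     # std devs implied by the tolerances
    ok = 0
    for _ in range(n_samples):
        dims = [rng.gauss(m, s) for m, s in zip(nominal, sigma)]
        if 0.3 <= assembly_gap(dims) <= 0.7:   # assumed requirement (mm)
            ok += 1
    return ok / n_samples

rel = assembly_reliability()
```

A first-order reliability method would replace the sampling loop with a linearization of the assembly function at the most probable failure point; the Monte Carlo loop here plays the role of the validation benchmark mentioned in the Findings.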
Findings
The Bayesian modeling leads to an appropriate assembly function for tolerance and reliability analysis of mechanical assemblies, supporting assessment of assembly quality by evaluating the assembly requirement(s) at the key characteristics in the assembly process. The efficiency of the proposed method is illustrated through a case study and validated by comparison with Monte Carlo simulations.
Practical implications
The method can readily be automated for use within CAD/CAM software for assembly quality control in industrial applications.
Originality/value
Bayesian modeling for tolerance–reliability analysis of mechanical assemblies, which has not previously been considered in the literature, is a potentially interesting concept that can be extended to related fields of tolerance design and quality control.
Carol K.H. Hon, Chenjunyan Sun, Bo Xia, Nerina L. Jimmieson, Kïrsten A. Way and Paul Pao-Yen Wu
Abstract
Purpose
Bayesian approaches have been widely applied in construction management (CM) research due to their capacity to deal with uncertain and complicated problems. However, to date, there has been no systematic review of applications of Bayesian approaches in existing CM studies. This paper systematically reviews applications of Bayesian approaches in CM research and provides insights into potential benefits of this technique for driving innovation and productivity in the construction industry.
Design/methodology/approach
A total of 148 articles were retrieved for systematic review through two literature selection rounds.
Findings
Bayesian approaches have been widely applied to safety management and risk management. The Bayesian network (BN) was the most frequently employed Bayesian method. Elicitation from expert knowledge and case studies were the primary methods for BN development and validation, respectively. Prediction was the most popular type of reasoning with BNs. Limitations of existing studies mainly relate to not fully realizing the potential of Bayesian approaches in CM functional areas, over-reliance on expert knowledge for BN model development and a lack of guidance on BN model validation; pertinent recommendations for future research are provided.
Originality/value
This systematic review contributes to providing a comprehensive understanding of the application of Bayesian approaches in CM research and highlights implications for future research and practice.
Abstract
Future developments in methodology have the potential to improve management research and better couple it to management practice. These developments are on six fronts: (1) computer technology, (2) data capture and experimentation, (3) privacy, confidentiality, and data access, (4) causation, (5) modeling and simulation, and (6) Bayesian statistics. The potential of each is explored, and problems, both technical and administrative, in fulfilling this potential are identified. On the computer and communications front, the key elements are the use of relational database management systems, increased computing power for analysis purposes, and computer networking. On the privacy, confidentiality, and data access front, the key elements are new capabilities for data capture through real-time surveillance, inferential disclosure threats in computer databases, the demand for more access to detailed data, and public concerns for privacy invasion. Management research is a search for causal mechanisms that can be investigated through empirical studies and that facilitate control of complex processes. In the modeling area, there will be (1) greater use of computing power, (2) less use of model-independent statistical hypothesis testing, and (3) easier-to-use computer software for modeling and simulation. The Bayesian perspective of consistently expressing uncertainty through probability distributions will become more widely used in management research.
Keith Becker, Jim Sprigg and Alex Cosmas
Abstract
Purpose
The purpose of this paper is to estimate individual promotional campaign impacts through Bayesian inference. Conventional statistics have worked well for analyzing the impact of direct marketing promotions on purchase behavior. However, many modern marketing programs must drive multiple purchase objectives, requiring more precise arbitration between multiple offers and collection of more data with which to differentiate individuals. This often results in datasets that are highly dimensional, yet also sparse, straining the power of statistical methods to properly estimate the effect of promotional treatments.
Design/methodology/approach
Improvements in computing power have enabled new techniques for predicting individual behavior. This work investigates a probabilistic machine-learned Bayesian approach to predict individual impacts driven by promotional campaign offers for a leading global travel and hospitality chain. Comparisons were made to a linear regression, representative of the current state of practice.
Findings
The findings of this work focus on comparing a machine-learned Bayesian approach with linear regression (which is representative of the current state of practice among industry practitioners) in the analysis of a promotional campaign across three key areas: highly dimensional data, sparse data and likelihood matching.
Research limitations/implications
Because the findings are based on a single campaign, future work includes generalizing results across multiple promotional campaigns. Also of interest for future work are comparisons of the technique developed here with other techniques from academia.
Practical implications
Because the Bayesian approach allows estimation of the influence of the promotion for each hypothetical customer’s set of promotional attributes, even when no exact look-alikes exist in the control group, a number of possible applications exist. These include optimal campaign design (given the ability to estimate the promotional attributes that are likely to drive the greatest incremental spend in a hypothetical deployment) and operationalizing efficient audience selection given the model’s individualized estimates, reducing the risk of marketing overcommunication, which can prompt costly unsubscriptions.
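The paper's machine-learned Bayesian belief network is not reproduced here, but the core idea of estimating a promotion's incremental effect with full posterior uncertainty can be illustrated with a minimal Beta-Bernoulli sketch; the conversion counts and the flat priors are assumptions chosen for illustration.

```python
import random

def posterior_lift(treated, control, n_draws=10_000, seed=0):
    """Posterior draws of incremental response rate (treatment - control).

    `treated` and `control` are (conversions, trials) pairs; a flat
    Beta(1, 1) prior is assumed for each group's response rate.
    """
    rng = random.Random(seed)

    def draw(conv, trials):
        return rng.betavariate(1 + conv, 1 + trials - conv)

    return [draw(*treated) - draw(*control) for _ in range(n_draws)]

# Illustrative campaign: 130/1000 conversions treated vs 100/1000 control
lifts = posterior_lift(treated=(130, 1000), control=(100, 1000))
mean_lift = sum(lifts) / len(lifts)
p_positive = sum(l > 0 for l in lifts) / len(lifts)
```

Unlike a point estimate from a regression, the full set of posterior draws directly supports the applications named above, such as ranking hypothetical promotional attribute sets by expected incremental response.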
Originality/value
The original contribution is the application of machine learning to Bayesian belief network construction in the context of analyzing a multi-channel promotional campaign's impact on individual customers. This is of value to practitioners seeking alternatives for campaign analysis in applications where more commonly used models are not well suited, such as the three key areas this paper highlights: highly dimensional data, sparse data and likelihood matching.
Fernando Antonio Moala and Karlla Delalibera Chagas
Abstract
Purpose
The step-stress accelerated life test (SSALT) is the most appropriate statistical method for obtaining information about the reliability of new products faster than would be possible if the product were left to fail in normal use. This paper presents the multiple step-stress accelerated life test using type-II censored data and assuming a cumulative exposure model. The authors propose a Bayesian inference with the lifetimes of test items under a gamma distribution. The choice of loss function is an essential part of Bayesian estimation problems; therefore, the Bayesian estimators for the parameters are obtained under different loss functions, and a comparison with the usual maximum likelihood estimation (MLE) approach is carried out. Finally, an example is presented to illustrate the proposed procedure.
Design/methodology/approach
A Bayesian inference is performed and the parameter estimators are obtained under symmetric and asymmetric loss functions. A sensitivity analysis of these Bayes and MLE estimators is presented via Monte Carlo simulation to verify whether the Bayesian analysis performs better.
Findings
The authors demonstrate that Bayesian estimators give better results than MLE with respect to mean squared error (MSE) and bias. Considering three types of loss functions, the dominant estimator, with the smallest MSE and bias, is the Bayesian estimator under the general entropy loss function, followed closely by the one under the Linex loss function. A symmetric loss function such as the squared error loss function (SELF) is therefore inappropriate for the SSALT, especially with small datasets.
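The estimators compared above have standard closed forms given posterior draws: the SELF estimator is the posterior mean, the Linex estimator is -(1/a) ln E[exp(-a θ)], and the general entropy estimator is (E[θ^(-q)])^(-1/q). The sketch below evaluates all three on an illustrative Gamma posterior; the posterior itself and the loss parameters a and q are assumptions, not the paper's SSALT model.

```python
import math
import random

def bayes_estimators(draws, a=0.5, q=1.0):
    """Bayes estimators of a positive parameter from posterior draws.

    SELF  -> posterior mean
    Linex -> -(1/a) * log E[exp(-a*theta)]
    GELF  -> (E[theta**(-q)])**(-1/q)   (general entropy loss)
    """
    n = len(draws)
    self_est = sum(draws) / n
    linex_est = -math.log(sum(math.exp(-a * t) for t in draws) / n) / a
    gelf_est = (sum(t ** (-q) for t in draws) / n) ** (-1.0 / q)
    return self_est, linex_est, gelf_est

# Illustrative posterior: Gamma(shape=4, scale=0.5), posterior mean 2.0
rng = random.Random(3)
draws = [rng.gammavariate(4, 0.5) for _ in range(20_000)]
est_self, est_linex, est_gelf = bayes_estimators(draws)
```

Because both asymmetric losses here penalize overestimation, the Linex and general entropy estimates fall below the posterior mean, which is the kind of ordering the comparison in the Findings turns on.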
Originality/value
Most papers in the literature estimate the SSALT parameters through MLE. In this paper, the authors develop a Bayesian analysis for the SSALT and discuss procedures for obtaining the Bayes estimators under symmetric and asymmetric loss functions, the choice of loss function being an essential part of Bayesian estimation problems.
Leonidas A. Zampetakis and Vassilis S. Moustakis
Abstract
Purpose
The purpose of this paper is to present an inductive methodology that supports the ranking of entities. The methodology is based on Bayesian latent variable measurement modeling and uses assessment across composite indicators to evaluate internal and external model validity (uncertainty is used in lieu of validity). The proposed methodology is generic and is demonstrated on a well-known data set related to the relative position of a country in "doing business."
Design/methodology/approach
The methodology is demonstrated using data from the World Bank's "Doing Business 2008" project. A Bayesian latent variable measurement model is developed and both internal and external model uncertainties are considered.
Findings
The methodology enables the quantification of model structure uncertainty through comparisons among competing models, nested or non-nested, using both an information-theoretic approach and a Bayesian approach. Furthermore, it estimates the degree of uncertainty in the rankings of alternatives.
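The rank-uncertainty idea in these Findings can be illustrated with a small simulation: draw latent scores from each entity's posterior and tabulate how often each entity lands at each rank. The posterior means and standard deviations below are illustrative assumptions, not estimates from the paper's measurement model.

```python
import random
from collections import Counter

def rank_distribution(post_means, post_sds, n_draws=5000, seed=7):
    """For each entity, the distribution of its rank (1 = best) across
    posterior draws of its latent score."""
    rng = random.Random(seed)
    n = len(post_means)
    counts = [Counter() for _ in range(n)]
    for _ in range(n_draws):
        scores = [rng.gauss(m, s) for m, s in zip(post_means, post_sds)]
        order = sorted(range(n), key=lambda i: -scores[i])
        for rank, i in enumerate(order, start=1):
            counts[i][rank] += 1
    return [{r: c / n_draws for r, c in ctr.items()} for ctr in counts]

# Three hypothetical entities: two nearly tied leaders and one laggard
dist = rank_distribution([1.8, 1.5, 0.4], [0.2, 0.2, 0.2])
```

The output makes the point of the abstract concrete: the two leading entities swap ranks in a noticeable share of draws, while the laggard's rank is nearly certain, so a published ranking conveys very different amounts of information at different positions.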
Research limitations/implications
Analyses are restricted to first‐order Bayesian measurement models.
Originality/value
Overall, the presented methodology contributes to a better understanding of ranking efforts, providing a useful tool for those who publish rankings to gain greater insight into the nature of the distinctions they disseminate.
Farzana Akbari, Mahdi Salehi and Mohammad Ali Bagherpour Vlashani
Abstract
Purpose
The purpose of this paper is to investigate the relationship between tax avoidance, firm value and managerial ability in Tehran Stock Exchange and Over the Counter (OTC), according to the related theoretical foundations.
Design/methodology/approach
To calculate managerial ability, this study uses DEA based on accounting data, company profile and industry; the hypotheses are estimated over a 12-year period from 2004 to 2015 in the TSE and OTC. Previous studies usually test such hypotheses with the classical regression method alone and often disregard the effect of macroeconomic variables. As a novel approach to testing the hypotheses, this study uses three statistical methods: classical regression models, mixed-effects multilevel models and Bayesian multilevel models. A test of structural change is also used to control for the effects of macroeconomic variables, such as inflation and other economic and political influences, on the results.
Findings
The results of these three methods show that the effects of income smoothing and earnings quality on the relationship between tax avoidance and firm value are significant.
Originality/value
Although several studies have been conducted on this subject, the current study is the first to combine classical and Bayesian econometric methods in this setting; therefore, the results are quite novel.
Bijitaswa Chakraborty, Manali Chatterjee and Titas Bhattacharjee
Abstract
Purpose
One of the adverse effects of COVID-19 is poor economic and financial performance. Such economic underperformance, reduced consumer demand and supply chain disruption are leading to stock market volatility. Against this backdrop, this paper aims to find the impact of COVID-19 on the Indian stock market by analyzing analysts' reports.
Design/methodology/approach
The sample comprises a cross-sectional data set on selected Indian firms indexed in the BSE 100. The authors calculate a disclosure tone score using a textual analysis tool applied to analyst reports on how the selected BSE 100 firms tackled COVID-19's impact. The relationship between the tone of the analyst reports and stock market performance is then examined. The empirical model also survives robustness analysis, establishing the consistency of the findings. This study uses both frequentist and Bayesian statistical approaches.
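Tone scores of this kind are typically simple dictionary ratios over the report text. A minimal sketch, assuming hypothetical mini word lists (real studies use standardized finance dictionaries such as Loughran-McDonald; the abstract does not name the tool used):

```python
# Hypothetical mini-dictionaries for illustration only.
POSITIVE = {"growth", "improve", "resilient", "recovery", "strong"}
NEGATIVE = {"decline", "loss", "risk", "disruption", "uncertain"}

def tone_score(text):
    """Net tone in [-1, 1]: (pos - neg) / (pos + neg) word counts;
    returns 0.0 when no dictionary words appear."""
    words = text.lower().split()
    pos = sum(w.strip(".,;:") in POSITIVE for w in words)
    neg = sum(w.strip(".,;:") in NEGATIVE for w in words)
    return (pos - neg) / (pos + neg) if pos + neg else 0.0
```

A score computed this way per analyst report would then serve as the explanatory variable in the regressions relating tone to stock market performance.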
Findings
The empirical results show that tone has a negative and significant influence on stock market performance. This indicates that either analysts are not providing value-relevant and incremental information that could reduce stock market volatility during the pandemic, or investors are unable to recognize the optimism of the information.
Practical implications
This study provides interesting insights into retail investors' stock purchasing behavior during the crisis period. It also lays out crucial managerial implications that preparers can follow when preparing corporate disclosures.
Originality/value
Amid concern about the pandemic and its impact on the stock market, this study sheds light on investors' preferences during the crisis period. It uniquely focuses on analyst reports and investor preferences, which have not been widely studied. To the best of the authors' knowledge, this is the first study in the Indian context that aims to understand retail investors' investment preferences during a pandemic.
Mahmoud ELsayed and Amr Soliman
Abstract
Purpose
The purpose of this study is to estimate the linear regression parameters using two alternative techniques: the first applies the generalized linear model (GLM) and the second is the Markov chain Monte Carlo (MCMC) method.
Design/methodology/approach
In this paper, the authors adopt the incurred claims of the Egyptian non-life insurance market as the dependent variable over a 10-year period. MCMC uses Gibbs sampling to generate a sample from the posterior distribution of a linear regression, from which the parameters of interest are estimated. The authors use R to estimate the parameters of the linear regression with both techniques.
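A minimal sketch of the approach described here, Gibbs sampling the posterior of a linear regression, assuming flat priors on the coefficients and an improper 1/s2 prior on the variance; the data below are simulated, and this is not the authors' R implementation or claims data.

```python
import random

def gibbs_linreg(x, y, n_iter=3000, burn=500, seed=11):
    """Gibbs sampler for y = b0 + b1*x + e, e ~ N(0, s2).

    Cycles through the full conditionals of b1, b0 and s2; the variance
    is drawn from its inverse-gamma conditional via a gamma draw.
    """
    rng = random.Random(seed)
    n = len(x)
    b0, b1, s2 = 0.0, 0.0, 1.0
    sxx2 = sum(xi * xi for xi in x)
    samples = []
    for it in range(n_iter):
        # b1 | b0, s2 : normal, centered on the conditional LS estimate
        num = sum(xi * (yi - b0) for xi, yi in zip(x, y))
        b1 = rng.gauss(num / sxx2, (s2 / sxx2) ** 0.5)
        # b0 | b1, s2 : normal around the mean residual
        resid_mean = sum(yi - b1 * xi for xi, yi in zip(x, y)) / n
        b0 = rng.gauss(resid_mean, (s2 / n) ** 0.5)
        # s2 | b0, b1 : inverse-gamma(n/2, sse/2)
        sse = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
        s2 = 1.0 / rng.gammavariate(n / 2.0, 2.0 / sse)
        if it >= burn:
            samples.append((b0, b1, s2))
    return samples

# Illustrative data: y = 1 + 2x + noise
data_rng = random.Random(0)
xs = [i / 10 for i in range(50)]
ys = [1.0 + 2.0 * xi + data_rng.gauss(0, 0.3) for xi in xs]
draws = gibbs_linreg(xs, ys)
b1_hat = sum(d[1] for d in draws) / len(draws)
```

The retained draws approximate the joint posterior of the coefficients and the noise variance, so posterior means, intervals and predictive draws for future claims all come from the same sample.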
Findings
These procedures will guide the decision-maker in estimating the reserve and setting a proper investment strategy.
Originality/value
In this paper, the authors estimate the parameters of a linear regression model using the MCMC method via R. MCMC uses Gibbs sampling to generate a sample from the posterior distribution of the linear regression, from which parameters are estimated to predict future claims. These procedures will guide the decision-maker in estimating the reserve and setting a proper investment strategy.
Xiaoling Li and Shuang shuang Liu
Abstract
Purpose
The working environment of large-scale power grid monitoring equipment is increasingly complex, and the probability of faults or failures in the monitoring system is gradually increasing. This paper proposes a fault classification algorithm based on the Gaussian mixture model (GMM), which can automatically classify faults and eliminate fault sources in the monitoring system.
Design/methodology/approach
The algorithm first defines the GMM and obtains the detection value for fault classification through a method based on the causal Mason-Young-Tracy (MYT) decomposition under each normal distribution in the GMM. The GMM weights are then used to calculate a weighted classification value for fault detection and separation, and by comparing the actual control limits with the GMM classification result, the fault classification is obtained.
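The weighting step above can be sketched in one dimension. The per-component MYT-based detection value is replaced here by a simple squared standardized residual, and the mixture parameters and control limit are illustrative assumptions, so this shows only the shape of the responsibility-weighted statistic, not the paper's algorithm.

```python
import math

# Illustrative two-component 1-D GMM (weights, means, std devs)
WEIGHTS = [0.6, 0.4]
MEANS = [0.0, 5.0]
SDS = [1.0, 1.5]

def normal_pdf(x, mu, sd):
    z = (x - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def weighted_fault_statistic(x):
    """Responsibility-weighted detection value: sum over components of
    the posterior component probability times the squared standardized
    residual under that component."""
    dens = [w * normal_pdf(x, m, s) for w, m, s in zip(WEIGHTS, MEANS, SDS)]
    total = sum(dens)
    resp = [d / total for d in dens]
    stats = [((x - m) / s) ** 2 for m, s in zip(MEANS, SDS)]
    return sum(r * t for r, t in zip(resp, stats))

CONTROL_LIMIT = 9.0   # assumed ~3-sigma limit on the weighted statistic

def is_fault(x):
    return weighted_fault_statistic(x) > CONTROL_LIMIT
```

Observations near either operating mode produce a small weighted statistic, while observations far from both modes exceed the control limit and are flagged, which is the detection half of the detection-and-separation scheme described above.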
Findings
An experiment on the defined non-thermostatic continuous stirred-tank reactor model shows that the proposed algorithm is superior to the traditional algorithm based on the causal MYT decomposition in both fault detection and fault separation.
Originality/value
The proposed algorithm fundamentally solves the problem of fault detection and fault separation in large-scale systems and provides support for troubleshooting and identifying fault sources.