Search results

1–10 of over 109,000
Book part
Publication date: 30 December 2004

Ross R. Vickers

Abstract

Constructing and evaluating behavioral science models is a complex process. Decisions must be made about which variables to include, which variables are related to each other, the functional forms of the relationships, and so on. The last 10 years have seen a substantial extension of the range of statistical tools available for use in the construction process. The progress in tool development has been accompanied by the publication of handbooks that introduce the methods in general terms (Arminger et al., 1995; Tinsley & Brown, 2000a). Each chapter in these handbooks cites a wide range of books and articles on specific analysis topics.

Details

The Science and Simulation of Human Performance
Type: Book
ISBN: 978-1-84950-296-2

Open Access
Article
Publication date: 13 April 2022

Florian Schuberth, Manuel E. Rademaker and Jörg Henseler

Abstract

Purpose

This study aims to examine the role of an overall model fit assessment in the context of partial least squares path modeling (PLS-PM). In doing so, it explains when it is important to assess the overall model fit and provides ways of assessing the fit of composite models. Moreover, it resolves major concerns about model fit assessment that have been raised in the literature on PLS-PM.

Design/methodology/approach

This paper explains when and how to assess the fit of PLS path models. Furthermore, it discusses the concerns raised in the PLS-PM literature about the overall model fit assessment and provides concise guidelines on assessing the overall fit of composite models.

Findings

This study explains that the model fit assessment is as important for composite models as it is for common factor models. To assess the overall fit of composite models, researchers can use a statistical test and several fit indices known through structural equation modeling (SEM) with latent variables.

Research limitations/implications

Researchers who use PLS-PM to assess composite models that aim to understand the mechanism of an underlying population and draw statistical inferences should take the concept of the overall model fit seriously.

Practical implications

To facilitate the overall fit assessment of composite models, this study presents a two-step procedure adopted from the literature on SEM with latent variables.

Originality/value

This paper clarifies that the necessity to assess model fit is not a question of which estimator will be used (PLS-PM, maximum likelihood, etc.) but of the purpose of statistical modeling. Whereas model fit assessment is paramount in explanatory modeling, it is not imperative in predictive modeling.
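The fit indices referred to in the Findings can be illustrated with a small sketch. One index commonly reported for overall fit in SEM is the standardized root mean square residual (SRMR), the root mean square difference between the sample and model-implied correlation matrices; the matrices below are invented for illustration, not taken from the paper:

```python
def srmr(sample_corr, implied_corr):
    """Standardized root mean square residual over the lower triangle
    (including the diagonal) of two correlation matrices."""
    p = len(sample_corr)
    sq, count = 0.0, 0
    for i in range(p):
        for j in range(i + 1):
            d = sample_corr[i][j] - implied_corr[i][j]
            sq += d * d
            count += 1
    return (sq / count) ** 0.5

# Hypothetical 3x3 sample and model-implied correlation matrices
sample = [[1.0, 0.5, 0.4],
          [0.5, 1.0, 0.3],
          [0.4, 0.3, 1.0]]
implied = [[1.0, 0.45, 0.35],
           [0.45, 1.0, 0.25],
           [0.35, 0.25, 1.0]]
print(round(srmr(sample, implied), 4))
```

In the SEM literature the paper draws on, such indices are typically compared against bootstrap-based reference distributions rather than fixed cutoffs.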

Details

European Journal of Marketing, vol. 57 no. 6
Type: Research Article
ISSN: 0309-0566

Keywords

Article
Publication date: 12 January 2015

Ângelo Márcio Oliveira Sant'Anna

Abstract

Purpose

The purpose of this paper is to propose a decision-making framework to aid practitioners in modeling and optimizing experimental data to improve the quality of industrial processes, reinforcing the idea that planning and conducting data modeling are as important as the formal analysis.

Design/methodology/approach

The paper presents an application concerning the modeling of experimental data at a mining company, carried out with the support of the Catholic University through partnership projects. The literature seems to be more focused on data analysis than on providing a sequence of operational steps or decision support that would lead to the best regression model for the problem the researcher is confronted with. The authors use the statistical regression technique known as generalized linear models.
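For readers unfamiliar with generalized linear models, a minimal sketch of how one is fitted: iteratively reweighted least squares (IRLS) for a Poisson model with a log link, on made-up count data (the variables are illustrative, not the paper's mining data):

```python
import math

# Hypothetical data: predictor x (e.g. a process setting) vs. counts y
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 3, 4, 6, 9, 13, 20, 30]
n = len(x)

# Start from an ordinary least-squares fit of log(y) on x
mx = sum(x) / n
ml = sum(math.log(v) for v in y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxl = sum((xi - mx) * (math.log(yi) - ml) for xi, yi in zip(x, y))
b1 = sxl / sxx
b0 = ml - b1 * mx

# IRLS for the Poisson log link: repeat weighted least squares on a
# working response until the coefficients settle
for _ in range(25):
    eta = [b0 + b1 * xi for xi in x]
    mu = [math.exp(e) for e in eta]
    w = mu                                                      # weights
    z = [e + (yi - mi) / mi for e, yi, mi in zip(eta, y, mu)]   # working response
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swz = sum(wi * zi for wi, zi in zip(w, z))
    swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
    det = sw * swxx - swx * swx
    b0 = (swxx * swz - swx * swxz) / det
    b1 = (sw * swxz - swx * swz) / det

print(round(b0, 3), round(b1, 3))
```

The same loop structure extends to other exponential-family distributions by swapping the link function, weights and working response.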

Findings

The authors analyze a relevant case study at a mining company, based on the best statistical regression models. The results of the industrial case study illustrate the strong relationship between the improvement process and the presented framework when put into practice. Moreover, the case study consolidates a fundamental advantage of regression models: guided modeling provides more knowledge about products, processes and technologies, even in unsuccessful case studies.

Research limitations/implications

The study's advances in regression modeling are applicable to several types of industrial processes and random phenomena. Unsuccessful data modeling may still occur owing to a lack of knowledge of the statistical technique.

Originality/value

An essential point is that the study is based on feedback from practitioners and industrial managers, which frames the analyses and conclusions from a practical point of view, without requiring deep theoretical knowledge of the relationships among the process variables. Each regression model has its own characteristics related to the response variable and factors, and misspecification of the regression model or its components can yield inappropriate inferences and erroneous experimental results.

Article
Publication date: 22 July 2021

Mehdi Khashei and Fatemeh Chahkoutahi

Abstract

Purpose

The purpose of this paper is to propose a comprehensive intelligent hybrid model for short-term electricity load forecasting that can simultaneously model the complicated seasonal, nonlinear and uncertain patterns in the data. For this purpose, a fuzzy seasonal version of the multilayer perceptron (MLP) is developed.

Design/methodology/approach

In this paper, an extended fuzzy seasonal version of the classic MLP is proposed using basic concepts of seasonal modeling and fuzzy logic. The fundamental goal of the proposed model is to improve the modeling comprehensiveness of the traditional MLP so that it can simultaneously model seasonal and fuzzy patterns and structures, in addition to the regular nonseasonal and crisp ones.

Findings

Finally, the effectiveness and predictive capability of the proposed model are examined and compared with its components and some other models. Empirical results of electricity load forecasting indicate that the proposed model achieves more accurate and lower-risk forecasts than the classic MLP and several other fuzzy/nonfuzzy, seasonal/nonseasonal and statistical/intelligent models.

Originality/value

One of the most appropriate modeling tools and widely used techniques for electricity load forecasting is the artificial neural network (ANN). The popularity of such models comes from their unique advantages, such as nonlinearity, universality, generality, self-adaptivity and so on. However, despite all the benefits of these methods, owing to the specific features of electricity markets and the different patterns and structures that coexist in electrical data sets, they are insufficient on their own to achieve the desired forecasts. The major weaknesses of ANNs in achieving more accurate, low-risk results concern seasonality and uncertainty. In this paper, the ability to model seasonal and uncertain patterns is added to the traditional MLP's unique capability for modeling complex nonlinear patterns.
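As one common way to expose seasonal structure to a neural-network forecaster (a generic feature-engineering trick, not necessarily the authors' fuzzy-seasonal formulation), the load series can be turned into training rows of recent lags plus sine/cosine terms encoding the position in the seasonal cycle:

```python
import math

# Invented hourly-load-style series with a period-24 seasonal pattern
series = [100 + 20 * math.sin(2 * math.pi * t / 24) + 0.1 * t for t in range(200)]

def make_features(series, lags=3, period=24):
    """Turn a series into (features, target) pairs: the most recent
    `lags` values plus sine/cosine terms for the seasonal position."""
    rows = []
    for t in range(lags, len(series)):
        lagged = series[t - lags:t]
        angle = 2 * math.pi * (t % period) / period
        rows.append((lagged + [math.sin(angle), math.cos(angle)], series[t]))
    return rows

rows = make_features(series)
print(len(rows), len(rows[0][0]))
```

Any regression model, an MLP included, can then be trained on these rows; the seasonal terms let it separate the daily cycle from the trend.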

Article
Publication date: 1 March 1993

George T. Duncan

Abstract

Future developments in methodology have the potential to improve management research and better couple it to management practice. These developments are on six fronts: (1) computer technology, (2) data capture and experimentation, (3) privacy, confidentiality, and data access, (4) causation, (5) modeling and simulation, and (6) Bayesian statistics. The potential of each is explored, and problems, both technical and administrative, in fulfilling this potential are identified. On the computer and communications front, the key elements are the use of relational database management systems, increased computing power for analysis purposes, and computer networking. On the privacy, confidentiality, and data access front, the key elements are new capabilities for data capture through real‐time surveillance, inferential disclosure threats in computer databases, the demand for more access to detailed data, and public concerns for privacy invasion. Management research is a search for causal mechanisms that can be investigated through empirical studies and that facilitate control of complex processes. In the modeling area, there will be (1) greater use of computing power, (2) less use of model‐independent statistical hypothesis testing, and (3) easier to use computer software for modeling and simulation. The Bayesian perspective of consistently expressing uncertainty through probability distributions will become more widely used in management research.

Details

The International Journal of Organizational Analysis, vol. 1 no. 3
Type: Research Article
ISSN: 1055-3185

Book part
Publication date: 30 August 2019

Percy K. Mistry and Michael D. Lee

Abstract

Jeliazkov and Poirier (2008) analyze the daily incidence of violence during the Second Intifada in a statistical way using an analytical Bayesian implementation of a second-order discrete Markov process. We tackle the same data and modeling problem from our perspective as cognitive scientists. First, we propose a psychological model of violence, based on a latent psychological construct we call “build up” that controls the retaliatory and repetitive violent behavior by both sides in the conflict. Build up is based on a social memory of recent violence and generates the probability and intensity of current violence. Our psychological model is implemented as a generative probabilistic graphical model, which allows for fully Bayesian inference using computational methods. We show that our model is both descriptively adequate, based on posterior predictive checks, and has good predictive performance. We then present a series of results that show how inferences based on the model can provide insight into the nature of the conflict. These inferences consider the base rates of violence in different periods of the Second Intifada, the nature of the social memory for recent violence, and the way repetitive versus retaliatory violent behavior affects each side in the conflict. Finally, we discuss possible extensions of our model and draw conclusions about the potential theoretical and methodological advantages of treating societal conflict as a cognitive modeling problem.
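The "build up" construct described above lends itself to a toy generative simulation: an exponentially decaying memory of recent violence drives the probability of violence on the current day. The parameter values below are invented for illustration and are not the authors' estimates:

```python
import random

random.seed(1)

# Invented parameters: memory decay rate, baseline violence probability,
# and sensitivity of today's probability to the accumulated build up
decay, base, gain = 0.8, 0.05, 0.15
days = 1000

build_up = 0.0
incidents = []
for _ in range(days):
    # today's violence probability grows with the accumulated build up
    p = min(1.0, base + gain * build_up)
    violent = 1 if random.random() < p else 0
    incidents.append(violent)
    # social memory: decay yesterday's build up, then add today's violence
    build_up = decay * build_up + violent

print(sum(incidents), "violent days out of", days)
```

Even this caricature reproduces the bursty, retaliatory-looking clustering that motivates the latent-memory formulation; the paper's actual model is a fully Bayesian probabilistic graphical model over both probability and intensity.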

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Keywords

Article
Publication date: 25 January 2013

Zhang Ke

Abstract

Purpose

The purpose of this paper is to establish a random simulation method to compare the forecasting performance between grey prediction models, and between grey models and other kinds of prediction models. The differing performance of three grey models and a linear regression prediction model is then studied based on the proposed method.

Design/methodology/approach

A random simulation method, inspired by the Monte Carlo method, was proposed to test the modeling accuracy of grey prediction models. It regards a class of sequences as the population and selects a large sample from it through random sampling. The sample sequences are then modeled by the grey prediction model, and the average modeling error for the sample is obtained through error calculation. Finally, the grey model's accuracy for this kind of problem is established by statistical inference. Through significance testing, the modeling accuracy of grey models for the same problem can be compared, as can the accuracy differences between grey prediction models and regression analysis, support vector machines, neural networks and other forecasting methods.

Findings

Through random simulation experiments, the following conclusions were obtained. First, the grey model can be applied to long sequences whose growth rate is less than 20 per cent and to short sequences whose growth rate is less than 50 per cent. Second, GM(1,1) cannot be applied to a long sequence with high growth. Third, growth rate is a more important factor than sequence length for the modeling accuracy of GM(1,1). Fourth, when the sequence is short, the accuracy of the GM(1,1) model is higher than that of linear regression, while for sequences longer than 15 with growth rates in [0, 10 per cent] the two kinds of modeling error are not significantly different.
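For reference, the GM(1,1) model evaluated above follows a standard recipe: form the first-order accumulated sequence (1-AGO), estimate the development coefficient a and grey input b by least squares over mean background values, then difference the time-response function to forecast. A minimal sketch on an invented 10 per cent growth sequence (not data from the paper):

```python
import math

def gm11(x0, steps=1):
    """Fit a GM(1,1) grey model to sequence x0 and forecast `steps` ahead."""
    n = len(x0)
    # 1-AGO: first-order accumulated generating operation
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # mean background values z and least-squares fit of x0(k) = -a*z(k) + b
    z = [0.5 * (x1[k - 1] + x1[k]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    sy = sum(y)
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # time-response function of the whitening equation, then inverse AGO
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    forecast = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
    return a, forecast

# Invented sequence with 10 per cent growth per period
x0 = [100.0, 110.0, 121.0, 133.1, 146.41]
a, forecast = gm11(x0, steps=1)
print(round(forecast[-1], 2))  # one-step-ahead forecast
```

On this low-growth sequence the one-step forecast lands close to the true continuation (about 161), consistent with the Findings that GM(1,1) suits sequences with modest growth rates.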

Practical implications

The method proposed in the paper can be used to compare the performance of different prediction models and to select an appropriate model for a given prediction problem.

Originality/value

The paper succeeds in establishing an accuracy test method for grey models and other prediction models. It will standardize grey modelling and contribute to the application of grey models.

Details

Grey Systems: Theory and Application, vol. 3 no. 1
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 20 April 2015

Bo Zhao

Abstract

Purpose

The purpose of this paper is to establish three modeling methods (a physical model, a statistical model and an artificial neural network (ANN) model) and use them to predict the fiber diameter of spunbonding nonwovens from the process parameters.

Design/methodology/approach

The physical model is based on inherent physical principles; it can yield reasonably good prediction results and provide insight into the relationship between process parameters and fiber diameter.

Findings

By analyzing the results of the physical model, the effects of process parameters on fiber diameter can be predicted. The ANN model has good approximation capability and a fast convergence rate; it can provide quantitative predictions of fiber diameter and yields more accurate and stable predictions than the statistical model.

Originality/value

The effects of process parameters on fiber diameter are also determined by the ANN model. Excellent agreement is obtained between these two modeling methods.

Details

International Journal of Clothing Science and Technology, vol. 27 no. 2
Type: Research Article
ISSN: 0955-6222

Keywords

Content available
Book part
Publication date: 27 December 2016

Abstract

Details

Bad to Good
Type: Book
ISBN: 978-1-78635-333-7

Article
Publication date: 25 July 2019

Yinhua Liu, Rui Sun and Sun Jin

Abstract

Purpose

Driven by the development in sensing techniques and information and communications technology, and their applications in the manufacturing system, data-driven quality control methods play an essential role in the quality improvement of assembly products. This paper aims to review the development of data-driven modeling methods for process monitoring and fault diagnosis in multi-station assembly systems. Furthermore, the authors discuss the applications of the methods proposed and present suggestions for future studies in data mining for quality control in product assembly.

Design/methodology/approach

This paper provides an outline of data-driven process monitoring and fault diagnosis methods for reduction in variation. The development of statistical process monitoring techniques and diagnosis methods, such as pattern matching, estimation-based analysis and artificial intelligence-based diagnostics, is introduced.

Findings

A classification structure for data-driven process control techniques and the limitations of their applications in multi-station assembly processes are discussed. From the perspective of the engineering requirements of real, dynamic, nonlinear and uncertain assembly systems, future trends in sensing system location, data mining and data fusion techniques for variation reduction are suggested.

Originality/value

This paper reveals the development of process monitoring and fault diagnosis techniques, and their applications in variation reduction in multi-station assembly.

Details

Assembly Automation, vol. 39 no. 4
Type: Research Article
ISSN: 0144-5154

Keywords
