Search results

1 – 10 of 272
Article
Publication date: 3 July 2023

James L. Sullivan, David Novak, Eric Hernandez and Nick Van Den Berg

Abstract

Purpose

This paper introduces a novel quality measure, the percent-within-distribution, or PWD, for acceptance and payment in a quality control/quality assurance (QC/QA) performance specification (PS).

Design/methodology/approach

The new quality measure takes any sample size or distribution and uses a Bayesian updating process to re-estimate parameters of a design distribution as sample observations are fed through the algorithm. This methodology can be employed in a wide range of applications, but the authors demonstrate the use of the measure for a QC/QA PS with upper and lower bounds on 28-day compressive strength of in-place concrete for bridge decks.
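
For readers who want the mechanics, below is a minimal Python sketch of a PWD-style calculation under strong simplifying assumptions of my own (a normal design distribution with known process standard deviation and a conjugate normal prior on its mean; the paper's measure accepts any distribution). Function names, spec limits and numbers are illustrative, not the authors' algorithm.

```python
# Illustrative sketch of a percent-within-distribution (PWD) style
# calculation: Bayesian updating of a normal design distribution as
# sample observations arrive, then the probability mass between the
# lower and upper spec limits. An assumption-laden toy, not the
# authors' algorithm.
import numpy as np
from scipy.stats import norm

def update_posterior(mu0, tau0, sigma, observations):
    """Conjugate normal-normal update of the design-distribution mean.

    mu0, tau0 : prior mean and std dev of the design-distribution mean
    sigma     : assumed known process std dev
    """
    n = len(observations)
    xbar = np.mean(observations)
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    post_mu = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)
    return post_mu, np.sqrt(post_var)

def pwd(post_mu, sigma, lower, upper):
    """Probability mass of the updated distribution within spec limits."""
    return norm.cdf(upper, post_mu, sigma) - norm.cdf(lower, post_mu, sigma)

# Example: 28-day compressive strength (MPa) with two-sided limits
samples = [31.2, 29.8, 33.1, 30.5]
mu, tau = update_posterior(mu0=30.0, tau0=2.0, sigma=2.5, observations=samples)
print(f"PWD = {pwd(mu, 2.5, lower=25.0, upper=38.0):.3f}")
```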

Findings

The authors demonstrate the use of this new quality measure to illustrate how it addresses the shortcomings of the percent-within-limits (PWL), which is the current industry standard quality measure. The authors then use the PWD to develop initial pay factors through simulation regimes. The PWD is shown to function better than the PWL with realistic sample lots simulated to represent a variety of industry responses to a new QC/QA PS.

Originality/value

The analytical contribution of this work is the introduction of the new quality measure. However, the practical and managerial contributions of this work are of equal significance.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 2
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 15 December 2023

Muhammad Arif Mahmood, Diana Chioibasu, Uzair Sajjad, Sabin Mihai, Ion Tiseanu and Andrei C. Popescu

Abstract

Purpose

Porosity is a commonly analyzed defect in laser-based additive manufacturing processes owing to the enormous thermal gradients caused by repeated melting and solidification. Currently, porosity estimation is largely limited to powder bed fusion; it remains to be explored for the laser melting deposition (LMD) process, particularly through analytical models, which offer cost- and time-effective solutions compared with finite element analysis. To this end, this study aims to formulate two mathematical models: one for the deposited layer dimensions and one for the corresponding porosity in the LMD process.

Design/methodology/approach

In this study, two analytical models are proposed. First, the deposited layer dimensions, including layer height, width and depth, were calculated from the operating parameters. These outputs were fed into the second model to estimate the part porosity. The models were validated with experimental data for Ti6Al4V depositions on a Ti6Al4V substrate. A calibration curve (CC) was also developed for the Ti6Al4V material and characterized using X-ray computed tomography. The models were further validated against experimental results adopted from the literature. The validated models were then linked with a deep neural network (DNN) for training and testing, using a total of 6,703 computations with 1,500 iterations. Here, laser power, laser scanning speed and powder feeding rate were selected as inputs, whereas porosity was set as the output.
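
As a rough illustration of the final step only (process parameters in, porosity out), here is a toy feed-forward regressor. The synthetic data, network size and the scikit-learn stand-in are my assumptions; the paper trains its own DNN on 6,703 computations from the validated analytical models.

```python
# Toy sketch of the parameters -> porosity regression described above,
# using a small feed-forward network. The synthetic response below is a
# placeholder, not the paper's analytical model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: laser power (W), scan speed (mm/s), powder feed rate (g/min)
X = rng.uniform([200, 2, 1], [1000, 20, 10], size=(500, 3))
# Placeholder porosity response (%) -- purely illustrative
y = 0.02 * X[:, 1] - 0.001 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 500)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1500, random_state=0)
model.fit(X, y)
print(model.predict([[600, 10, 5]]))  # predicted porosity for one setting
```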

Findings

The computations indicate that, owing to the simultaneous feeding of powder particulates, the particles absorb a substantial percentage of the laser beam energy for their melting, attenuating the beam and reducing the thermal energy reaching the substrate. The primary operating parameters are directly correlated with the number of layers and the total height in the CC. X-ray computed tomography analyses showed that the number of layers correlates directly with mean sphericity and inversely with the number, mean volume and mean diameter of pores. The DNN and analytical models showed 2%–3% and 7%–9% mean absolute deviations, respectively, compared with the experimental results.

Originality/value

This research provides a unique solution for LMD porosity estimation by linking the developed analytical computational models with artificial neural networking. The presented framework predicts the porosity in the LMD-ed parts efficiently.

Article
Publication date: 8 January 2024

Alexander Cardazzi, Brad R. Humphreys and Kole Reddig

Abstract

Purpose

Professional sports teams employ highly paid managers and coaches to train players and make tactical and strategic team decisions. A large literature analyzes the impact of manager decisions on team outcomes. Empirical analysis of manager decisions requires a quantifiable proxy variable for manager decisions. Previous research focused on manager dismissals, tenure on teams, the number of substitutions made in games or the number of healthy players on rosters held out of games for rest, generally finding small positive impacts of manager decisions on team success.

Design/methodology/approach

The authors quantify manager decisions by developing a novel measure of game-specific coaching decisions: the Herfindahl–Hirschman Index (HHI) of playing-time across players on a team roster over the course of a season.
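
Concretely, the HHI here is the sum of squared shares of total minutes across a roster. A minimal sketch with made-up minutes (the data are illustrative, not from the paper):

```python
# Minimal sketch of the playing-time HHI: squared shares of total
# minutes across a roster. Minutes below are made up for illustration.
def playing_time_hhi(minutes):
    total = sum(minutes)
    return sum((m / total) ** 2 for m in minutes)

# Flat rotation vs. a star-heavy rotation (10-player rosters)
flat = [24] * 10
concentrated = [40, 38, 36, 30, 26, 20, 15, 10, 3, 2]
print(playing_time_hhi(flat))          # 0.100 (minimum for 10 players)
print(playing_time_hhi(concentrated))  # higher -> more concentrated minutes
```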

Findings

Evidence from two-way fixed effects regression models explaining observed variation in National Basketball Association team winning percentage over the 1999–2000 to 2018–2019 seasons shows a significant association between managers' allocation of playing time and team success. A one standard deviation change in playing-time HHI that reflects a flattened distribution of player talent is associated with between one and two additional wins per season, holding the talent of players on the team roster constant. Heterogeneity exists in the impact across teams with different player talent.

Originality/value

This is one of the first papers to examine playing-time concentration in the NBA. The results are important for understanding how managerial decisions about resource allocation lead to sustained competitive advantage. Linking coaching decisions to wins can help teams to better promote this core product.

Details

International Journal of Sports Marketing and Sponsorship, vol. 25 no. 2
Type: Research Article
ISSN: 1464-6668

Book part
Publication date: 5 April 2024

Taining Wang and Daniel J. Henderson

Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation to prevent induced non-negligible estimation bias. Second, the model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both the stochastic noise and the inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, which depends, via a positive parametric function, on a wide range of candidate determinants. As a result, technical efficiency is constructed without relying on an assumed distribution on the composite error. The model provides flexible structures on both the production frontier and inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves a series-based nonlinear least squares estimation for the unknown parameters and a kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries from 1970 to 2019.
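
For orientation, a hedged sketch of the model structure being described, in notation of my own choosing (the chapter's exact specification may differ):

```latex
% Hedged sketch only: the frontier stays in levels (no log transform),
% technology is an unknown single-index function g of environment
% variables Z, and only the conditional mean of the inefficiency u is
% parameterized via a positive function h of determinants W.
\[
  Y_{it} = g\!\left(Z_{it}'\gamma\right)
           \left[\delta K_{it}^{\rho} + (1-\delta) L_{it}^{\rho}\right]^{1/\rho}
           + v_{it} - u_{it},
  \qquad
  \mathbb{E}\!\left[u_{it} \mid W_{it}\right] = h\!\left(W_{it}'\beta\right) > 0 .
\]
```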

Open Access
Article
Publication date: 29 January 2024

Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin

Abstract

Purpose

Surface roughness has a serious impact on the fatigue strength, wear resistance and service life of mechanical products. Capturing the evolution of surface quality through theoretical modeling requires considerable effort. To predict the surface roughness of milling processing, this paper aims to construct a neural network based on deep learning and data augmentation.

Design/methodology/approach

This study proposes a method consisting of three steps. First, a machine tool multisource data acquisition platform is established, combining sensor monitoring with machine tool communication to collect processing signals. Second, feature parameters are extracted to reduce interference and improve the model's generalization ability. Third, for different prediction objectives, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.
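
As a shape-of-the-pipeline sketch only: the class rebalancing below uses imbalanced-learn's ADASYN (named in the Findings), while an ordinary MLP stands in for the Tent-SSA-tuned DBN, which has no off-the-shelf scikit-learn implementation. All data are synthetic.

```python
# Sketch of the class-imbalance step plus a stand-in classifier. The
# paper uses a deep belief network tuned by Tent-SSA; an MLP stands in
# here purely to show the pipeline shape.
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))            # extracted signal features
y = (rng.random(300) < 0.2).astype(int)  # imbalanced roughness classes

X_res, y_res = ADASYN(random_state=1).fit_resample(X, y)  # rebalance minority
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=800,
                    random_state=1).fit(X_res, y_res)
print(clf.score(X_res, y_res))
```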

Findings

The adaptive synthetic sampling (ADASYN) algorithm can improve the classification prediction accuracy of DBN from 80.67% to 94.23%. After the DBN parameters were optimized by Tent-SSA, the roughness prediction accuracy was significantly improved. For the classification model, the prediction accuracy is improved by 5.77% based on ADASYN optimization. For regression models, different objective functions can be set according to production requirements, such as root-mean-square error (RMSE) or MaxAE, and the error is reduced by more than 40% compared to the original model.

Originality/value

A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on acquiring environmental variables and enhances the model's applicability. Furthermore, alongside the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve its optimization performance.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2633-6596

Book part
Publication date: 5 April 2024

Alecos Papadopoulos

Abstract

The author develops a bilateral Nash bargaining model under value uncertainty and private/asymmetric information, combining ideas from axiomatic and strategic bargaining theory. The solution to the model leads organically to a two-tier stochastic frontier (2TSF) setup with intra-error dependence. The author presents two different statistical specifications to estimate the model: one that accounts for regressor endogeneity using copulas, the other able to separately identify the bargaining-power and private-information effects at the individual level. An empirical application using a matched employer–employee data set (MEEDS) from Zambia and a second using one from Ghana showcase the applied potential of the approach.
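
For readers unfamiliar with the 2TSF form, a hedged sketch of a generic specification of the kind described (notation mine, not necessarily the chapter's):

```latex
% Generic two-tier stochastic frontier (2TSF) sketch: v is two-sided
% noise, while the one-sided terms w >= 0 and u >= 0 capture the two
% parties' opposing information/bargaining advantages. "Intra-error
% dependence" means these components may be correlated rather than
% independent, as is usually assumed.
\[
  y_i = x_i'\beta + \varepsilon_i,
  \qquad
  \varepsilon_i = v_i + w_i - u_i,
  \qquad w_i \ge 0,\; u_i \ge 0 .
\]
```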

Article
Publication date: 16 February 2024

Mengyang Gao, Jun Wang and Ou Liu

Abstract

Purpose

Given the critical role of user-generated content (UGC) in e-commerce, exploring various aspects of UGC can aid in understanding user purchase intention and commodity recommendation. Therefore, this study investigates the impact of UGC on purchase decisions and proposes new recommendation models based on sentiment analysis, which are verified in Douban, one of the most popular UGC websites in China.

Design/methodology/approach

After verifying the relationship between various factors and product sales, this study proposes two models, a sentiment-based collaborative filtering model (SCF) and a sentiment-based hidden factors topics model (SHFT), by combining the traditional collaborative filtering (CF) and hidden factors topics (HFT) models with sentiment analysis.
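
The paper's SCF and SHFT models are not spelled out in this abstract, so the following is only a toy illustration of the general idea of folding review sentiment into a collaborative filtering prediction; the blending weight, data and prediction rule are all my own assumptions.

```python
# Toy sketch in the spirit of a sentiment-based CF model: ratings are
# blended with a [-1, 1] review-sentiment score, then a standard
# user-based CF prediction runs on the adjusted matrix.
import numpy as np

ratings = np.array([[5, 3, 0, 1],      # 0 = unrated
                    [4, 0, 0, 1],
                    [1, 1, 0, 5]], dtype=float)
sentiment = np.array([[ 0.9,  0.1, 0.0, -0.8],
                      [ 0.7,  0.0, 0.0, -0.6],
                      [-0.9, -0.5, 0.0,  0.9]])

alpha = 0.5                             # weight on the sentiment signal
adjusted = np.where(ratings > 0, ratings + alpha * sentiment, 0.0)

def predict(u, i, R):
    """User-based CF: similarity-weighted mean of other users' ratings."""
    raters = [v for v in range(R.shape[0]) if v != u and R[v, i] > 0]
    sims = []
    for v in raters:
        mask = (R[u] > 0) & (R[v] > 0)  # co-rated items only
        num = R[u, mask] @ R[v, mask]
        den = np.linalg.norm(R[u, mask]) * np.linalg.norm(R[v, mask])
        sims.append(num / den if den else 0.0)
    w = np.array(sims)
    return (w @ R[raters, i]) / w.sum() if w.sum() else 0.0

print(predict(1, 1, adjusted))  # predicted (sentiment-adjusted) rating
```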

Findings

The results indicate that sentiment significantly influences purchase intention. Furthermore, the proposed sentiment-based recommendation models outperform traditional CF and HFT in terms of mean absolute error (MAE) and root mean square error (RMSE). Moreover, the two models yield different outcomes for various product categories, providing actionable insights for organizers to implement more precise recommendation strategies.

Practical implications

The findings of this study advocate incorporating UGC sentiment factors into websites to heighten recommendation accuracy. Additionally, different recommendation strategies can be employed for different product types.

Originality/value

This study introduces a novel perspective to the recommendation algorithm field. It not only validates the impact of UGC sentiment on purchase intention but also evaluates the proposed models with real-world data. The study provides valuable insights for managerial decision-making aimed at enhancing recommendation systems.

Details

Industrial Management & Data Systems, vol. 124 no. 4
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 September 2023

Shaghayegh Abolmakarem, Farshid Abdi, Kaveh Khalili-Damghani and Hosein Didehkhani

Abstract

Purpose

This paper aims to propose an improved portfolio optimization model that predicts the future behavior of stock returns using a combined wavelet-based long short-term memory (LSTM) network.

Design/methodology/approach

First, data are gathered and divided into two parts, namely, "past data" and "real data." In the second stage, the wavelet transform is applied to decompose the stock closing price time series into a set of coefficients. The derived coefficients are taken as input to the LSTM model to predict the stock closing price time series, creating the "future data." In the third stage, the mean-variance portfolio optimization problem (MVPOP) is run iteratively using the "past," "future" and "real" data sets. The epsilon-constraint method is adapted to generate the Pareto front for all three runs of the MVPOP.
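
A minimal sketch of the second-stage idea (wavelet decomposition feeding an LSTM forecaster), assuming PyWavelets and PyTorch; the wavelet, decomposition level and network sizes are illustrative choices of mine, and the model is left untrained.

```python
# Sketch: decompose a price series with PyWavelets, then feed the
# coefficients to a small LSTM one-step forecaster.
import numpy as np
import pywt
import torch
import torch.nn as nn

prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 256)) + 100.0
coeffs = pywt.wavedec(prices, "db4", level=3)   # [cA3, cD3, cD2, cD1]
features = np.concatenate(coeffs)               # flattened coefficient vector

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # predict next closing price

x = torch.tensor(features, dtype=torch.float32).view(1, -1, 1)
model = Forecaster()
print(model(x).item())                           # untrained one-step forecast
```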

Findings

The real daily closing price time series of six stocks from the FTSE 100 between January 1, 2000, and December 30, 2020, are used to check the applicability and efficacy of the proposed approach. Comparison of the "future," "past" and "real" Pareto fronts showed that the "future" Pareto front is closer to the "real" Pareto front, demonstrating the efficacy and applicability of the proposed approach.

Originality/value

Most classic Markowitz-based portfolio optimization models use past information to estimate the associated parameters of the stocks. This study revealed that predicting the future behavior of stock returns with a combined wavelet-based LSTM improved the performance of the portfolio.

Details

Journal of Modelling in Management, vol. 19 no. 2
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 29 March 2024

Pratheek Suresh and Balaji Chakravarthy

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using long short-term memory (LSTM) networks based on a recursive encoder-decoder architecture.

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performance of the recursive long short-term memory encoder-decoder (R-LSTM-ED), recursive convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches is compared using mean absolute error, root mean square error, mean absolute percentage error and coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within the training data on the performance of the model is investigated.
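
The four comparison metrics named above are standard; for reference, a small snippet computing them with scikit-learn on placeholder arrays (the paper's data come from its experimental rig, not from this snippet).

```python
# MAE, RMSE, MAPE and R2 on placeholder temperature readings.
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

y_true = np.array([45.2, 46.1, 47.3, 48.0, 48.6])  # heater temperature (deg C)
y_pred = np.array([45.0, 46.4, 47.1, 48.3, 48.4])

print("MAE :", mean_absolute_error(y_true, y_pred))
print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))
print("MAPE:", mean_absolute_percentage_error(y_true, y_pred))
print("R2  :", r2_score(y_true, y_pred))
```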

Findings

The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and the R-CNN-LSTM model by 4%, 11% and 12.3%, over forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period based on the study is found to be 2 s and the window size 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate recursive prediction models are trained and tested using real data centre workload traces applied to the immersion-cooled system developed in the laboratory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Details

Understanding Intercultural Interaction: An Analysis of Key Concepts, 2nd Edition
Type: Book
ISBN: 978-1-83753-438-8
