Search results
1 – 10 of 424
Janak Suthar, Jinil Persis and Ruchita Gupta
Abstract
Purpose
Foundries produce cast metal components and parts for various industries and drive manufacturing excellence all over the world. Assuring the quality of these components and parts is vital for end-product quality. The complexity of foundry operations increases with the complexity of designs, patterns and geometry, and the quality parameters of the casting processes need to be monitored, evaluated and controlled to achieve expected quality levels.
Design/methodology/approach
The literature on quality improvement in the foundry industry focuses primarily on surface roughness, mechanical properties, dimensional accuracy and defects in cast parts and components, which are often affected by numerous process variables. Primary data are collected from experts working in sand and investment casting processes. The authors perform a machine learning analysis of the data to model the quality parameters with appropriate process variables. Further, a cluster analysis using the k-means method is performed to develop clusters of correlated process variables for the sand and investment casting processes.
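The k-means grouping step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the expert-rating matrix is synthetic, and the variable count, quality-parameter columns and cluster count are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical expert ratings: rows = 12 process variables,
# columns = 4 quality parameters (surface roughness, defects,
# mechanical properties, dimensional accuracy).
ratings = rng.uniform(1, 5, size=(12, 4))

# Standardise so no quality parameter dominates the distance metric
X = StandardScaler().fit_transform(ratings)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Process variables in the same cluster behave similarly across
# the quality parameters and can be monitored together.
for c in range(3):
    members = np.where(km.labels_ == c)[0]
    print(f"cluster {c}: process variables {members.tolist()}")
```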
Findings
The authors identified the primary process variables determining each quality parameter using a machine learning approach. The quality parameters of surface roughness, defects, mechanical properties and dimensional accuracy are represented by the identified sand-casting process variables with accuracies of up to 83%, 83%, 100% and 83%, and by the identified investment-casting process variables with accuracies of up to 100%, 67%, 67% and 100%, respectively. Moreover, the process variables are prioritized by their influence on the quality parameters, which further helps practitioners monitor and control them within acceptable levels. Further, the clusters of process variables help in analyzing their combined effect on the quality parameters of casting products.
Originality/value
This study identified potential process variables and collected data from experts, researchers and practitioners on their effect on the quality aspects of cast products. While most previous studies focus on a very limited set of process variables for enhancing the quality characteristics of cast parts and components, this study represents each quality parameter as a function of its influencing process variables, which will enable quality managers in Indian foundries to maintain the capability and stability of casting processes. The models developed for both sand and investment casting for each quality parameter are validated with real-life applications. Such studies are scarcely reported in the literature.
Qinxu Ding, Ding Ding, Yue Wang, Chong Guan and Bosheng Ding
Abstract
Purpose
The rapid rise of large language models (LLMs) has propelled them to the forefront of applications in natural language processing (NLP). This paper aims to present a comprehensive examination of the research landscape in LLMs, providing an overview of the prevailing themes and topics within this dynamic domain.
Design/methodology/approach
Drawing from an extensive corpus of 198 records published between 1996 and 2023 from the relevant academic database, encompassing journal articles, books, book chapters, conference papers and selected working papers, this study delves deep into the multifaceted world of LLM research. The authors employed the BERTopic algorithm, a recent advancement in topic modeling, to conduct a comprehensive analysis of the data after it had been meticulously cleaned and preprocessed. BERTopic leverages the power of transformer-based language models like bidirectional encoder representations from transformers (BERT) to generate more meaningful and coherent topics. This approach facilitates the identification of hidden patterns within the data, enabling the authors to uncover valuable insights that might otherwise have remained obscure.
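The cluster-the-corpus idea can be sketched in a dependency-light way. BERTopic itself pairs transformer embeddings with density-based clustering; as a rough stand-in, this sketch uses TF-IDF vectors and k-means on a toy corpus. The documents and the cluster count of four are illustrative assumptions, not the paper's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus echoing the four themes the paper reports
docs = [
    "transformer language models for text generation",
    "BERT embeddings improve NLP benchmarks",
    "LLM tutors support classroom teaching",
    "large language models in education and assessment",
    "clinical notes summarised by language models",
    "LLMs assist medical diagnosis from patient records",
    "speech recognition with end-to-end language models",
    "acoustic models and LLM rescoring for speech transcription",
]

# Vectorise, then group documents into topic clusters
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for lab, doc in zip(labels, docs):
    print(lab, doc)
```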
Findings
The analysis revealed four distinct clusters of topics in LLM research: “language and NLP”, “education and teaching”, “clinical and medical applications” and “speech and recognition techniques”. Each cluster embodies a unique aspect of LLM application and showcases the breadth of possibilities that LLM technology has to offer. In addition to presenting the research findings, this paper identifies key challenges and opportunities in the realm of LLMs. It underscores the necessity for further investigation in specific areas, including the paramount importance of addressing potential biases, transparency and explainability, data privacy and security, and responsible deployment of LLM technology.
Practical implications
This classification offers practical guidance for researchers, developers, educators, and policymakers to focus efforts and resources. The study underscores the importance of addressing challenges in LLMs, including potential biases, transparency, data privacy, and responsible deployment. Policymakers can utilize this information to shape regulations, while developers can tailor technology development based on the diverse applications identified. The findings also emphasize the need for interdisciplinary collaboration and highlight ethical considerations, providing a roadmap for navigating the complex landscape of LLM research and applications.
Originality/value
This study stands out as the first to examine the evolution of LLMs across such a long time frame and such diversified disciplines. It provides a unique perspective on the key areas of LLM research, highlighting the breadth and depth of LLMs' evolution.
Abstract
Machine learning is an algorithm-based auto-learning mechanism that improves from its experiences. It makes use of statistical learning methods that train and develop on their own without human assistance. Data, characteristics deduced from the data, and the model make up the three primary parts of a machine learning solution. Machine learning generates an algorithm from subsets of data that can utilise combinations of features and weights different from those obtained from basic principles. In this paper, customer behaviour is predicted using different machine learning algorithms, and the results of the algorithms are validated using Python programming.
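A minimal sketch of the compare-several-algorithms workflow in Python follows. The data are synthetic stand-ins for customer features (the abstract does not describe its dataset), and the two classifiers are illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for customer features (spend, visits, recency, ...)
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Fit each algorithm and validate on held-out customers
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(type(model).__name__, round(acc, 3))
```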
Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan
Abstract
Purpose
Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.
Design/methodology/approach
A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
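The first feature-extraction route, the DWT, can be sketched with a plain-numpy Haar decomposition that pulls out the minimum and maximum detail coefficients per level (the feature type the Findings highlight). The pressure signal here is synthetic, and the Haar wavelet is an illustrative choice; the paper does not state which wavelet it used.

```python
import numpy as np

def haar_dwt_features(signal, levels=4):
    """Haar DWT: (min, max) of the detail coefficients at each level."""
    a = np.asarray(signal, dtype=float)
    feats = {}
    for lvl in range(1, levels + 1):
        if len(a) < 2:
            break
        a = a[: len(a) // 2 * 2]              # drop an odd trailing sample
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        feats[lvl] = (detail.min(), detail.max())
        a = approx                            # recurse on the approximation
    return feats

rng = np.random.default_rng(0)
# Hypothetical dynamic pressure signal: oscillation plus noise
t = np.linspace(0, 1, 256)
signal = np.sin(40 * np.pi * t) + 0.3 * rng.standard_normal(256)
features = haar_dwt_features(signal, levels=4)
for lvl, (lo, hi) in features.items():
    print(f"level {lvl}: min={lo:.3f}, max={hi:.3f}")
```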
Findings
The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy, at 95.2% and 88.8%, respectively. The results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, the SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of the DWT are the best flow-distinguishing features.
Practical implications
This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.
Originality/value
This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.
Abstract
Purpose
Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based on RGBD clothing images often suffer from high-dimensional feature representations, leading to compromised performance and efficiency.
Design/methodology/approach
To address this issue, this paper proposes a novel method called Manifold Embedded Discriminative Feature Selection (MEDFS) to select global and local features, thereby reducing the dimensionality of the feature representation and improving performance. Specifically, by combining three global features and three local features, a low-dimensional embedding is constructed to capture the correlations between features and categories. The MEDFS method designs an optimization framework utilizing manifold mapping and sparse regularization to achieve feature selection. The optimization objective is solved using an alternating iterative strategy, ensuring convergence.
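The sparse-regularization ingredient of MEDFS can be loosely illustrated as follows. This is not the authors' alternating manifold-based optimization: as a rough stand-in it scores features by the magnitude of L1-regularized coefficients and keeps the top k. The feature matrix, penalty strength and k = 6 are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for concatenated global + local clothing features
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)
X = StandardScaler().fit_transform(X)

# An L1 penalty drives coefficients of uninformative features toward
# zero, so coefficient magnitude acts as a selection score.
lasso = Lasso(alpha=0.02).fit(X, y)
scores = np.abs(lasso.coef_)
selected = np.argsort(scores)[::-1][:6]
print("selected feature indices:", sorted(selected.tolist()))
```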
Findings
Empirical studies conducted on a publicly available RGBD clothing image dataset demonstrate that the proposed MEDFS method achieves highly competitive clothing classification performance while maintaining efficiency in clothing recognition and retrieval.
Originality/value
This paper introduces a novel approach for multi-category clothing recognition and retrieval, incorporating the selection of global and local features. The proposed method holds potential for practical applications in real-world clothing scenarios.
Yi Liu, Rui Ning, Mingxin Du, Shuanghe Yu and Yan Yan
Abstract
Purpose
The purpose of this paper is to propose a new online path planning method for porcine belly cutting. With the proliferating demand for automated pork production systems, the development of efficient and robust meat-cutting algorithms is a pressing issue. The uncertain and dynamic nature of online porcine belly cutting makes it challenging for the robot to identify and cut efficiently and accurately. Based on the above challenges, an online porcine belly cutting method using a 3D laser point cloud is proposed.
Design/methodology/approach
The robotic cutting system is composed of an industrial robotic manipulator, customized tools, a laser sensor and a PC.
Findings
Analysis of the experimental results shows that, compared with machine vision, laser sensor-based robotic cutting has more advantages and can handle different carcass sizes.
Originality/value
An image pyramid method is used for dimensionality reduction of the 3D laser point cloud. A detailed analysis of the outward and inward cutting errors shows that the outward cutting error is the limiting condition for reducing the number of segments produced by the segmentation algorithm.
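The image-pyramid reduction can be sketched as repeated 2x2 average pooling on a depth image rasterised from the point cloud. The depth data and pyramid depth here are assumptions; the paper does not specify its pyramid parameters.

```python
import numpy as np

def pyramid_downsample(depth, levels=3):
    """Build an image pyramid by repeated 2x2 average pooling."""
    pyramid = [depth]
    for _ in range(levels):
        d = pyramid[-1]
        h, w = d.shape[0] // 2 * 2, d.shape[1] // 2 * 2
        d = d[:h, :w]                      # crop to even dimensions
        pooled = (d[0::2, 0::2] + d[1::2, 0::2] +
                  d[0::2, 1::2] + d[1::2, 1::2]) / 4.0
        pyramid.append(pooled)
    return pyramid

# Hypothetical depth image rasterised from a 3D laser point cloud
rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 1.5, size=(64, 64))
pyr = pyramid_downsample(depth, levels=3)
print([p.shape for p in pyr])
```

Each level quarters the data volume, so a coarse level can be searched for the cutting path first and the result refined at finer levels.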
Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto
Abstract
Purpose
The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.
Design/methodology/approach
The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors and assess a within-the-group level of productivity of the retailers.
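The cluster-then-DEA pipeline can be sketched as below: k-means first groups retailers into homogeneous sets, then an input-oriented CCR efficiency score is computed within each group. The retailer data, input/output choices and cluster count are illustrative assumptions, not the case-study data.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical retailers: 2 inputs (staff, floor space), 1 output (sales)
inputs = rng.uniform(1, 10, size=(20, 2))
outputs = (inputs.sum(axis=1, keepdims=True)
           * rng.uniform(0.6, 1.0, size=(20, 1)))

# Step 1: cluster retailers into homogeneous groups
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(inputs)

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment LP)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]        # minimise theta
    A = np.zeros((m + s, n + 1))
    A[:m, 0] = -X[o]                   # sum(lam * x_j) <= theta * x_o
    A[:m, 1:] = X.T
    A[m:, 1:] = -Y.T                   # sum(lam * y_j) >= y_o
    b = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.fun

# Step 2: DEA within each cluster only, so units are compared
# against peers with similar structural conditions
effs = np.zeros(20)
for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    for k, o in enumerate(idx):
        effs[o] = ccr_efficiency(inputs[idx], outputs[idx], k)
print(np.round(effs, 3))
```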
Findings
The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique allows for maximum within-group homogeneity and between-group heterogeneity by reducing subjective bias and dimensionality, which is embedded with the use of Big Data.
Practical implications
The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.
Originality/value
This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.
Abstract
Purpose
Since the beginning of 2020, economies have faced many changes as a result of the coronavirus disease 2019 (COVID-19) pandemic. The effect of COVID-19 on the Egyptian Exchange (EGX) is investigated in this research.
Design/methodology/approach
To explore the impact of COVID-19, three periods were considered: (1) the 17 months before the spread of COVID-19 and the start of the lockdown, (2) the 17 months after the spread of COVID-19 and during the lockdown and (3) the 34 months comprehending the whole period (before and during COVID-19). Due to the large number of variables that could be considered, a dimensionality reduction method, principal component analysis (PCA), is followed. This method helps in determining the individual stocks contributing most to the main EGX index (EGX 30). The PCA also addresses the multicollinearity between the variables under investigation. Additionally, a principal component regression (PCR) model is developed to predict the future behavior of the EGX 30.
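The PCA-then-PCR chain can be sketched as follows. The return data are synthetic stand-ins (constituent-level EGX data are not reproduced in the abstract), and the use of the cross-sectional mean as an index proxy is an assumption for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in: 120 trading days x 10 correlated stock returns
# driven by 3 latent market factors plus noise
latent = rng.standard_normal((120, 3))
stocks = (latent @ rng.standard_normal((3, 10))
          + 0.1 * rng.standard_normal((120, 10)))
index = stocks.mean(axis=1)   # crude stand-in for the EGX 30 index

# PCA compresses the correlated stocks into a few components,
# which also removes the multicollinearity among them
X = StandardScaler().fit_transform(stocks)
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)
print("variance explained:", pca.explained_variance_ratio_.sum().round(3))

# PCR: regress the index on the component scores
pcr = LinearRegression().fit(scores, index)
print("R^2:", round(pcr.score(scores, index), 3))
```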
Findings
The results demonstrate that the first three principal components (PCs) explain 89%, 85% and 88% of the data variability (1) before COVID-19, (2) during COVID-19 and (3) over the whole period, respectively. Furthermore, the food and beverage, basic resources and real estate sectors were not affected by COVID-19. The resulting PCR model performs very well, as can be concluded by comparing the observed values of the EGX 30 with the predicted ones (R-squared estimated at 0.99).
Originality/value
To the best of our knowledge, no research has been conducted to investigate the effect of the COVID-19 on the EGX following an unsupervised machine learning method.
Abstract
Purpose
The purpose of this study is to understand the predominant leadership style of school leaders in Abu Dhabi. The leadership style deployed by a school leader affects the performance of the school and its pupils. Methods for identifying the leadership style of school leaders in the UAE have varied, and it is difficult to conclude what the predominant leadership style is. Some studies have sought only to identify a specific leadership style, whilst others have focussed on a particular school type. Changes and improvements cannot be made without an understanding of the baseline leadership style.
Design/methodology/approach
The 36-item multifactor leadership questionnaire (MLQ-5X; Bass and Avolio, 2004) is used to quantitatively understand the full range of school leaders' leadership styles, with 167 respondents from across both public and private schools.
Findings
School leaders predominantly exhibited transformational leadership, practising transactional leadership less frequently and rarely using laissez-faire leadership. This is a positive finding for schools in the UAE; transformational leadership has been shown to result in improved subordinate and organisational performance. Differences between school leaders in public and private schools were tested and are discussed. Dimension reduction techniques were used to assess the structure of the 36-item MLQ5x but did not provide results that met minimum requirements for acceptability. Possible reasons for this are discussed.
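The structure-assessment step mentioned above amounts to fitting a latent-factor model to the item responses. A minimal sketch, using synthetic Likert-style data and an assumed three-factor target (the abstract does not state which technique or factor count was tested), follows.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic Likert-style responses: 167 respondents x 36 items,
# generated from 3 latent styles (all numbers illustrative only)
latent = rng.standard_normal((167, 3))
loadings = rng.standard_normal((3, 36))
items = np.clip(np.rint(3 + 0.4 * (latent @ loadings)
                        + 0.5 * rng.standard_normal((167, 36))), 0, 4)

# Fit a 3-factor model; the loading pattern shows whether items
# group into the intended style subscales
fa = FactorAnalysis(n_components=3, random_state=0).fit(items)
print("loadings shape:", fa.components_.shape)
```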
Originality/value
To the best of the author’s knowledge, this paper is the first to fully explore and baseline an understanding of the predominant leadership style amongst school leaders in the UAE, identifying the full range of leadership styles – transformational, transactional and laissez-faire – in both public and private schools.
Qiangqiang Zhai, Zhao Liu, Zhouzhou Song and Ping Zhu
Abstract
Purpose
The Kriging surrogate model has demonstrated a powerful ability to address a variety of engineering challenges by emulating time-consuming simulations. However, for problems with high-dimensional input variables, it may be difficult to obtain a model with high accuracy and efficiency due to the curse of dimensionality. To meet this challenge, an improved high-dimensional Kriging modeling method based on the maximal information coefficient (MIC) is developed in this work.
Design/methodology/approach
The hyperparameter domain is first derived and the dataset of hyperparameter and likelihood function is collected by Latin Hypercube Sampling. MIC values are innovatively calculated from the dataset and used as prior knowledge for optimizing hyperparameters. Then, an auxiliary parameter is introduced to establish the relationship between MIC values and hyperparameters. Next, the hyperparameters are obtained by transforming the optimized auxiliary parameter. Finally, to further improve the modeling accuracy, a novel local optimization step is performed to discover more suitable hyperparameters.
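The first step, collecting a hyperparameter/likelihood dataset by Latin Hypercube Sampling, can be sketched as below. The hyperparameter bounds, dimensionality and the placeholder likelihood are assumptions; computing MIC values from the resulting dataset would require an additional estimator not shown here.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical Kriging hyperparameter domain: log10(theta) in [-3, 2]
# for each of d = 5 input dimensions
d = 5
sampler = qmc.LatinHypercube(d=d, seed=0)
unit = sampler.random(n=50)                        # 50 points in [0, 1)^d
log_theta = qmc.scale(unit, [-3.0] * d, [2.0] * d)
theta = 10.0 ** log_theta

def neg_log_likelihood(th):
    """Placeholder for the Kriging concentrated log-likelihood."""
    return float(np.sum(np.log1p(th)))

# The (theta, likelihood) pairs form the dataset from which MIC-based
# prior knowledge about each hyperparameter would be computed
scores = np.array([neg_log_likelihood(t) for t in theta])
best = theta[np.argmin(scores)]
print("best sampled theta:", np.round(best, 4))
```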
Findings
The proposed method is then applied to five representative mathematical functions with dimensions ranging from 20 to 100 and an engineering case with 30 design variables.
Originality/value
The results show that the proposed high-dimensional Kriging modeling method can obtain more accurate results than the other three methods, and it has an acceptable modeling efficiency. Moreover, the proposed method is also suitable for high-dimensional problems with limited sample points.