Search results

1 – 10 of 468
Book part
Publication date: 24 April 2023

Yixiao Sun

Abstract

The author develops and extends the asymptotic F- and t-test theory in linear regression models where the regressors could be deterministic trends, unit-root processes, near-unit-root processes, among others. The author considers both the exogenous case where the regressors and the regression error are independent and the endogenous case where they are correlated. In the former case, the author designs a new set of basis functions that are invariant to the parameter estimation uncertainty and uses them to construct a new series long-run variance estimator. The author shows that the F-test version of the Wald statistic and the t-statistic are asymptotically F and t distributed, respectively. In the latter case, the author shows that the asymptotic F and t theory is still possible, but one has to develop it in a pseudo-frequency domain. The F and t approximations are more accurate than the more commonly used chi-squared and normal approximations. The resulting F and t tests are also easy to implement – they can be implemented in exactly the same way as the F and t tests in a classical normal linear regression.
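For context, the "F-test version of the Wald statistic" referred to here follows the standard construction (textbook notation, not necessarily the paper's own): for testing $p$ restrictions $R\beta = r$,

```latex
W = (R\hat{\beta} - r)^{\top}\bigl[R\,\widehat{V}\,R^{\top}\bigr]^{-1}(R\hat{\beta} - r),
\qquad F = W/p,
```

where $\widehat{V}$ is the estimated covariance of $\hat{\beta}$ built from the long-run variance estimator; the abstract's point is that $F$ is better approximated by an F distribution than $W$ is by $\chi^2_p$.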

Article
Publication date: 7 February 2023

Eunji Kim, Jinwon An, Hyun-Chang Cho, Sungzoon Cho and Byeongeon Lee

Abstract

Purpose

The purpose of this paper is to identify the root cause of low yield problems in the semiconductor manufacturing process using sensor data continuously collected from manufacturing equipment and describe the process environment in the equipment.

Design/methodology/approach

This paper proposes a sensor data mining process based on the sequential modeling of random forests for low yield diagnosis. The process consists of sequential steps: problem definition, data preparation, excursion time and critical sensor identification, data visualization and root cause identification.
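The paper's critical-sensor identification uses sequential random-forest modeling, which is not detailed in the abstract. As a much-simplified stand-in, a univariate Fisher-style separation score conveys the ranking idea (function and sensor names here are hypothetical, not from the paper):

```python
import statistics


def sensor_separation_scores(normal, low_yield):
    """Rank sensors by a Fisher-style separation score:
    squared gap between group means over the summed group variances.

    normal, low_yield: dicts mapping sensor name -> per-lot readings.
    """
    scores = {}
    for sensor in normal:
        a, b = normal[sensor], low_yield[sensor]
        gap = (statistics.mean(a) - statistics.mean(b)) ** 2
        spread = statistics.variance(a) + statistics.variance(b)
        scores[sensor] = gap / spread if spread else float("inf")
    # Highest-scoring sensors separate low-yield lots most cleanly.
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))
```

A random forest additionally captures interactions between sensors and time windows, which is why the paper's sequential modeling goes beyond a per-sensor score like this one.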

Findings

A case study is conducted using real-world data collected from a semiconductor manufacturer in South Korea to demonstrate the effectiveness of the diagnosis process. The proposed model successfully identified the excursion time and critical sensors previously identified by domain engineers using costly manual examination.

Originality/value

The proposed procedure helps domain engineers narrow down the excursion time and critical sensors from the massive sensor data. The procedure's outcome is highly interpretable, informative and easy to visualize.

Details

Data Technologies and Applications, vol. 57 no. 3
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

Abstract

In the big data era, image compression is of significant importance. Compression of large-sized images is required for everyday tasks, including electronic data communications and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image. Additionally, it enhances the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
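The run-length stage is a standard technique; a minimal sketch of plain RLE (not the paper's implementation, and the "Modified" variant is not specified in the abstract):

```python
def rle_encode(seq):
    """Collapse a sequence into (value, run_length) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]


def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

After the Frei-Chen stage replaces high-energy 3 × 3 blocks with a single average value, long runs of identical values appear, which is what makes an RLE pass effective and, by itself, lossless.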

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 29 July 2021

Aarathi S. and Vasundra S.

Abstract

Purpose

Pervasive analytics play a prominent role in computer-aided prediction of non-communicable diseases. Early-stage arrhythmia detection helps prevent sudden death owing to heart failure or stroke. Arrhythmia can be identified from an electrocardiogram (ECG) report.

Design/methodology/approach

The ECG report has been used extensively by clinical experts. However, diagnosis accuracy depends on clinical experience. For computer-aided heart disease prediction methods, both accuracy and sensitivity metrics play an important part. Hence, existing research contributions have optimized machine-learning approaches of great significance in computer-aided methods that perform predictive analysis for arrhythmia detection.

Findings

In this context, this paper presents a regression heuristic based on tridimensional optimum features of ECG reports to perform pervasive analytics for computer-aided arrhythmia prediction. The intent of these reports is arrhythmia detection. Empirical outcomes show that the proposed model is more effective and offers added advantages compared to existing approaches.

Originality/value

In this context, this paper presents a regression heuristic based on tridimensional optimum features of ECG reports to perform pervasive analytics for computer-aided arrhythmia prediction. The intent of these reports is arrhythmia detection. Empirical outcomes show that the proposed model is more effective and offers added advantages compared to existing approaches.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 30 August 2023

Hannan Amoozad Mahdiraji, Hojatallah Sharifpour Arabi, Moein Beheshti and Demetris Vrontis

Abstract

Purpose

This research aims to extract Industry 4.0 technological building blocks (TBBs) capable of value generation in collaborative consumption (CC) and the sharing economy (SE). Furthermore, by employing a mixed methodology, this research strives to analyse the relationship amongst TBBs and classify them based on their impact on CC.

Design/methodology/approach

Due to the importance of technology for the survival of collaborative consumption in the future, this study suggests a classification of the auxiliary and fundamental Industry 4.0 technologies and their current upgrades, such as the metaverse or non-fungible tokens (NFT). First, by applying a systematic literature review and thematic analysis (SLR-TA), the authors extracted the TBBs that impact on collaborative consumption and SE. Then, using the Bayesian best-worst method (BBWM), TBBs are weighted and classified using experts’ opinions. Eventually, a score function is proposed to measure organisations’ readiness level to adopt Industry 4.0 technologies.
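The abstract does not give the score function's form; a minimal sketch of one plausible shape, a BBWM-weighted average of per-technology ratings normalised to [0, 1] (the technology names and rating scale are assumptions):

```python
def readiness_score(weights, ratings, max_rating=5.0):
    """Readiness to adopt Industry 4.0 TBBs as a weighted average of
    per-technology self-assessment ratings, normalised to [0, 1].

    weights: BBWM-style importance weights summing to 1.
    ratings: the organisation's rating per technology, on 0..max_rating.
    """
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(weights[t] * ratings[t] for t in weights) / max_rating
```

The weighting means weaknesses in "fundamental" technologies (high weight) drag the score down far more than weaknesses in "discretional" ones.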

Findings

The findings illustrated that virtual reality (VR) plays a vital role in CC and SE. Of the 11 TBBs identified in the CC and SE, VR was selected as the most determinant TBB and metaverse was recognised as the least important. Furthermore, digital twins, big data and VR were labelled as “fundamental”, and metaverse, augmented reality (AR), and additive manufacturing were stamped as “discretional”. Moreover, cyber-physical systems (CPSs) and artificial intelligence (AI) were classified as “auxiliary” technologies.

Originality/value

With an in-depth investigation, this research identifies TBBs of Industry 4.0 with the capability of value generation in CC and SE. To the authors’ knowledge, this is the first research that identifies and examines the TBBs of Industry 4.0 in the CC and SE sectors. Furthermore, a novel mixed method has identified, weighted and classified pertinent technologies. The score function that measures the readiness level of each company to adopt TBBs in CC and SE is a unique contribution.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 9 October 2023

Hannan Amoozad Mahdiraji, Hojatallah Sharifpour Arabi, Jose Arturo Garza-Reyes and Abdul Jabbar

Abstract

Purpose

Acquainting organisations with the concepts of Total Quality Management (TQM) and its implementation is one measure that effectively improves their global position and performance. Kaizen is one of the concepts of TQM; it focuses on low-cost organisational transformation methods and often avoids consuming significant resources (time, capital, etc.). Using Kaizen in organisational transformation sets efficient guidelines to improve process agility and leanness and increase manufacturing productivity. Hence, this study aims to identify the key success factors in Kaizen projects and presents a score function that measures the readiness level of organisations to implement Kaizen projects.

Design/methodology/approach

A literature review first extracts the key success factors in Kaizen projects. Afterwards, the selected factors are screened via the fuzzy Delphi method using expert opinions from the manufacturing sector of an emerging economy. Subsequently, their importance is cross-examined by the Bayesian best–worst Method (BBWM). The BBWM is one of the most recent multiple criteria decision-making (MCDM) methods that lead to stable, dynamic and robust pairwise comparisons. After analysing the weights of the key factors, a score function is designed so that organisations can understand how much they are ready to launch Kaizen projects.

Findings

According to the findings, “Training and education” and “Employee attitude” played an important role in the success of Kaizen projects. The literature review extracted 22 success factors of Kaizen projects, of which 10 were eliminated through the fuzzy Delphi method. The remaining 12 success factors were evaluated and investigated through the BBWM. Under this method, “Training and education” and “Employee attitude” weighed 0.119 and 0.112, respectively. Furthermore, “Support from senior management” was the least important factor.

Originality/value

To the best knowledge of the authors, this is the first research in which the success factors of Kaizen projects have been identified and analysed through an integrated multi-layer decision-making framework. Although some studies have investigated the key success factors of Kaizen projects and analysed them through statistical approaches, research that examines the success factors of Kaizen projects through MCDM methods is yet to be reported. Moreover, the score function that measures the level of readiness of each organisation for the successful implementation of Kaizen projects is a unique contribution to this research.

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731

Article
Publication date: 22 June 2022

Gang Yao, Xiaojian Hu, Liangcheng Xu and Zhening Wu

Abstract

Purpose

Social media data from financial websites contain information related to enterprise credit risk. Mining valuable new features in social media data helps to improve prediction performance. This paper proposes a credit risk prediction framework that integrates social media information to improve listed enterprise credit risk prediction in the supply chain.

Design/methodology/approach

The prediction framework includes four stages. First, social media information is obtained through web crawler technology. Second, text sentiment in social media information is mined through natural language processing. Third, text sentiment features are constructed. Finally, the new features are integrated with traditional features as input for models for credit risk prediction. This paper takes Chinese pharmaceutical enterprises as an example to test the prediction framework and obtain relevant management enlightenment.

Findings

The prediction framework can improve enterprise credit risk prediction performance. The prediction performance of text sentiment features in social media data is better than that of most traditional features. The time-weighted text sentiment feature has the best prediction performance in mining social media information.
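The time-weighted text sentiment feature is not specified in the abstract; a minimal sketch assuming exponential decay by post age (the half-life parameterisation and function name are assumptions, not the paper's definition):

```python
import math


def time_weighted_sentiment(scores, ages_days, half_life_days=30.0):
    """Aggregate per-post sentiment scores into one feature,
    halving each post's weight for every `half_life_days` of age."""
    weights = [math.pow(0.5, age / half_life_days) for age in ages_days]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

The intuition behind a feature like this is that recent social-media sentiment should influence the credit risk signal more than stale posts.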

Practical implications

The prediction framework is helpful for the credit decision-making of credit departments and the policy regulation of regulatory departments and is conducive to the sustainable development of enterprises.

Originality/value

The prediction framework can effectively mine social media information and obtain an excellent prediction effect of listed enterprise credit risk in the supply chain.

Article
Publication date: 27 October 2022

Morley Gunderson

Abstract

Purpose

The purpose of this paper is to review the literature on intersectionality and ascertain its potential for application to human resources (HR) research and practice. Particular attention is paid to its methodological issues involving how best to incorporate intersectionality into research designs, and its data issues involving the “curse of dimensionality” where there are too few observations in most datasets to deal with multiple intersecting categories.

Design/methodology/approach

The methodology involves reviewing the literature on intersectionality in its various dimensions: its conceptual underpinnings and meanings; its evolution as a concept; its application in various areas; its relationship to gender-based analysis plus (GBA+); its methodological issues and data requirements; its relationship to theory and qualitative as well as quantitative lines of research; and its potential applicability to research and practice in HR.

Findings

Intersectionality deals with how interdependent categories such as race, gender and disability intersect to affect outcomes. It is not how each of these factors has an independent or additive effect; rather, it is how they combine together in an interlocking fashion to have an interactive effect that is different from the sum of their individual effects. This gives rise to methodological and data complications that are outlined. Ways in which these complications have been dealt with in the literature are outlined, including interaction effects, separate equations for key groups, reducing data requirements, qualitative analysis and machine learning with Big Data.
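The interlocking-versus-additive distinction can be illustrated with classic difference-in-differences arithmetic; a minimal sketch, not drawn from the paper (the (a, b) cell-mean encoding is an assumption for illustration):

```python
def interaction_effect(cell_means):
    """Deviation of the joint category's mean outcome from the additive
    prediction (difference-in-differences over a 2x2 of categories).

    cell_means: dict keyed by (a, b) with a, b in {0, 1},
                e.g. a = one category membership, b = another.
    """
    # Additive prediction: baseline plus each category's separate effect.
    additive = (cell_means[(0, 0)]
                + (cell_means[(1, 0)] - cell_means[(0, 0)])
                + (cell_means[(0, 1)] - cell_means[(0, 0)]))
    # Nonzero result means the categories interact rather than just add.
    return cell_means[(1, 1)] - additive
```

With more intersecting categories the number of cells grows multiplicatively, which is exactly the "curse of dimensionality" data problem the abstract describes.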

Research limitations/implications

Intersectionality has not been dealt with in HR research or practice. In other fields, it tends to be dealt with only in a conceptual/theoretical fashion or qualitatively, likely reflecting the difficulties of applying it to quantitative research.

Practical implications

The wide gap between the theoretical concept of intersectionality and its practical application for purposes of prediction as well as causal analysis is outlined. Trade-offs are invariably involved in applying intersectionality to HR issues. Practical steps for dealing with those trade-offs in the quantitative analyses of HR issues are outlined.

Social implications

Intersectionality draws attention to the intersecting nature of multiple disadvantages or vulnerability. It highlights how they interact in a multiplicative and not simply additive fashion to affect various outcomes of individual and social importance.

Originality/value

To the best of the author’s knowledge, this is the first analysis of the potential applicability of the concept of intersectionality to research and practice in HR. It has obvious relevance for ascertaining intersectional categories as predictors and causal determinants of important outcomes in HR, especially given the growing availability of large personnel and digital datasets.

Details

International Journal of Manpower, vol. 44 no. 7
Type: Research Article
ISSN: 0143-7720

Article
Publication date: 30 March 2023

Wilson Charles Chanhemo, Mustafa H. Mohsini, Mohamedi M. Mjahidi and Florence U. Rashidi

Abstract

Purpose

This study explores challenges facing the applicability of deep learning (DL) in software-defined networking (SDN)-based campus networks. The study explains in detail the automation problem that exists in traditional campus networks and how SDN and DL can provide mitigating solutions. It further highlights some challenges which need to be addressed in order to successfully implement SDN and DL in campus networks and make them better than traditional networks.

Design/methodology/approach

The study uses a systematic literature review. Studies on DL relevant to campus networks have been presented for different use cases. Their limitations are given out for further research.

Findings

The analysis of the selected studies showed that the availability of specific training datasets for campus networks, and SDN–DL interfacing and integration in production networks, are key issues that must be addressed to successfully deploy DL in SDN-enabled campus networks.

Originality/value

This study reports on challenges associated with implementation of SDN and DL models in campus networks. It contributes towards further thinking and architecting of proposed SDN-based DL solutions for campus networks. It highlights that single problem-based solutions are harder to implement and unlikely to be adopted in production networks.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 1 March 2023

Zhirong Zhong, Heng Jiang, Jiachen Guo and Hongfu Zuo

Abstract

Purpose

Aero-engine array electrostatic monitoring technology (AEMT) can provide richer and more accurate information about the direct products of a fault, and it is a novel condition monitoring technology that is expected to solve the high false alarm rate of traditional electrostatic monitoring. However, aliasing of the array electrostatic signals often occurs, which greatly affects the accuracy of the information identified by the electrostatic sensor array. The purpose of this paper is to propose solutions to this problem.

Design/methodology/approach

In this paper, a method for de-aliasing of array electrostatic signals based on compressive sensing principle is proposed by taking advantage of the sparsity of the distribution of multiple pulse signals that originally constitute aliased signals in the time domain.
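The paper's compressive-sensing recovery is not detailed in the abstract; a minimal greedy matching-pursuit sketch over shifted pulse templates illustrates the sparse de-aliasing idea (the template shape and the greedy pursuit variant are assumptions, not the paper's algorithm):

```python
def matching_pursuit(signal, template, k):
    """Greedy sparse recovery: find k shifted, scaled copies of
    `template` whose sum best explains `signal`.

    Returns (picks, residual), where picks is a list of
    (shift, amplitude) pairs.
    """
    residual = [float(x) for x in signal]
    n, m = len(signal), len(template)
    norm2 = sum(t * t for t in template)
    picks = []
    for _ in range(k):
        # Pick the shift whose template correlates most with the residual.
        best_shift, best_corr = 0, 0.0
        for s in range(n - m + 1):
            corr = sum(residual[s + j] * template[j] for j in range(m))
            if abs(corr) > abs(best_corr):
                best_shift, best_corr = s, corr
        amp = best_corr / norm2
        picks.append((best_shift, amp))
        # Subtract the explained pulse and continue on the residual.
        for j in range(m):
            residual[best_shift + j] -= amp * template[j]
    return picks, residual
```

When the pulses are few and sparse in time, as the abstract assumes, a sparsity-exploiting recovery like this can separate pulses that overlap in the aliased sum.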

Findings

The proposed method is verified by finite element simulation experiments. The simulation experiments show that the proposed method can recover the original pulse signal with an accuracy of 96.0%; when the number of pulse signals does not exceed 5, the proposed method can recover the pulse peak with an average absolute error of less than 5.5%; and the recovered aliased signal time-domain waveform is very similar to the original aliased signal time-domain waveform, indicating that the proposed method is accurate.

Originality/value

The proposed method is one of the key technologies of AEMT.

Details

Aircraft Engineering and Aerospace Technology, vol. 95 no. 7
Type: Research Article
ISSN: 1748-8842
