Search results

1 – 10 of over 3000
Open Access
Article
Publication date: 25 July 2022

Fung Yuen Chin, Kong Hoong Lem and Khye Mun Wong

Abstract

Purpose

The number of features in handwritten digit data is often very large owing to the many variations in personal handwriting, leading to high-dimensional data. Therefore, the employment of a feature selection algorithm becomes crucial for successful classification modeling, because the inclusion of irrelevant or redundant features can mislead the modeling algorithms, resulting in overfitting and a decrease in efficiency.

Design/methodology/approach

The minimum redundancy maximum relevance (mRMR) and recursive feature elimination (RFE) algorithms are two frequently used feature selection methods. While mRMR can identify a subset of features that are highly relevant to the targeted classification variable, it still tends to capture redundant features along the way. RFE, on the other hand, can effectively eliminate the less important features and exclude redundant ones, but the features it selects are not ranked by importance.

Findings

The hybrid method was exemplified in binary classifications between digits “4” and “9” and between digits “6” and “8” from a multiple features dataset. The results showed that the hybrid mRMR + support vector machine recursive feature elimination (SVMRFE) performs better than both the standalone support vector machine (SVM) and mRMR.

Originality/value

In view of the respective strengths and deficiencies of mRMR and RFE, this study combined the two methods, using an SVM as the underlying classifier, in the anticipation that mRMR would make an excellent complement to SVMRFE.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 2 January 2024

Kenta Ikeuchi, Kyoji Fukao and Cristiano Perugini

Abstract

Purpose

The authors' work aims to identify the employer-specific drivers of the college (or university) wage gap, which has been identified as one of the major determinants of the dynamics of overall wage and income inequality in the past decades. The authors focus on three employer-level features that can be associated with asymmetries in the employment relation orientation adopted for college and non-college-educated employees: (1) size, (2) the share of standard employment and (3) the pervasiveness of incentive pay schemes.

Design/methodology/approach

The authors' establishment-level analysis (data from the Basic Survey on Wage Structure (BSWS), 2005–2018) focusses on Japan, an economy characterised by many unique economic and institutional features relevant to the aims of the authors' analysis. The authors use an adjusted measure of firm-specific college wage premium, which is not biased by confounding individual and establishment-level factors and reflects unobservable characteristics of employees that determine the payment of a premium. The authors' empirical methods account for the complexity of the relationships they investigate, and the authors test their baseline outcomes with econometric approaches (propensity score methods) able to address crucial identification issues related to endogeneity and reverse causality.
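The propensity score logic the authors invoke can be illustrated with a minimal inverse-probability-weighting sketch in scikit-learn; the data below is synthetic (not the BSWS), a binary "treatment" stands in for an establishment characteristic such as large size, and the true premium is set to 2.0 by construction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                       # observed confounders
p_treat = 1 / (1 + np.exp(-(x @ np.array([1.0, -0.5]))))
t = rng.binomial(1, p_treat)                      # selection depends on x
y = 2.0 * t + x @ np.array([0.5, 0.3]) + rng.normal(size=n)

# Propensity scores from a logistic model, then inverse-probability weights.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(f"naive gap {y[t == 1].mean() - y[t == 0].mean():.2f}, "
      f"IPW estimate {ate:.2f}")
```

Because treatment assignment depends on the confounders, the naive group difference is biased upward; reweighting by the inverse propensity recovers an estimate close to the true 2.0.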

Findings

The authors' findings indicate that larger establishment size, a larger share of regular workers and a more pervasive implementation of incentive pay schemes (IPSs) for college workers tend to increase the college wage gap once all observable worker, job and establishment characteristics are controlled for. This evidence corroborates the authors' hypotheses that a larger establishment size, a higher share of regular workers and a more developed set-up of performance pay schemes for college workers are associated with a better capacity of employers to attract and retain highly educated employees with unobservable characteristics that justify a wage premium above average market levels.

Originality/value

The authors' contribution to the existing knowledge is threefold. First, the authors combine the economics and management/organisation literature to develop new insights that underpin the authors' testable empirical hypotheses. This enables the authors to shed light on employer-level drivers of wage differentials (size, workforce composition, implementation of performance-pay schemes) related to many structural, institutional and strategic dimensions. The second contribution lies in the authors' measure of the “adjusted” college wage gap, which is calculated on the component of individual wages that differs between observationally identical workers in the same establishment. As such, the metric captures unobservable workers' characteristics that can generate a wage premium/penalty. Third, the authors provide empirical evidence on how three relevant establishment-level characteristics shape the heterogeneity of the (adjusted) college wage gap observed across organisations.

Details

International Journal of Manpower, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-7720

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we propose a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
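The wrapper idea can be sketched as follows: a small, simplified binary grey-wolf search over feature masks (a thresholded variant, not the authors' GWOFS), scored by a scikit-learn MLP on a stand-in dataset (`load_breast_cancer`, not a defect dataset):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)       # stand-in for a defect dataset
X = StandardScaler().fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def fitness(mask):
    """Held-out accuracy of an MLP trained on the masked feature subset."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                        max_iter=300, random_state=0)
    return clf.fit(Xtr[:, mask], ytr).score(Xte[:, mask], yte)

n_wolves, n_iter, dim = 5, 4, X.shape[1]
pos = rng.random((n_wolves, dim))                # continuous positions in [0, 1]
fits = np.array([fitness(p > 0.5) for p in pos])
best_fit, best_mask = fits.max(), pos[np.argmax(fits)] > 0.5

for it in range(n_iter):
    a = 2 - 2 * it / n_iter                      # exploration factor decays 2 -> 0
    leaders = pos[np.argsort(fits)[::-1][:3]]    # alpha, beta, delta wolves
    for i in range(n_wolves):
        steps = []
        for leader in leaders:                   # encircle the three best wolves
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            steps.append(leader - A * np.abs(C * leader - pos[i]))
        pos[i] = np.clip(np.mean(steps, axis=0), 0, 1)
        fits[i] = fitness(pos[i] > 0.5)
        if fits[i] > best_fit:
            best_fit, best_mask = fits[i], pos[i] > 0.5

print(f"{best_mask.sum()} of {dim} features, held-out accuracy {best_fit:.3f}")
```

Each wolf's position encodes a candidate feature subset; thresholding at 0.5 turns the continuous position into a mask, and the MLP's held-out accuracy serves as the fitness that pulls the pack toward the three best wolves.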

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 15 January 2024

Faris Elghaish, Sandra Matarneh, Essam Abdellatef, Farzad Rahimian, M. Reza Hosseini and Ahmed Farouk Kineber

Abstract

Purpose

Cracks are prevalent signs of pavement distress found on highways globally. The use of artificial intelligence (AI) and deep learning (DL) for crack detection is increasingly considered as an optimal solution. Consequently, this paper introduces a novel, fully connected, optimised convolutional neural network (CNN) model using feature selection algorithms for the purpose of detecting cracks in highway pavements.

Design/methodology/approach

To enhance the accuracy of the CNN model for crack detection, the authors employed a fully connected deep learning layers CNN model along with several optimisation techniques. Specifically, three optimisation algorithms, namely adaptive moment estimation (ADAM), stochastic gradient descent with momentum (SGDM), and RMSProp, were utilised to fine-tune the CNN model and enhance its overall performance. Subsequently, the authors implemented eight feature selection algorithms to further improve the accuracy of the optimised CNN model. These feature selection techniques were thoughtfully selected and systematically applied to identify the most relevant features contributing to crack detection in the given dataset. Finally, the authors subjected the proposed model to testing against seven pre-trained models.
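The three optimisers differ only in how a gradient is turned into a parameter update. A minimal NumPy illustration of the three update rules on a toy least-squares problem (not the authors' CNN; the learning rates are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = A @ rng.normal(size=5)

def grad(x):                     # gradient of 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

def run(update, state, lr, steps=500):
    x = np.zeros(5)
    for t in range(1, steps + 1):
        x, state = update(x, grad(x), state, lr, t)
    return float(np.linalg.norm(A @ x - b))

def sgdm(x, g, v, lr, t, beta=0.9):              # momentum accumulates gradients
    v = beta * v + g
    return x - lr * v, v

def rmsprop(x, g, s, lr, t, beta=0.9, eps=1e-8): # scale by running mean of g^2
    s = beta * s + (1 - beta) * g**2
    return x - lr * g / (np.sqrt(s) + eps), s

def adam(x, g, state, lr, t, b1=0.9, b2=0.999, eps=1e-8):
    m, v = state                                 # momentum + RMSProp-style scaling
    m, v = b1 * m + (1 - b1) * g, b2 * v + (1 - b2) * g**2
    mhat, vhat = m / (1 - b1**t), v / (1 - b2**t)  # bias correction
    return x - lr * mhat / (np.sqrt(vhat) + eps), (m, v)

losses = {"SGDM": run(sgdm, np.zeros(5), lr=0.001),
          "RMSProp": run(rmsprop, np.zeros(5), lr=0.01),
          "ADAM": run(adam, (np.zeros(5), np.zeros(5)), lr=0.01)}
print(losses)
```

All three drive the residual down from its starting value; which one wins on a given problem depends on the loss surface and the tuning, which is precisely why the study compares them empirically.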

Findings

The study's results show that the accuracy of the three optimisers (ADAM, SGDM, and RMSProp) with the five deep learning layers model is 97.4%, 98.2%, and 96.09%, respectively. Following this, eight feature selection algorithms were applied to the five deep learning layers to enhance accuracy, with particle swarm optimisation (PSO) achieving the highest F-score at 98.72. The model was then compared with other pre-trained models and exhibited the highest performance.

Practical implications

With an achieved precision of 98.19% and F-score of 98.72% using PSO, the developed model is highly accurate and effective in detecting and evaluating the condition of cracks in pavements. As a result, the model has the potential to significantly reduce the effort required for crack detection and evaluation.

Originality/value

The proposed method for enhancing CNN model accuracy in crack detection stands out for its unique combination of optimisation algorithms (ADAM, SGDM and RMSProp) with the systematic application of multiple feature selection techniques to identify the features relevant to crack detection, and for comparing the results with existing pre-trained models.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 1 January 2024

Shahrzad Yaghtin and Joel Mero

Abstract

Purpose

Machine learning (ML) techniques are increasingly important in enabling business-to-business (B2B) companies to offer personalized services to business customers. On the other hand, humans play a critical role in dealing with uncertain situations and the relationship-building aspects of a B2B business. Most existing studies advocating human-ML augmentation simply posit the concept without providing a detailed view of augmentation. Therefore, the purpose of this paper is to investigate how human involvement can practically augment ML capabilities to develop a personalized information system (PIS) for business customers.

Design/methodology/approach

The authors developed a research framework to create an integrated human-ML PIS for business customers. The PIS was then implemented in the energy sector. Next, the accuracy of the PIS was evaluated using customer feedback. To this end, precision, recall and F1 evaluation metrics were used.
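For reference, the three evaluation metrics can be computed directly with scikit-learn; the feedback labels below are invented for illustration, not the study's data:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical feedback: 1 = customer judged the delivered content relevant.
y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # ground truth from customer feedback
y_pred = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0]   # what the PIS delivered

p = precision_score(y_true, y_pred)   # of items delivered, fraction relevant
r = recall_score(y_true, y_pred)      # of relevant items, fraction delivered
f1 = f1_score(y_true, y_pred)         # harmonic mean of precision and recall
print(round(p, 2), round(r, 2), round(f1, 2))  # -> 0.83 0.83 0.83
```

Because F1 is the harmonic mean, it rewards systems only when precision and recall are both reasonably high, which is why it is a common single-number summary for this kind of evaluation.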

Findings

The computed figures of precision, recall and F1 (respectively, 0.73, 0.72 and 0.72) were all above 0.5; thus, the accuracy of the model was confirmed. Finally, the study presents the research model that illustrates how human involvement can augment ML capabilities in different stages of creating the PIS including the business/market understanding, data understanding, data collection and preparation, model creation and deployment and model evaluation phases.

Originality/value

This paper offers novel insight into the less-known phenomenon of human-ML augmentation for marketing purposes. Furthermore, the study contributes to the B2B personalization literature by elaborating on how human experts can augment ML computing power to create a PIS for business customers.

Details

Journal of Business & Industrial Marketing, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0885-8624

Open Access
Article
Publication date: 21 December 2023

Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye

Abstract

Purpose

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Deep learning approaches have gained popularity over traditional methods in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach, dynamically refining and amplifying model features to further elevate diagnostic capabilities. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.

Design/methodology/approach

To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
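The channel-then-spatial refinement that CBAM applies can be sketched in NumPy with random (untrained) weights; this shows only the forward-pass shape of the module on a toy feature map, not the trained ResNet50-CBAM:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def channel_attention(F, r=4):
    """Reweight channels using pooled descriptors passed through a shared MLP."""
    C = F.shape[0]
    W1, W2 = rng.normal(size=(C, C // r)), rng.normal(size=(C // r, C))
    mlp = lambda v: np.maximum(v @ W1, 0) @ W2
    m = sigmoid(mlp(F.mean(axis=(1, 2))) + mlp(F.max(axis=(1, 2))))
    return m[:, None, None] * F

def spatial_attention(F, k=7):
    """Reweight spatial positions using a kxk conv over channel-pooled maps."""
    desc = np.stack([F.mean(axis=0), F.max(axis=0)])   # (2, H, W) descriptor
    w = rng.normal(size=(2, k, k)) / k                 # random conv kernel
    pad = np.pad(desc, ((0, 0), (k // 2,) * 2, (k // 2,) * 2))
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k), axis=(1, 2))
    m = sigmoid(np.einsum("chwij,cij->hw", win, w))
    return m[None] * F

F = rng.normal(size=(16, 8, 8))    # toy feature map: 16 channels, 8x8
out = spatial_attention(channel_attention(F))
print(out.shape)
```

In the real model these weights are learned end to end inside each ResNet block; the point of the sketch is that CBAM leaves the feature-map shape unchanged while rescaling channels first and spatial positions second.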

Findings

The ResNet50-CBAM model outperformed existing deep learning classification methods such as the plain convolutional neural network (CNN), achieving a superior performance of 99.43% accuracy, 99.01% recall, 98.7% precision and 99.25% AUC when compared with the existing classification methods on the same dataset.

Practical implications

Since ResNet-CBAM fusion can capture the spatial context while enhancing feature representation, it can be integrated into the brain classification software platforms for physicians toward enhanced clinical decision-making and improved brain tumor classification.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 24 January 2024

Chung-Ming Lo

Abstract

Purpose

An increasing number of images are generated daily, and images are gradually becoming a search target. Content-based image retrieval (CBIR) is helpful for users to express their requirements using an image query. Nevertheless, determining whether the retrieval system can provide convenient operation and relevant retrieval results is challenging. A CBIR system based on deep learning features was proposed in this study to effectively search and navigate images in digital articles.

Design/methodology/approach

Convolutional neural networks (CNNs) were used as the feature extractors in the author's experiments. Using pretrained parameters, the training time and retrieval time were reduced. Different CNN features were extracted from the constructed image databases consisting of images taken from the National Palace Museum Journals Archive and were compared in the CBIR system.
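Once CNN feature vectors have been extracted, retrieval itself reduces to nearest-neighbour search over the database. A sketch with random vectors standing in for the DenseNet201 features (the 512 dimensions and database size are made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 512))          # stand-in for CNN feature vectors
db /= np.linalg.norm(db, axis=1, keepdims=True)

def retrieve(query, k=10):
    """Return indices of the k database images most similar to the query."""
    q = query / np.linalg.norm(query)
    return np.argsort(db @ q)[::-1][:k]    # cosine-similarity ranking

query = db[42] + 0.01 * rng.normal(size=512)   # a slightly perturbed known image
top10 = retrieve(query)
print(top10[0])                            # -> 42
```

With normalised vectors, the dot product is cosine similarity, so a query close to a database image ranks that image first; the reported top-10 mAP measures how often the truly relevant images land in this short list.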

Findings

DenseNet201 achieved the best performance, with a top-10 mAP of 89% and a query time of 0.14 s.

Practical implications

The CBIR homepage displayed image categories showing the content of the database and provided the default query images. After retrieval, the result showed the metadata of the retrieved images and links back to the original pages.

Originality/value

With the interface and retrieval demonstration, a novel image-based reading mode can be established via the CBIR and links to the original images and contextual descriptions.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 18 August 2023

Enas Hendawy, David G. McMillan, Zaki M. Sakr and Tamer Mohamed Shahwan

Abstract

Purpose

This paper aims to introduce a new perspective on long-term stock return predictability by focusing on the relative (individual and hybrid) informative power of a wide range of accounting (firm-related), technical and macroeconomic factors while considering the past performance of the stocks using machine learning algorithms.

Design/methodology/approach

The sample comprises a panel data set of 94 non-financial firms listed on the Egyptian Exchange 100 index from 2014:Q1 to 2019:Q4. Relativity has been investigated by comparing the individual and combined informative power of the relevant factors and by differentiating between losers and winners based on historical stock returns. To predict the quarterly stock returns, Gaussian process regression (GPR) has been used. The robustness of the results is examined through an out-of-sample test. This study also uses linear regression (LR) as a benchmark model.
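A minimal scikit-learn sketch of the GPR-versus-LR comparison; the factor panel below is simulated with a deliberately nonlinear signal, not the Egyptian Exchange data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # simulated factor panel
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.normal(size=200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(Xtr, ytr)                              # nonparametric, kernel-based
lr = LinearRegression().fit(Xtr, ytr)          # benchmark model

print(f"GPR out-of-sample R^2: {gpr.score(Xte, yte):.3f}")
print(f"LR  out-of-sample R^2: {lr.score(Xte, yte):.3f}")
```

When the factor-return relationship is nonlinear, the kernel-based GPR can exploit it while LR cannot, which is the kind of trade-off the study evaluates out of sample.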

Findings

The past performance and the presence of other predictors influence the informative power of the relevant factors and hence their predictive ability. The out-of-sample results show a trade-off between GPR and LR, with GPR proving superior in a limited number of experiments. The individual informative power outperforms the hybrid power, in which macroeconomic indicators outperform the remaining sets of indicators for losers, while winners show mixed results across the various performance evaluation metrics. Prediction accuracy is generally higher for losers than for winners.

Practical implications

This study provides interesting insight into the dynamic nature of the predictor variables in terms of stock return predictability. Hence, this study also deepens the understanding of asset pricing in a way that directly contributes to practitioners’ portfolio diversification strategies.

Originality/value

Given the profusion of factors in the literature and the misleading conclusions that can accompany it, this study takes another look at the approach used to study stock return predictability. To the best of the authors' knowledge, this is the first study in the Egyptian context to re-examine the predictive power of previously discovered factors from a different perspective, one that highlights their relative nature.

Details

Journal of Financial Reporting and Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-2517

Article
Publication date: 9 June 2023

Sidney Newton, Phillippa Carnemolla and Simon Darcy

Abstract

Purpose

The provision of an accessible and inclusive built environment is both a common regulatory requirement for architects and facilities managers, and a critical issue of equitable access for people with disability. Post Occupancy Evaluation (POE) is key to ensuring appropriate building accessibility is provided and maintained. Improved Building Information Modelling (BIM) integration with Facilities Management (FM) will enable more effective POE over time. This study aims to define and demonstrate the practicability and utility of a particular configuration of emerging BIM and related digital technologies, applied in the field.

Design/methodology/approach

A field study approach is applied to investigate the practicability and utility of the technology configuration and POE procedures. A proposed technology configuration is applied to evaluate 21 accessible bathrooms across three university buildings in Sydney, Australia. First, a checklist of technical functionality for a POE of accessible bathrooms particular to the field study FM context is established. The checklist is based on a review of recent literature, relevant standards, best practice guidelines, expert opinions, and the organisational requirements. Then, a technical and procedural approach to POE and BIM integration with FM is defined and applied in the field. Finally, a quantitative analysis of the results is presented and discussed relative to both the particular and general FM contexts.

Findings

The use of low-cost BIM and related technologies can usefully be applied in the field to promote a more progressive integration of BIM with FM and provide enhanced baseline models for ongoing POE. A rudimentary risk assessment of key accessible bathroom features (in the context of this field study) identified that the Toilet: toilet rolls location is unsatisfactory across all bathrooms surveyed and represents an immediate and high-risk failing. Other high-risk issues highlighted in this study included: Approach: access; Entrance: door fittings and security; and Layout: hazards.

Practical implications

This study offers a blue-print for building practitioners to adopt and progressively integrate low-cost BIM and related technologies with extant FM systems. The study also promotes an improved approach to effective POE practice in general, and to the assessment of accessible bathrooms in particular.

Originality/value

Recent reviews highlight key barriers to BIM integration with FM and significant limitations to current POE practice. Proposals for BIM integration with FM tend to focus on the comprehensive use of BIM. This study demonstrates the practicability and utility of a more progressive approach to BIM adoption and integration with FM in general. The study is also novel in that it shows how low-cost BIM and related technologies can be used as a baseline reference for ongoing POE. Building practitioners can adopt and adapt the technology configuration and approach to support a range of POE applications. This field study has identified immediate and high-risk potential failings of the accessible bathrooms provided on one university campus in Sydney, Australia.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 13 February 2024

Sara El-Breshy, Ahmad E. Elhabashy, Hadi Fors and Asmaa Harfoush

Abstract

Purpose

With the emergence of the different Industry 4.0 technologies and the interconnectedness between the physical and cyber components within manufacturing systems, the manufacturing environment is becoming more susceptible to unexpected disruptions, and manufacturing systems need to be even more resilient than before. Hence, the purpose of this work is to explore how incorporating Industry 4.0 into current manufacturing systems affects (positively or negatively) their resiliency.

Design/methodology/approach

A Systematic Literature Review (SLR) was performed with a focus on studying the manufacturing system’s resilience when applying Industry 4.0 technologies. The SLR is composed of four phases, which are (1) questions formulation, (2) determining an adequate search strategy, (3) publications filtering and (4) analysis and interpretation.

Findings

From the analysis of the SLR results, four potential research opportunities are proposed, related to: conducting additional research within the research themes in this field; considering less studied Industry 4.0 technologies, or more than one technology at a time; investigating the impact of particular technologies on manufacturing systems' resilience; exploring more avenues to incorporate resiliency to preserve the state of the system; and suggesting metrics to quantify the resilience of manufacturing systems.

Originality/value

Although there are a number of publications discussing the resiliency of manufacturing systems, none fully investigated this topic when different Industry 4.0 technologies have been considered. In addition to determining the current research state-of-art in this relatively new research area and identifying potential future research opportunities, the main value of this work is in providing insights about this research area across three different perspectives/streams: (1) Industry 4.0 technologies, (2) resiliency and (3) manufacturing systems and their intersections.

Details

Journal of Manufacturing Technology Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-038X
