Search results

1 – 10 of over 4000
Article
Publication date: 2 November 2023

Julaine Clunis

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of…

Abstract

Purpose

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of harmonizing clinical knowledge organization systems (KOS) through a cohesive clinical knowledge representation approach. Central to the study is the pursuit of a novel method for integrating emerging COVID-19-specific vocabularies with existing systems, focusing on simplicity, adaptability and minimal human intervention.

Design/methodology/approach

A design science research (DSR) methodology is used to guide the development of a terminology mapping and annotation workflow. The KNIME data analytics platform is used to implement and test the mapping and annotation techniques, leveraging its powerful data processing and analytics capabilities. The study incorporates specific ontologies relevant to COVID-19, evaluates mapping accuracy and tests performance against a gold standard.

Findings

The study demonstrates the potential of the developed solution to map and annotate specific KOS efficiently. This method effectively addresses the limitations of previous approaches by providing a user-friendly interface and streamlined process that minimizes the need for human intervention. Additionally, the paper proposes a reusable workflow tool that can streamline the mapping process. It offers insights into semantic interoperability issues in health care as well as recommendations for work in this space.

Originality/value

The originality of this study lies in its use of the KNIME data analytics platform to address the unique challenges posed by the COVID-19 pandemic in terminology mapping and annotation. The novel workflow developed in this study addresses known challenges by combining mapping and annotation processes specifically for COVID-19-related vocabularies. The use of DSR methodology and relevant ontologies with the KNIME tool further contribute to the study’s originality, setting it apart from previous research in the terminology mapping and annotation field.

Details

The Electronic Library, vol. 41 no. 6
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 9 January 2024

Ning Chen, Zhenyu Zhang and An Chen

Consequence prediction is an emerging topic in safety management concerning the severity outcome of accidents. In practical applications, it is usually implemented through…

Abstract

Purpose

Consequence prediction is an emerging topic in safety management concerning the severity outcome of accidents. In practical applications, it is usually implemented through supervised learning methods; however, the evaluation of classification results remains a challenge. The previous studies mostly adopted simplex evaluation based on empirical and quantitative assessment strategies. This paper aims to shed new light on the comprehensive evaluation and comparison of diverse classification methods through visualization, clustering and ranking techniques.

Design/methodology/approach

An empirical study is conducted using 9 state-of-the-art classification methods on a real-world data set of 653 construction accidents in China for predicting the consequence with respect to 39 carefully featured factors and accident type. The proposed comprehensive evaluation enriches the interpretation of classification results from different perspectives. Furthermore, the critical factors leading to severe construction accidents are identified by analyzing the coefficients of a logistic regression model.

Findings

This paper identifies the critical factors that significantly influence the consequence of construction accidents, which include accident type (particularly collapse), improper accident reporting and handling (E21), inadequate supervision engineers (O41), no special safety department (O11), delayed or low-quality drawings (T11), unqualified contractor (C21), schedule pressure (C11), multi-level subcontracting (C22), lacking safety examination (S22), improper operation of mechanical equipment (R11) and improper construction procedure arrangement (T21). The prediction models and findings of critical factors help make safety intervention measures in a targeted way and enhance the experience of safety professionals in the construction industry.

Research limitations/implications

The empirical study using some well-known classification methods for forecasting the consequences of construction accidents provides some evidence for the comprehensive evaluation of multiple classifiers. These techniques can be used jointly with other evaluation approaches for a comprehensive understanding of the classification algorithms. Despite the limitation of specific methods used in the study, the presented methodology can be configured with other classification methods and performance metrics and even applied to other decision-making problems such as clustering.

Originality/value

This study sheds new light on the comprehensive comparison and evaluation of classification results through visualization, clustering and ranking techniques, using an empirical study of consequence prediction for construction accidents. The relevance of construction accident type to the severity of accidents is discussed. The critical factors influencing the accident consequence are identified for the sake of taking prevention measures for risk reduction. The proposed method can be applied to other decision-making tasks where evaluation is involved as an important component.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175


Article
Publication date: 13 November 2023

Ziyoung Park

This study aims to collect distributed knowledge organization systems (KOSs) from various domains, enrich each with meta information and link them to the multilingual KOS…

Abstract

Purpose

This study aims to collect distributed knowledge organization systems (KOSs) from various domains, enrich each with meta information and link them to the multilingual KOS registry, facilitating integrated search alongside KOSs from various languages and regions.

Design/methodology/approach

This research involved collecting and organizing KOS information through three primary steps. The initial phase involved finding KOSs from Web search results, supplemented by the Korea ON-line E-Procurement System (KONEPS) and the National R&D Integrated Notification Service. After obtaining these KOSs, they were enriched by structuring contextual meta information using Basic Register of Thesauri, Ontologies and Classification (BARTOC) metadata elements, and dedicated MediaWiki pages were established for each. Finally, the KOSs were linked to the multilingual KOS registry, BARTOC, ensuring seamless integration with KOSs from various languages and regions and creating connections between each registry entry and its associated KOS wiki page.

Findings

The research findings revealed several insights, as follows: (1) Importance of a stable source for collecting KOSs: no national body currently oversees KOS registration, underscoring the need for a systematic approach to collecting dispersed KOSs. For Korean KOSs (K-KOSs), KONEPS and the National R&D Integrated Notification Service are effective data sources. (2) Importance of enhanced metadata: merely collecting KOSs was not enough. Enhanced metadata bridges access gaps, and dedicated wiki pages aid user identification and understanding. (3) Observations from multilingual registry uploads: when adding KOSs to a multilingual registry, similarities were observed across languages and regions. Recognizing this, the K-KOSs were linked with their international counterparts, fostering potential global collaboration.

Research limitations/implications

Due to the absence of a dedicated KOS registry agency, the study might have missed KOSs from certain fields or potentially over-collected from others. Furthermore, this study primarily focused on K-KOSs and their integration into the BARTOC registry, which might influence the methods and perspectives on collecting and establishing links among analogous KOSs in the registry.

Originality/value

This research pursued a stable method to detect KOS development and revisions across various fields. To facilitate this, we used the integrated e-procurement and R&D notification systems and added meta information, including MediaWiki pages, to aid in the identification and understanding of KOSs. Furthermore, link information was provided between the BARTOC registry and the Korean KOS websites and MediaWiki pages.

Details

The Electronic Library, vol. 41 no. 6
Type: Research Article
ISSN: 0264-0473


Open Access
Article
Publication date: 23 July 2020

Rami Mustafa A. Mohammad

Spam email classification using data mining and machine learning approaches has attracted researchers' attention due to its obvious positive impact in protecting internet…


Abstract

Spam email classification using data mining and machine learning approaches has attracted researchers' attention due to its obvious positive impact in protecting internet users. Several features can be used for creating data mining and machine learning based spam classification models. Yet spammers know that the longer they use the same set of features for tricking email users, the more likely it is that the anti-spam parties will develop tools for combating this kind of annoying email message. Spammers therefore adapt by continuously reforming the group of features utilized for composing spam emails. For that reason, even though traditional classification methods possess sound classification results, they are ineffective for lifelong classification of spam emails because they are prone to the so-called “concept drift”. In the current study, an enhanced model is proposed for ensuring lifelong spam classification. For evaluation purposes, the overall performance of the suggested model is contrasted against various other stream mining classification techniques. The results prove the success of the suggested model as a lifelong spam email classification method.
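
The lifelong classification idea can be illustrated with a toy drift-aware classifier that keeps only recent training examples, so feature patterns abandoned by spammers fade out of the model. This is a hedged sketch for illustration only, not the model proposed in the article; the class name and the sliding-window strategy are assumptions.

```python
from collections import deque

class WindowedSpamClassifier:
    """Toy drift-aware classifier: only the most recent examples are
    retained, so older feature patterns fade as spammers change tactics.
    Illustrative sketch, not the article's proposed model."""

    def __init__(self, window_size=100):
        self.window = deque(maxlen=window_size)  # (tokens, label) pairs
        self.spam_tokens = {}
        self.ham_tokens = {}

    def _rebuild_counts(self):
        # Recompute token counts from whatever is still in the window.
        self.spam_tokens, self.ham_tokens = {}, {}
        for tokens, label in self.window:
            counts = self.spam_tokens if label == "spam" else self.ham_tokens
            for t in tokens:
                counts[t] = counts.get(t, 0) + 1

    def learn(self, tokens, label):
        self.window.append((tokens, label))
        self._rebuild_counts()

    def predict(self, tokens):
        spam_score = sum(self.spam_tokens.get(t, 0) for t in tokens)
        ham_score = sum(self.ham_tokens.get(t, 0) for t in tokens)
        return "spam" if spam_score > ham_score else "ham"
```

A real stream-mining system would add explicit drift detectors; the point here is only that bounded memory lets a model track a moving target.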

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 7 February 2023

Moreno Frau, Francesca Cabiddu, Luca Frigau, Przemysław Tomczyk and Francesco Mola

Previous research has studied interactive value formation (IVF) using resource- or practice-based approaches but has neglected the role of emotions. This article aims to show how…


Abstract

Purpose

Previous research has studied interactive value formation (IVF) using resource- or practice-based approaches but has neglected the role of emotions. This article aims to show how emotions are correlated in problematic social media interactions and explore their role in IVF.

Design/methodology/approach

By combining a text mining algorithm, nonparametric Spearman's rho and thematic qualitative analysis in an explanatory sequential mixed-method design, the authors (1) categorize customers' comments as positive, neutral or negative; (2) pinpoint peaks of negative comments; (3) classify problematic interactions as detrimental, contradictory or conflictual; (4) identify customers' main positive (joy, trust and surprise) and negative emotions (anger, dissatisfaction, disgust, fear and sadness) and (5) correlate these emotions.
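
Spearman's rho, the nonparametric correlation applied to the emotion scores, can be computed directly as the Pearson correlation of ranks. The sketch below is illustrative and independent of the authors' pipeline; ties are handled with average ranks, the standard convention.

```python
def rank(values):
    """Assign average 1-based ranks, handling ties."""
    sorted_idx = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(sorted_idx):
        j = i
        # Group equal values (contiguous after sorting).
        while j + 1 < len(sorted_idx) and values[sorted_idx[j + 1]] == values[sorted_idx[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[sorted_idx[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

For two perfectly monotone series of emotion intensities the statistic is 1.0 (or -1.0 if one decreases as the other increases), regardless of the absolute scale of either series.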

Findings

Across the different types of problematic social interactions, the same pattern of emotions appears but with different intensities. Additionally, value co-creation, value no-creation and value co-destruction co-occur in a context of problematic social interactions (peaks of negative comments).

Originality/value

This study provides new insights into the effect of customers' emotions during IVF by studying the links between positive and negative emotions and their effects on different sorts of problematic social interactions.

Details

Journal of Research in Interactive Marketing, vol. 17 no. 5
Type: Research Article
ISSN: 2040-7122


Article
Publication date: 6 October 2023

Vahide Bulut

Feature extraction from 3D datasets is a current problem. Machine learning is an important tool for classification of complex 3D datasets. Machine learning classification…

Abstract

Purpose

Feature extraction from 3D datasets is a current problem. Machine learning is an important tool for the classification of complex 3D datasets. Machine learning classification techniques are widely used in various fields, such as text classification, pattern recognition and medical disease analysis. The aim of this study is to apply the most popular classification and regression methods to geodesic-based datasets and to determine which classification and regression methods perform best.

Design/methodology/approach

The feature vector is determined by the unit normal vector and the unit principal vector at each point of the 3D surface along with the point coordinates themselves. Moreover, different examples are compared according to the classification methods in terms of accuracy and the regression algorithms in terms of R-squared value.
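
A 9-dimensional feature vector of the kind described (point coordinates plus unit normal and unit principal vectors), together with the R-squared value used to compare the regression algorithms, can be sketched as follows. The helper names are assumptions for illustration, not the author's code.

```python
def unit(v):
    """Normalize a 3D vector to unit length."""
    norm = sum(c * c for c in v) ** 0.5
    return tuple(c / norm for c in v)

def feature_vector(point, normal, principal):
    """9-dim feature: point coordinates, unit normal, unit principal direction."""
    return tuple(point) + unit(normal) + unit(principal)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An R-squared of 1.0 means the regressor reproduces the targets exactly; predicting the mean everywhere scores 0.0, which is the baseline the 23 regression methods must beat.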

Findings

Several surface examples are analyzed for the feature vector using classification (31 methods) and regression (23 methods) machine learning algorithms. In addition, two ensemble methods XGBoost and LightGBM are used for classification and regression. Also, the scores for each surface example are compared.

Originality/value

To the best of the author’s knowledge, this is the first study to analyze datasets based on geodesics using machine learning algorithms for classification and regression.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have…

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we propose a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
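
A heavily simplified binary grey wolf optimizer for feature selection might look like the sketch below. It is not the GWOFS algorithm from the paper: a bit-copying move toward the alpha, beta and delta wolves stands in for the continuous position update, and the fitness function (e.g. validation accuracy of an MLP on the selected features, minus a size penalty) is supplied by the caller.

```python
import random

def gwo_feature_selection(fitness, n_features, n_wolves=8, n_iters=30, seed=0):
    """Simplified binary grey wolf optimizer. Wolves are 0/1 feature masks;
    each iteration every non-leader wolf moves toward the three best masks
    (alpha, beta, delta) by copying their bits, with random exploration
    that shrinks over time. Illustrative sketch only."""
    rng = random.Random(seed)
    wolves = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_wolves)]
    for it in range(n_iters):
        wolves.sort(key=fitness, reverse=True)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 * (1 - it / n_iters)  # exploration factor decays linearly
        for w in wolves[3:]:        # leaders are kept unchanged
            for j in range(n_features):
                if rng.random() < a / 2:
                    w[j] = rng.randint(0, 1)          # explore: random bit
                else:
                    w[j] = rng.choice((alpha[j], beta[j], delta[j]))  # exploit
    return max(wolves, key=fitness)
```

With a fitness that rewards two informative features and penalizes mask size, the optimizer reliably converges on a mask that includes both.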

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 8 September 2022

Manoj Palsodkar, Gunjan Yadav and Madhukar R. Nagare

The market's intense competition, the unpredictability of customer demands and technological advancements are compelling organizations to adopt new approaches, such as agile new…

Abstract

Purpose

The market's intense competition, the unpredictability of customer demands and technological advancements are compelling organizations to adopt new approaches, such as agile new product development (ANPD), which enables the introduction of new products to the market in a short span. Existing ANPD literature review articles fall short in portraying recent developments, potential fields of adoption and the significance of ANPD in organizational development. The primary goal of this article is to investigate emerging aspects and current trends and to conduct a meta-analysis through a systematic review of 177 ANPD articles published in peer-reviewed journals between 1998 and 2020.

Design/methodology/approach

The articles were categorized based on their year of publication, publishers, journals, authors, countries, universities, most cited articles, etc. The authors attempted to identify top journals, authors, most cited articles, enablers, barriers, performance metrics, etc. in the ANPD domain through the presented study.

Findings

The major themes of research articles, gaps and future trends are identified to assist academicians and ANPD practitioners. This study will benefit ANPD professionals by providing them with information on available literature and current ANPD trends.

Originality/value

Through meta-analysis, this study is a unique attempt to categorize ANPD articles to identify research gaps and highlight future research trends. A distinguishing feature of the presented study is the identification of active journals, publishers and authors, as well as enablers, barriers and performance metrics.

Details

Benchmarking: An International Journal, vol. 30 no. 9
Type: Research Article
ISSN: 1463-5771


Open Access
Article
Publication date: 5 December 2023

Ali Zarifhonarvar

The study investigates the influence of ChatGPT on the labor market dynamics, aiming to provide a structured understanding of the changes induced by generative AI technologies.


Abstract

Purpose

The study investigates the influence of ChatGPT on the labor market dynamics, aiming to provide a structured understanding of the changes induced by generative AI technologies.

Design/methodology/approach

An analysis of existing literature serves as the foundation for understanding the impact, while the supply and demand model helps assess the effects of ChatGPT. A text-mining approach is utilized to analyze the International Standard Occupation Classification, identifying occupations most susceptible to disruption by ChatGPT.
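
A minimal keyword-overlap scorer hints at how occupation descriptions might be bucketed into fully impacted, partially impacted and unaffected groups. Everything here (the keyword list, the thresholds and the category names) is a hypothetical illustration, not the study's actual text-mining method.

```python
def classify_exposure(description, task_keywords, full_cut=0.5, partial_cut=0.0):
    """Score an occupation description by the share of its words matching
    a (hypothetical) list of text-based task keywords, then bucket it as
    'full', 'partial' or 'no' exposure. Illustrative sketch only."""
    words = description.lower().split()
    hits = sum(1 for w in words if w in task_keywords)
    share = hits / len(words) if words else 0.0
    if share > full_cut:
        return "full"
    if share > partial_cut:
        return "partial"
    return "no"
```

Applied over every entry in an occupation classification, the three buckets would yield percentages analogous in form (though not in method) to the 32.8% / 36.5% / 30.7% split reported below.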

Findings

The study reveals that 32.8% of occupations could be fully impacted by ChatGPT, while 36.5% might experience a partial impact and 30.7% are likely to remain unaffected.

Research limitations/implications

While this study offers insights into the potential influence of ChatGPT and other generative AI services on the labor market, it is essential to note that these findings represent potential implications rather than realized labor market effects. Further research is needed to track actual changes in employment patterns and job market dynamics where these AI services are widely adopted.

Originality/value

This paper contributes to the field by systematically categorizing the level of impact on different occupations, providing a nuanced perspective on the short- and long-term implications of ChatGPT and similar generative AI services on the labor market.

Details

Journal of Electronic Business & Digital Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-4214


Article
Publication date: 27 February 2023

Fatima-Zahrae Nakach, Hasnae Zerouaoui and Ali Idri

Histopathology biopsy imaging is currently the gold standard for the diagnosis of breast cancer in clinical practice. Pathologists examine the images at various magnifications to…

Abstract

Purpose

Histopathology biopsy imaging is currently the gold standard for the diagnosis of breast cancer in clinical practice. Pathologists examine the images at various magnifications to identify the type of tumor because if only one magnification is taken into account, the decision may not be accurate. This study explores the performance of transfer learning and late fusion to construct multi-scale ensembles that fuse different magnification-specific deep learning models for the binary classification of breast tumor slides.

Design/methodology/approach

Three pretrained deep learning techniques (DenseNet 201, MobileNet v2 and Inception v3) were used to classify breast tumor images over the four magnification factors of the Breast Cancer Histopathological Image Classification dataset (40×, 100×, 200× and 400×). To fuse the predictions of the models trained on different magnification factors, different aggregators were used, including weighted voting and seven meta-classifiers trained on slide predictions using class labels and the probabilities assigned to each class. The best cluster of the outperforming models was chosen using the Scott–Knott statistical test, and the top models were ranked using the Borda count voting system.
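
Two of the aggregators named above, weighted voting over class probabilities and Borda count ranking, are simple enough to sketch directly. The data shapes (per-model probability dictionaries, per-criterion ranked lists) are assumptions for illustration.

```python
def weighted_vote(predictions, weights):
    """Late fusion by weighted voting: each magnification-specific model
    contributes its class probabilities, scaled by the model's weight;
    the class with the largest fused score wins."""
    fused = {}
    for probs, w in zip(predictions, weights):
        for label, p in probs.items():
            fused[label] = fused.get(label, 0.0) + w * p
    return max(fused, key=fused.get)

def borda_ranking(rankings):
    """Borda count: in a ranking of n models, the r-th place (0-based)
    earns n - 1 - r points; models are ordered by total points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for r, model in enumerate(ranking):
            scores[model] = scores.get(model, 0) + (n - 1 - r)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, three magnification-specific models voting on a slide fuse into a single label, and rankings from several performance criteria fuse into one overall ordering of models.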

Findings

This study recommends the use of transfer learning and late fusion for histopathological breast cancer image classification by constructing multi-magnification ensembles because they perform better than models trained on each magnification separately.

Originality/value

The best multi-scale ensembles outperformed state-of-the-art integrated models and achieved a mean accuracy of 98.82 per cent, precision of 98.46 per cent, recall of 100 per cent and an F1-score of 99.20 per cent.

Details

Data Technologies and Applications, vol. 57 no. 5
Type: Research Article
ISSN: 2514-9288

