Search results

1 – 10 of 389
Article
Publication date: 29 March 2024

Sihao Li, Jiali Wang and Zhao Xu

Abstract

Purpose

The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.

Design/methodology/approach

This study first analyzed typical building standards in the fields of architecture and fire protection, and an ontology of their elements was developed. Based on this, a building standard corpus was built, and deep learning models were trained to automatically label the building standard texts. Neo4j was used for knowledge graph construction and storage, and a data extraction method based on Dynamo was designed to obtain checking data files. Finally, a matching algorithm was devised to express the logical rules of knowledge graph triples, resulting in automated compliance checking of BIM models.
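The core idea of the final step, matching rule triples from a knowledge graph against data extracted from a BIM model, can be illustrated with a minimal sketch. All rule values, element attributes and identifiers below are invented for illustration; the paper's actual ontology, matching algorithm and Neo4j schema are not reproduced here.

```python
# Each rule triple: (element_type, attribute, (operator, threshold)).
# These would come from the knowledge graph; values here are hypothetical.
rules = [
    ("FireDoor", "width_mm", (">=", 900)),
    ("Corridor", "clear_width_mm", (">=", 1200)),
]

# Data extracted from the BIM model (e.g. via a Dynamo export); invented.
elements = [
    {"type": "FireDoor", "id": "D-101", "width_mm": 850},
    {"type": "Corridor", "id": "C-01", "clear_width_mm": 1500},
]

OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

def check(elements, rules):
    """Return (element_id, attribute, actual_value, rule) for each violation."""
    violations = []
    for etype, attr, (op, threshold) in rules:
        for el in elements:
            if el["type"] == etype and attr in el:
                if not OPS[op](el[attr], threshold):
                    violations.append((el["id"], attr, el[attr], f"{op} {threshold}"))
    return violations

print(check(elements, rules))  # D-101 violates the 900 mm minimum width rule
```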

Findings

Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.

Originality/value

This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and automates both the construction of domain knowledge graphs and the checking of BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 12 December 2023

Laura Lucantoni, Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua

Abstract

Purpose

Overall Equipment Effectiveness (OEE) is considered a standard for measuring equipment productivity in terms of efficiency. However, Artificial Intelligence solutions are rarely used for analyzing OEE results and identifying corrective actions. Therefore, the approach proposed in this paper aims to provide a new rule-based Machine Learning (ML) framework for OEE enhancement and the selection of improvement actions.

Design/methodology/approach

Association Rules (ARs) are used as a rule-based ML method for extracting knowledge from large datasets. First, the dominant loss class is identified, and traditional methodologies are combined with ARs for anomaly classification and prioritization. Once priority anomalies are selected, a detailed analysis investigates their influence on the OEE loss factors using ARs and Network Analysis (NA). A Deming Cycle is then used as a roadmap for applying the proposed methodology, testing and implementing proactive actions while monitoring the OEE variation.
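The basic quantities behind association-rule mining, support and confidence, can be sketched on a toy example. The transactions (sets of anomalies logged per shift) and the anomaly names below are invented; the paper's data and its exact mining procedure are not reproduced here.

```python
# Each transaction: the set of anomalies/losses recorded in one shift (invented).
transactions = [
    {"speed_loss", "minor_stop"},
    {"speed_loss", "minor_stop", "setup"},
    {"setup", "breakdown"},
    {"speed_loss", "minor_stop"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimate of P(consequent | antecedent) over the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {speed_loss} -> {minor_stop}
print(support({"speed_loss", "minor_stop"}))       # 0.75
print(confidence({"speed_loss"}, {"minor_stop"}))  # 1.0
```

An Apriori-style miner enumerates candidate itemsets and keeps the rules whose support and confidence exceed chosen thresholds; this sketch only shows the two measures themselves.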

Findings

The proposed method was tested in an automotive company to validate the framework and measure its impact. In particular, results highlighted that the rule-based ML methodology for OEE improvement addressed seven anomalies within a year through appropriate proactive actions: on average, each action ensured an OEE gain of 5.4%.

Originality/value

The originality is related to the dual application of association rules in two different ways for extracting knowledge from the overall OEE. In particular, the co-occurrences of priority anomalies and their impact on asset Availability, Performance and Quality are investigated.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 5
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 29 March 2024

Anil Kumar Goswami, Anamika Sinha, Meghna Goswami and Prashant Kumar

Abstract

Purpose

This study aims to extend and explore patterns and trends of research in the linkage of big data and knowledge management (KM) by identifying growth in terms of numbers of papers and current and emerging themes and to propose areas of future research.

Design/methodology/approach

The study was conducted by systematically extracting, analysing and synthesizing the literature related to linkage between big data and KM published in top-tier journals in Web of Science (WOS) and Scopus databases by exploiting bibliometric techniques along with theory, context, characteristics, methodology (TCCM) analysis.

Findings

The study unfolds four major themes of linkage between big data and KM research, namely (1) conceptual understanding of big data as an enabler for KM, (2) big data–based models and frameworks for KM, (3) big data as a predictor variable in KM context and (4) big data applications and capabilities. It also highlights TCCM of big data and KM research through which it integrates a few previously reported themes and suggests some new themes.

Research limitations/implications

This study extends previous reviews by adding a new timeline, identifying new themes and aiding understanding of the complex and emerging field of linkage between big data and KM. The study outlines a holistic view of the research area and suggests directions for future research in it.

Practical implications

This study highlights the role of big data in the KM context in enhancing organizational performance and efficiency. A summary of the existing literature and future avenues in this direction will help, guide and motivate managers to think beyond traditional data and incorporate big data into organizational knowledge infrastructure in order to gain competitive advantage.

Originality/value

To the best of the authors’ knowledge, the present study is the first to examine big data and KM research in depth using bibliometric and TCCM analysis, and it thus adds a new theoretical perspective to the existing literature.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 6 February 2024

Somayeh Tamjid, Fatemeh Nooshinfard, Molouk Sadat Hosseini Beheshti, Nadjla Hariri and Fahimeh Babalhavaeji

Abstract

Purpose

The purpose of this study is to develop a domain independent, cost-effective, time-saving and semi-automated ontology generation framework that could extract taxonomic concepts from unstructured text corpus. In the human disease domain, ontologies are found to be extremely useful for managing the diversity of technical expressions in favour of information retrieval objectives. The boundaries of these domains are expanding so fast that it is essential to continuously develop new ontologies or upgrade available ones.

Design/methodology/approach

This paper proposes a semi-automated approach that extracts entities and relations via text mining of scientific publications. A text-mining-based ontology tool, named TmbOnt, is developed to assist a user in capturing, processing and establishing ontology elements. The tool takes a pile of unstructured text files as input and projects them into high-value entities and relations as output. As a semi-automated approach, a user supervises the process, filters meaningful predecessor/successor phrases and finalizes the demanded ontology-taxonomy. To verify the practical capabilities of the scheme, a case study was performed to derive a glaucoma ontology-taxonomy. For this purpose, text files containing 10,000 records were collected from PubMed.
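The flavor of extracting candidate taxonomic (is-a) pairs from raw text for a user to review can be sketched with a naive lexical pattern. The sentences and the pattern below are invented for illustration; TmbOnt's actual processing pipeline is not reproduced here.

```python
import re

# Invented example sentences standing in for PubMed abstract text.
corpus = [
    "Glaucoma is a disease of the optic nerve.",
    "Open-angle glaucoma is a form of glaucoma.",
]

# Very naive "X is a (form of|disease of) Y" pattern; real pipelines use
# tokenization, POS tagging and richer phrase filtering.
PATTERN = re.compile(r"^(.+?) is a (?:form of |disease of )?(?:the )?(.+?)\.$")

def extract_pairs(sentences):
    """Return candidate (narrower, broader) taxonomy pairs for human review."""
    pairs = []
    for s in sentences:
        m = PATTERN.match(s)
        if m:
            pairs.append((m.group(1).lower(), m.group(2).lower()))
    return pairs

print(extract_pairs(corpus))
```

In a semi-automated workflow such as the one described, the user would then accept, reject or edit each candidate pair before it enters the taxonomy.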

Findings

The proposed approach processed over 3.8 million tokenized terms from those records and yielded the resultant glaucoma ontology-taxonomy. Compared with well-known disease ontologies, the TmbOnt-derived taxonomy demonstrated a 60%–100% coverage ratio against established medical thesauruses and ontology taxonomies, such as the Human Disease Ontology, Medical Subject Headings and the National Cancer Institute Thesaurus, with an average of 70% additional terms recommended for ontology development.

Originality/value

According to the literature, the proposed scheme demonstrated novel capability in expanding the ontology-taxonomy structure with a semi-automated text mining approach, aiming for future fully-automated approaches.

Details

The Electronic Library, vol. 42 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 4 April 2024

Rita Sleiman, Quoc-Thông Nguyen, Sandra Lacaze, Kim-Phuc Tran and Sébastien Thomassey

Abstract

Purpose

We propose a machine learning-based methodology to deal with data collected from a mobile application asking users their opinion of fashion products. Based on different machine learning techniques, the proposed approach relies on the data value chain principle to enrich data into knowledge, insights and learning experience.

Design/methodology/approach

Online interaction and the usage of social media have dramatically altered both consumers’ behaviors and business practices. Companies invest in social media platforms and digital marketing to increase their brand awareness and boost their sales. For fashion retailers especially, understanding consumers’ behavior before launching a new collection is crucial to reduce overstock situations. In this study, we aim to help retailers better understand consumers’ differing assessments of newly introduced products.

Findings

By creating new product-related and user-related attributes, the proposed prediction model attains an average accuracy of 70.15% when evaluating the potential success of future products during the design process of the collection. Results showed that by harnessing artificial intelligence techniques, along with social media data and mobile apps, new ways of interacting with clients and understanding their preferences are established.

Practical implications

From a practical point of view, the proposed approach helps businesses better target their marketing campaigns, localize their potential clients and adjust manufactured quantities.

Originality/value

The originality of the proposed approach lies in (1) the implementation of the data value chain principle to enrich the raw data collected from mobile apps and improve the prediction model’s performance, and (2) the combination of consumer and product attributes to provide an accurate prediction of new fashion products.

Details

International Journal of Clothing Science and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 17 November 2023

Ahmad Ebrahimi and Sara Mojtahedi

Abstract

Purpose

Warranty-based big data analysis has attracted a great deal of attention because of its key role in improving product quality while minimizing costs. Information about the repair and replacement of particular parts (components) during the warranty term, usually stored in the after-sales service database, can be used to solve problems in a variety of sectors. Given the small number of studies in the literature that fully analyze part-failure patterns in the automotive industry, this paper focuses on discovering and assessing the impact of lesser-studied factors on the failure of auto parts during the warranty period, using the after-sales data of an automotive manufacturer.

Design/methodology/approach

The interconnected method used in this study for analyzing failure patterns is formed by combining association rules (AR) mining and Bayesian networks (BNs).

Findings

This research utilized AR analysis to extract valuable information from warranty data, exploring the relationship between component failure, time and location. Additionally, BNs were employed to investigate other potential factors influencing component failure, which could not be identified using Association Rules alone. This approach provided a more comprehensive evaluation of the data and valuable insights for decision-making in relevant industries.

Originality/value

This study's findings are believed to be practical in achieving a better dissection of failure patterns and providing a comprehensive package that can be used to increase component quality and move beyond cross-sectional solutions. The integration of these methods allowed for a wider exploration of potential factors influencing component failure, enhancing the validity and depth of the research findings.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 4
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 9 April 2024

Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen

Abstract

Purpose

With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although some existing models handle such problems well, they still fall short in several respects. The purpose of this paper is to improve the accuracy of credit assessment models.

Design/methodology/approach

In this paper, three stages are used to improve the classification performance of LSTM so that financial institutions can more accurately identify borrowers at risk of default. First, the K-Means-SMOTE algorithm is used to mitigate the class imbalance. Second, ResNet is used for feature extraction, and a two-layer LSTM is then used for learning, strengthening the neural network's ability to mine and exploit deep information. Finally, model performance is improved by using the IDWPSO algorithm to optimize the neural network.
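The oversampling idea underlying SMOTE-family methods, generating synthetic minority samples by interpolating between a minority point and a minority neighbour, can be sketched minimally. The data points are invented, and K-Means-SMOTE's additional clustering step is omitted; this is an illustration, not the paper's implementation.

```python
import random

# Invented 2-D minority-class points.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def smote_like(points, n_new, seed=0):
    """Generate n_new synthetic points by linear interpolation between pairs."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(points, 2)   # a minority point and a "neighbour"
        t = rng.random()               # interpolation factor in [0, 1)
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

new_points = smote_like(minority, 3)
print(len(new_points))  # 3 synthetic minority samples
```

Balancing the classes this way before training helps a classifier such as the ResNet-LSTM described above avoid simply predicting the majority class.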

Findings

On two unbalanced datasets (category ratios of 700:1 and 3:1 respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. It was demonstrated that the multi-stage improved model showed a more significant advantage in evaluating the imbalanced credit dataset.

Originality/value

In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 18 January 2024

Yelena Smirnova and Victoriano Travieso-Morales

Abstract

Purpose

The general data protection regulation (GDPR) was designed to address privacy challenges posed by globalisation and rapid technological advancements; however, its implementation has also introduced new hurdles for companies. This study aims to analyse and synthesise the existing literature that focuses on challenges of GDPR implementation in business enterprises, while also outlining the directions for future research.

Design/methodology/approach

The methodology of this review follows the preferred reporting items for systematic reviews and meta-analysis guidelines. It uses an extensive search strategy across Scopus and Web of Science databases, rigorously applying inclusion and exclusion criteria, yielding a detailed analysis of 16 selected studies that concentrate on GDPR implementation challenges in business organisations.

Findings

The findings indicate a predominant use of conceptual study methodologies in prior research, often limited to specific countries and technology-driven sectors. There is also an inclination towards exploring GDPR challenges within small and medium enterprises, while larger enterprises remain comparatively unexplored. Additionally, further investigation is needed to understand the implications of emerging technologies on GDPR compliance.

Research limitations/implications

This study’s limitations include the search strategy’s reliance on two databases, the potential exclusion of relevant research, the limited existing literature on GDPR implementation challenges in a business context and the possible influence of the diverse methodologies and contexts of previous studies on the generalisability of the findings.

Originality/value

The originality of this review lies in its exclusive focus on analysing GDPR implementation challenges within the business context, coupled with a fresh categorisation of these challenges into technical, legal, organisational, and regulatory dimensions.

Details

International Journal of Law and Management, vol. 66 no. 3
Type: Research Article
ISSN: 1754-243X

Article
Publication date: 25 March 2024

Hyoungjin Lee and Jeoung Yul Lee

Abstract

Purpose

This study examines how the characteristics of innovation knowledge exchanged among affiliate firms affect the ownership strategies adopted for their foreign subsidiaries.

Design/methodology/approach

This study employs a cross-classified multilevel model to examine a sample of 185 Korean manufacturing affiliates derived from 49 Chaebols engaged in international diversification, along with their 1,110 foreign manufacturing subsidiaries.

Findings

While exploratory innovation knowledge exchange lowers the affiliate’s level of ownership in its foreign subsidiary, exploitative innovation knowledge exchange increases it.

Research limitations/implications

This study advances the literature on intrafirm knowledge exchange by highlighting it as a determinant of ownership strategies. The study further shows that the characteristics of knowledge exchanged at the affiliate level not only determine the ownership structure but also have the potential to shape the direction in which the subsidiary develops its competencies.

Practical implications

This study has practical implications for the managers of business group affiliates. The results suggest that managers should adapt their ownership strategies according to the type of knowledge exchanged at the affiliate level to achieve a balanced and synergistic effect on intraorganizational knowledge exchange.

Originality/value

Previous studies have extensively explored the performance implications related to knowledge exchange. However, there is a notable gap in understanding the mechanisms through which the value of knowledge transferred within an affiliate is realized. To address this gap, this study focuses on ownership strategy as a crucial factor and empirically examines how the characteristics of innovation knowledge exchanged among affiliate firms influence the ownership strategies adopted for their foreign subsidiaries. By investigating this relationship, this study provides valuable insights into the complex dynamics of knowledge exchange and its effect on ownership decisions within business group affiliates.

Details

Cross Cultural & Strategic Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5794

Article
Publication date: 13 December 2023

Marina Proença, Bruna Cescatto Costa, Simone Regina Didonet, Ana Maria Machado Toaldo, Tomas Sparano Martins and José Roberto Frega

Abstract

Purpose

This study aims to investigate organizational learning, represented by absorptive capacity, as a condition for the firm to learn from marketing data and make more informed decisions. The authors also aimed to understand, through a multilevel perspective, how the behavior of micro, small and medium enterprises (MSMEs) differs in this scenario.

Design/methodology/approach

Placing absorptive capacity as a mediator of the relationship between business analytics and rational marketing decisions, the authors analyzed data from 224 Brazilian retail companies using structural equation modeling estimated with partial least squares. To test the cross-level moderation effect, the authors also performed a multilevel analysis in RStudio.

Findings

The authors found a partial mediation of absorptive capacity in the relation between business analytics and rational marketing decisions. They also discovered that, within the MSME group, even though smaller companies find it more difficult to use data, those that do may reap more benefits than larger ones, owing to the influence of size on how firms handle information.

Research limitations/implications

The sample size, despite having been shown to be consistent and valid, is considered small for a multilevel study. This suggests that the multilevel results should be viewed as suggestive rather than conclusive and subject to further validation.

Practical implications

Rather than solely positioning business analytics as a tool for decision support, the authors’ analysis highlights the importance for firms to develop the absorptive capacity to enable ongoing acquisition, exploration and management of knowledge.

Social implications

MSMEs are of economic and social importance to most countries, especially developing ones. This research aimed to improve understanding of how this group of firms can transform knowledge into better decisions. The authors also highlight micro and small firms’ difficulties with the use of marketing data so that they can adopt more effective practices.

Originality/value

The research contributes to the understanding of organizational mechanisms to absorb and learn from the vast amount of current marketing information. Recognizing the relevance of MSMEs, a preliminary multilevel analysis was also conducted to comprehend differences within this group.
