Search results
1 – 10 of 631

Diego Espinosa Gispert, Ibrahim Yitmen, Habib Sadri and Afshin Taheri
Abstract
Purpose
The purpose of this research is to develop a framework of an ontology-based Asset Information Model (AIM) for a Digital Twin (DT) platform and to enhance predictive maintenance practices in building facilities, enabling proactive and data-driven decision-making during the Operation and Maintenance (O&M) process.
Design/methodology/approach
A scoping literature review was conducted to establish the theoretical foundation for the investigation. A study on developing an ontology-based AIM for predictive maintenance in building facilities was then carried out. Semi-structured interviews with industry professionals gathered qualitative data to validate the ontology-based AIM framework and provide insights.
Findings
The research findings indicate that while the development of the ontology faced challenges in defining missing entities and relations in the context of predictive maintenance, insights gained from the interviews enabled the establishment of a comprehensive framework for ontology-based AIM adoption in the Facility Management (FM) sector.
Practical implications
The proposed ontology-based AIM has the potential to enable proactive and data-driven decision-making during the O&M process, optimizing predictive maintenance practices and ultimately enhancing energy efficiency and sustainability in the building industry.
Originality/value
The research contributes a practical guide for ontology development processes and presents a framework of an ontology-based AIM for a Digital Twin platform.
Weifei Hu, Tongzhou Zhang, Xiaoyu Deng, Zhenyu Liu and Jianrong Tan
Abstract
Digital twin (DT) is an emerging technology that enables sophisticated interaction between physical objects and their virtual replicas. Although DT has recently gained significant traction in both industry and academia, there is no systematic understanding of DT from its development history to its different concepts and applications in disparate disciplines. The majority of DT literature focuses on the conceptual development of DT frameworks for a specific implementation area. Hence, this paper provides a state-of-the-art review of DT history, different definitions and models, and six types of key enabling technologies. The review also provides a comprehensive survey of DT applications from two perspectives: (1) applications in four product-lifecycle phases, i.e. product design, manufacturing, operation and maintenance, and recycling and (2) applications in four categorized engineering fields, including aerospace engineering, tunneling and underground engineering, wind engineering and Internet of things (IoT) applications. DT frameworks, characteristic components, key technologies and specific applications are extracted for each DT category in this paper. A comprehensive survey of the DT references reveals the following findings: (1) The majority of existing DT models only involve one-way data transfer from physical entities to virtual models and (2) There is a lack of consideration of the environmental coupling, which results in the inaccurate representation of the virtual components in existing DT models. Thus, this paper highlights the role of environmental factors in DT enabling technologies and in categorized engineering applications. In addition, the review discusses the key challenges and outlines future work for constructing DTs of complex engineering systems.
Bedour M. Alshammari, Fairouz Aldhmour, Zainab M. AlQenaei and Haidar Almohri
Abstract
Purpose
There is a gap in knowledge about the Gulf Cooperation Council (GCC) because most studies are undertaken in countries outside the Gulf region – such as China, India, the US and Taiwan. The stock market contains rich, valuable and considerable data, and these data need careful analysis for good decisions to be made that can increase the efficiency of a business. Data mining techniques offer data processing tools and applications that enhance decision-making. This study aims to predict the Kuwait stock market by applying big data mining.
Design/methodology/approach
The methodology is quantitative, using mathematical and statistical models that describe the relationships among a varied array of variables. Four techniques were implemented to predict the direction of stock market returns: logistic regression, decision trees, support vector machines and random forests.
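The abstract names the techniques but gives no implementation detail. As a minimal, hypothetical sketch of one of the four (logistic regression) predicting up/down return direction, trained by plain gradient descent on synthetic data (every variable and feature here is invented for illustration, not the study's data):

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression classifier with per-sample gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted P(direction = up)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0 else 0                    # 1 = up, 0 = down

# Synthetic example: direction loosely follows the sign of one "index" feature.
random.seed(0)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
y = [1 if x[0] > 0 else 0 for x in X]
w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On such cleanly separable toy data the classifier recovers the direction rule almost perfectly; the study's roughly 50% accuracy on real market data reflects how much noisier actual returns are.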
Findings
All variables are statistically significant at the 5% level except gold price and oil price. The variables with no influence on the direction of the rate of return of Boursa Kuwait are money supply and gold price, unlike the Kuwait index, which has the highest coefficient. Furthermore, the variable with the highest score affecting the direction of the rate of return is the firms, and the overall accuracy of the four models is nearly 50%.
Research limitations/implications
Some of the limitations identified for this study are as follows: (1) location limitation: the Kuwait Stock Exchange; (2) time limitation: the amount of time available to accomplish the study, which was completed within the academic years 2019-2020 and 2020-2021; the coronavirus (COVID-19) pandemic was a major obstacle during data collection and analysis in 2020; (3) data limitation: the Kuwait Stock Exchange data were collected from May 2019 to March 2020, while data on the factors affecting the stock exchange were collected in July 2020 due to the pandemic.
Originality/value
The study used new variables and techniques, such as data mining, to predict the Kuwait stock market. There are no adequate studies that predict the stock market by data mining in the GCC, especially in Kuwait; most such studies concern countries outside the region, such as China, India, the US and Taiwan.
Bachriah Fatwa Dhini, Abba Suganda Girsang, Unggul Utan Sufandi and Heny Kurniawati
Abstract
Purpose
The authors constructed an automatic essay scoring (AES) model in a discussion forum whose results were compared with scores given by human evaluators. This research proposes essay scoring conducted through two parameters, semantic and keyword similarity, using SentenceTransformers pre-trained models that construct the best-performing vector embeddings. These models are combined to optimize the model and increase accuracy.
Design/methodology/approach
The development of the model in the study is divided into seven stages: (1) data collection, (2) data pre-processing, (3) selection of a pre-trained SentenceTransformers model, (4) semantic similarity (sentence pair), (5) keyword similarity, (6) final score calculation and (7) model evaluation.
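Stages (4)-(6) can be illustrated without the SentenceTransformers dependency. In the sketch below, a bag-of-words cosine stands in for the pre-trained embedding model and a keyword-overlap ratio stands in for the rubric comparison; the weighting, function names and example texts are all assumptions for illustration, not the authors' formula:

```python
from collections import Counter
from math import sqrt

def cosine_bow(a: str, b: str) -> float:
    """Bag-of-words cosine - a crude stand-in for SentenceTransformers embeddings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_similarity(answer: str, rubric_keywords: set[str]) -> float:
    """Fraction of rubric keywords that appear in the student answer."""
    tokens = set(answer.lower().split())
    return len(tokens & rubric_keywords) / len(rubric_keywords) if rubric_keywords else 0.0

def final_score(answer: str, reference: str, rubric_keywords: set[str], w_sem=0.5) -> float:
    """Stage (6): weighted combination of semantic and keyword similarity."""
    sem = cosine_bow(answer, reference)
    kw = keyword_similarity(answer, rubric_keywords)
    return w_sem * sem + (1 - w_sem) * kw

reference = "photosynthesis converts light energy into chemical energy"
score = final_score("photosynthesis converts light energy into chemical energy",
                    reference, {"photosynthesis", "energy"})
```

An answer identical to the lecturer's reference scores 1.0 under both parameters; unrelated text scores near 0, which is the behavior the combined scoring aims for.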
Findings
The multilingual paraphrase-multilingual-MiniLM-L12-v2 and distilbert-base-multilingual-cased-v1 models got the highest scores in a comparison of 11 pre-trained multilingual SentenceTransformers models on Indonesian data (Dhini and Girsang, 2023). Both multilingual models were adopted in this study. A combination of the two parameters is obtained by comparing the keyword extraction responses with the rubric keywords. Based on the experimental results, the proposed combination increases the evaluation results by 0.2.
Originality/value
This study uses discussion forum data from the general biology course in online learning at the open university for the 2020.2 and 2021.2 semesters. Forum discussion ratings are still manual. In this study, the authors created a model that automatically scores discussion forum essays against the lecturer's answers and rubrics.
Haosen Liu, Youwei Wang, Xiabing Zhou, Zhengzheng Lou and Yangdong Ye
Abstract
Purpose
Railway signal equipment failure diagnosis is vital to keeping the railway system operating safely. One of the main difficulties in signal equipment failure diagnosis is the uncertainty of the causality between an accident's consequence and its cause. The traditional approach to this problem is based on Bayesian networks, which require a rigid independence assumption and prior probability knowledge while ignoring the semantic relationships in causality analysis. This paper aims to address the uncertainty of causality in signal equipment failure diagnosis in a new way that emphasizes mining semantic relationships.
Design/methodology/approach
This study proposes a deterministic failure diagnosis (DFD) model based on a question answering system for railway signal equipment failure diagnosis. It comprises a failure diagnosis module and a deterministic diagnosis module. In the failure diagnosis module, the question answering system recognises the cause of a failure consequence. It is composed of multi-layer neural networks that extract position and part-of-speech features of the text data in the lower layers, acquire contextual and interactive features via Bi-LSTM and Match-LSTM, respectively, in the higher layers, and then generate the candidate failure cause set through the proposed enhanced boundary unit. In the second module, the study ranks the candidate failure cause set with a semantic matching mechanism (SMM), choosing the top-1 semantic matching degree as the deterministic failure causative factor.
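The deterministic step, ranking candidates and keeping the top-1 match, can be sketched with the standard library's difflib as a stand-in for the paper's learned semantic matching mechanism; the candidate failure causes below are invented examples, not from the paper's data set:

```python
from difflib import SequenceMatcher

def semantic_match_degree(a: str, b: str) -> float:
    """Surface-string similarity - a stand-in; the paper's SMM uses learned
    semantic features rather than character matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def deterministic_cause(consequence: str, candidate_causes: list[str]) -> str:
    """Rank the candidate set and return the top-1 match as the deterministic cause."""
    return max(candidate_causes, key=lambda c: semantic_match_degree(consequence, c))

candidates = ["track circuit shunt failure",
              "signal lamp filament broken",
              "point machine jammed"]
cause = deterministic_cause("signal lamp does not light up", candidates)
```

Whatever similarity function is plugged in, taking the argmax over the candidate set is what turns an uncertain ranking into a single deterministic diagnosis.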
Findings
Experiments on a real railway signal equipment maintenance data set show that the proposed DFD model can implement deterministic diagnosis of railway signal equipment failure. Compared with many existing methods, the model achieves the state of the art in natural semantic understanding for the railway signal equipment diagnosis domain.
Originality/value
This is the first time a question answering system has been used to execute signal equipment failure diagnosis, which makes failure diagnosis more intelligent than before. The enhanced boundary unit enables the DFD model to understand natural semantics in long-sequence contexts. The SMM then lets the DFD model acquire the deterministic failure cause in the failure diagnosis of railway signal equipment.
Wang Zengqing, Zheng Yu Xie and Jiang Yiling
Abstract
Purpose
With the rapid development of railway-intelligent video technology, scene understanding is becoming more and more important, and semantic segmentation is a major part of it. There is an urgent need for an algorithm with high accuracy and real-time performance to meet current railway identification requirements. In response to this demand, this paper aims to explore a variety of models and to accurately locate and segment important railway signs based on the improved SegNeXt algorithm, supplementing the railway safety protection system and raising its level of intelligence.
Design/methodology/approach
This paper studies the performance of existing models on RailSem19 and uses that performance to expose each model's defects, so as to work toward an algorithm dedicated to railway semantic segmentation. The authors explore the optimal configuration of the SegNeXt model for railway scenes and achieve the paper's aim by improving the encoder and decoder structure.
Findings
This paper proposes an improved SegNeXt algorithm. It first explores the performance of various models on railways, studies the problems of semantic segmentation in this setting and analyzes them in detail. While retaining SegNeXt's original, excellent MSCAN encoder, multiscale information fusion is used to further extract detailed features, together with multi-head attention and masking, solving the original SegNeXt algorithm's inaccurate segmentation of objects. The improved algorithm is of great significance for the segmentation and recognition of railway signs.
Research limitations/implications
The model constructed in this paper has advantages in segmenting distant small objects, but the rail itself still suffers from segmentation fractures and is not completely segmented. In addition, in the throat area, the complexity of the trackwork makes the segmentation results inaccurate.
Social implications
The identification and segmentation of railway signs based on the improved SegNeXt algorithm is of great significance for understanding existing railway scenes: it can greatly improve the classification and recognition of small railway object features and, in turn, the degree of railway security.
Originality/value
This article introduces an enhanced version of the SegNeXt algorithm, which aims to improve the accuracy of semantic segmentation on railways. The study begins by investigating the performance of different models in railway scenarios and identifying the challenges associated with semantic segmentation on this particular domain. To address these challenges, the proposed approach builds upon the strong foundation of the original SegNeXt algorithm, leveraging techniques such as multi-scale information fusion, multi-head attention, and masking to extract finer details and enhance feature representation. By doing so, the improved algorithm effectively resolves the issue of inaccurate object segmentation encountered in the original SegNeXt algorithm. This advancement holds significant importance for the accurate recognition and segmentation of railway signage.
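The multi-head attention the improved decoder leverages can be sketched generically. The block below is plain scaled dot-product multi-head self-attention in NumPy, with random matrices standing in for learned projection weights; it illustrates the mechanism only and is not the authors' SegNeXt decoder:

```python
import numpy as np

def multi_head_attention(x, num_heads, rng):
    """Scaled dot-product multi-head self-attention over a token sequence.
    x: (seq_len, d_model). Random weights stand in for learned parameters."""
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) * 0.1
                          for _ in range(4))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    out = np.zeros_like(x)
    for h in range(num_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        out[:, sl] = weights @ v[:, sl]                  # attend per head
    return out @ w_o                                     # merge heads

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))   # 16 "pixel tokens", model width 32
attended = multi_head_attention(tokens, num_heads=4, rng=rng)
```

In a segmentation decoder, each "token" would be a spatial feature-map position, letting every position aggregate information from the whole scene, which is what helps with distant small objects.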
Linzi Wang, Qiudan Li, Jingjun David Xu and Minjie Yuan
Abstract
Purpose
Mining user-concerned, actionable and interpretable hot topics will help management departments fully grasp the latest events and make timely decisions. Existing topic models primarily integrate word embedding and matrix decomposition, which only generate keyword-based hot topics with weak interpretability, making it difficult to meet the specific needs of users. Mining phrase-based hot topics with syntactic dependency structure has been proven to model structure information effectively. A key challenge lies in the effective integration of the above information into the hot topic mining process.
Design/methodology/approach
This paper proposes a nonnegative matrix factorization (NMF)-based hot topic mining method, the semantics-syntax-assisted hot topic model (SSAHM), which combines semantic association and syntactic dependency structure. First, a semantic-syntactic component association matrix is constructed. Then, the matrix is incorporated as a constraint into the block coordinate descent (BCD)-based matrix decomposition process. Finally, a hot topic information-driven phrase extraction algorithm is applied to describe hot topics.
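SSAHM builds on NMF. As a minimal baseline for the decomposition step, the sketch below factors a toy term-document matrix with standard Lee-Seung multiplicative updates; the paper instead adds the semantic-syntactic association matrix as a constraint and solves with BCD, neither of which is reproduced here:

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9, seed=0):
    """Factor a nonnegative matrix X ~ W @ H with k topics using plain
    Lee-Seung multiplicative updates (no structural constraints)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update topic-document weights
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update term-topic weights
    return W, H

# Toy term-document matrix with two clearly separated "topics".
X = np.array([[3, 2, 0, 0],
              [2, 3, 0, 0],
              [0, 0, 3, 2],
              [0, 0, 2, 3]], dtype=float)
W, H = nmf(X, k=2)
error = np.linalg.norm(X - W @ H)
```

The multiplicative form keeps W and H nonnegative throughout, which is why NMF topics remain interpretable as additive combinations of terms; a constraint matrix like SSAHM's would enter these updates as extra penalty terms.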
Findings
The efficacy of the developed model is demonstrated on two real-world datasets, and the effects of dependency structure information on different topics are compared. The qualitative examples further explain the application of the method in real scenarios.
Originality/value
Most prior research focuses on keyword-based hot topics. The literature is thus advanced by mining phrase-based hot topics with syntactic dependency structure, which can effectively analyze the semantics. The development of a syntactic dependency structure combining word order and part-of-speech (POS) is a step forward, as word order and POS are only separately utilized in the prior literature. Ignoring this synergy may miss important information, such as grammatical structure coherence and logical relations between syntactic components.
Sepehr Alizadehsalehi and Ibrahim Yitmen
Abstract
Purpose
The purpose of this research is to develop a generic framework of a digital twin (DT)-based automated construction progress monitoring through reality capture to extended reality (RC-to-XR).
Design/methodology/approach
The IDEF0 data modeling method was used to design an integration of reality-capturing technologies with BIM, DTs and XR for automated construction progress monitoring. The structural equation modeling (SEM) method was used to test the proposed hypotheses and develop the skill model, examining the reliability, validity and contribution of the framework to understand the DRX model's effectiveness if implemented in real practice.
Findings
The research findings validate the positive impact and importance of utilizing technology integration in a logical framework such as DRX, which provides trustable, real-time, transparent and digital construction progress monitoring.
Practical implications
The DRX system captures accurate, real-time and comprehensive data at the construction stage; analyses data and information precisely and quickly; visualizes information and reports in a real-scale environment; facilitates information flows and communication; learns from itself, historical data and accessible online data to predict future actions; provides semantic, digitalized construction information with analytical capabilities; and optimizes the decision-making process.
Originality/value
The research presents a framework for an automated construction progress monitoring system that integrates BIM, various reality capturing technologies, DT and XR technologies (VR, AR and MR), setting out the steps by which these technologies work collaboratively to create, capture, generate, analyze, manage and visualize construction progress data, information and reports.
Tim Gorichanaz, Jonathan Furner, Lai Ma, David Bawden, Lyn Robinson, Dominic Dixon, Ken Herold, Sille Obelitz Søe, Betsy Van der Veer Martens and Luciano Floridi
Abstract
Purpose
The purpose of this paper is to review and discuss Luciano Floridi’s 2019 book The Logic of Information: A Theory of Philosophy as Conceptual Design, the latest instalment in his philosophy of information (PI) tetralogy, particularly with respect to its implications for library and information studies (LIS).
Design/methodology/approach
Nine scholars with research interests in philosophy and LIS read and responded to the book, raising critical and heuristic questions in the spirit of scholarly dialogue. Floridi responded to these questions.
Findings
Floridi’s PI, including this latest publication, is of interest to LIS scholars, and much insight can be gained by exploring this connection. It seems also that LIS has the potential to contribute to PI’s further development in some respects.
Research limitations/implications
Floridi’s PI work is technical philosophy that many LIS scholars lack the training or patience to engage with, yet doing so is rewarding. This suggests a role for translational work between philosophy and LIS.
Originality/value
The book symposium format, not yet seen in LIS, provides a forum for sustained, multifaceted and generative dialogue around ideas.
Luca Rampini and Fulvio Re Cecconi
Abstract
Purpose
This study aims to introduce a new methodology for generating synthetic images for facility management purposes. The method starts by leveraging existing 3D open-source BIM models and using them inside a graphic engine to produce a photorealistic representation of indoor spaces enriched with facility-related objects. The virtual environment creates several images by changing lighting conditions, camera poses or materials. Moreover, the created images are labeled and ready for model training.
Design/methodology/approach
This paper focuses on the challenges characterizing object detection models to enrich digital twins with facility management-related information. The automatic detection of small objects, such as sockets, power plugs, etc., requires big, labeled data sets that are costly and time-consuming to create. This study proposes a solution based on existing 3D BIM models to produce quick and automatically labeled synthetic images.
Findings
The paper presents a conceptual model for creating synthetic images to increase the performance in training object detection models for facility management. The results show that virtually generated images, rather than an alternative to real images, are a powerful tool for integrating existing data sets. In other words, while a base of real images is still needed, introducing synthetic images helps augment the model’s performance and robustness in covering different types of objects.
Originality/value
This study introduced the first pipeline for creating synthetic images for facility management. Moreover, this paper validates the pipeline through a case study in which the performance of object detection models trained on real data alone is compared with that of models trained on a combination of real and synthetic images.