Search results
1 – 10 of 651
Abstract
Purpose
Adequate means for easily viewing, browsing and searching knowledge graphs (KGs) remain a crucial, limiting factor. Therefore, this paper aims to present virtual properties as a valuable user interface (UI) concept for ontologies and KGs that can address these limitations. Virtual properties provide shortcuts on a KG that can enrich the scope of a class with information beyond its direct neighborhood.
Design/methodology/approach
Virtual properties can be defined as enhancements of Shapes Constraint Language (SHACL) property shapes. Their values are computed on demand via SPARQL Protocol and RDF Query Language (SPARQL) queries. An approach is demonstrated that can help identify suitable virtual property candidates. Virtual properties can be realized as integral functionality of generic, frame-based UIs, which can automatically provide views and masks for viewing and searching a KG.
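To make the shortcut semantics concrete, here is a minimal, purely illustrative Python sketch: a virtual property's value is computed on demand by following a predicate path through the graph, rather than being stored as a direct triple on the focus node. The toy triple store, the `proj1`/`lead`/`worksIn` names and the path-following helper are all assumptions for illustration, not the paper's implementation (which defines such shortcuts as SHACL property shapes evaluated via SPARQL).

```python
# Toy in-memory triple store; the paper's KGs are RDF, this is a stand-in.
TRIPLES = {
    ("proj1", "lead", "alice"),
    ("alice", "worksIn", "deptA"),
}

def objects(subject, predicate):
    """Direct neighbors of a node via one predicate."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def virtual_property(subject, path):
    """Follow a predicate path (the shortcut) and return the values at
    its end -- computed on demand, never materialized as a triple."""
    frontier = [subject]
    for predicate in path:
        frontier = [o for node in frontier for o in objects(node, predicate)]
    return frontier

# A hypothetical virtual property "leadDepartment" enriches proj1 with
# information from beyond its direct neighborhood (two hops away).
print(virtual_property("proj1", ["lead", "worksIn"]))  # ['deptA']
```

In SPARQL terms, the two-hop shortcut above corresponds to a property path such as `ex:lead/ex:worksIn` evaluated against the focus node.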
Findings
The virtual property approach has been implemented at Bosch and is usable by more than 100,000 Bosch employees in a productive deployment, which proves the maturity and relevance of the approach for Bosch. It has successfully been demonstrated that virtual properties can significantly improve KG UIs by enriching the scope of a class with information beyond its direct neighborhood.
Originality/value
SHACL-defined virtual properties and their automatic identification are a novel concept. To the best of the author’s knowledge, no such approach has been established or standardized so far.
Elaheh Hosseini, Kimiya Taghizadeh Milani and Mohammad Shaker Sabetnasab
Abstract
Purpose
This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.
Design/methodology/approach
This applied research employed a descriptive and analytical method, scientometric indicators, co-word techniques, and social network analysis. VOSviewer, SPSS, Python programming, and UCINet software were used for data analysis and network structure visualization.
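As a rough illustration of the co-word technique the study applies, the sketch below counts keyword co-occurrences across a handful of article records; the keyword lists are invented for the example and do not come from the study's data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists, one per bibliographic record.
records = [
    ["ontology", "semantic", "linked data"],
    ["ontology", "semantic"],
    ["linked data", "owl"],
]

# Co-word analysis: count how often two keywords co-occur in one record.
pairs = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        pairs[(a, b)] += 1

print(pairs.most_common(1))  # [(('ontology', 'semantic'), 2)]
```

The resulting pair counts form the weighted edges of a co-word network, which tools such as VOSviewer then cluster and visualize.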
Findings
The top ranks of the Web of Science (WOS) subject categorization belonged to various fields of computer science. Besides, the USA was the most prolific country. The keyword ontology had the highest co-occurrence frequency, and “ontology” and “semantic” formed the most frequent co-word pair. In terms of the network structure, nine major topic clusters were identified based on co-occurrence, and 29 thematic clusters were identified based on hierarchical clustering. Comparisons between the two clustering techniques indicated that three clusters, namely semantic bioinformatics, knowledge representation, and semantic tools, were in common. The most mature and mainstream thematic clusters were natural language processing techniques to boost modeling and visualization, context-aware knowledge discovery, probabilistic latent semantic analysis (PLSA), semantic tools, latent semantic indexing, web ontology language (OWL) syntax, and ontology-based deep learning.
Originality/value
This study adopted various techniques, such as co-word analysis, social network analysis, network structure visualization, and hierarchical clustering, to present a suitable, visual, methodical, and comprehensive perspective on linked data.
Abdelrahman M. Farouk and Rahimi A. Rahman
Abstract
Purpose
Implementing building information modeling (BIM) in construction projects offers many benefits. However, the use of BIM in project cost management is still limited. This study aims to review the current trends in the application of BIM in project cost management.
Design/methodology/approach
This study systematically reviews the literature on the application of BIM in project cost management. A total of 46 related articles were identified and analyzed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses method.
Findings
Eighteen approaches to applying BIM in project cost management were identified. The approaches can be grouped into cost control and cost estimation. Also, BIM can be applied independently or integrated with other techniques. The integrated approaches for cost control include integration with genetic algorithms, Monte Carlo simulation, lean construction, integrated project delivery, neural networks and value engineering. In contrast, integrated approaches for cost estimation include integration with cost-plus pricing, discrepancy analysis, construction progress curves, estimation standards, algorithms, declarative mappings, life cycle sustainability assessment, ontology, Web-based frameworks and structured query language.
Originality/value
To the best of the authors’ knowledge, this study is the first to systematically review prior literature on the application of BIM in project cost management. As a result, the study provides a comprehensive understanding of the current state of the art and fills the literature gap. Researchers and industry professionals can use the study findings to increase the benefits of implementing BIM in construction projects.
Alexandre Coussa, Philippe Gugler and Jonathan Reidy
Abstract
Purpose
The purpose of this paper is to develop a comprehensive overview of green innovation (GI) in China by reviewing the evolution of GI from 2000 to 2019 and its main technology types, actors and locations. Where appropriate, GI is compared to non-GI.
Design/methodology/approach
The study uses patent data from the European Patent Office database (PATSTAT); these data are processed to map trends and identify the main contributors to GI and the location of such innovation. The findings are then discussed and complemented with academic literature.
Findings
Key findings reveal an increasing divergence between GI and non-green innovation after the 2008 crisis. Solar energy appears to be the main component of GI in China, with a shift from photovoltaic thermal energy to solar photovoltaic energy after 2008. Other areas, such as waste management, greenhouse gas capture and climate change adaptation, are less innovative. Companies play an essential role in the development of all types of innovation. In terms of location, green patents are mainly filed in China’s three main megacities. The study also highlights the significant role of the Chinese state, whose policies shaped the trajectories and forms of GI.
Originality/value
This study expands knowledge on GI in China, highlighting its main specificities and the role of key actors. It provides the reader with a comprehensive picture of China’s green policies and innovation realities. The results can therefore be used to improve understanding of the evolution of GI in China and to facilitate the formulation of new research questions.
B. Maheswari and Rajganesh Nagarajan
Abstract
Purpose
A new Chatbot system is implemented to provide both voice-based and text-based communication to address student queries without delay. Initially, the input texts are gathered from the chat, and the gathered text is fed to pre-processing techniques such as tokenization, stemming of words and removal of stop words. The pre-processed data are then given to a natural language processing (NLP) stage for feature extraction, where XLNet and Bidirectional Encoder Representations from Transformers (BERT) are utilized to extract the features. From these extracted features, target-based fused feature pools are obtained. Intent detection is then carried out to extract answers related to user queries via Enhanced 1D-Convolutional Neural Networks with Long Short-Term Memory (E1DCNN-LSTM), where the parameters are optimized using Position Averaging of Binary Emperor Penguin Optimizer with Colony Predation Algorithm (PA-BEPOCPA). Finally, the answers are extracted based on the intent of a particular student’s teaching materials, such as video, image or text. The implementation results are compared against recently developed Chatbot detection models to validate the effectiveness of the newly developed model.
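The pre-processing steps named above (tokenization, stemming, stop-word removal) can be sketched in plain Python; the tiny stop-word list and the crude suffix-stripping stemmer below are simplified stand-ins for the real components, and the sample query is invented.

```python
import re

# Tiny illustrative stop-word list; real systems use far larger ones.
STOP_WORDS = {"the", "is", "a", "an", "of", "this", "what"}

def crude_stem(word):
    """Crude suffix stripping, standing in for a real stemmer."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(query):
    tokens = re.findall(r"[a-z0-9]+", query.lower())      # tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]   # stop-word removal
    return [crude_stem(t) for t in tokens]                # stemming

print(preprocess("What is the grading scheme of this course?"))
# ['grad', 'scheme', 'course']
```

The resulting token list is what the feature-extraction stage (XLNet/BERT in the paper) would consume.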
Design/methodology/approach
A smart model for natural language processing (NLP) is developed to help educational institutions enable easy interaction between students and teachers, with highly accurate predictions for a given query. This research work aims to design a new educational Chatbot to assist the teaching-learning process with NLP. The input data are gathered from the user through chats and given to the pre-processing stage, where tokenization, stemming of words and removal of stop words are applied. The output of the pre-processing stage is given to the feature extraction phase, where XLNet and BERT are used. The features from XLNet and from BERT are given to a target-based fused feature pool, from which the best features are optimally selected using the developed PA-BEPOCPA to maximize the correlation coefficient. The output of selected features is given to E1DCNN-LSTM to implement the educational Chatbot with high accuracy and precision.
Findings
The investigation results show that the implemented model achieves accuracy gains of 57% over Bidirectional Long Short-Term Memory (BiLSTM), 58% over a One-Dimensional Convolutional Neural Network (1DCNN), 59% over LSTM and 62% over an Ensemble model for the given dataset.
Originality/value
The prediction accuracy was high in this proposed deep learning-based educational Chatbot system when compared with various baseline works.
Yi-Hung Liu, Sheng-Fong Chen and Dan-Wei (Marian) Wen
Abstract
Purpose
Online medical repositories provide a platform for users to share information and dynamically access abundant electronic health data. It is important to determine whether case report information can assist the general public in appropriately managing their diseases. Therefore, this paper aims to introduce a novel deep learning-based method that allows non-professionals to make inquiries using ordinary vocabulary, retrieving the most relevant case reports for accurate and effective health information.
Design/methodology/approach
The dataset of case reports was collected from both the patient-generated research network and the digital medical journal repository. To enhance the accuracy of obtaining relevant case reports, the authors propose a retrieval approach that combines BERT and BiLSTM methods. The authors identified representative health-related case reports and analyzed the retrieval performance, as well as user judgments.
Findings
This study aims to provide the necessary functionalities to deliver relevant health case reports based on input from ordinary terms. The proposed framework includes features for health management, user feedback acquisition and ranking by weights to obtain the most pertinent case reports.
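The “ranking by weights” idea could look roughly like the following sketch, which blends a model relevance score with a user-feedback weight to order case reports; the report records, the scores and the `ALPHA` mixing weight are invented for illustration and are not the paper's actual scheme.

```python
# Hypothetical case-report candidates with a model relevance score and a
# normalized user-feedback weight (both in [0, 1]).
reports = [
    {"id": "case-17", "relevance": 0.82, "feedback": 0.9},
    {"id": "case-03", "relevance": 0.88, "feedback": 0.4},
    {"id": "case-42", "relevance": 0.75, "feedback": 1.0},
]

ALPHA = 0.7  # assumed weight on model relevance vs. user feedback

def score(report):
    """Weighted combination of relevance and accumulated user feedback."""
    return ALPHA * report["relevance"] + (1 - ALPHA) * report["feedback"]

ranked = sorted(reports, key=score, reverse=True)
print([r["id"] for r in ranked])  # ['case-17', 'case-42', 'case-03']
```

Note how user feedback promotes `case-42` above the report with the highest raw relevance score, which is the point of folding feedback into the ranking.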
Originality/value
This study contributes to health information systems by analyzing patients' experiences and treatments with the case report retrieval model. The results can provide immense benefit to members of the general public who seek treatment decisions and experiences from relevant case reports.
Na Xu, Yanxiang Liang, Chaoran Guo, Bo Meng, Xueqing Zhou, Yuting Hu and Bo Zhang
Abstract
Purpose
Safety management plays an important part in coal mine construction. Because the relevant data are complex, applying the construction safety knowledge scattered across standards is challenging. This paper aims to develop a knowledge extraction model to automatically and efficiently extract domain knowledge from unstructured texts.
Design/methodology/approach
A Bidirectional Encoder Representations from Transformers (BERT)-bidirectional long short-term memory (BiLSTM)-conditional random field (CRF) method based on a pre-trained language model was applied to knowledge entity recognition in the field of coal mine construction safety. First, 80 safety standards for coal mine construction were collected, sorted and annotated as a descriptive corpus. Then, the BERT pre-trained language model was used to obtain dynamic word vectors. Finally, the BiLSTM-CRF model produced the entity’s optimal tag sequence.
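The final tagging step can be illustrated with a small BIO-decoding sketch: once a BERT-BiLSTM-CRF model (hypothetical here) has emitted its optimal tag sequence, entity spans are read off the B-/I- prefixes. The tokens, tags and the `EQUIP` label below are invented examples, not from the paper's corpus.

```python
# Example tokens with BIO tags, as a sequence tagger might emit them.
tokens = ["coal", "mine", "roof", "support", "must", "be", "checked"]
tags   = ["B-EQUIP", "I-EQUIP", "I-EQUIP", "I-EQUIP", "O", "O", "O"]

def bio_to_entities(tokens, tags):
    """Collect (label, text) entity spans from a BIO tag sequence."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):           # a new entity starts here
            current = [tag[2:], [token]]
            entities.append(current)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)       # continue the open entity
        else:
            current = None                 # "O" or inconsistent tag
    return [(label, " ".join(words)) for label, words in entities]

print(bio_to_entities(tokens, tags))  # [('EQUIP', 'coal mine roof support')]
```

Extracted spans like these, plus the relations between them, are what feed the knowledge query and Q&A platform described below.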
Findings
Accordingly, 11,933 entities and 2,051 relationships were identified in the standard specification texts, and a language model suitable for coal mine construction safety management was proposed. The experiments showed that F1 values were above 60% for all nine entity types, such as safety management, and that the model identified and extracted entities more accurately than conventional methods.
Originality/value
This work enabled domain knowledge queries and built a Q&A platform from the entities and relationships identified in the standard specifications for coal mines. This paper proposed a systematic framework for texts in coal mine construction safety to improve the efficiency and accuracy of domain-specific entity extraction. In addition, the pre-trained language model was introduced into coal mine construction safety to realize dynamic entity recognition, which provides technical support and a theoretical reference for the optimization of safety management platforms.
James Christopher Westland and Jian Mou
Abstract
Purpose
Internet search is a $120bn business that answers lists of search terms or keywords with relevant links to Internet webpages. Only a few companies have sufficient scale to compete, so the economics of the process are paramount. This study aims to develop detailed industry-specific modeling of the economics of Internet search.
Design/methodology/approach
The current research develops a stochastic model of the process of Internet indexing, search and retrieval in order to predict expected costs and revenues of particular configurations and usages.
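A stochastic model of search economics can be sketched, in a very reduced form, as a Monte Carlo simulation over per-query cost and probabilistic ad revenue; every number below is an illustrative assumption, not a figure or parameter from the study.

```python
import random

random.seed(0)

# All numbers are illustrative assumptions, not figures from the study.
COST_PER_QUERY = 0.002    # retrieval/serving cost per query, in dollars
REVENUE_PER_CLICK = 0.50  # ad revenue per clicked result, in dollars
CLICK_PROB = 0.03         # probability a served query yields an ad click

def simulate(n_queries):
    """Profit of one batch of queries under the toy stochastic model."""
    clicks = sum(random.random() < CLICK_PROB for _ in range(n_queries))
    return clicks * REVENUE_PER_CLICK - n_queries * COST_PER_QUERY

# Estimated expected profit per 10,000 queries, averaged over 100 runs.
profits = [simulate(10_000) for _ in range(100)]
print(sum(profits) / len(profits))
```

The value of such a simulation is that parameters like the click probability, which are hard to observe directly, can be varied to see how sensitive expected profit is to each one.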
Findings
The models characterize the behavior and economics of parameters that are not directly observable and whose distributions are difficult to determine empirically.
Originality/value
The model may be used to guide the economics of large search engine operations, including the advertising platforms that depend on them and largely fund them.
Abstract
Purpose
This paper aims to critically review the intersection of searching and learning among children in the context of voice-based conversational agents (VCAs). This study presents the opportunities and challenges around reconfiguring current VCAs for children to facilitate human learning, generate diverse data to empower VCAs, and assess children’s learning from voice search interactions.
Design/methodology/approach
The scope of this paper includes children’s use of VCAs for learning purposes, with an emphasis on conceptualizing their VCA use from search-as-learning perspectives. This study selects representative works from three areas of literature: children’s perceptions of digital devices, children’s learning and searching, and children’s search as learning. This study also includes conceptual papers and empirical studies focusing on children aged 3 to 11, because this age spectrum covers a vital transitional phase in children’s ability to understand and use VCAs.
Findings
This study proposes the concept of child-centered voice search systems and provides design recommendations for imbuing contextual information, providing communication breakdown repair strategies, scaffolding information interactions, integrating emotional intelligence, and providing explicit feedback. This study presents future research directions for longitudinal and observational studies with more culturally diverse child participants.
Originality/value
This paper makes important contributions to the field of information and learning sciences and children’s searching as learning by proposing a new perspective where current VCAs are reconfigured as conversational voice search systems to enhance children’s learning.
Muhammad Saleem Sumbal and Quratulain Amber
Abstract
Purpose
Generative AI, and more specifically ChatGPT, has brought a revolution in people's lives by providing them with required knowledge learnt from an exponentially large knowledge base. In this viewpoint, we initiate the debate and offer a first step towards Generative AI-based knowledge management systems in organizations.
Design/methodology/approach
This study is a viewpoint and develops a conceptual foundation, using existing literature, on how ChatGPT can enhance KM capability based on Nonaka’s SECI model. It further supports the concept by collecting data from a public sector university in Hong Kong to strengthen our argument for a ChatGPT-mediated knowledge management system.
Findings
We posit that all four processes, that is, Socialization, Externalization, Combination and Internalization, can improve significantly when integrated with ChatGPT. ChatGPT users are, in general, satisfied with ChatGPT's capability to facilitate knowledge generation and flow in organizations.
Research limitations/implications
The study provides a conceptual foundation for further knowledge on how ChatGPT can be integrated within organizations to enhance their knowledge management capability. Further, it develops an understanding of how managers and executives can use ChatGPT for effective knowledge management by improving the four processes of Nonaka’s SECI model.
Originality/value
This is one of the earliest studies on the linkage of knowledge management with ChatGPT and lays a foundation for ChatGPT mediated knowledge management system in organizations.