Search results
1 – 10 of over 1,000

Elaheh Hosseini, Kimiya Taghizadeh Milani and Mohammad Shaker Sabetnasab
Abstract
Purpose
This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.
Design/methodology/approach
This applied research employed a descriptive and analytical method, scientometric indicators, co-word techniques, and social network analysis. VOSviewer, SPSS, Python programming, and UCINet software were used for data analysis and network structure visualization.
Findings
The top ranks of the Web of Science (WOS) subject categorization belonged to various fields of computer science, and the USA was the most prolific country. The keyword "ontology" had the highest co-occurrence frequency, and "ontology" and "semantic" were the most frequent co-word pair. In terms of network structure, nine major topic clusters were identified based on co-occurrence, and 29 thematic clusters based on hierarchical clustering. A comparison of the two clustering techniques showed that three clusters were common to both: semantic bioinformatics, knowledge representation, and semantic tools. The most mature and mainstream thematic clusters were natural language processing techniques to boost modeling and visualization, context-aware knowledge discovery, probabilistic latent semantic analysis (PLSA), semantic tools, latent semantic indexing, web ontology language (OWL) syntax, and ontology-based deep learning.
Originality/value
This study combined techniques such as co-word analysis, social network analysis, network structure visualization, and hierarchical clustering to present a suitable, visual, methodical, and comprehensive perspective on linked data.
Khurram Shahzad and Shakeel Ahmad Khan
Abstract
Purpose
The major objective of this study was to investigate the factors affecting the adoption of integrated semantic digital libraries (SDLs). It attempted to identify the challenges faced in implementing semantic technologies in digital libraries. The study also aimed to develop a framework offering practical solutions for efficiently adopting semantic digital library systems that provide richer data and services.
Design/methodology/approach
To meet the formulated objectives of the study, a systematic literature review was conducted. The authors adhered to the “Preferred Reporting Items for the Systematic Review and Meta-analysis” (PRISMA) guidelines as a research method. The data were retrieved from different tools and databases. In total, 35 key studies were included for systematic review after having applied standard procedures.
Findings
The findings indicated that SDLs are highly significant because they offer context-based information resources. Interoperability of systems, advancement in bilateral transfer modules, machine-controlled indexing, and folksonomy were key factors in developing semantic digital libraries. The study identified five types of challenges in building an integrated semantic digital library system: ontologies and interoperability, development of a suitable model, language diversity, lack of skilled human resources, and other technical issues.
Originality/value
This paper provides a framework based on practical solutions, serving as a benchmark for policymakers devising formal standards for the development of integrated semantic digital libraries.
Abstract
Purpose
The purpose of the paper is to propose a semantic model for describing open source software (OSS) in a machine–human understandable format. The model is extracted to support source code reusing and revising as the two primary targets of OSS through a systematic review of related documents.
Design/methodology/approach
Through a systematic review, all the software-reuse criteria were identified and introduced to the web of data via an ontology for OSS (O4OSS). The software semantic model introduced in this paper expresses OSS through triple expressions in which the O4OSS properties serve as predicates.
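The triple-based modeling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `o4oss:` predicate names and the example metadata are hypothetical stand-ins for the O4OSS ontology properties.

```python
# Sketch: describing open source software as subject-predicate-object
# triples, with hypothetical "o4oss:" predicates as in the O4OSS model.

def describe_software(subject, metadata):
    """Expand a metadata dict into (subject, predicate, object) triples."""
    return [(subject, f"o4oss:{prop}", value) for prop, value in metadata.items()]

triples = describe_software(
    "ex:NumPy",  # hypothetical subject IRI
    {
        "license": "BSD-3-Clause",           # reuse criterion: licensing
        "programmingLanguage": "C, Python",  # reuse criterion: language
        "dependsOn": "ex:BLAS",              # reuse criterion: dependencies
    },
)

for s, p, o in triples:
    print(s, p, o)
```

Each printed line is one statement about the software; publishing such statements links the software profile to related data already on the web.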
Findings
This model improves the quality of web data by describing software in a structured, machine- and human-readable profile linked to related data previously published on the web. The OSS semantic model was evaluated by comparing it with previous approaches, comparing the structured software metadata with the profile index of software in well-known repositories, calculating software retrieval rank, and surveying domain experts.
Originality/value
Considering context-specific information and authority levels, the proposed software model is applicable to any open- or closed-source software. Using this model to publish software provides an infrastructure of connected, meaningful data and helps developers overcome some specific challenges. By navigating software data, many questions that can otherwise be answered only by reading multiple documents can be answered automatically on the web of data.
Debasis Majhi and Bhaskar Mukherjee
Abstract
Purpose
The purpose of this study is to identify the research fronts by analysing highly cited core papers adjusted with the age of a paper in library and information science (LIS) where natural language processing (NLP) is being applied significantly.
Design/methodology/approach
By mining international databases, 3,087 core papers that each received at least 5% of the total citations were identified. From the average mean age of these core papers and the total citations received, a CPT (citation/publication/time) value was calculated for all 20 fronts to understand how each front has been receiving attention among peers over time. One theme article was finally identified from each of these 20 fronts.
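A CPT-style indicator of this kind can be sketched as below. The abstract does not give the authors' exact formula, so this assumes CPT is the citations-per-core-paper rate normalized by the mean age of the papers in a front; the figures are hypothetical.

```python
# Hedged sketch of a CPT (citation/publication/time) style indicator.
# Assumption: CPT = total citations / (number of core papers * mean age),
# i.e. citations per core paper per year; not the paper's exact formula.

def cpt(total_citations, n_core_papers, mean_age_years):
    """Citations per core paper per year of mean paper age."""
    return total_citations / (n_core_papers * mean_age_years)

# Hypothetical front: 40 core papers, 320 citations, mean age 5 years.
value = cpt(total_citations=320, n_core_papers=40, mean_age_years=5)
print(round(value, 3))  # 1.6 citations per paper per year
```

Under this reading, a higher CPT marks a front that is accumulating citations faster relative to its size and age.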
Findings
Bidirectional encoder representations from transformers, with a CPT value of 1.608, followed by sentiment analysis, with a CPT of 1.292, received the highest attention in NLP research. Columbia University, New York was the top institution, the Journal of the American Medical Informatics Association the top journal, the USA followed by the People's Republic of China the top countries, and Xu, H. (University of Texas) the top author in these fronts. NLP applications were found to boost the performance of digital libraries and automated library systems in the digital environment.
Practical implications
The research fronts identified in the findings of this paper may be used as a base by researchers who intend to perform extensive research on NLP.
Originality/value
To the best of the authors' knowledge, the methodology adopted in this paper is the first of its kind in which a meta-analysis approach has been used to understand the research fronts of a subfield such as NLP within a broad domain such as LIS.
Ruan Wang, Jun Deng, Xinhui Guan and Yuming He
Abstract
Purpose
With the development of data mining technology, diverse and broader domain knowledge can be extracted automatically. However, the research on applying knowledge mapping and data visualization techniques to genealogical data is limited. This paper aims to fill this research gap by providing a systematic framework and process guidance for practitioners seeking to uncover hidden knowledge from genealogy.
Design/methodology/approach
Based on a literature review of genealogy's current knowledge reasoning research, the authors constructed an integrated framework for knowledge inference and visualization application using a knowledge graph. Additionally, the authors applied this framework in a case study using “Manchu Clan Genealogy” as the data source.
Findings
The case study shows that the proposed framework can effectively decompose and reconstruct genealogy, and it demonstrates the process of reasoning over, discovering, and visualizing on the web the implicit information in genealogy. It enhances the effective utilization of Manchu genealogy resources by highlighting the intricate relationships among person, place, and time entities.
Originality/value
This study proposed a framework for genealogy knowledge reasoning and visual analysis utilizing a knowledge graph, including five dimensions: the target layer, the resource layer, the data layer, the inference layer, and the application layer. It helps to gather the scattered genealogy information and establish a data network with semantic correlations while establishing reasoning rules to enable inference discovery and visualization of hidden relationships.
Abstract
Purpose
This study aims to develop a synthetic knowledge repository composed of interrelated Web Ontology Language (OWL) ontologies.
Design/methodology/approach
The ontology forms the main framework for categorizing product life cycle with eco-design mode (PLC-EDM) data and for automatically inferring specialists' knowledge for data confirmation, ultimately supporting the use and generation of decision-making strategies.
Findings
The main findings are: (i) utilization of a novel ontology-based model for information reuse across different eco-design applications; (ii) generation of a sound platform for life cycle evaluation; and (iii) implementation of the PLC-EDM model along the product generation process.
Research limitations/implications
The model cannot substitute for a life cycle evaluation tool. In particular, it does not cover the definition of the "target and range" or the depiction of the "utility module", which are basic activities in life cycle assessment as characterized by International Organization for Standardization regulations.
Practical implications
As part of this framework, a prototype Web application is presented that is used to produce, reuse, and verify product life cycle knowledge.
Social implications
By relying on the ontology, the information produced through use of the application is semantically represented, promoting data sharing among the various participants and tools. Moreover, the data can be checked for possible faults by reasoning over the ontology. This offers a feasible path toward wider adoption of eco-design applications in industry.
Originality/value
The goals are to rely on rigorous modeling principles and to promote the interoperability and diffusion of the ontology for particular application demands.
Hossein Omrany, Amirhosein Ghaffarianhoseini, Ali Ghaffarianhoseini and Derek John Clements-Croome
Abstract
Purpose
This paper critically analysed 195 articles with the objectives of providing a clear understanding of the current City Information Modelling (CIM) implementations, identifying the main challenges hampering the uptake of CIM and providing recommendations for the future development of CIM.
Design/methodology/approach
This paper adopts the PRISMA method to perform the systematic literature review.
Findings
The results identified nine domains of CIM implementation including (1) natural disaster management, (2) urban building energy modelling, (3) urban facility management, (4) urban infrastructure management, (5) land administration systems, (6) improvement of urban microclimates, (7) development of digital twin and smart cities, (8) improvement of social engagement and (9) urban landscaping design. Further, eight challenges were identified that hinder the widespread employment of CIM including (1) reluctance towards CIM application, (2) data quality, (3) computing resources and storage inefficiency, (4) data integration between BIM and GIS and interoperability, (5) establishing a standardised workflow for CIM implementation, (6) synergy between all parties involved, (7) cybersecurity and intellectual property and (8) data management.
Originality/value
This is the first paper of its kind that provides a holistic understanding of the current implementation of CIM. The outcomes will benefit multiple target groups. First, urban planners and designers will be supplied with a status-quo understanding of CIM implementations. Second, this research introduces possibilities of CIM deployment for the governance of cities; hence the outcomes can be useful for policymakers. Lastly, the scientific community can use the findings of this study as a reference point to gain a comprehensive understanding of the field and contribute to the future development of CIM.
Abdelrahman M. Farouk and Rahimi A. Rahman
Abstract
Purpose
Implementing building information modeling (BIM) in construction projects offers many benefits. However, the use of BIM in project cost management is still limited. This study aims to review the current trends in the application of BIM in project cost management.
Design/methodology/approach
This study systematically reviews the literature on the application of BIM in project cost management. A total of 46 related articles were identified and analyzed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses method.
Findings
Eighteen approaches to applying BIM in project cost management were identified. The approaches can be grouped into cost control and cost estimation, and BIM can be applied independently or integrated with other techniques. The integrated approaches for cost control include integration with genetic algorithms, Monte Carlo simulation, lean construction, integrated project delivery, neural networks and value engineering. In contrast, integrated approaches for cost estimation include integration with cost-plus pricing, discrepancy analysis, construction progress curves, estimation standards, algorithms, declarative mappings, life cycle sustainability assessment, ontology, Web-based frameworks and structured query language.
Originality/value
To the best of the authors’ knowledge, this study is the first to systematically review prior literature on the application of BIM in project cost management. As a result, the study provides a comprehensive understanding of the current state of the art and fills the literature gap. Researchers and industry professionals can use the study findings to increase the benefits of implementing BIM in construction projects.
Abid Iqbal, Khurram Shahzad, Shakeel Ahmad Khan and Muhammad Shahzad Chaudhry
Abstract
Purpose
The purpose of this study is to identify the relationship between artificial intelligence (AI) and fake news detection. It also intended to explore the negative effects of fake news on society and to find out trending techniques for fake news detection.
Design/methodology/approach
The "Preferred Reporting Items for Systematic Reviews and Meta-Analyses" (PRISMA) guidelines were applied as the research methodology for conducting the study. Twenty-five peer-reviewed, most relevant core studies were included in the systematic literature review.
Findings
Findings illustrated that AI has a strong positive relationship with the detection of fake news. The study displayed that fake news caused emotional problems, threats to important institutions of the state and a bad impact on culture. Results of the study also revealed that big data analytics, fact-checking websites, automatic detection tools and digital literacy proved fruitful in identifying fake news.
Originality/value
The study offers theoretical implications for the researchers to further explore the area of AI in relation to fake news detection. It also provides managerial implications for educationists, IT experts and policymakers. This study is an important benchmark to control the generation and dissemination of fake news on social media platforms.
Abstract
Purpose
The curation of ontologies and knowledge graphs (KGs) is an essential task for industrial knowledge-based applications, as they rely on the contained knowledge being correct and error-free. Often, a significant amount of a KG is curated by humans. Established validation methods, such as the Shapes Constraint Language (SHACL), Shape Expressions (ShEx) or the Web Ontology Language (OWL), can detect wrong statements only after their materialization, which can be too late. Instead, an approach that avoids errors and adequately supports users is required.
Design/methodology/approach
For solving that problem, Property Assertion Constraints (PACs) have been developed. PACs extend the range definition of a property with additional logic expressed with SPARQL. For the context of a given instance and property, a tailored PAC query is dynamically built and triggered on the KG. It can determine all values that will result in valid property value assertions.
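The dynamic query construction described above can be sketched as follows. This is an assumed shape, not the authors' implementation: the `?this`/`?value` placeholder convention, the example IRIs, and the range-class pattern are all hypothetical.

```python
# Illustrative sketch of a Property Assertion Constraint (PAC) query
# builder: a PAC extends a property's range with a SPARQL graph pattern;
# for a given instance and property, the pattern is spliced into a
# SELECT query that lists all values a user may validly assert.

def build_pac_query(instance_iri, property_iri, pac_pattern):
    """Build a tailored SPARQL query for one instance/property context.

    pac_pattern may mention ?value (the candidate object) and ?this
    (the instance being edited); ?this is bound to the concrete instance.
    """
    pattern = pac_pattern.replace("?this", f"<{instance_iri}>")
    return (
        "SELECT DISTINCT ?value WHERE {\n"
        f"  ?value a <{property_iri}_RangeClass> .\n"  # hypothetical range class
        f"  {pattern}\n"
        "}"
    )

# Hypothetical PAC: only materials rated for the pump's operating
# temperature are valid objects of ex:hasMaterial.
query = build_pac_query(
    "http://example.org/pump1",
    "http://example.org/hasMaterial",
    "?value ex:ratedFor ?temp . <http://example.org/pump1> ex:operatingTemp ?temp ."
    if False else
    "?value ex:ratedFor ?temp . ?this ex:operatingTemp ?temp .",
)
print(query)
```

Running the generated query against the KG would return only the assertable values, so invalid statements are never offered to the user in the first place.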
Findings
PACs can avoid the expansion of KGs with invalid property value assertions effectively, as their contained expertise narrows down the valid options a user can choose from. This simplifies the knowledge curation and, most notably, relieves users or machines from knowing and applying this expertise, but instead enables a computer to take care of it.
Originality/value
PACs are fundamentally different from existing approaches. Instead of detecting erroneous materialized facts, they can determine all semantically correct assertions before materializing them. This avoids invalid property value assertions and provides users an informed, purposeful assistance. To the author's knowledge, PACs are the only such approach.