Search results
1 – 10 of 14
Diego Espinosa Gispert, Ibrahim Yitmen, Habib Sadri and Afshin Taheri
Abstract
Purpose
The purpose of this research is to develop a framework of an ontology-based Asset Information Model (AIM) for a Digital Twin (DT) platform and enhance predictive maintenance practices in building facilities that could enable proactive and data-driven decision-making during the Operation and Maintenance (O&M) process.
Design/methodology/approach
A scoping literature review was carried out to establish the theoretical foundation for the current investigation. A study on developing an ontology-based AIM for predictive maintenance in building facilities was conducted. Semi-structured interviews were conducted with industry professionals to gather qualitative data for validation of the ontology-based AIM framework and further insights.
Findings
The research findings indicate that while the development of the ontology faced challenges in defining missing entities and relations in the context of predictive maintenance, insights gained from the interviews enabled the establishment of a comprehensive framework for ontology-based AIM adoption in the Facility Management (FM) sector.
Practical implications
The proposed ontology-based AIM has the potential to enable proactive and data-driven decision-making during the O&M process, optimizing predictive maintenance practices and ultimately enhancing energy efficiency and sustainability in the building industry.
Originality/value
The research contributes to a practical guide for ontology development processes and presents a framework of an Ontology-based AIM for a Digital Twin platform.
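As a rough illustration of what an ontology-based AIM statement can look like in practice (this is not the authors' model; all class and property names below are hypothetical), asset knowledge can be held as RDF-style triples and queried:

```python
# Toy RDF-style triples illustrating how an ontology-based AIM might link
# a building asset to its sensors and maintenance responsibilities.
# All class and property names here are hypothetical, not from the paper.

triples = [
    ("ahu_01",    "rdf:type",         "aim:AirHandlingUnit"),
    ("ahu_01",    "aim:hasSensor",    "sensor_17"),
    ("sensor_17", "aim:measures",     "aim:VibrationLevel"),
    ("ahu_01",    "aim:maintainedBy", "team_fm"),
]

def objects(subject, predicate, graph):
    """Return all objects matching a (subject, predicate) pair."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Which sensors are attached to air handling unit ahu_01?
print(objects("ahu_01", "aim:hasSensor", triples))  # ['sensor_17']
```

A real AIM would typically use an RDF store and a query language such as SPARQL; this sketch only shows the triple pattern such a model builds on.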
Miquel Centelles and Núria Ferran-Ferrer
Abstract
Purpose
Develop a comprehensive framework for assessing the knowledge organization systems (KOSs), including the taxonomy of Wikipedia and the ontologies of Wikidata, with a specific focus on enhancing management and retrieval with a gender nonbinary perspective.
Design/methodology/approach
This study employs heuristic and inspection methods to assess Wikipedia’s KOS, ensuring compliance with international standards. It evaluates the efficiency of retrieving non-masculine gender-related articles using the Catalan Wikipedian category scheme, identifying limitations. Additionally, a novel assessment of Wikidata ontologies examines their structure and coverage of gender-related properties, comparing them to Wikipedia’s taxonomy for advantages and enhancements.
Findings
This study evaluates Wikipedia’s taxonomy and Wikidata’s ontologies, establishing evaluation criteria for gender-based categorization and exploring their structural effectiveness. The evaluation process suggests that Wikidata ontologies may offer a viable solution to address Wikipedia’s categorization challenges.
Originality/value
The assessment of Wikipedia categories (taxonomy) based on KOS standards leads to the conclusion that there is ample room for improvement, not only in matters concerning gender identity but also in the overall KOS to enhance search and retrieval for users. These findings bear relevance for the design of tools to support information retrieval on knowledge-rich websites, as they assist users in exploring topics and concepts.
Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi
Abstract
Purpose
In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has gradually increased. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for the state-of-the-art of computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs with a focus on the icon aspects.
Design/methodology/approach
This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians’ theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures’ suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.
Findings
This study’s results reveal several issues on the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
Originality/value
The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. Therefore, it is valuable to cultural institutions by providing them with a first domain-specific data quality evaluation. Since this study’s results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.
Edoardo Ramalli and Barbara Pernici
Abstract
Purpose
Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model performance. Uncertainty inherently affects experiment measurements and is often missing in the available data sets due to its estimation cost. For similar reasons, experiments are very few compared to other data sources. Discarding experiments based on the missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.
Design/methodology/approach
This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
Findings
The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.
Originality/value
The employment of knowledge graph embedding to predict the missing experimental uncertainty is a novel alternative to the current and more costly techniques in the literature. Such a contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.
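As a minimal sketch of the link-prediction idea this abstract describes (not the authors' implementation, which uses learned embeddings over a large experiments knowledge graph), a TransE-style scoring function ranks candidate tails for an incomplete triple. The entity and relation names, and the hand-set toy embeddings, are hypothetical:

```python
# TransE-style link prediction sketch: entities and relations are embedded
# as vectors, and a triple (head, relation, tail) is plausible when
# head + relation is close to tail. Embeddings here are toy, hand-set
# values; in practice they are learned from the knowledge graph.

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def score(head, relation, tail, emb):
    """Lower score = more plausible triple under TransE."""
    h, r, t = emb[head], emb[relation], emb[tail]
    translated = [hi + ri for hi, ri in zip(h, r)]
    return l1_distance(translated, t)

# Toy embeddings (hypothetical entity/relation names).
emb = {
    "experiment_42":    [0.9, 0.1],
    "hasUncertainty":   [0.0, 0.8],
    "uncertainty_low":  [0.9, 0.9],
    "uncertainty_high": [0.1, 0.2],
}

# Link prediction: rank candidate tails for the missing uncertainty value.
candidates = ["uncertainty_low", "uncertainty_high"]
ranked = sorted(candidates,
                key=lambda t: score("experiment_42", "hasUncertainty", t, emb))
print(ranked[0])  # the candidate whose embedding best completes the triple
```

The same ranking step, applied with trained embeddings over all experiments, is what lets a missing uncertainty value be forecast from experiment metadata.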
Eyad Buhulaiga and Arnesh Telukdarie
Abstract
Purpose
Multinational businesses deliver value via multiple sites with similar operational capacities. The age of the Fourth Industrial Revolution (4IR) delivers significant opportunities for the deployment of digital tools for business optimization. Therefore, this study aims to examine Industry 4.0 implementation for multinationals.
Design/methodology/approach
The key objective of this research is multi-site systems integration using a reproducible, modular and standardized “Cyber Physical System (CPS) as-a-Service”.
Findings
A best practice reference architecture is adopted to guide the design and delivery of a pioneering CPS multi-site deployment. The CPS deployed is a cloud-based platform that enables all manufacturing areas within a multinational energy and petrochemical company. A methodology is developed to quantify the system's environmental and sustainability benefits, focusing on reduced carbon dioxide (CO2) emissions and energy consumption. These results demonstrate the benefits of standardization, replication and digital enablement for multinational businesses.
Originality/value
The research illustrates the ability to design a single system, reproducible for multiple sites. This research also illustrates the beneficial impact of system reuse due to reduced environmental impact from lower CO2 emissions and energy consumption. The paper assists organizations in deploying complex systems while addressing multinational systems implementation constraints and standardization.
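In its simplest form, the kind of benefit quantification the abstract mentions reduces to multiplying energy saved by a grid emission factor. A back-of-envelope sketch (all figures below are hypothetical, not the paper's results):

```python
# Back-of-envelope sketch of quantifying environmental benefit:
# CO2 avoided (kg) = energy saved (kWh) * grid emission factor (kg CO2/kWh).
# The numbers are illustrative placeholders, not the study's data.

def co2_avoided_kg(energy_saved_kwh, emission_factor_kg_per_kwh):
    return energy_saved_kwh * emission_factor_kg_per_kwh

saved = co2_avoided_kg(energy_saved_kwh=120_000,
                       emission_factor_kg_per_kwh=0.5)
print(saved)  # 60000.0 kg of CO2 avoided
```

A full methodology would also account for baseline drift, site-specific emission factors and measurement uncertainty, but the core arithmetic is this product.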
Abstract
Purpose
By reconsidering the concept of the historic environment, the aim of this study is to better understand how heritage is expressed by examining the networks within which the cultural performances of the historic environment take place. The goal is to move beyond a purely material expression and seek the expansion of the cultural dimension of the historic environment.
Design/methodology/approach
Conceptually, the historic environment is considered a valuable resource for heritage expression and exploration. The databases and records that house historic environment data are venerated and frequented entities for archeologists, but arguably less so for non-specialist users. In inventorying the historic environment, databases fulfill a major role in the planning process and asset management that is often considered to be more than just perfunctory. This paper approaches historic environment records (HERs) from an actor network perspective, particularizing the social foundation and relationships within the networks governing the historic environment and the environment's associated records.
Findings
The paper concludes that the performance of HERs from an actor-network perspective is a hegemonic process that is biased toward the supply and input to and from professional users. Furthermore, the paper provides a schematic for how many of the flaws in heritage transmission have come about.
Originality/value
The relevance of this work lies in the fact that HERs, as both public digital resources and heritage networks, had yet to be addressed in depth from a theoretical point of view.
Neema Florence Mosha and Patrick Ngulube
Abstract
Purpose
The study aims to investigate the utilisation of open research data repositories (RDRs) for storing and sharing research data in higher learning institutions (HLIs) in Tanzania.
Design/methodology/approach
A survey research design was employed to collect data from postgraduate students at the Nelson Mandela African Institution of Science and Technology (NM-AIST) in Arusha, Tanzania. The data were collected and analysed quantitatively and qualitatively. A census sampling technique was employed to select the sample size for this study. The quantitative data were analysed using the Statistical Package for the Social Sciences (SPSS), whilst the qualitative data were analysed thematically.
Findings
Less than half of the respondents were aware of and were using open RDRs, including Zenodo, DataVerse, Dryad, OMERO, GitHub and Mendeley data repositories. More than half of the respondents were not willing to share research data, citing a lack of ownership after storing their research data in most of the open RDRs, as well as data security concerns. HLIs need to conduct training on using trusted repositories and motivate postgraduate students to utilise open repositories (ORs). The main challenges underlying the underutilisation of open RDRs were a lack of policies governing the storage and sharing of research data and grant constraints.
Originality/value
Research data storage and sharing are of great interest to researchers in HLIs, and the findings inform the implementation of open RDRs to support them. Open RDRs increase visibility within HLIs, reduce research data loss and allow research works to be cited and used publicly. This paper identifies the potential for additional studies focussed on this area.
Julián Monsalve-Pulido, Jose Aguilar, Edwin Montoya and Camilo Salazar
Abstract
This article proposes an architecture of an intelligent and autonomous recommendation system to be applied to any virtual learning environment, with the objective of efficiently recommending digital resources. The paper presents the architectural details of the intelligent and autonomous dimensions of the recommendation system. The paper describes a hybrid recommendation model that orchestrates and manages the available information and the specific recommendation needs, in order to determine the recommendation algorithms to be used. The hybrid model allows the integration of approaches based on collaborative filtering, content or knowledge. In the architecture, information is extracted from four sources: the context, the students, the course and the digital resources, identifying variables such as individual learning styles, socioeconomic information, connection characteristics, location, etc. Tests were carried out for the creation of an academic course, in order to analyse the intelligent and autonomous capabilities of the architecture.
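The orchestration idea behind such a hybrid model can be sketched as a dispatcher that selects a strategy based on which information is available. This is a schematic illustration only, not the paper's architecture; the function name, strategy labels and thresholds are all hypothetical:

```python
# Hypothetical sketch of a hybrid recommender dispatcher: choose among
# collaborative, content-based and knowledge-based approaches depending
# on the information available for a given student/course.

def choose_strategy(n_ratings, has_resource_metadata, has_domain_rules):
    """Return the recommendation approach to apply (labels are illustrative)."""
    if n_ratings >= 20:                 # enough interaction data
        return "collaborative_filtering"
    if has_resource_metadata:           # cold start, but resources are described
        return "content_based"
    if has_domain_rules:                # fall back on explicit knowledge
        return "knowledge_based"
    return "popularity_baseline"

# A new student with few ratings but well-described resources:
print(choose_strategy(n_ratings=3,
                      has_resource_metadata=True,
                      has_domain_rules=True))  # content_based
```

A production system would combine rather than merely switch between approaches, but the dispatch step shows how availability of data drives algorithm selection.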
Mathew Moyo and Siviwe Bangani
Abstract
Purpose
The aim of this study was to determine data literacy (DL) training needs of researchers at South African public universities. The outcome of this study would assist librarians and researchers in developing a DL training programme which addressed identified needs.
Design/methodology/approach
A survey research method was used to gather data from researchers at these universities through convenience sampling. Online questionnaires were distributed to public universities through library directors for onward distribution to researchers.
Findings
The results indicate low levels of DL training at the respondent South African public universities with most researchers indicating that they had not received any formal training on DL. A few researchers indicated that they would welcome DL training.
Research limitations/implications
This study was exploratory in nature, and data were received from eight universities, which is not representative of all 26 public universities in South Africa. Nonetheless, the low level of DL confirmed by the majority of the realised sample indicates the need to investigate the subject further.
Practical implications
Librarians and research support personnel should collaborate on the development of DL training courses, workshops and materials used by researchers at institutions of higher learning to enhance DLs on campus.
Originality/value
This study may be novel in South Africa in investigating the DL training needs of researchers at several universities and contributes to the growing body of literature on research data management.
Abstract
Spam emails classification using data mining and machine learning approaches has attracted researchers' attention due to its obvious positive impact in protecting internet users. Several features can be used for creating data mining and machine learning based spam classification models. Yet, spammers know that the longer they use the same set of features to trick email users, the more likely it is that anti-spam parties will develop tools for combating this kind of annoying email message. Spammers therefore adapt by continuously reforming the group of features utilized for composing spam emails. For that reason, even though traditional classification methods achieve sound classification results, they are ineffective for the lifelong classification of spam emails because they are prone to the so-called “Concept Drift”. In the current study, an enhanced model is proposed for ensuring lifelong spam classification. For evaluation purposes, the overall performance of the suggested model is contrasted against various other stream mining classification techniques. The results demonstrate the success of the suggested model as a lifelong spam emails classification method.
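One common way to make a streaming classifier resilient to concept drift is to monitor its accuracy over a sliding window and trigger retraining when accuracy degrades. The sketch below illustrates that general idea only; it is not the paper's model, and the window size and threshold are arbitrary placeholders:

```python
# Illustrative concept-drift monitor for a streaming spam classifier:
# track prediction accuracy over a sliding window and signal that
# retraining is needed when accuracy drops below a threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, window=100, threshold=0.8):
        self.window = deque(maxlen=window)  # recent prediction outcomes
        self.threshold = threshold

    def update(self, correct):
        """Record one outcome; return True if drift is suspected."""
        self.window.append(1 if correct else 0)
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        return sum(self.window) / len(self.window) < self.threshold

monitor = DriftMonitor(window=10, threshold=0.8)
# Simulate 10 outcomes: 7 correct, then 3 wrong -> accuracy 0.7 < 0.8.
outcomes = [True] * 7 + [False] * 3
flags = [monitor.update(c) for c in outcomes]
print(flags[-1])  # True: accuracy fell below threshold, retrain
```

Dedicated drift detectors (e.g. DDM or ADWIN in stream mining libraries) refine this idea with statistical tests instead of a fixed threshold.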