Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 11 October 2023

Bachriah Fatwa Dhini, Abba Suganda Girsang, Unggul Utan Sufandi and Heny Kurniawati

Abstract

Purpose

The authors constructed an automatic essay scoring (AES) model for a discussion forum and compared its results with scores given by human evaluators. This research proposes essay scoring based on two parameters, semantic similarity and keyword similarity, using the pre-trained SentenceTransformers models that produce the best vector embeddings. The two parameters are combined to optimize the model and increase its accuracy.

Design/methodology/approach

The development of the model in this study is divided into seven stages: (1) data collection, (2) data pre-processing, (3) selection of a pre-trained SentenceTransformers model, (4) semantic similarity (sentence pairs), (5) keyword similarity, (6) calculation of the final score and (7) model evaluation.
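As a rough illustration of stages (4) to (6), the scoring step might combine a cosine similarity over sentence embeddings with a rubric-keyword overlap. The weights, the overlap metric and the toy vectors below are assumptions made for this sketch, not the paper's exact formulation; in practice the embeddings would come from a SentenceTransformers model.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def keyword_similarity(response_kws, rubric_kws):
    """Fraction of rubric keywords found in the student's response."""
    rubric = {k.lower() for k in rubric_kws}
    found = rubric & {k.lower() for k in response_kws}
    return len(found) / len(rubric) if rubric else 0.0

def final_score(resp_emb, ref_emb, resp_kws, rubric_kws,
                w_sem=0.5, w_kw=0.5):
    """Weighted combination of semantic and keyword similarity."""
    return (w_sem * cosine(resp_emb, ref_emb)
            + w_kw * keyword_similarity(resp_kws, rubric_kws))

# Toy embeddings stand in for SentenceTransformers output.
score = final_score([0.1, 0.9, 0.3], [0.2, 0.8, 0.4],
                    ["photosynthesis", "chlorophyll"],
                    ["photosynthesis", "chlorophyll", "light"])
```

Equal weights are used here for simplicity; the paper's reported 0.2 improvement presumably comes from tuning how the two parameters are combined.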

Findings

The multilingual paraphrase-multilingual-MiniLM-L12-v2 and distilbert-base-multilingual-cased-v1 models achieved the highest scores in a comparison of 11 pre-trained multilingual SentenceTransformers models on Indonesian data (Dhini and Girsang, 2023). Both multilingual models were adopted in this study. The combination of the two parameters is obtained by comparing the keywords extracted from the responses with the rubric keywords. Based on the experimental results, the proposed combination increases the evaluation results by 0.2.

Originality/value

This study uses discussion forum data from the general biology course in online learning at the open university for the 2020.2 and 2021.2 semesters. Forum discussion grading is still manual. In this study, the authors created a model that automatically calculates the value of discussion forum posts, which are essays, based on the lecturer's answers and rubrics.

Details

Asian Association of Open Universities Journal, vol. 18 no. 3
Type: Research Article
ISSN: 1858-3431

Article
Publication date: 14 November 2023

Shaodan Sun, Jun Deng and Xugong Qin

Abstract

Purpose

This paper aims to amplify the retrieval and utilization of historical newspapers through the application of semantic organization, all from the vantage point of a fine-grained knowledge element perspective. This endeavor seeks to unlock the latent value embedded within newspaper contents while simultaneously furnishing invaluable guidance within methodological paradigms for research in the humanities domain.

Design/methodology/approach

According to the semantic organization process and the knowledge element concept, this study proposes a holistic framework comprising four pivotal stages: knowledge element description, extraction, association and application. Initially, a semantic description model dedicated to knowledge elements is devised. Subsequently, harnessing advanced deep learning techniques, the study delves into entity recognition and relationship extraction; these techniques identify entities within the historical newspaper contents and capture the interdependencies among them. Finally, an online platform based on Flask is developed to enable the recognition of entities and relationships within historical newspapers.

Findings

This article utilized the Shengjing Times·Changchun Compilation as the dataset for describing, extracting, associating and applying newspaper contents. Regarding knowledge element extraction, BERT + BS consistently outperforms Bi-LSTM, CRF++ and even BERT in terms of recall and F1 scores, making it a favorable choice for entity recognition in this context. Particularly noteworthy is the Bi-LSTM-Pro model, which achieves the highest scores across all metrics, notably an exceptional F1 score in knowledge element relationship recognition.

Originality/value

Historical newspapers transcend their status as mere artifacts, evolving into invaluable reservoirs safeguarding the societal and historical memory. Through semantic organization from a fine-grained knowledge element perspective, it can facilitate semantic retrieval, semantic association, information visualization and knowledge discovery services for historical newspapers. In practice, it can empower researchers to unearth profound insights within the historical and cultural context, broadening the landscape of digital humanities research and practical applications.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 8 January 2024

Morteza Mohammadi Ostani, Jafar Ebadollah Amoughin and Mohadeseh Jalili Manaf

Abstract

Purpose

This study aims to adjust Thesis-type properties on Schema.org using metadata models and standards (MS) (Bibframe, electronic thesis and dissertations [ETD]-MS, Common European Research Information Format [CERIF] and Dublin Core [DC]) to enrich the Thesis-type properties for better description and processing on the Web.

Design/methodology/approach

This study is applied and descriptive-analytical in nature and is based on content analysis in terms of method. The research population consisted of the elements and attributes of the metadata models and standards (Bibframe, ETD-MS, CERIF and DC) and the Thesis-type properties on Schema.org. The data collection tool was a researcher-made checklist, and the data collection method was structured observation.

Findings

The results show that the 65 Thesis-type properties, together with the two parent levels Thing and CreativeWork on Schema.org, correspond to the elements and attributes of the related models and standards. In addition, 12 properties specific to the Thesis type are proposed for more comprehensive description and processing, and 27 properties are added to the CreativeWork type.
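For illustration, a minimal Schema.org description of a thesis, serialized as JSON-LD, might look like the sketch below. The title, author and organization are placeholder values, and since the abstract does not enumerate the 12 newly proposed properties, none are shown; `inSupportOf` is the property Schema.org already reserves for the Thesis type, and the rest are inherited from CreativeWork.

```python
import json

# Hypothetical thesis record described with existing schema.org terms.
thesis = {
    "@context": "https://schema.org",
    "@type": "Thesis",
    "name": "A Placeholder Thesis Title",        # inherited from CreativeWork
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2023-06-01",
    "sourceOrganization": {"@type": "CollegeOrUniversity",
                           "name": "Example University"},
    "inSupportOf": "PhD",                        # Thesis-specific property
}

jsonld = json.dumps(thesis, indent=2)
```

Embedding such markup in ETD repository pages is what makes the records machine-readable to Web search engines.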

Practical implications

Enrichment and expansion of the Thesis-type properties on Schema.org is one of the practical applications of the present study; it enables more comprehensive description and processing and increases access points and visibility for ETDs in the Web environment and digital libraries.

Originality/value

This study has offered some new Thesis-type properties and CreativeWork levels on Schema.org. To the best of the authors’ knowledge, this is the first time this issue has been investigated.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Article
Publication date: 10 February 2023

Huiyong Wang, Ding Yang, Liang Guo and Xiaoming Zhang

Abstract

Purpose

Intent detection and slot filling are two important tasks in question comprehension for a question answering system. This study aims to build a joint task model with some generalization ability and to benchmark its performance against the other neural network models mentioned in this paper.

Design/methodology/approach

This study used a deep-learning-based approach for the joint modeling of question intent detection and slot filling. Meanwhile, the internal cell structure of the long short-term memory (LSTM) network was improved. Furthermore, the dataset Computer Science Literature Question (CSLQ) was constructed based on the Science and Technology Knowledge Graph. The datasets Airline Travel Information Systems, Snips (a natural language processing dataset of the consumer intent engine collected by Snips) and CSLQ were used for the empirical analysis. The accuracy of intent detection and F1 score of slot filling, as well as the semantic accuracy of sentences, were compared for several models.
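The three evaluation measures named above can be made concrete. The sketch below uses toy data, micro-averaged slot F1 and an assumed definition of sentence-level semantic accuracy (intent and every slot correct); it illustrates the metrics, not the paper's exact protocol.

```python
def intent_accuracy(gold, pred):
    """Fraction of sentences whose intent label is predicted correctly."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def slot_f1(gold_slots, pred_slots):
    """Micro-averaged F1 over slot sets, one set per sentence."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_slots, pred_slots):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def semantic_accuracy(g_int, p_int, g_slots, p_slots):
    """A sentence counts only when intent and all slots are correct."""
    ok = sum(gi == pi and gs == ps
             for gi, pi, gs, ps in zip(g_int, p_int, g_slots, p_slots))
    return ok / len(g_int)

# Toy predictions: the second sentence misses its "year" slot.
gold_intents = ["flight", "paper_search"]
pred_intents = ["flight", "paper_search"]
gold_slots = [{("city", "Boston")}, {("topic", "LSTM"), ("year", "2020")}]
pred_slots = [{("city", "Boston")}, {("topic", "LSTM")}]
```

Semantic accuracy is the strictest of the three, which is why joint models that share information between the two tasks tend to improve it most.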

Findings

The results showed that the proposed model outperformed all other benchmark methods, especially for the CSLQ dataset. This proves that the design of this study improved the comprehensive performance and generalization ability of the model to some extent.

Originality/value

This study contributes to the understanding of question sentences in a specific domain. LSTM was improved, and a computer literature domain dataset was constructed herein. This will lay the data and model foundation for the future construction of a computer literature question answering system.

Details

Data Technologies and Applications, vol. 57 no. 5
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 22 January 2024

Jun Liu, Junyuan Dong, Mingming Hu and Xu Lu

Abstract

Purpose

Existing Simultaneous Localization and Mapping (SLAM) algorithms are relatively well developed. However, in complex dynamic environments, the movement of dynamic points on dynamic objects in the image can affect the system's observations during mapping, introducing biases and errors into position estimation and the creation of map points. The aim of this paper is to achieve higher accuracy than traditional SLAM algorithms through semantic approaches.

Design/methodology/approach

In this paper, semantic segmentation of dynamic objects is realized with a U-Net semantic segmentation network. Motion consistency detection then determines whether the segmented objects are actually moving in the current scene, and a motion compensation method eliminates dynamic points and compensates the current local image, making the system robust.
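A common way to implement such a motion consistency check is to test matched feature points against the epipolar constraint: points whose distance to the epipolar line is large are treated as dynamic. The pure-Python sketch below, with a toy fundamental matrix for a sideways camera translation and an assumed threshold, illustrates the idea only; it is not the paper's exact method.

```python
def epipolar_distance(p1, p2, F):
    """Distance from matched point p2 to the epipolar line F @ p1.
    p1, p2 are (x, y) pixel coordinates; F is a 3x3 fundamental matrix."""
    x1 = (p1[0], p1[1], 1.0)
    # Epipolar line l = F @ x1 -> coefficients (a, b, c)
    a, b, c = (sum(F[i][j] * x1[j] for j in range(3)) for i in range(3))
    den = (a * a + b * b) ** 0.5
    return abs(a * p2[0] + b * p2[1] + c) / den if den else 0.0

def dynamic_points(matches, F, thresh=1.0):
    """Indices of matches whose epipolar error exceeds the threshold;
    these are candidate dynamic points to eliminate."""
    return [i for i, (p1, p2) in enumerate(matches)
            if epipolar_distance(p1, p2, F) > thresh]

# Toy F for a pure sideways translation: epipolar lines are horizontal,
# so a static point keeps its y coordinate between frames.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]
matches = [((10.0, 5.0), (12.0, 5.0)),   # static: same y
           ((10.0, 5.0), (12.0, 9.0))]   # moved off its epipolar line
flagged = dynamic_points(matches, F)
```

In a real system F would be estimated from the static matches (e.g. with RANSAC), and the segmentation mask restricts the check to points on candidate dynamic objects.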

Findings

Experiments comparing the effect of detecting dynamic points and removing outliers are conducted on a dynamic dataset from the Technische Universität München, and the results show that the absolute trajectory accuracy of the proposed method is significantly improved compared with ORB-SLAM3 and DS-SLAM.

Originality/value

In the semantic segmentation network, the segmentation mask is combined with dynamic point detection, elimination and compensation, which reduces the influence of dynamic objects and thus effectively improves localization accuracy in dynamic environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 November 2023

Julaine Clunis

Abstract

Purpose

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of harmonizing clinical knowledge organization systems (KOS) through a cohesive clinical knowledge representation approach. Central to the study is the pursuit of a novel method for integrating emerging COVID-19-specific vocabularies with existing systems, focusing on simplicity, adaptability and minimal human intervention.

Design/methodology/approach

A design science research (DSR) methodology is used to guide the development of a terminology mapping and annotation workflow. The KNIME data analytics platform is used to implement and test the mapping and annotation techniques, leveraging its powerful data processing and analytics capabilities. The study incorporates specific ontologies relevant to COVID-19, evaluates mapping accuracy and tests performance against a gold standard.
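Evaluating mapping accuracy against a gold standard typically reduces to set comparison of predicted versus gold concept mappings. A minimal sketch follows; the SNOMED CT-style identifiers are illustrative placeholders, and the actual study performs this inside KNIME workflows rather than in Python.

```python
def evaluate_mappings(predicted, gold):
    """Precision/recall/F1 of predicted (term, concept) mappings
    against a gold-standard set."""
    pred, gold = set(predicted), set(gold)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Placeholder gold standard and system output.
gold = [("covid-19", "SNOMED:840539006"), ("fever", "SNOMED:386661006")]
pred = [("covid-19", "SNOMED:840539006"), ("cough", "SNOMED:49727002")]
m = evaluate_mappings(pred, gold)
```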

Findings

The study demonstrates the potential of the developed solution to map and annotate specific KOS efficiently. This method effectively addresses the limitations of previous approaches by providing a user-friendly interface and streamlined process that minimizes the need for human intervention. Additionally, the paper proposes a reusable workflow tool that can streamline the mapping process. It offers insights into semantic interoperability issues in health care as well as recommendations for work in this space.

Originality/value

The originality of this study lies in its use of the KNIME data analytics platform to address the unique challenges posed by the COVID-19 pandemic in terminology mapping and annotation. The novel workflow developed in this study addresses known challenges by combining mapping and annotation processes specifically for COVID-19-related vocabularies. The use of DSR methodology and relevant ontologies with the KNIME tool further contribute to the study’s originality, setting it apart from previous research in the terminology mapping and annotation field.

Details

The Electronic Library, vol. 41 no. 6
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 29 March 2024

Sihao Li, Jiali Wang and Zhao Xu

Abstract

Purpose

The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.

Design/methodology/approach

This study first analyzes typical building standards in the fields of architecture and fire protection, and an ontology of these elements is then developed. Based on this, a building standard corpus is built, and deep learning models are trained to automatically label the building standard texts. Neo4j is utilized for knowledge graph construction and storage, and a data extraction method based on Dynamo is designed to obtain checking data files. After that, a matching algorithm is devised to express the logical rules of knowledge graph triples, resulting in automated compliance checking for BIM models.
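The matching step can be pictured as checking extracted BIM facts against rule triples from the knowledge graph. The sketch below uses hypothetical rule names and values and a simple minimum-value comparison; the paper's actual matching algorithm over Neo4j triples is more general.

```python
# Knowledge-graph rule triples: (subject, predicate, required minimum).
# The property names and values here are invented for illustration.
rules = [
    ("FireDoor", "minWidth_mm", 900),
    ("Corridor", "minClearHeight_mm", 2400),
]

# Facts extracted from the BIM model (e.g. via Dynamo).
model_facts = {
    ("FireDoor", "minWidth_mm"): 850,
    ("Corridor", "minClearHeight_mm"): 2600,
}

def check_compliance(rules, facts):
    """Match each rule triple against extracted BIM facts; a fact
    complies when it exists and meets the rule's minimum value."""
    report = []
    for subj, pred, required in rules:
        actual = facts.get((subj, pred))
        ok = actual is not None and actual >= required
        report.append((subj, pred, required, actual, ok))
    return report

report = check_compliance(rules, model_facts)
```

Each non-compliant row in the report identifies the model element and the violated requirement, which is exactly the error-identification output the framework aims for.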

Findings

Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.

Originality/value

This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and automates the process of constructing domain knowledge graphs and checking BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 28 August 2023

Julian Warner

Abstract

Purpose

The article extends the distinction of semantic from syntactic labour to comprehend all forms of mental labour. It answers a critique from de Fremery and Buckland, which required envisaging mental labour as a differentiated spectrum.

Design/methodology/approach

The paper adopts a discursive approach. It first reviews the significance and extensive diffusion of the distinction of semantic from syntactic labour. Second, it integrates semantic and syntactic labour along a vertical dimension within mental labour, indicating analogies in principle with, and differences in application from, the inherited distinction of intellectual from clerical labour. Third, it develops semantic labour to the very highest level, on a consistent principle of differentiation from syntactic labour. Finally, it reintegrates the understanding developed of semantic labour with syntactic labour, confirming that they can fully and informatively occupy mental labour.

Findings

The article further validates the distinction of semantic from syntactic labour. It makes it possible to address Norbert Wiener's classic challenge of appropriately distributing activity between human and computer.

Research limitations/implications

The article transforms work in progress into knowledge for diffusion.

Practical implications

It has practical implications for determining what tasks to delegate to computational technology.

Social implications

The paper has social implications for the understanding of appropriate human and machine computational tasks and our own distinctive humanness.

Originality/value

The paper is highly original. Although based on preceding research, from the late 20th century, it is the first separately published full account of semantic and syntactic labour.

Details

Journal of Documentation, vol. 80 no. 3
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 3 August 2023

Yandong Hou, Zhengbo Wu, Xinghua Ren, Kaiwen Liu and Zhengquan Chen

Abstract

Purpose

High-resolution remote sensing images possess a wealth of semantic information. However, these images often contain objects of different sizes and distributions, which makes the semantic segmentation task challenging. In this paper, a bidirectional feature fusion network (BFFNet) is designed to address this challenge; it aims to increase the accurate recognition of surface objects in order to classify special features effectively.

Design/methodology/approach

There are two main crucial elements in BFFNet. Firstly, the mean-weighted module (MWM) is used to obtain the key features in the main network. Secondly, the proposed polarization enhanced branch network performs feature extraction simultaneously with the main network to obtain different feature information. The authors then fuse these two features in both directions while applying a cross-entropy loss function to monitor the network training process. Finally, BFFNet is validated on two publicly available datasets, Potsdam and Vaihingen.

Findings

In this paper, a quantitative analysis method is used to show that, in the experimental results, the proposed network outperforms other mainstream segmentation networks by 2–6% on the two datasets, respectively. Complete ablation experiments are also conducted to demonstrate the effectiveness of the elements of the network. In summary, BFFNet has proven effective at accurately identifying small objects and at reducing the effect of shadows on the segmentation process.

Originality/value

The originality of the paper is the proposal of a BFFNet based on multi-scale and multi-attention strategies to improve the ability to accurately segment high-resolution and complex remote sensing images, especially for small objects and shadow-obscured objects.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 13 October 2022

Yunis Ali Ahmed, Hafiz Muhammad Faisal Shehzad, Muhammad Mahboob Khurshid, Omayma Husain Abbas Hassan, Samah Abdelsalam Abdalla and Nashat Alrefai

Abstract

Purpose

Building information modelling (BIM) has transformed the traditional practices of the Architecture, Engineering and Construction (AEC) industry. BIM creates a collaborative digital representation of built environment data. Competitive advantage can be achieved with collaborative project delivery and rich information modelling. Despite the abundant benefits, BIM’s adoption in the AEC industry faces resistance. A substantial impediment to BIM adoption often cited is data interoperability, while other facets of interoperability have received limited attention. Other academic fields, including information systems, discuss the interoperability construct beyond data interoperability, but these interoperability factors have yet to be surveyed in the AEC industry. This study aims to investigate the effect of interoperability factors on BIM adoption and to develop a comprehensive BIM adoption model.

Design/methodology/approach

The theoretical foundations of the proposed model are based on the European Interoperability Framework (EIF) and the technology, organization, environment (TOE) framework. Quantitative data were gathered from construction firms. The model has been thoroughly examined and validated using partial least squares structural equation modelling in the SmartPLS software.

Findings

The study’s findings indicate that relative advantage, top management support, government support, organizational readiness and regulation support are determinants of BIM adoption. Financial constraints, complexity, lack of technical interoperability, semantic interoperability, organizational interoperability and uncertainty are barriers to BIM adoption. However, compatibility, competitive pressure and legal interoperability do not affect BIM adoption.

Practical implications

Finally, this study provides recommendations containing the essential technological, organizational, environmental and interoperability factors that AEC stakeholders can address to enhance BIM adoption.

Originality/value

To the best of the authors’ knowledge, this paper is one of the first studies to combine TOE and EIF in a single research model. This research provides empirical evidence for using the proposed model as a guide to promoting BIM adoption. As a result, the highlighted determinants can assist organizations in developing and executing successful policies that support BIM adoption in the AEC industry.

Details

Construction Innovation, vol. 24 no. 2
Type: Research Article
ISSN: 1471-4175
