Search results

1 – 10 of 412
Article
Publication date: 31 October 2023

Hong Zhou, Binwei Gao, Shilong Tang, Bing Li and Shuyu Wang

The number of construction dispute cases has maintained a high growth trend in recent years. The effective exploration and management of construction contract risk can directly…

Abstract

Purpose

The number of construction dispute cases has maintained a high growth trend in recent years. The effective exploration and management of construction contract risk can directly promote the overall performance of the project life cycle. The omission of clauses may cause a contract to fail to match standard contracts; if a contract modified by the owner omits key clauses, potential disputes may force contractors to pay substantial compensation. To date, the identification of missing clauses in construction project contracts has relied heavily on manual review, which is inefficient and highly dependent on personnel experience, while existing intelligent tools support only contract query and storage. It is urgent to raise the level of intelligence in contract clause management. Therefore, this paper aims to propose an intelligent method to detect missing clauses in construction project contracts based on natural language processing (NLP) and deep learning technology.

Design/methodology/approach

A complete classification scheme for contract clauses is designed based on NLP. First, construction contract texts are pre-processed and converted from unstructured natural language into structured digital vector form. Following this initial categorization, a multi-label classifier for long-text construction contract clauses is designed to preliminarily identify whether clause labels are missing. After the multi-label missing-clause detection, the authors implement a clause similarity algorithm that integrates an image-matching approach, the MatchPyramid model, with BERT to identify missing substantive content in the contract clauses.
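The clause-similarity step can be illustrated with a minimal sketch. Assuming each clause has already been encoded into a vector (the paper uses BERT with a MatchPyramid-style matching model; plain cosine similarity stands in here), a missing-content check reduces to comparing each standard clause against its best match in the contract. All names, embeddings and the threshold below are illustrative assumptions, not the authors' implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def missing_clauses(standard, contract, threshold=0.8):
    # For each standard clause embedding, find its best match among the
    # contract's clause embeddings; flag the label as missing if the best
    # similarity falls below the threshold.
    missing = []
    for label, vec in standard.items():
        best = max((cosine(vec, c) for c in contract), default=0.0)
        if best < threshold:
            missing.append(label)
    return missing

standard = {"payment": [1.0, 0.0, 0.2], "dispute": [0.0, 1.0, 0.1]}
contract = [[0.9, 0.1, 0.2]]  # toy embeddings; real ones would come from BERT
print(missing_clauses(standard, contract))  # the "dispute" clause has no close match
```

In practice the threshold would be tuned on labeled contract pairs, since it controls the trade-off between missed omissions and false alarms.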

Findings

1,322 construction project contracts were tested. Results showed that the accuracy of multi-label classification reached 93%, the accuracy of similarity matching reached 83%, and the recall and mean F1 of both exceeded 0.7. The experimental results verify, to some extent, the feasibility of intelligently detecting contract risk through the NLP-based method.

Originality/value

NLP is adept at recognizing textual content and has shown promising results in some contract processing applications. However, most existing approaches to risk detection in construction contract clauses are rule-based and encounter challenges when handling intricate and lengthy engineering contracts. This paper introduces a deep learning-based NLP technique that reduces manual intervention and can autonomously identify and tag types of contractual deficiencies, aligning with the evolving complexity anticipated in future construction contracts. Moreover, this method handles extended contract clause texts. Finally, the approach is versatile: users simply need to adjust parameters such as segmentation according to the language in order to detect omissions in contract clauses of diverse languages.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 14 June 2022

Gitaek Lee, Seonghyeon Moon and Seokho Chi

Contractors must check the provisions that may cause disputes in the specifications to manage project risks when bidding for a construction project. However, since the…

Abstract

Purpose

Contractors must check the provisions that may cause disputes in the specifications to manage project risks when bidding for a construction project. However, since the specification is mainly written with reference to many national standards, determining which standard each section of the specification is derived from, and whether its content is appropriate for the local site, is a labor-intensive task. To develop an automatic reference section identification model that helps complete the specification review within the short bidding period, the authors proposed a framework that integrates rules and machine learning algorithms.

Design/methodology/approach

The study begins by collecting 7,795 sections from construction specifications and national standards from different countries. The collected sections were then matched into similar section pairs using syntactic rules derived from construction domain knowledge. Finally, to improve the reliability and extensibility of the section pairing, the authors built a deep structured semantic model that increases the cosine similarity between documents dealing with the same topic by learning human-labeled similarity information.

Findings

The integrated model developed in this study showed 0.812, 0.898, and 0.923 levels of performance in NDCG@1, NDCG@5, and NDCG@10, respectively, confirming that the model can adequately select document candidates that require comparative analysis of clauses for practitioners.
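NDCG@k, the metric reported above, rewards placing the most relevant reference sections near the top of the ranking. A minimal sketch with hypothetical relevance labels (not the authors' data):

```python
import math

def dcg(relevances, k):
    # Discounted cumulative gain over the top-k ranked items:
    # each relevance is discounted by log2(rank + 1).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg(relevances, k):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal else 0.0

# Relevance of reference sections in the order a model ranked them (toy labels).
ranked = [3, 2, 0, 1, 2]
print(round(ndcg(ranked, 5), 3))
```

Because an irrelevant item at rank 3 costs less than one at rank 1, the metric matches the practitioner's workflow of inspecting only the first few candidate sections.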

Originality/value

The results contribute to more efficient and objective identification of potential disputes within the specifications by automatically providing practitioners with the reference section most relevant to the analysis target section.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 9
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 29 November 2023

Hui Shi, Drew Hwang, Dazhi Chong and Gongjun Yan

Today’s in-demand skills may not be needed tomorrow. As companies adopt a new group of technologies, they urgently need information technology (IT) professionals who…


Abstract

Purpose

Today’s in-demand skills may not be needed tomorrow. As companies adopt a new group of technologies, they urgently need information technology (IT) professionals who can fill various IT positions with a mixture of technical and problem-solving skills. This study aims to adopt a semantic analysis approach to explore how US Information Systems (IS) programs meet the challenges of emerging IT topics.

Design/methodology/approach

This study considers the application of a hybrid semantic analysis approach to the analysis of IS higher education programs in the USA. It proposes a semantic analysis framework and a semantic analysis algorithm to analyze and evaluate the context of IS programs. Specifically, the study uses digital transformation as a case study to examine the readiness of US IS programs to meet the challenges of digital transformation. First, this study developed a knowledge pool of 15 principles and 98 keywords from an extensive literature review on digital transformation. Second, it collected 4,093 IS courses from 315 IS programs in the USA and 493,216 scientific publication records from the Web of Science Core Collection.
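The core idea of scoring a course against the knowledge pool can be sketched crudely: count how much of the keyword pool a course description covers. The keywords below are invented examples (the study's actual pool has 15 principles and 98 keywords, and its DxScore is computed semantically rather than by literal matching):

```python
# Toy knowledge pool; illustrative terms only, not the study's actual keywords.
pool = {"cloud computing", "data analytics", "automation", "digital platform"}

def keyword_score(course_text, keywords):
    # Fraction of knowledge-pool keywords that appear verbatim in the
    # course text -- a crude stand-in for the paper's embedding-based DxScore.
    text = course_text.lower()
    hits = sum(1 for kw in keywords if kw in text)
    return hits / len(keywords)

course = "This course covers cloud computing and data analytics for business."
print(keyword_score(course, pool))  # 2 of 4 keywords matched -> 0.5
```

A semantic variant would replace the substring test with a similarity comparison over embeddings, so that "cloud infrastructure" could still count toward "cloud computing".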

Findings

Using the knowledge pool and the two collected datasets, the semantic analysis algorithm was implemented to compute a semantic similarity score (DxScore) between an IS course's context and digital transformation. To establish the credibility of the results, the state ranking by similarity score was compared with the state employment ranking. The research results can be used by IS educators when updating IS curricula; for IT professionals in industry, the results can inform the training of current and future employees.

Originality/value

This study explores the status of the IS programs in the USA by proposing a semantic analysis framework, using digital transformation as a case study to illustrate the application of the proposed semantic analysis framework, and developing a knowledge pool, a corpus and a course information collection.

Details

Information Discovery and Delivery, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6247

Keywords

Open Access
Article
Publication date: 11 October 2023

Bachriah Fatwa Dhini, Abba Suganda Girsang, Unggul Utan Sufandi and Heny Kurniawati

The authors constructed an automatic essay scoring (AES) model in a discussion forum where the result was compared with scores given by human evaluators. This research proposes…

Abstract

Purpose

The authors constructed an automatic essay scoring (AES) model for a discussion forum and compared its results with scores given by human evaluators. This research proposes essay scoring based on two parameters, semantic similarity and keyword similarity, using pre-trained SentenceTransformers models that produce the best-performing vector embeddings. The two models are combined to optimize scoring accuracy.

Design/methodology/approach

The development of the model in the study is divided into seven stages: (1) data collection, (2) pre-processing data, (3) selected pre-trained SentenceTransformers model, (4) semantic similarity (sentence pair), (5) keyword similarity, (6) calculate final score and (7) evaluating model.
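Stage (6), the final score calculation, can be sketched as a weighted combination of the two similarity parameters. The weights and the 0-100 scale below are illustrative assumptions, not values from the paper:

```python
def final_score(semantic_sim, keyword_sim, w_semantic=0.7, w_keyword=0.3):
    # Weighted combination of the semantic and keyword similarity
    # parameters (each in [0, 1]), scaled to a 0-100 essay score.
    # The weights here are illustrative only.
    return 100 * (w_semantic * semantic_sim + w_keyword * keyword_sim)

# A response that is semantically close to the reference answer but
# covers only half of the rubric keywords.
print(round(final_score(0.9, 0.5), 2))  # -> 78.0
```

In an AES pipeline the weights would typically be tuned so that the combined score correlates best with the human evaluators' grades.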

Findings

The multilingual paraphrase-multilingual-MiniLM-L12-v2 and distilbert-base-multilingual-cased-v1 models achieved the highest scores in a comparison of 11 pre-trained multilingual SentenceTransformers models on Indonesian data (Dhini and Girsang, 2023), and both were adopted in this study. The keyword similarity parameter is obtained by comparing keywords extracted from responses with the rubric keywords. Based on the experimental results, the proposed combination increases the evaluation results by 0.2.

Originality/value

This study uses discussion forum data from a general biology course in online learning at the open university for the 2020.2 and 2021.2 semesters, where forum discussions are still graded manually. The authors created a model that automatically scores discussion forum essays against the lecturer's answers and rubrics.

Details

Asian Association of Open Universities Journal, vol. 18 no. 3
Type: Research Article
ISSN: 1858-3431

Keywords

Article
Publication date: 21 June 2023

Debasis Majhi and Bhaskar Mukherjee

The purpose of this study is to identify the research fronts by analysing highly cited core papers adjusted with the age of a paper in library and information science (LIS) where…

Abstract

Purpose

The purpose of this study is to identify the research fronts by analysing highly cited core papers adjusted with the age of a paper in library and information science (LIS) where natural language processing (NLP) is being applied significantly.

Design/methodology/approach

By mining international databases, 3,087 core papers that received at least 5% of the total citations were identified. By calculating the mean age of these core papers and the total citations they received, a CPT (citation/publication/time) value was calculated for each of 20 fronts to understand how much relative attention a front has received from peers over time. One theme article was then identified from each of these 20 fronts.
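One plausible reading of the CPT value, hedged because the abstract does not give the exact formula, is citations per publication per unit time. A minimal sketch with invented numbers (not the study's data):

```python
def cpt(total_citations, n_papers, mean_age_years):
    # Citations per publication, normalized by the mean age of the core
    # papers -- one plausible interpretation of the CPT
    # (citation/publication/time) value described in the abstract.
    return total_citations / n_papers / mean_age_years

# Toy numbers for one hypothetical research front.
print(round(cpt(total_citations=4800, n_papers=150, mean_age_years=20), 3))  # 1.6
```

The normalization matters: a young front with few papers but rapid citation uptake can outrank an older, larger front, which is how BERT (CPT 1.608) can top the list.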

Findings

Bidirectional encoder representations from transformers (BERT), with a CPT value of 1.608, followed by sentiment analysis, with a CPT of 1.292, received the highest attention in NLP research. Columbia University, New York (institution), the Journal of the American Medical Informatics Association (journal), the USA followed by the People's Republic of China (country) and H. Xu of the University of Texas (author) rank highest in these fronts. NLP applications were found to boost the performance of digital libraries and automated library systems in the digital environment.

Practical implications

The research fronts identified in the findings of this paper may serve as a base for researchers who intend to perform extensive research on NLP.

Originality/value

To the best of the authors' knowledge, the methodology adopted in this paper is the first of its kind, in which a meta-analysis approach is used to understand the research fronts of a subfield like NLP within a broad domain like LIS.

Details

Digital Library Perspectives, vol. 39 no. 3
Type: Research Article
ISSN: 2059-5816

Keywords

Article
Publication date: 20 July 2023

Elaheh Hosseini, Kimiya Taghizadeh Milani and Mohammad Shaker Sabetnasab

This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.

Abstract

Purpose

This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.

Design/methodology/approach

This applied research employed a descriptive and analytical method, scientometric indicators, co-word techniques, and social network analysis. VOSviewer, SPSS, Python programming, and UCINet software were used for data analysis and network structure visualization.

Findings

The top ranks of the Web of Science (WoS) subject categorization belonged to various fields of computer science, and the USA was the most prolific country. The keyword "ontology" had the highest co-occurrence frequency, and "ontology" and "semantic" formed the most frequent co-word pair. In terms of network structure, nine major topic clusters were identified based on co-occurrence, and 29 thematic clusters were identified based on hierarchical clustering. Comparison of the two clustering techniques indicated that three clusters, namely semantic bioinformatics, knowledge representation and semantic tools, were common to both. The most mature and mainstream thematic clusters were natural language processing techniques to boost modeling and visualization, context-aware knowledge discovery, probabilistic latent semantic analysis (PLSA), semantic tools, latent semantic indexing, web ontology language (OWL) syntax and ontology-based deep learning.

Originality/value

This study adopted various techniques, such as co-word analysis, social network analysis, network structure visualization and hierarchical clustering, to present a suitable, visual, methodical and comprehensive perspective on linked data.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 12 June 2023

Qinglong Li, Jaeseung Park and Jaekyeong Kim

The current study investigates the impact on perceived review helpfulness of the simultaneous processing of information from multiple cues with various central and peripheral cue…

Abstract

Purpose

The current study investigates how the simultaneous processing of information from multiple cues, in various central and peripheral cue combinations, affects perceived review helpfulness, based on the elaboration likelihood model (ELM). It develops and tests hypotheses by analyzing real-world e-commerce review data with a text mining approach to investigate how information consistency (rating inconsistency, review consistency and text similarity) influences perceived helpfulness. Moreover, the moderating role of product type in the perceived helpfulness of online consumer reviews is examined.

Design/methodology/approach

The current study collected 61,900 online reviews covering 600 products in six categories from Amazon.com. From these, 51,927 reviews that received helpfulness votes were retained, and text mining and negative binomial regression were applied.

Findings

The current study found that rating inconsistency and text similarity negatively affect perceived helpfulness and that review consistency positively affects perceived helpfulness. Moreover, peripheral cues (rating inconsistency) positively affect perceived helpfulness in reviews of experience goods rather than search goods. However, there is a lack of evidence to demonstrate the hypothesis that product types moderate the effectiveness of central cues (review consistency and text similarity) on perceived helpfulness.

Originality/value

Previous studies have mainly focused on numerical and textual factors to investigate effects on perceived helpfulness, and have examined those factors independently of one another. The current study investigated how information consistency affects perceived helpfulness and found that various combinations of cues significantly affect it. This result contributes to the review helpfulness and ELM literature by identifying the impact on perceived helpfulness from a comprehensive perspective of consumer reviews and information consistency.

Details

Data Technologies and Applications, vol. 58 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 15 June 2023

Claire M. Mason, Haohui Chen, David Evans and Gavin Walker

This paper aims to demonstrate how skills taxonomies can be used in combination with machine learning to integrate diverse online datasets and reveal skills gaps. The purpose of…

Abstract

Purpose

This paper aims to demonstrate how skills taxonomies can be used in combination with machine learning to integrate diverse online datasets and reveal skills gaps. The purpose of this study is then to show how the skills gaps revealed by the integrated datasets can be used to achieve better labour market alignment, keep educational offerings up to date and assist graduates to communicate the value of their qualifications.

Design/methodology/approach

Using the ESCO taxonomy and natural language processing, this study captures skills data from three types of online data (job ads, course descriptions and resumes), allowing us to compare demand for skills and supply of skills for three different occupations.
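The unifying trick above is that one skills taxonomy gives job ads, course descriptions and resumes a common representation. A crude sketch, assuming a tiny ESCO-like vocabulary of invented terms and simple term lookup (the study uses NLP over the full ESCO taxonomy):

```python
# A tiny ESCO-like skill vocabulary (illustrative terms, not the real taxonomy).
skills = {"python", "machine learning", "data visualisation", "sql"}

def extract_skills(text, vocabulary):
    # Return the taxonomy skills mentioned in a job ad, course description
    # or resume -- the shared representation that lets the three datasets
    # be compared on one footing.
    text = text.lower()
    return {s for s in vocabulary if s in text}

job_ad = "We need Python and SQL experience plus machine learning basics."
course = "Introduction to Python programming and data visualisation."
demand, supply = extract_skills(job_ad, skills), extract_skills(course, skills)
print(sorted(demand - supply))  # skills demanded but not taught
```

The set difference between demand-side and supply-side extractions is exactly the "skills gap" the paper surfaces, here reduced to its simplest form.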

Findings

This study illustrates three practical applications for the integrated data, showing how they can be used to help workers who are disrupted by technology to identify alternative career pathways, assist educators to identify gaps in their course offerings and support students to communicate the value of their training to employers.

Originality/value

This study builds upon existing applications of machine learning (detecting skills from a single dataset) by using the skills taxonomy to integrate three datasets. This study shows how these complementary, big datasets can be integrated to support greater alignment between the needs and offerings of educators, employers and job seekers.

Details

The International Journal of Information and Learning Technology, vol. 40 no. 4
Type: Research Article
ISSN: 2056-4880

Keywords

Article
Publication date: 18 August 2023

Gaurav Sarin, Pradeep Kumar and M. Mukund

Text classification is a widely accepted and adopted technique in organizations to mine and analyze unstructured and semi-structured data. With advancement of technological…

Abstract

Purpose

Text classification is a widely accepted and adopted technique in organizations for mining and analyzing unstructured and semi-structured data. With advances in computing technology, deep learning has become popular among academics and professionals for performing mining and analytical operations. In this work, the authors survey the research on text classification using deep learning techniques to identify gaps and opportunities for future research.

Design/methodology/approach

The authors adopted a bibliometric approach, in conjunction with visualization techniques, to uncover new insights and findings. Two decades of data were collected from the Scopus global database for this study, and business applications of deep learning techniques for text classification are discussed.

Findings

The study provides an overview of the publication sources in the combined field of text classification and deep learning, presents lists of prominent authors and their countries, and lists the most cited articles by citation count and country of research. Visualization techniques such as word clouds, network diagrams and thematic maps were used to identify collaboration networks.

Originality/value

The study helps in understanding the research gaps, which is its original contribution to the body of literature. To the best of the authors' knowledge, an in-depth study of the field of text classification with deep learning has not previously been performed in such detail. The study provides high value to scholars and professionals by identifying opportunities for research in this area.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Keywords

Article
Publication date: 20 September 2022

Jinzhu Zhang, Yue Liu, Linqi Jiang and Jialu Shi

This paper aims to propose a method for better discovering topic evolution path and semantic relationship from the perspective of patent entity extraction and semantic…

Abstract

Purpose

This paper aims to propose a method for better discovering topic evolution path and semantic relationship from the perspective of patent entity extraction and semantic representation. On the one hand, this paper identifies entities that have the same semantics but different expressions for accurate topic evolution path discovery. On the other hand, this paper reveals semantic relationships of topic evolution for better understanding what leads to topic evolution.

Design/methodology/approach

Firstly, a Bi-LSTM-CRF (bidirectional long short-term memory with conditional random field) model is designed for patent entity extraction and a representation learning method is constructed for patent entity representation. Secondly, a method based on knowledge outflow and inflow is proposed for discovering topic evolution path, by identifying and computing semantic common entities among topics. Finally, multiple semantic relationships among patent entities are pre-designed according to a specific domain, and then the semantic relationship among topics is identified through the proportion of different types of semantic relationships belonging to each topic.
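The knowledge outflow idea in the second step can be reduced to a set-overlap sketch: the share of one topic's entities that reappear in another topic. The entity sets below are invented (the paper computes semantic common entities from learned representations, not literal matches):

```python
def knowledge_flow(source_topic, target_topic):
    # Proportion of the source topic's entities that also appear in the
    # target topic -- a simple reading of "knowledge outflow" between
    # topics based on their common entities.
    common = source_topic & target_topic
    return len(common) / len(source_topic) if source_topic else 0.0

# Toy patent-entity sets for two hypothetical UAV topics.
t1 = {"rotor", "gimbal", "flight controller", "lidar"}
t2 = {"flight controller", "lidar", "obstacle avoidance"}
print(knowledge_flow(t1, t2))  # 2 of 4 entities flow onward -> 0.5
```

Computing this in both directions gives the outflow/inflow pair; a high outflow from an earlier topic to a later one is the evidence used to draw an evolution path between them.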

Findings

In the field of UAV (unmanned aerial vehicle) patents, the method identifies semantic common entities that have the same semantics but different expressions. In addition, comparison shows that it discovers topic evolution paths better than a traditional method. Finally, it identifies different semantic relationships among topics, giving a detailed description for understanding and interpreting topic evolution. These results show that the proposed method is effective and useful. That said, this is a preliminary study, and the method still needs to be investigated on other datasets using multiple emerging deep learning methods.

Originality/value

This work provides a new perspective for topic evolution analysis by considering semantic representation of patent entities. The authors design a method for discovering topic evolution paths by considering knowledge flow computed by semantic common entities, which can be easily extended to other patent mining-related tasks. This work is the first attempt to reveal semantic relationships among topics for a precise and detailed description of topic evolution.

Details

Aslib Journal of Information Management, vol. 75 no. 3
Type: Research Article
ISSN: 2050-3806

Keywords
