Search results

1 – 10 of 238
Article
Publication date: 29 November 2023

Hui Shi, Drew Hwang, Dazhi Chong and Gongjun Yan

Today’s in-demand skills may not be needed tomorrow. As companies adopt a new group of technologies, they have a pressing need for information technology (IT) professionals who…

Abstract

Purpose

Today’s in-demand skills may not be needed tomorrow. As companies adopt a new group of technologies, they have a pressing need for information technology (IT) professionals who can fill various IT positions with a mixture of technical and problem-solving skills. This study adopts a semantic analysis approach to explore how US Information Systems (IS) programs meet the challenges of emerging IT topics.

Design/methodology/approach

This study applies a hybrid semantic analysis approach to the analysis of IS higher education programs in the USA. It proposes a semantic analysis framework and a semantic analysis algorithm to analyze and evaluate the content of IS programs. Specifically, the study uses digital transformation as a case study to examine the readiness of US IS programs to meet the challenges of digital transformation. First, a knowledge pool of 15 principles and 98 keywords was developed from an extensive literature review on digital transformation. Second, 4,093 IS courses were collected from 315 IS programs in the USA, along with 493,216 scientific publication records from the Web of Science Core Collection.

Findings

Using the knowledge pool and the two collected data sets, the semantic analysis algorithm was implemented to compute a semantic similarity score (DxScore) between an IS course’s content and digital transformation. To demonstrate the credibility of the results, the state ranking based on similarity scores was compared with the state employment ranking. The results can be used by IS educators when updating IS curricula; for IT professionals in industry, they can provide insights into the training of current and future employees.
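The abstract does not reproduce the DxScore formula. As a rough, hypothetical illustration of scoring a course description against a digital-transformation keyword pool, a term-frequency cosine similarity might look like this (course text and keyword pool below are invented examples, not the study’s data):

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity between two texts using raw term-frequency vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical course description scored against a keyword pool
course = "cloud computing and data analytics for business transformation"
keyword_pool = "digital transformation cloud computing big data analytics"
score = cosine_similarity(course, keyword_pool)
```

In practice such a score would be computed over the full 98-keyword pool and weighted (e.g. by TF-IDF over the publication corpus); this sketch only shows the basic vector-space idea.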

Originality/value

This study explores the status of the IS programs in the USA by proposing a semantic analysis framework, using digital transformation as a case study to illustrate the application of the proposed semantic analysis framework, and developing a knowledge pool, a corpus and a course information collection.

Details

Information Discovery and Delivery, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 20 July 2023

Elaheh Hosseini, Kimiya Taghizadeh Milani and Mohammad Shaker Sabetnasab

This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.

Abstract

Purpose

This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.

Design/methodology/approach

This applied research employed a descriptive and analytical method, scientometric indicators, co-word techniques, and social network analysis. VOSviewer, SPSS, Python programming, and UCINet software were used for data analysis and network structure visualization.

Findings

The top ranks of the Web of Science (WOS) subject categorization belonged to various fields of computer science, and the USA was the most prolific country. The keyword “ontology” had the highest frequency of co-occurrence, and “ontology” and “semantic” formed the most frequent co-word pair. In terms of network structure, nine major topic clusters were identified based on co-occurrence, and 29 thematic clusters were identified based on hierarchical clustering. A comparison of the two clustering techniques indicated that three clusters, namely semantic bioinformatics, knowledge representation and semantic tools, were common to both. The most mature and mainstream thematic clusters were natural language processing techniques to boost modeling and visualization, context-aware knowledge discovery, probabilistic latent semantic analysis (PLSA), semantic tools, latent semantic indexing, web ontology language (OWL) syntax and ontology-based deep learning.
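The co-word technique behind these clusters rests on counting how often keyword pairs appear in the same record. A minimal sketch, using invented keyword lists rather than the study’s data:

```python
from collections import Counter
from itertools import combinations

# Each record's author keywords (hypothetical examples)
records = [
    ["ontology", "semantic", "linked data"],
    ["ontology", "semantic web", "RDF"],
    ["ontology", "semantic", "SPARQL"],
]

# Co-word analysis: count how often each keyword pair co-occurs in a record
pair_counts = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        pair_counts[(a, b)] += 1

top_pair = pair_counts.most_common(1)[0]
```

The resulting pair counts form the weighted edges of the co-word network that tools such as VOSviewer or UCINet then cluster and visualize.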

Originality/value

This study adopted various techniques, such as co-word analysis, social network analysis, network structure visualization and hierarchical clustering, to present a suitable, visual, methodical and comprehensive perspective on linked data.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 6 February 2023

Xiaobo Tang, Heshen Zhou and Shixuan Li

Predicting highly cited papers can enable an evaluation of the potential of papers and the early detection and determination of academic achievement value. However, most highly…

Abstract

Purpose

Predicting highly cited papers can enable an evaluation of the potential of papers and the early detection and determination of academic achievement value. However, most highly cited paper prediction studies rely on early citation information, so predicting highly cited papers at the time of publication is challenging. Therefore, the authors propose a method for predicting early highly cited papers based on the papers’ own features.

Design/methodology/approach

This research analyzed academic papers published in the Journal of the Association for Computing Machinery (ACM) from 2000 to 2013. Five types of features were extracted: paper features, journal features, author features, reference features and semantic features. Subsequently, the authors applied a deep neural network (DNN), support vector machine (SVM), decision tree (DT) and logistic regression (LGR), and they predicted highly cited papers 1–3 years after publication.
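As an illustration of the simplest of the four models, a minimal logistic regression over hand-scaled paper features might look like the sketch below. The feature names, toy data and training loop are hypothetical, not the authors’ implementation:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    """Train logistic regression with plain stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Features: [journal impact (scaled), author h-index (scaled), n_references (scaled)]
X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6], [0.2, 0.1, 0.3], [0.1, 0.2, 0.2]]
y = [1, 1, 0, 0]      # 1 = became highly cited within 3 years (toy labels)
w, b = train_logreg(X, y)

def predict(x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The DNN, SVM and DT models compared in the paper would consume the same feature vectors; only the classifier differs.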

Findings

Experimental results showed that early highly cited academic papers are predictable when they are first published. The authors’ prediction models showed considerable performance. This study further confirmed that the features of references and authors play an important role in predicting early highly cited papers. In addition, the proportion of high-quality journal references has a more significant impact on prediction.

Originality/value

Based on the available information at the time of publication, this study proposed an effective early highly cited paper prediction model. This study facilitates the early discovery and realization of the value of scientific and technological achievements.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 25 April 2023

Atefeh Momeni, Mitra Pashootanizadeh and Marjan Kaedi

This study aims to determine which set of recommended books is most similar to the user’s selections on LibraryThing.

Abstract

Purpose

This study aims to determine which set of recommended books is most similar to the user’s selections on LibraryThing.

Design/methodology/approach

For this purpose, 30,000 tags related to History were selected on LibraryThing. The tags of the selected books and the tags of the related recommended books were extracted from three different recommendation sections on LibraryThing. Then, four similarity criteria, namely the Jaccard coefficient, cosine similarity, the Dice coefficient and the Pearson correlation coefficient, were used to calculate the similarity between the tags. To determine the most similar recommendation section, the best similarity criterion had to be determined first, so a researcher-made questionnaire was administered to History experts.

Findings

The results showed that the Jaccard coefficient, with a frequency of 32.81, is the best similarity criterion from the point of view of History experts. By this criterion, the degree of similarity is 0.256 in the LibraryThing recommendations section, 0.163 in the section of books with similar library subjects and classifications, and 0.152 in the Member recommendations section. Based on these findings, the LibraryThing recommendations section succeeded in introducing the books most similar to the selected book compared with the other two sections.
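The four similarity criteria can be sketched over tag sets as follows. The tags are invented examples; the Pearson variant over binary membership vectors is one common formulation for set data, not necessarily the study’s exact one:

```python
import math

def jaccard(a, b):
    return len(a & b) / len(a | b)

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

def cosine(a, b):
    # Cosine similarity of binary set-membership vectors
    return len(a & b) / math.sqrt(len(a) * len(b))

def pearson(a, b, universe):
    # Pearson correlation over binary membership vectors for a fixed tag universe
    xs = [1 if t in a else 0 for t in universe]
    ys = [1 if t in b else 0 for t in universe]
    n = len(universe)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical tag sets for a selected book and a recommended book
selected = {"history", "war", "europe", "biography"}
recommended = {"history", "war", "politics"}
universe = sorted(selected | recommended)
```

All four criteria reward shared tags; they differ mainly in how they normalize for set size, which is why expert judgment was needed to pick one.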

Originality/value

To the best of the authors’ knowledge, this is the first time three sections of LibraryThing recommendations have been compared using four different similarity criteria to show which sections would be more beneficial for users’ browsing. The results showed that machine recommendations work better than human ones.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 31 October 2023

Hong Zhou, Binwei Gao, Shilong Tang, Bing Li and Shuyu Wang

The number of construction dispute cases has maintained a high growth trend in recent years. The effective exploration and management of construction contract risk can directly…

Abstract

Purpose

The number of construction dispute cases has maintained a high growth trend in recent years. Effective exploration and management of construction contract risk can directly promote the overall performance of the project life cycle. Missing clauses may cause a contract to fail to match standard contracts: if a contract modified by the owner omits key clauses, potential disputes may force contractors to pay substantial compensation. The identification of missing clauses in construction project contracts has therefore relied heavily on manual review, which is inefficient and highly dependent on personnel experience, while existing intelligent tools support only contract query and storage. It is urgent to raise the level of intelligence of contract clause management. This paper therefore aims to propose an intelligent method for detecting missing clauses in construction project contracts based on natural language processing (NLP) and deep learning technology.

Design/methodology/approach

A complete classification scheme of contract clauses is designed based on NLP. First, construction contract texts are pre-processed and converted from unstructured natural language into structured digital vector form. Following this initial categorization, a multi-label classifier for long-text construction contract clauses is designed to preliminarily identify whether clause labels are missing. After the multi-label missing-clause detection, the authors implement a clause similarity algorithm by integrating an image-detection-inspired model, MatchPyramid, with BERT to identify missing substantive content in the contract clauses.

Findings

1,322 construction project contracts were tested. Results showed that the accuracy of multi-label classification reached 93%, the accuracy of similarity matching reached 83%, and the recall and mean F1 of both exceeded 0.7. The experimental results verify, to some extent, the feasibility of intelligently detecting contract risk through the NLP-based method.

Originality/value

NLP is adept at recognizing textual content and has shown promising results in some contract processing applications. However, most existing approaches to risk detection in construction contract clauses are rule-based and struggle with intricate, lengthy engineering contracts. This paper introduces a deep learning-based NLP technique that reduces manual intervention and can autonomously identify and tag types of contractual deficiencies, aligning with the evolving complexity anticipated in future construction contracts. Moreover, the method handles extended contract clause texts. Finally, the approach is versatile: users simply adjust parameters such as segmentation for the language at hand to detect omissions in contract clauses of diverse languages.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 4 August 2023

Can Uzun and Raşit Eren Cangür

This study presents an ontological approach to assess the architectural outputs of generative adversarial networks. This paper aims to assess the performance of the generative…

Abstract

Purpose

This study presents an ontological approach to assess the architectural outputs of generative adversarial networks. This paper aims to assess the performance of the generative adversarial network in representing building knowledge.

Design/methodology/approach

The proposed ontological assessment consists of five steps: creating an architectural data set; developing an ontology for the data set; training the You Only Look Once (YOLO) object detector with labels from the proposed ontology; training the StyleGAN algorithm with the images in the data set; and finally detecting the ontological labels and calculating the ontological relations of the StyleGAN-generated pixel-based architectural images. The authors propose and calculate ontological identity and ontological inclusion metrics to assess the StyleGAN-generated ontological labels. This study uses 300 bay window images as the architectural data set for the ontological assessment experiments.

Findings

The ontological assessment provides semantic-based queries on StyleGAN-generated architectural images by checking the validity of the building knowledge representation. Moreover, this ontological validity reveals the building element label-specific failure and success rates simultaneously.

Originality/value

This study contributes to the assessment process of the generative adversarial networks through ontological validity checks rather than only conducting pixel-based similarity checks; semantic-based queries can introduce the GAN-generated, pixel-based building elements into the architecture, engineering and construction industry.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 25 April 2024

Long Zhao, Xiaoye Liu, Linxiang Li, Run Guo and Yang Chen

To realize an efficient, fast and safe robot search task, this study proposes a belief criteria decision-making approach to solve the object search task with an uncertain…

Abstract

Purpose

To realize an efficient, fast and safe robot search task, this study proposes a belief criteria decision-making approach to solve the object search task with an uncertain object location.

Design/methodology/approach

The study formulates the robot search task as a partially observable Markov decision process, uses the semantic information to evaluate the belief state and designs the belief criteria decision-making approach. A cost function considering a trade-off among belief state, path length and movement effort is modelled to select the next best location in path planning.
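A minimal sketch of such a cost function, selecting the next best location as the candidate minimizing a weighted trade-off; the weights and candidate locations below are illustrative, not the paper’s values:

```python
def location_cost(belief, path_length, effort, w=(1.0, 0.5, 0.2)):
    """Cost trading off belief of finding the object against travel cost.

    Higher belief lowers cost; longer paths and more movement raise it.
    The weights are hypothetical.
    """
    w_b, w_p, w_e = w
    return -w_b * belief + w_p * path_length + w_e * effort

# Candidate locations: (name, belief object is there, path length, movement effort)
candidates = [
    ("kitchen", 0.7, 4.0, 2.0),
    ("hallway", 0.2, 1.0, 0.5),
    ("bedroom", 0.5, 6.0, 3.0),
]

# Next best location = minimum-cost candidate
next_best = min(candidates, key=lambda c: location_cost(c[1], c[2], c[3]))
```

Tuning the weights shifts the policy between belief-greedy behaviour and travel-cost-averse behaviour, which is exactly the trade-off the cost function models.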

Findings

The semantic information is successfully modelled and propagated to represent the belief of finding the object. The belief criteria decision-making (BCDM) approach is evaluated both on the Gazebo simulation platform and in physical experiments. Compared with greedy, uniform and random methods, the BCDM approach is superior in path length and execution time.

Originality/value

Prior knowledge of the robot’s working environment, especially semantic information, can be used in path planning to achieve efficient task execution in terms of path length and execution time. The modelling and updating of environment information points to a promising research direction towards more intelligent decision-making methods for object search tasks.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 4 January 2024

Zicheng Zhang

Advanced big data analysis and machine learning methods are concurrently used to unleash the value of the data generated by government hotline and help devise intelligent…

Abstract

Purpose

Advanced big data analysis and machine learning methods are used concurrently to unleash the value of the data generated by the government hotline and to help devise intelligent applications, including automated process management, standards construction and more accurate order dispatching, so as to build high-quality government service platforms as data-driven methods become more widely adopted.

Design/methodology/approach

In this study, considering the influence of the record specifications of work order texts generated by the government hotline, machine learning tools are implemented and compared to optimize the classification of dispatching tasks, through exploratory studies of the hotline work order text, including linguistic analysis for text feature processing, new word discovery, text clustering and text classification.

Findings

The complexity of the work order content is reduced by applying more standardized writing specifications combined with numerical text-grammar features. As a result, the order dispatch success prediction accuracy reaches 89.6 per cent with the LSTM model.

Originality/value

The proposed method can help improve the current dispatching processes run by the government hotline, better guide staff to standardize the writing format of work orders, improve the accuracy of order dispatching and provide innovative support to the current mechanism.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 6 February 2024

Lin Xue and Feng Zhang

With the increasing number of Web services, correct and efficient classification of Web services is crucial to improve the efficiency of service discovery. However, existing Web…

Abstract

Purpose

With the increasing number of Web services, correct and efficient classification of Web services is crucial to improve the efficiency of service discovery. However, existing Web service classification approaches ignore the class overlap in Web services, resulting in poor accuracy of classification in practice. This paper aims to provide an approach to address this issue.

Design/methodology/approach

This paper proposes a label confusion and prior correction-based Web service classification approach. First, functional semantic representations of Web service descriptions are obtained based on BERT. Then, label confusion learning techniques are used to enhance the model’s ability to recognize and classify overlapping instances. Finally, the predicted results are corrected based on the label prior distribution to further improve service classification effectiveness.
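The exact form of the prior correction is not given in the abstract. One common form of prior-based correction rescales predicted class probabilities by the label prior distribution and renormalizes; the sketch below, including the blending exponent, is an assumption for illustration only:

```python
def prior_correct(pred_probs, priors, alpha=0.5):
    """Rescale predicted probabilities by the label prior, then renormalize.

    alpha controls how strongly the prior is blended in (hypothetical choice).
    """
    adjusted = [p * (q ** alpha) for p, q in zip(pred_probs, priors)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# Two-class example: the prior favours class 0, nudging the prediction toward it
corrected = prior_correct([0.4, 0.6], [0.7, 0.3])
```

The effect is that classes that are common in the training distribution gain probability mass relative to rare ones, which can counteract systematic misclassification of overlapping instances.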

Findings

Experiments based on the ProgrammableWeb data set show that the proposed model demonstrates 4.3%, 3.2% and 1% improvement in Macro-F1 value compared to the ServeNet-BERT, BERT-DPCNN and CARL-NET, respectively.
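Macro-F1, the metric reported here, is the unweighted mean of per-class F1 scores, so every class counts equally regardless of size. A small reference implementation:

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

Because Macro-F1 weights all categories equally, it is a natural choice for the ProgrammableWeb data set, where service categories are imbalanced.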

Originality/value

This paper proposes a Web service classification approach for overlapping categories of Web services and improves the accuracy of Web service classification.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 19 April 2024

Yingying Yu, Wencheng Su, Zhangping Lu, Guifeng Liu and Wenjing Ni

Spatial olfactory design in the library appears to be a practical approach to enhance the coordination between architectural spaces and user behaviors, shape immersive activity…

Abstract

Purpose

Spatial olfactory design in the library appears to be a practical approach to enhancing the coordination between architectural spaces and user behaviors and shaping immersive activity experiences. Therefore, this study aims to explore the association between the olfactory elements of library space and users’ olfactory perception, providing a foundation for the practical design of olfactory space in libraries.

Design/methodology/approach

Using the olfactory perception semantic differential experiment method, this study collected feedback on the emotional experience of olfactory stimuli from 56 participants in an academic library. From the perspective of environmental psychology, the dimensions of pleasure, control and arousal of users’ olfactory perception in the academic library environment were semantically and emotionally described. In addition, the impact of fatigue state on users’ olfactory perception was analyzed through statistical methods to explore the impact path of individual physical differences on olfactory perception.

Findings

It was found that users’ olfactory perception in the academic library environment can likely be semantically described along the dimensions of pleasure, arousal and control. These dimensions jointly influence users’ satisfaction with olfactory elements, and pleasure is closely correlated with satisfaction. In addition, fatigue state may affect users’ olfactory perception: users in a high-fatigue state may be more sensitive to the arousal dimension.

Originality/value

This article is an empirical exploration of users’ perception of the environmental odors in libraries. The experimental results of this paper may have practical implications for the construction of olfactory space in academic libraries.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831
