Search results

1 – 10 of over 2000
Article
Publication date: 31 August 2023

Faycal Touazi and Amel Boustil

Abstract

Purpose

The purpose of this paper is to address the need for new approaches to locating items that closely match user preference criteria, a need that arises from the growth in the data volume of knowledge bases resulting from Open Data initiatives. Specifically, the paper focuses on evaluating qualitative preference queries over user preferences in SPARQL.

Design/methodology/approach

The paper outlines a novel approach for handling SPARQL preference queries by representing preferences through symbolic weights using the possibilistic logic (PL) framework. This approach allows for the management of symbolic weights without relying on numerical values, using a partial ordering system instead. The paper compares this approach with numerous other approaches, including those based on skylines, fuzzy sets and conditional preference networks.
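The abstract does not give the concrete preference operators, so the following is only a minimal sketch of the underlying idea: each preference carries a symbolic weight ordered by a partial order rather than a number, and query answers are ranked by the importance of the preferences they violate. All names and data (`DOMINATES`, the example bindings, the comparison rule) are hypothetical illustrations, not the authors' formalism.

```python
# Hypothetical symbolic weights with a partial order (no numeric values):
# "high" dominates "medium" and "low"; "medium" and "low" are incomparable here.
DOMINATES = {
    "high": {"medium", "low"},
    "medium": set(),
    "low": set(),
}

def dominates(w1, w2):
    """True if symbolic weight w1 is strictly more important than w2."""
    return w2 in DOMINATES.get(w1, set())

# Each preference: (symbolic weight, predicate over a SPARQL-like result binding).
preferences = [
    ("high",   lambda b: b["price"] < 100),       # strongly preferred
    ("medium", lambda b: b["color"] == "black"),  # moderately preferred
]

def violated_weights(binding):
    """Symbolic weights of the preferences a candidate answer violates."""
    return [w for w, cond in preferences if not cond(binding)]

def better(b1, b2):
    """b1 is preferred to b2 if every violation of b1 is dominated by some
    violation of b2 (a possibilistic-style comparison sketch, not the paper's)."""
    v1, v2 = violated_weights(b1), violated_weights(b2)
    return all(any(dominates(w2, w1) for w2 in v2) for w1 in v1) and v1 != v2

# Example result bindings (hypothetical data); keep the non-dominated answers.
answers = [
    {"item": "A", "price": 80,  "color": "white"},
    {"item": "B", "price": 120, "color": "black"},
]
print([a["item"] for a in answers if not any(better(b, a) for b in answers)])
```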

Findings

The paper highlights the advantages of the proposed approach, which enables the representation of preference criteria through symbolic weights and qualitative considerations. This approach offers a more intuitive way to convey preferences and manage rankings.

Originality/value

The paper demonstrates the usefulness and originality of the proposed SPARQL language in the PL framework. The approach extends SPARQL by incorporating symbolic weights and qualitative preferences.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 8 September 2023

Oussama Ayoub, Christophe Rodrigues and Nicolas Travers

Abstract

Purpose

This paper aims to manage the word gap in information retrieval (IR), especially for long documents belonging to specific domains. With the continuous growth of the text data that modern IR systems must manage, efficient solutions are needed to find the best set of documents for a given request. The words used to describe a query can differ from those used in related documents, and even when their meanings are close, non-overlapping vocabularies are challenging for IR systems. This word gap becomes significant for long documents from specific domains.

Design/methodology/approach

To generate new words for a document, a deep learning (DL) masked language model is used to infer related words. The DL models used are pretrained on massive text corpora and carry general or domain-specific knowledge, which is exploited to build a richer document representation.
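The paper's exact expansion pipeline is not given, so here is a minimal, hedged sketch of the general idea using the Hugging Face `fill-mask` pipeline: append a masked slot to a passage and keep the model's top predictions as extra index terms. The model choice, masking strategy and number of kept words are assumptions, not the authors' settings.

```python
from transformers import pipeline

# A generic pretrained masked language model; a domain-specific
# checkpoint could be substituted (assumption).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def expand(text, top_k=5):
    """Infer extra related words for a passage by masking an appended slot.

    Simplified stand-in for the paper's expansion step: predictions for the
    [MASK] position become candidate index terms that may bridge the word
    gap between queries and documents.
    """
    predictions = fill_mask(f"{text} {fill_mask.tokenizer.mask_token}.", top_k=top_k)
    return [p["token_str"].strip() for p in predictions]

doc = "The patient was treated with a broad-spectrum antibiotic for the infection"
print(expand(doc))  # candidate expansion terms for indexing
```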

Findings

The authors evaluate the approach on domain-specific IR collections with long documents to show the genericity of the proposed model and obtain encouraging results.

Originality/value

In this paper, to the best of the authors’ knowledge, an original unsupervised and modular IR system based on recent DL methods is introduced.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 14 July 2022

Nishad A. and Sajimon Abraham

Abstract

Purpose

A wide range of technologies is currently available to meet the challenges posed by pandemic situations. As such diseases transmit through person-to-person contact or by other means, the World Health Organization recommended location tracking and tracing of people, either infected or in contact with patients, as one of the standard operating procedures and also outlined protocols for incident management. Government agencies use different inputs, such as smartphone signals and details from respondents, to prepare the travel log of patients. Every event in such a trace, such as stay points, revisited locations and meeting points, is important. The traditional system of contact tracing requires more trained staff and tools, and when the patient count spirals, time-bound tracing of primary and secondary contacts may not be possible and human error becomes likely. In this context, the purpose of this paper is to propose an algorithm called SemTraClus-Tracer, an efficient approach for computing the movement of individuals and analysing the possibility of pandemic spread and the vulnerability of locations.

Design/methodology/approach

Pandemic situations push the world into existential crises. In this context, this paper proposes an algorithm called SemTraClus-Tracer, an efficient approach for computing the movement of individuals and analysing the possibility of pandemic spread and the vulnerability of locations. By exploring the daily mobility and activities of the general public, the system identifies multiple levels of contacts with respect to an infected person and extracts semantic information by considering vital factors that can induce virus spread. It grades geographic locations according to a measure called weightage of participation, so that vulnerable locations can be easily identified. The paper also discusses the advantages of using spatio-temporal aggregate queries for extracting general characteristics of social mobility. The system can additionally generate further information by combing through patients' medical reports.
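The article names a measure called weightage of participation for grading locations but does not define it; the sketch below is a hypothetical illustration that scores each location by total stay time weighted by the number of distinct visitors, one plausible reading of the idea rather than the authors' formula.

```python
from collections import defaultdict

# Hypothetical stay-point records: (person_id, location_id, stay_minutes).
stay_points = [
    ("p1", "market", 45),
    ("p2", "market", 30),
    ("p1", "clinic", 20),
    ("p3", "market", 60),
]

def weightage_of_participation(records):
    """Grade locations by total stay time weighted by distinct visitors.

    Illustrative only: the published measure may combine other factors
    (e.g. contact presence or waypoint severity) in a different way.
    """
    minutes = defaultdict(int)
    visitors = defaultdict(set)
    for person, location, stay in records:
        minutes[location] += stay
        visitors[location].add(person)
    return {loc: minutes[loc] * len(visitors[loc]) for loc in minutes}

# Higher scores flag more vulnerable locations.
print(sorted(weightage_of_participation(stay_points).items(),
             key=lambda kv: kv[1], reverse=True))
```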

Findings

It is identified that the context of movement is important; hence, the existing SemTraClus algorithm is modified to account for four factors: stay point, contact presence, stay time of primary contacts and waypoint severity. The priority level can be reconfigured according to the interest of the authority. This approach reduces the overwhelming task of contact tracing. Different functionalities provided by the system are also explained. As a real data set is not available, experiments are conducted with similar data, and results are shown for different types of journeys in different geographical locations. The proposed method efficiently handles movement computation and activity analysis by incorporating various relevant semantics of trajectories, and the incorporation of cluster-based aggregate queries in the model does away with the computational burden of processing the entire mobility data set.

Research limitations/implications

As the trajectory of patients is not available, the authors have used the standard data sets for experimentation, which serve the purpose.

Originality/value

This paper proposes a framework that allows the emergency response team to retrieve multiple pieces of information from the tracked mobility details of a patient and supports various pandemic-mitigation activities, such as the prediction of hotspots, identification of stay locations, suggestion of possible locations of primary and secondary contacts, creation of clusters of hotspots and identification of nearby medical assistance. The system provides an efficient way of analysing activity by computing the mobility of people and identifying features of the geographical locations they travelled through. While formulating the framework, the authors reviewed many different implementation plans and protocols and concluded that the core strategy followed is more or less the same; for the sake of a reference model, the Indian scenario is adopted for defining the concepts.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 18 May 2023

Rongen Yan, Depeng Dang, Hu Gao, Yan Wu and Wenhui Yu

Abstract

Purpose

Question answering (QA) answers questions asked by people in natural language. In QA, because of the subjectivity of users, the questions they pose have different expressions, which increases the difficulty of text retrieval. Therefore, the purpose of this paper is to explore a new query rewriting method for QA that integrates multiple related questions (RQs) to form an optimal question. Moreover, it is important to generate a new dataset pairing each original query (OQ) with multiple RQs.

Design/methodology/approach

This study collects a new dataset, SQuAD_extend, by crawling a QA community and uses a word graph to model the collected OQs. Beam search then finds the best path through the graph to obtain the best question. To represent the features of the question in depth, the pretrained BERT model is used to encode sentences.
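The abstract describes modeling the questions as a word graph and running beam search over it, but not the exact construction or scoring; the sketch below is a hedged illustration in which edges are weighted by how often consecutive word pairs appear across related questions, and beam search keeps the highest-scoring partial paths. The example questions and scoring rule are made up.

```python
from collections import defaultdict

def build_word_graph(questions):
    """Count how often word w2 follows w1 across all related questions."""
    graph = defaultdict(lambda: defaultdict(int))
    for q in questions:
        words = q.lower().split()
        for w1, w2 in zip(words, words[1:]):
            graph[w1][w2] += 1
    return graph

def beam_search(graph, start, max_len=8, beam_width=3):
    """Keep the beam_width highest-scoring paths; score = sum of edge counts."""
    beams = [([start], 0)]
    for _ in range(max_len):
        candidates = []
        for path, score in beams:
            successors = graph.get(path[-1], {})
            if not successors:
                candidates.append((path, score))
                continue
            for nxt, weight in successors.items():
                if nxt not in path:  # avoid trivial loops
                    candidates.append((path + [nxt], score + weight))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return " ".join(beams[0][0])

related_questions = [
    "how do i reset my account password",
    "how can i reset a forgotten password",
    "how do i recover my account",
]
graph = build_word_graph(related_questions)
print(beam_search(graph, "how"))  # an assembled candidate rewrite
```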

Findings

The experimental results show three outstanding findings. (1) The quality of the answers is better after adding the RQs of the OQs. (2) The word graph used to model the question, together with the selection of the optimal path, is conducive to finding the best question. (3) BERT can deeply characterize the semantics of the question.

Originality/value

The proposed method uses a word graph to construct multiple candidate questions and selects the optimal path for rewriting the question, and the quality of the answers is better than the baseline. In practice, the research results can help guide users to clarify their query intentions and ultimately obtain the best answer.

Details

Data Technologies and Applications, vol. 58 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 15 May 2023

Dongsheng Li and Jun Li

Abstract

Purpose

Minimizing the impact on the surrounding environment and maximizing the use of raw materials, while ensuring that the relevant processes and services can be delivered within the specified time, are the core concerns of enterprise supply chain management under a green financial system.

Design/methodology/approach

With the continuous development of China's economy and the deepening of the concept of sustainable development, how to further upgrade enterprise supply chain management is an urgent problem to solve. Maximizing the utilization of resources in the supply chain must be realized across the whole process of raw material purchase, transportation and processing.

Findings

It was shown that digital twin technology plays a partial mediating role in the effect of supply chain big data analysis capability on corporate financial, market, operational and other performance.

Originality/value

This paper focused on describing how digital twin technology could be applied to big data analysis of the enterprise supply chain under the green financial system and demonstrated its usability through experiments. The experimental results showed that the indirect effect of the path big data analysis capability → digital twin technology → enterprise financial performance was 0.378, the indirect effect of the path big data analysis capability → digital twin technology → enterprise market performance was 0.341 and the indirect effect of the path big data analysis capability → digital twin technology → enterprise operational performance was 0.374.
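For readers unfamiliar with the path notation, the indirect effect of a simple mediated path is conventionally the product of the two path coefficients. The article reports only the resulting products (0.378, 0.341, 0.374), so the coefficients in the sketch below are hypothetical placeholders used purely to show the arithmetic.

```python
# Simple mediation: X -> M -> Y, indirect effect = a * b, where
#   a = effect of big data analysis capability (X) on digital twin technology (M)
#   b = effect of digital twin technology (M) on the performance outcome (Y)
# The coefficients below are hypothetical; only the product reported in the
# abstract (0.378 for financial performance) is an actual result.
a = 0.70            # hypothetical X -> M coefficient
b_financial = 0.54  # hypothetical M -> financial performance coefficient

indirect_financial = a * b_financial
print(round(indirect_financial, 3))  # 0.378, matching the reported indirect effect
```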

Details

Kybernetes, vol. 53 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 25 September 2023

José Félix Yagüe, Ignacio Huitzil, Carlos Bobed and Fernando Bobillo

Abstract

Purpose

There is an increasing interest in the use of knowledge graphs to represent real-world knowledge and a common need to manage imprecise knowledge in many real-world applications. This paper aims to study approaches to solve flexible queries over knowledge graphs.

Design/methodology/approach

By introducing fuzzy logic into the query answering process, the authors obtain a novel algorithm for solving flexible queries over knowledge graphs. This approach is implemented in the FUzzy Knowledge Graphs system, a software tool with an intuitive graphical user interface.

Findings

This approach makes it possible to reuse semantic web standards (RDF, SPARQL and OWL 2) and builds a fuzzy layer on top of them. The application to a use case shows that the system can aggregate information in different ways by selecting different fusion operators and adapting to different user needs.
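The abstract mentions selecting different fusion operators to aggregate information; the sketch below illustrates that idea with two standard fuzzy aggregations (minimum and weighted average) applied to hypothetical per-criterion membership degrees of query answers. The degrees, criteria and weights are invented for illustration and are not the FUzzy Knowledge Graphs system's API.

```python
# Hypothetical membership degrees in [0, 1] for two flexible criteria
# ("cheap", "close") evaluated on three knowledge-graph answers.
answers = {
    "hotel_a": {"cheap": 0.9, "close": 0.4},
    "hotel_b": {"cheap": 0.6, "close": 0.7},
    "hotel_c": {"cheap": 0.3, "close": 0.9},
}

def fuse_min(degrees):
    """Pessimistic fusion: an answer is only as good as its worst criterion."""
    return min(degrees.values())

def fuse_weighted_average(degrees, weights):
    """Compensatory fusion: trade off criteria according to user weights."""
    total = sum(weights.values())
    return sum(weights[c] * d for c, d in degrees.items()) / total

weights = {"cheap": 2.0, "close": 1.0}  # hypothetical user priorities
for name, degrees in answers.items():
    print(name, round(fuse_min(degrees), 2),
          round(fuse_weighted_average(degrees, weights), 2))
```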

Originality/value

This approach is more general than similar previous works in the literature and provides a specific way to represent the flexible restrictions (using fuzzy OWL 2 datatypes).

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 30 August 2023

Yi-Hung Liu, Sheng-Fong Chen and Dan-Wei (Marian) Wen

Abstract

Purpose

Online medical repositories provide a platform for users to share information and dynamically access abundant electronic health data. It is important to determine whether case report information can assist the general public in appropriately managing their diseases. Therefore, this paper aims to introduce a novel deep learning-based method that allows non-professionals to make inquiries using ordinary vocabulary, retrieving the most relevant case reports for accurate and effective health information.

Design/methodology/approach

The dataset of case reports was collected from both the patient-generated research network and the digital medical journal repository. To enhance the accuracy of obtaining relevant case reports, the authors propose a retrieval approach that combines BERT and BiLSTM methods. The authors identified representative health-related case reports and analyzed the retrieval performance, as well as user judgments.
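The abstract says relevance is computed by combining BERT and BiLSTM methods, without giving the architecture; below is a minimal PyTorch sketch of one common way to combine them (BERT token embeddings fed to a BiLSTM, mean-pooled into a relevance score). The hidden size, pooling and scoring head are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMScorer(nn.Module):
    """Score a (query, case report) pair: BERT encodes the concatenated text,
    a BiLSTM re-reads the token embeddings and a linear head produces a
    relevance score. Architectural details here are assumptions."""

    def __init__(self, model_name="bert-base-uncased", lstm_hidden=128):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden, 1)

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(tokens)
        return self.head(lstm_out.mean(dim=1)).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertBiLSTMScorer()
batch = tokenizer("stomach pain after eating", "Case report: gastritis ...",
                  return_tensors="pt", truncation=True)
print(model(batch["input_ids"], batch["attention_mask"]))  # relevance score
```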

Findings

This study provides the functionalities needed to deliver relevant health case reports based on input expressed in ordinary terms. The proposed framework includes features for health management, user feedback acquisition and ranking by weights to retrieve the most pertinent case reports.

Originality/value

This study contributes to health information systems by analyzing patients' experiences and treatments with the case report retrieval model. The results of this study can provide immense benefit to the general public who intend to find treatment decisions and experiences from relevant case reports.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 12 September 2023

Wenjing Wu, Caifeng Wen, Qi Yuan, Qiulan Chen and Yunzhong Cao

Abstract

Purpose

Learning from safety accidents and sharing safety knowledge have become an important part of accident prevention and of improving construction safety management. Given the difficulty of reusing unstructured data in the construction industry, the knowledge contained in such data is difficult to use directly for safety analysis. The purpose of this paper is to explore the construction of a construction safety knowledge representation model and a safety accident knowledge graph through deep learning methods, to extract construction safety knowledge entities through a BERT-BiLSTM-CRF model and to propose a data–knowledge–services data management model.

Design/methodology/approach

The ontology model for knowledge representation of construction safety accidents is constructed by integrating entity relations and logic evolution. Then, a database of safety incidents in the architecture, engineering and construction (AEC) industry is established based on collected construction safety incident reports and related dispute cases. The construction method of the construction safety accident knowledge graph is studied, and the precision of the BERT-BiLSTM-CRF algorithm in information extraction is verified through comparative experiments. Finally, a safety accident report is used as an example to construct the AEC domain construction safety accident knowledge graph (AEC-KG), which provides a visual knowledge query service and verifies the operability of the knowledge management approach.

Findings

The experimental results show that the combined BERT-BiLSTM-CRF algorithm achieves a precision of 84.52%, a recall of 92.35% and an F1 value of 88.26% in named entity recognition on the AEC domain database. The construction safety knowledge representation model and safety incident knowledge graph realize knowledge visualization.
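As a quick consistency check, the reported F1 value follows from the stated precision and recall via the harmonic mean:

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.8452
recall = 0.9235

f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.2%}")  # ~88.26%, matching the value reported in the abstract
```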

Originality/value

The proposed framework provides a new knowledge management approach for improving the safety management of practitioners and also enriches the application scenarios of knowledge graphs. On the one hand, it innovatively proposes a data application and knowledge management method for safety accident reports that integrates entity relationships and evolution logic. On the other hand, the legal adjudication dimension is innovatively added to the knowledge graph in the construction safety field as the basis for post-incident disposal measures, which provides a reference for safety managers' decision-making in all aspects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 24 April 2023

Priya Garg and Shivarama Rao K.

Abstract

Purpose

This paper aims to discuss the process of building a 24×7 reference platform that provides farmers with easy access to information at any time and from any location. The platform takes a text string as input and processes it to respond to the user with the desired result.

Design/methodology/approach

An interactive Web-based chatbot named AgriRef was developed using the free version of Dialogflow. The intents were defined based on a conversation flow diagram. The application was then integrated with a website on a local server and with the Telegram application.
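The abstract does not include integration code; the snippet below is a hedged sketch of how a backend could forward a farmer's text query to a Dialogflow agent using the official `google-cloud-dialogflow` Python client (v2 API). The project, session and language values are placeholders, and the function name is hypothetical.

```python
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

def ask_agrichatbot(project_id, session_id, text, language_code="en"):
    """Send one user utterance to a Dialogflow agent and return its reply.

    project_id/session_id are placeholders; authentication is assumed to be
    configured via GOOGLE_APPLICATION_CREDENTIALS.
    """
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# Example (hypothetical project and session names):
# print(ask_agrichatbot("agriref-demo", "farmer-42", "When should I sow wheat?"))
```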

Findings

With this chatbot application, farmers will be able to get answers to their queries through a human-like conversational interface. It will also help librarians of agricultural libraries save time in answering common queries.

Originality/value

This paper describes the various steps involved in developing the chatbot application using Dialogflow.

Details

Library Hi Tech News, vol. 41 no. 2
Type: Research Article
ISSN: 0741-9058

Open Access
Article
Publication date: 5 April 2023

Tomás Lopes and Sérgio Guerreiro

Abstract

Purpose

Testing business processes is crucial to assess the compliance of business process models with requirements. Automating this task optimizes testing efforts and reduces human error while also providing improvement insights for the business process modeling activity. The primary purposes of this paper are to conduct a literature review of Business Process Model and Notation (BPMN) testing and formal verification and to propose the Business Process Evaluation and Research Framework for Enhancement and Continuous Testing (bPERFECT) framework, which aims to guide business process testing (BPT) research and implementation. Secondary objectives include (1) eliciting the existing types of testing, (2) evaluating their impact on efficiency and (3) assessing the formal verification techniques that complement testing.

Design/methodology/approach

The methodology used is based on Kitchenham's (2004) original procedures for conducting systematic literature reviews.

Findings

Results of this study indicate that three distinct types of business process model testing can be found in the literature: black/gray-box, regression and integration. Testing and verification approaches differ in aspects such as awareness of test data, coverage criteria and the auxiliary representations used. However, most solutions have notable limitations, such as restricted BPMN element support, that reduce their practicality.

Research limitations/implications

The databases selected in the review protocol may have excluded relevant studies on this topic. More databases and gray literature could also be considered for inclusion in this review.

Originality/value

Three main originality aspects are identified in this study as follows: (1) the classification of process model testing types, (2) the future trends foreseen for BPMN model testing and verification and (3) the bPERFECT framework for testing business processes.

Details

Business Process Management Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1463-7154
