Search results

1 – 10 of over 3000
Article
Publication date: 28 February 2023

Meike Huber, Dhruv Agarwal and Robert H. Schmitt

The determination of the measurement uncertainty is relevant for all measurement processes. In production engineering, the measurement uncertainty needs to be known to avoid…

Abstract

Purpose

The determination of the measurement uncertainty is relevant for all measurement processes. In production engineering, the measurement uncertainty needs to be known to avoid erroneous decisions. However, its determination is associated with high effort due to the expertise and expenditure needed for modelling measurement processes. Once a measurement model is developed, it cannot necessarily be used for any other measurement process. In order to make an existing model usable for other measurement processes, and thus to reduce the effort of determining the measurement uncertainty, a procedure for the migration of measurement models has to be developed.

Design/methodology/approach

This paper presents an approach to migrate measurement models from an old process to a new, “similar” process. In this approach, the authors first define the “similarity” of two processes mathematically and then use it to give a first estimate of the measurement uncertainty of the similar measurement process and to develop different learning strategies. A trained machine-learning model is then migrated to a similar measurement process without having to perform an equally extensive set of experiments.
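
As a loose illustration of the migration idea, not the authors’ algorithm, a model pre-trained on abundant data from the old process can be updated with a small sample from the similar new process. The sketch below uses scikit-learn’s incremental SGD regressor; the data, coefficients and model choice are synthetic assumptions.

```python
# Minimal sketch of migrating a measurement model between "similar"
# processes via incremental learning. Data, coefficients and the choice
# of SGDRegressor are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

# Old process: plenty of experiments (features -> measured deviation).
X_old = rng.normal(size=(1000, 5))
y_old = X_old @ np.array([0.50, -0.20, 0.10, 0.00, 0.30]) + rng.normal(0.0, 0.05, 1000)

# New, "similar" process: only a handful of experiments.
X_new = rng.normal(size=(30, 5))
y_new = X_new @ np.array([0.55, -0.18, 0.10, 0.02, 0.28]) + rng.normal(0.0, 0.05, 30)

model = SGDRegressor(max_iter=1000, tol=1e-4, random_state=0)
model.fit(X_old, y_old)              # learn the old process in full
for _ in range(20):                  # migrate: a few cheap update passes
    model.partial_fit(X_new, y_new)  # on the small new-process sample

residuals = y_new - model.predict(X_new)
print("migrated-model residual std:", residuals.std())  # crude uncertainty cue
```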

Findings

The authors’ findings show that the proposed similarity assessment and model migration strategy can be used to reduce the effort of measurement uncertainty determination. They show that their method can be applied to a real pair of similar measurement processes, i.e. two computed tomography scans. When the proposed method is applied, a valid uncertainty estimate and a valid model can be built even with less data, i.e. less effort.

Originality/value

The proposed strategy can be applied to any two measurement processes showing a particular “similarity” and thus reduces the effort in estimating measurement uncertainties and finding valid measurement models.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 10
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 31 October 2023

Hong Zhou, Binwei Gao, Shilong Tang, Bing Li and Shuyu Wang

The number of construction dispute cases has maintained a high growth trend in recent years. The effective exploration and management of construction contract risk can directly…

Abstract

Purpose

The number of construction dispute cases has maintained a high growth trend in recent years. Effective exploration and management of construction contract risk can directly promote the overall performance of the project life cycle. Missing clauses may result in a failure to match standard contracts, and if a contract modified by the owner omits key clauses, potential disputes may lead to contractors paying substantial compensation. The identification of missing clauses in construction project contracts has therefore relied heavily on manual review, which is inefficient and highly dependent on personnel experience, while existing intelligent tools only support contract query and storage. It is urgent to raise the level of intelligence of contract clause management. This paper therefore aims to propose an intelligent method to detect missing clauses in construction project contracts based on natural language processing (NLP) and deep learning technology.

Design/methodology/approach

A complete classification scheme for contract clauses is designed based on NLP. First, construction contract texts are pre-processed and converted from unstructured natural language into structured digital vector form. Following this initial categorization, a multi-label classification of long-text construction contract clauses is designed to preliminarily identify whether clause labels are missing. After the multi-label clause-missing detection, the authors implement a clause similarity algorithm by integrating an image-detection-inspired model, MatchPyramid, with BERT to identify missing substantial content in the contract clauses.
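
The clause-matching step can be pictured with a plain cosine-similarity baseline over BERT sentence embeddings; the paper’s MatchPyramid-plus-BERT model is considerably more elaborate, and the model name, example clauses and threshold below are assumptions for illustration only.

```python
# Baseline sketch of clause similarity: encode a standard clause and a
# contract clause with a BERT encoder and compare by cosine similarity.
# Model name, texts and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bert-base-multilingual-cased")  # mean pooling added automatically

standard_clause = "The contractor shall maintain insurance covering all works and personnel."
contract_clause = "Insurance: the contractor keeps coverage for the works and staff."

emb = model.encode([standard_clause, contract_clause], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()

THRESHOLD = 0.8  # assumed cut-off for "substantial content present"
print(f"similarity={score:.3f}:", "possible missing content" if score < THRESHOLD else "matched")
```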

Findings

1,322 construction project contracts were tested. Results showed that the accuracy of the multi-label classification reached 93%, the accuracy of similarity matching reached 83%, and the recall and mean F1 of both exceeded 0.7. The experimental results verify, to some extent, the feasibility of intelligently detecting contract risk with the NLP-based method.

Originality/value

NLP is adept at recognizing textual content and has shown promising results in some contract processing applications. However, most existing approaches to risk detection in construction contract clauses are rule-based and encounter challenges when handling intricate and lengthy engineering contracts. This paper introduces an NLP technique based on deep learning that reduces manual intervention and can autonomously identify and tag types of contractual deficiencies, aligning with the evolving complexity anticipated in future construction contracts. Moreover, the method handles extended contract clause texts. Finally, the approach is versatile: users simply need to adjust parameters such as segmentation to the language category in order to detect omissions in contract clauses of diverse languages.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 19 January 2024

Ping Huang, Haitao Ding, Hong Chen, Jianwei Zhang and Zhenjia Sun

The growing availability of naturalistic driving datasets (NDDs) presents a valuable opportunity to develop various models for autonomous driving. However, while current NDDs…

Abstract

Purpose

The growing availability of naturalistic driving datasets (NDDs) presents a valuable opportunity to develop various models for autonomous driving. However, while current NDDs include data on vehicles with and without intended driving behavior changes, they do not explicitly capture data on vehicles that intend to change their driving behavior but do not execute it because of safety, efficiency or other factors. Such data are essential for autonomous driving decisions. This study aims to extract driving data with implicit intentions to support the development of decision-making models.

Design/methodology/approach

According to Bayesian inference, drivers who have the same intended changes likely share similar influencing factors and states. Building on this principle, this study proposes an approach to extract data on vehicles that intended to execute specific behaviors but failed to do so. This is achieved by computing driving similarities between candidate vehicles and benchmark vehicles, incorporating standard similarity metrics that take into account the location topology of surrounding vehicles and individual vehicle motion states. By doing so, the method enables a more comprehensive analysis of driving behavior and intention.
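
A rough picture of the similarity computation, under assumed features: stack a few motion-state and surrounding-topology quantities into a vector per vehicle and compare candidates against benchmarks that executed the behavior. The feature choice and the cosine metric below are illustrative, not the paper’s exact metric.

```python
# Sketch of scoring a candidate vehicle against a benchmark vehicle that
# executed a behavior (e.g. a lane change). Features and the cosine
# metric are illustrative assumptions.
import numpy as np

def state_vector(speed, gap_lead, gap_target_lane, rel_speed_lead):
    """Stack motion-state and surrounding-topology features."""
    return np.array([speed, gap_lead, gap_target_lane, rel_speed_lead])

def similarity(a, b):
    """Cosine similarity between two state vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

benchmark = state_vector(speed=28.0, gap_lead=12.0, gap_target_lane=35.0, rel_speed_lead=-3.0)
candidate = state_vector(speed=27.0, gap_lead=10.0, gap_target_lane=30.0, rel_speed_lead=-2.5)

print("similarity:", round(similarity(benchmark, candidate), 3))
# A candidate that scores high against lane-change benchmarks yet never
# changes lanes is a candidate for "intended but not executed".
```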

Findings

The proposed method is verified on the Next Generation SIMulation (NGSIM) dataset, which confirms its ability to reveal similarities between vehicles executing similar behaviors during naturalistic decision-making. The approach is also validated using simulated data, achieving an accuracy of 96.3 per cent in recognizing vehicles with specific driving behavior intentions that are not executed.

Originality/value

This study provides an innovative approach to extracting driving data with implicit intentions and offers strong support for developing data-driven decision-making models for autonomous driving. With the support of this approach, the development of autonomous vehicles can capture more real driving experience from human drivers, moving towards a safer and more efficient future.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 23 October 2023

Gorakh Nath and Abhay Maurya

The purpose of the present article is to obtain the similarity solution for the shock wave generated by a piston propagating in a self-gravitating nonideal gas under the impact of…

Abstract

Purpose

The purpose of the present article is to obtain the similarity solution for the shock wave generated by a piston propagating in a self-gravitating nonideal gas under the impact of azimuthal magnetic field for adiabatic and isothermal flows.

Design/methodology/approach

The Lie group theoretic method given by Sophus Lie is used to obtain the similarity solution in the present article.

Findings

Similarity solution with exponential law shock path is obtained for both ideal and nonideal gas cases. The effects on the flow variables, density ratio at the shock front and shock strength by the variation of the shock Cowling number, adiabatic index of the gas, gravitational parameter and nonidealness parameter are investigated. The shock strength decreases with an increase in the shock Cowling number, nonidealness parameter and adiabatic index, whereas the strength of the shock wave increases with an increase in gravitational parameter.
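
For orientation only, a similarity solution with an exponential-law shock path typically rests on an ansatz of the generic form below, in which the shock radius grows exponentially and the flow variables collapse onto functions of a single similarity variable; the notation is illustrative and need not match the authors’.

```latex
% Generic similarity ansatz for an exponential-law shock path:
% shock radius R(t), similarity variable \xi, reduced profiles U, D.
R(t) = R_0\, e^{\alpha t}, \qquad \xi = \frac{r}{R(t)}, \qquad
u(r,t) = \dot{R}(t)\, U(\xi), \qquad \rho(r,t) = \rho_a\, D(\xi)
```

Substituting such an ansatz reduces the governing partial differential equations to ordinary differential equations in the similarity variable, which is what the Lie group analysis accomplishes systematically.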

Originality/value

Propagation of shock wave with spherical geometry in a self-gravitating nonideal gas under the impact of azimuthal magnetic field for adiabatic and isothermal flows has not been studied by any author using the Lie group theoretic method.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 February 2023

V. Senthil Kumaran and R. Latha

The purpose of this paper is to provide adaptive access to learning resources in the digital library.

Abstract

Purpose

The purpose of this paper is to provide adaptive access to learning resources in the digital library.

Design/methodology/approach

A novel method using ontology-based multi-attribute collaborative filtering is proposed. Digital libraries are fully automated: all resources are in digital form, and access to the available information is provided electronically to remote as well as conventional users. To satisfy users’ information needs, a vast amount of newly created information is published electronically in digital libraries. While search applications are improving, it is still difficult for the majority of users to find relevant information. For better service, the framework should also be able to adapt queries to search domains and target learners.

Findings

This paper improves the accuracy and efficiency of predicting and recommending personalized learning resources in digital libraries. To facilitate a personalized digital learning environment, the authors propose a novel method using an ontology-supported collaborative filtering (CF) recommendation system. The objective is to provide adaptive access to learning resources in the digital library. The proposed model is based on user-based CF, which suggests learning resources for students based on their course registration, preferences for topics and digital libraries. Using ontological framework knowledge for semantic similarity, and considering multiple attributes beyond learners’ preferences for the learning resources, improves the accuracy of the proposed model.
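
For orientation, the standard user-based CF predictor has the generic textbook form below; the paper’s contribution lies in computing the similarity weights from ontological semantic knowledge over multiple attributes rather than from ratings alone. The notation is generic, not the authors’.

```latex
% User-based CF prediction for target learner u and resource i over the
% neighbourhood N(u) of most similar users (generic textbook form).
\hat{r}_{u,i} = \bar{r}_u +
  \frac{\sum_{v \in N(u)} \operatorname{sim}(u,v)\,\bigl(r_{v,i} - \bar{r}_v\bigr)}
       {\sum_{v \in N(u)} \bigl\lvert \operatorname{sim}(u,v) \bigr\rvert}
```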

Research limitations/implications

The results of this work majorly rely on the developed ontology. More experiments are to be conducted with other domain ontologies.

Practical implications

The proposed approach is integrated into Nucleus, a Learning Management System (https://nucleus.amcspsgtech.in). The results are of interest to learners, academicians, researchers and developers of digital libraries. This work also provides insights into the ontology for e-learning to improve personalized learning environments.

Originality/value

This paper computes learner similarity and learning resources similarity based on ontological knowledge, feedback and ratings on the learning resources. The predictions for the target learner are calculated and top N learning resources are generated by the recommendation engine using CF.

Article
Publication date: 18 May 2023

Rongen Yan, Depeng Dang, Hu Gao, Yan Wu and Wenhui Yu

Question answering (QA) answers the questions asked by people in the form of natural language. In the QA, due to the subjectivity of users, the questions they query have different…

Abstract

Purpose

Question answering (QA) answers the questions people ask in the form of natural language. In QA, owing to the subjectivity of users, the questions they query have different expressions, which increases the difficulty of text retrieval. Therefore, the purpose of this paper is to explore a new query-rewriting method for QA that integrates multiple related questions (RQs) to form an optimal question. Moreover, it is important to generate a new dataset of original queries (OQs) with multiple RQs.

Design/methodology/approach

This study collects a new dataset, SQuAD_extend, by crawling the QA community and uses a word graph to model the collected OQs. Next, beam search finds the best path to obtain the best question. To represent the features of the question in depth, the pretrained model BERT is used to model sentences.
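
The beam-search step can be sketched over a toy word graph as follows; the graph structure and log-probability scores are assumptions, since the paper builds its graph from an OQ and its RQs.

```python
# Sketch of beam search over a word graph to assemble a rewritten
# question. Graph and scores are toy assumptions for illustration.
def beam_search(graph, start, end, beam_width=3):
    """graph: {node: [(next_node, log_prob), ...]}. Returns (path, score)."""
    beams = [([start], 0.0)]
    while True:
        candidates = []
        for path, score in beams:
            if path[-1] == end:            # finished paths carry over
                candidates.append((path, score))
                continue
            for nxt, lp in graph.get(path[-1], []):
                candidates.append((path + [nxt], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]    # keep only the top-k paths
        if all(p[-1] == end for p, _ in beams):
            return beams[0]

graph = {
    "<s>": [("how", -0.1), ("what", -0.3)],
    "how": [("to", -0.1)],
    "what": [("is", -0.2)],
    "to": [("install", -0.4), ("fix", -0.5)],
    "is": [("python", -0.3)],
    "install": [("python", -0.2)],
    "fix": [("python", -0.6)],
    "python": [("</s>", -0.05)],
}
path, score = beam_search(graph, "<s>", "</s>")
print(" ".join(path[1:-1]), round(score, 2))  # -> "how to install python"
```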

Findings

The experimental results show three outstanding findings. (1) The quality of the answers is better after adding the RQs of the OQs. (2) The word graph used to model the question and choose the optimal path is conducive to finding the best question. (3) BERT can deeply characterize the semantics of the exact question.

Originality/value

The proposed method can use word-graph to construct multiple questions and select the optimal path for rewriting the question, and the quality of answers is better than the baseline. In practice, the research results can help guide users to clarify their query intentions and finally achieve the best answer.

Details

Data Technologies and Applications, vol. 58 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 19 January 2023

Mitali Desai, Rupa G. Mehta and Dipti P. Rana

Scholarly communications, particularly, questions and answers (Q&A) present on digital scholarly platforms provide a new avenue to gain knowledge. However, several studies have…

Abstract

Purpose

Scholarly communications, particularly questions and answers (Q&A) present on digital scholarly platforms, provide a new avenue to gain knowledge. However, several studies have raised concerns about content anomalies in these Q&A and suggested a proper validation before utilizing them in scholarly applications such as influence analysis and content-based recommendation systems. These content anomalies are referred to as disinformation in this research. The purpose of this research is, first, to assess scholarly communications in order to identify disinformation and, second, to help scholarly platforms determine the scholars who probably disseminate such disinformation. These scholars are referred to as the probable sources of disinformation.

Design/methodology/approach

To identify disinformation, the proposed model deduces (1) content redundancy and contextual redundancy in questions, (2) contextual non-relevance in answers with respect to the questions and (3) quality of answers with respect to the expertise of the answering scholars. The model then determines the probable sources of disinformation using statistical analysis.
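
The content-redundancy signal on questions can be approximated with TF-IDF cosine similarity, as sketched below; the paper’s contextual analysis additionally relies on an advanced word-embedding technique. Example texts and the threshold are illustrative assumptions.

```python
# Sketch of flagging redundant questions via TF-IDF cosine similarity.
# Example texts and the threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = [
    "How do I measure the impact factor of a journal?",
    "How do you measure the impact factor of a journal?",
    "Which statistical test fits paired ordinal data?",
]

tfidf = TfidfVectorizer().fit_transform(questions)
sim = cosine_similarity(tfidf)

THRESHOLD = 0.6  # assumed redundancy cut-off
for i in range(len(questions)):
    for j in range(i + 1, len(questions)):
        if sim[i, j] > THRESHOLD:
            print(f"questions {i} and {j} look redundant (cosine={sim[i, j]:.2f})")
```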

Findings

The model is evaluated on ResearchGate (RG) data. Results suggest that the model efficiently identifies disinformation from scholarly communications and accurately detects the probable sources of disinformation.

Practical implications

Different platforms with communication portals can use this model as a regulatory mechanism to restrict the propagation of disinformation. Scholarly platforms can use this model to generate an accurate influence assessment mechanism as well as relevant recommendations for their scholars.

Originality/value

The existing studies majorly deal with validating the answers using statistical measures. The proposed model focuses on questions as well as answers and performs a contextual analysis using an advanced word embedding technique.

Details

Kybernetes, vol. 53 no. 4
Type: Research Article
ISSN: 0368-492X

Book part
Publication date: 9 November 2023

Michał Bernardelli and Mariusz Próchniak

The comparison between economic growth and the character of monetary policy is one of the most frequently studied issues in policymaking. However, the number of studies…

Abstract

Research Background

The comparison between economic growth and the character of monetary policy is one of the most frequently studied issues in policymaking. However, the number of studies incorporating a dynamic time warping approach to analyse the similarity of macroeconomic variables is relatively small.

The Purpose of the Chapter

The study aims at assessing the mutual similarity among various variables representing the financial sector (including the monetary policy by the central bank) and the real sector (e.g. economic growth, industrial production, household consumption expenditure), as well as cross-similarity between both sectors.

Methodology

The analysis is based on the dynamic time warping (DTW) method, which allows various dimensions of change in the considered variables to be captured. This method is almost absent from the literature comparing financial and economic time series, and its application constitutes the main value added of the research. The analysis includes five variables representing the financial sector and five from the real sector. The study covers four countries, Czechia, Hungary, Poland and Romania, over the 2010–2022 period (quarterly data).
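
For readers unfamiliar with DTW, a compact reference implementation follows; the two series are synthetic stand-ins for the study’s quarterly variables.

```python
# Classic O(n*m) dynamic time warping with absolute-difference cost.
# The two series below are synthetic quarterly stand-ins.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

gdp_growth = np.array([1.2, 1.5, 0.9, -0.4, -2.1, 0.3, 1.1, 1.4])
roe        = np.array([1.0, 1.4, 1.1, -0.2, -1.8, -0.1, 0.9, 1.5])
print("DTW distance:", round(dtw_distance(gdp_growth, roe), 3))
# Smaller distances mean the two series' shapes align more closely,
# even when turning points are shifted in time.
```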

Findings

The results show that variables representing the financial sector, including those reflecting monetary policy, are weakly correlated with each other, whereas the variables representing the real economy have a solid mutual similarity. As regards individual variables, for example, GDP fluctuations show relatively substantial similarity to ROE fluctuations – especially in Czechia and Hungary. In the case of Hungary and Romania, CAR fluctuations are consistent with GDP fluctuations. In the case of Poland and Hungary, there is a relatively strong similarity between the economy's monetisation and economic growth. Comparing the individual countries, two clusters of countries can be identified. One cluster includes Poland and Czechia, while another covers Hungary and Romania.

Details

Modeling Economic Growth in Contemporary Poland
Type: Book
ISBN: 978-1-83753-655-9

Open Access
Article
Publication date: 11 October 2023

Bachriah Fatwa Dhini, Abba Suganda Girsang, Unggul Utan Sufandi and Heny Kurniawati

The authors constructed an automatic essay scoring (AES) model in a discussion forum where the result was compared with scores given by human evaluators. This research proposes…

Abstract

Purpose

The authors constructed an automatic essay scoring (AES) model in a discussion forum, where the result was compared with scores given by human evaluators. This research proposes essay scoring conducted through two parameters, semantic and keyword similarity, using pre-trained SentenceTransformers models that produce the best-performing vector embeddings. The models are combined to increase accuracy.

Design/methodology/approach

The development of the model in the study is divided into seven stages: (1) data collection, (2) data pre-processing, (3) selection of a pre-trained SentenceTransformers model, (4) semantic similarity (sentence pair), (5) keyword similarity, (6) final score calculation and (7) model evaluation.
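
A loose sketch of stages (4) to (6) follows, using one of the models named in the Findings; the weighting between the two parameters and the keyword matching are illustrative assumptions, not the authors’ exact configuration.

```python
# Sketch of combining semantic similarity (sentence pair) with rubric
# keyword similarity into a final score. Weights are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

answer = "Fotosintesis mengubah energi cahaya menjadi energi kimia pada tumbuhan."
reference = "Fotosintesis adalah proses tumbuhan mengubah energi cahaya menjadi energi kimia."
rubric_keywords = {"fotosintesis", "energi", "cahaya", "tumbuhan"}

# (4) semantic similarity on the sentence pair
emb = model.encode([answer, reference], convert_to_tensor=True)
semantic = util.cos_sim(emb[0], emb[1]).item()

# (5) keyword similarity: fraction of rubric keywords present in the answer
tokens = set(answer.lower().replace(".", "").split())
keyword = len(rubric_keywords & tokens) / len(rubric_keywords)

# (6) final score as an assumed weighted combination
final = 0.7 * semantic + 0.3 * keyword
print(f"semantic={semantic:.2f} keyword={keyword:.2f} final={final:.2f}")
```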

Findings

The multilingual paraphrase-multilingual-MiniLM-L12-v2 and distilbert-base-multilingual-cased-v1 models obtained the highest scores in a comparison of 11 pre-trained multilingual SentenceTransformers models on Indonesian data (Dhini and Girsang, 2023). Both multilingual models were adopted in this study. The combination of the two parameters is obtained by comparing the keyword-extraction responses with the rubric keywords. Based on the experimental results, the proposed combination can increase the evaluation results by 0.2.

Originality/value

This study uses discussion forum data from the general biology course in online learning at the open university for the 2020.2 and 2021.2 semesters. Grading of forum discussions is still manual. In this study, the authors created a model that automatically scores discussion-forum essays based on the lecturer’s answers as well as rubrics.

Details

Asian Association of Open Universities Journal, vol. 18 no. 3
Type: Research Article
ISSN: 1858-3431

Article
Publication date: 18 January 2024

Yahan Xiong and Xiaodong Fu

Users often struggle to choose among similar online services. To help them make informed decisions, it is important to establish a service reputation measurement…

Abstract

Purpose

Users often struggle to choose among similar online services. To help them make informed decisions, it is important to establish a service reputation measurement mechanism. User-provided feedback ratings serve as a primary source of information for this mechanism, and ensuring the credibility of user feedback is crucial for reliable reputation measurement. Most previous studies use passive detection to identify false feedback without creating incentives for honest reporting. Therefore, this study aims to develop a reputation measure for online services that provides incentives for users to report honestly.

Design/methodology/approach

In this paper, the authors present a method that uses a peer prediction mechanism to evaluate user credibility, assessing the credibility of users’ reports by applying a strictly proper scoring rule. Considering the heterogeneity among users, the authors measure user similarity, identify similar users as peers to assess credibility and calculate service reputation using an improved expectation-maximization algorithm based on user credibility.
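
The core incentive idea can be illustrated with the quadratic (Brier) rule, one well-known strictly proper scoring rule: a user’s probabilistic report is scored against a peer’s realized rating, so truthful reporting maximizes the expected score. The example beliefs below are assumptions; the paper couples peer prediction with an improved EM algorithm.

```python
# Quadratic (Brier) scoring of a probabilistic report against a peer's
# realized rating. Rating levels and example beliefs are assumptions.
def quadratic_score(report, peer_rating, levels=5):
    """report: probability vector over rating levels; peer_rating: 0-based index."""
    assert len(report) == levels and abs(sum(report) - 1.0) < 1e-9
    return 2 * report[peer_rating] - sum(p * p for p in report)

honest = [0.05, 0.05, 0.10, 0.50, 0.30]  # user's true belief over ratings
inflated = [0.0, 0.0, 0.0, 0.0, 1.0]     # always reporting the top rating

# If peers mostly rate level 4 (index 3), honesty scores higher:
print("honest  :", round(quadratic_score(honest, 3), 3))   # 0.645
print("inflated:", round(quadratic_score(inflated, 3), 3)) # -1.0
```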

Findings

Theoretical analysis and experimental results verify that the proposed method motivates truthful reporting, effectively identifies malicious users and achieves high service rating accuracy.

Originality/value

The proposed method has significant practical value in evaluating the authenticity of user feedback and promoting honest reporting.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084
