Search results

1–10 of over 55,000
Article
Publication date: 3 June 2019

Bilal Hawashin, Shadi Alzubi, Tarek Kanan and Ayman Mansour

Abstract

Purpose

This paper aims to propose a new efficient semantic recommender method for Arabic content.

Design/methodology/approach

Three semantic similarities are proposed for integration with the recommender system to improve its ability to recommend based on the semantic aspect: CHI-based semantic similarity, singular value decomposition (SVD)-based semantic similarity and Arabic WordNet-based semantic similarity. These similarities were compared with the existing similarities used by recommender systems in the literature.
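
As a concrete illustration of the SVD-based flavour of semantic similarity mentioned above, the hedged sketch below computes latent-semantic item similarities over a toy term-document matrix. It is not the paper's implementation; the matrix, latent dimensionality and cosine comparison are assumptions for illustration only.

```python
# Hypothetical sketch: an SVD-based (latent semantic) item similarity of the kind
# the abstract names, applied to a toy term-document matrix. Illustrative only;
# the CHI-based and Arabic WordNet-based similarities are defined in the paper.
import numpy as np

def svd_item_similarity(term_doc: np.ndarray, k: int = 2) -> np.ndarray:
    """Project documents (items) into a k-dimensional latent space and
    return their pairwise cosine similarities."""
    # term_doc: rows = terms, columns = documents/items
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    doc_vecs = (np.diag(s[:k]) @ Vt[:k, :]).T        # one latent vector per document
    norms = np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    doc_vecs = doc_vecs / np.clip(norms, 1e-12, None)
    return doc_vecs @ doc_vecs.T                      # cosine similarity matrix

# Toy example: 5 terms x 4 items
X = np.array([[2, 0, 1, 0],
              [1, 0, 2, 0],
              [0, 3, 0, 1],
              [0, 1, 0, 2],
              [1, 1, 1, 1]], dtype=float)
print(np.round(svd_item_similarity(X, k=2), 3))
```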

Findings

Experiments show that the proposed semantic methods using CHI-based similarity and SVD-based similarity are more efficient than the existing methods on Arabic text in terms of accuracy and execution time.

Originality/value

Although many previous works proposed recommender system methods for English text, very few works have concentrated on Arabic text, and the field of Arabic recommender systems is largely understudied in the literature. Aside from this, there is a vital need to consider the semantic relationships behind user preferences to improve the accuracy of the recommendations. The contributions of this work are the following. First, as many recommender methods were proposed for English text and have never been tested on Arabic text, this work compares the performance of these widely used methods on Arabic text. Second, it proposes a novel semantic recommender method for Arabic text; as this method uses semantic similarity, three novel base semantic similarities were proposed and evaluated. Third, this work should direct attention to further studies of this understudied topic in the literature.

Article
Publication date: 1 March 2003

Hassan M. Selim, Reda M.S. Abdel Aal and Araby I. Mahdi

Abstract

This paper introduces a modified single linkage clustering heuristic (MOD-SLC). The objective of the proposed MOD-SLC is to test the application of the Baroni-Urbani and Buser (BUB) similarity coefficient to the manufacturing cell formation (MCF) problem instead of Jaccard's similarity coefficient. The MOD-SLC has been compared and evaluated against three cluster formation-based heuristics for MCF: single linkage clustering, enhanced rank order clustering, and the direct clustering algorithm. The MCF methods considered in this comparative and evaluative study belong to the cluster formation approach to solving the MCF problem. The comparison and evaluation are performed using four published performance measures. A total of 25 published and ten hypothetical, randomly generated problem data sets are used in the evaluative study. Results analysis is carried out to test and validate the proposed BUB-based MOD-SLC. Finally, the pros and cons of each method are stated and discussed.
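
For reference, the sketch below contrasts Jaccard's coefficient with the BUB coefficient on toy binary machine-part incidence vectors. The data are invented and the MOD-SLC heuristic itself is not reproduced; only the two coefficients compared in the abstract are shown.

```python
# Hedged sketch: Jaccard and Baroni-Urbani & Buser (BUB) similarity coefficients on
# binary machine-part incidence vectors. Toy data only.
import math

def pair_counts(x, y):
    a = sum(1 for i, j in zip(x, y) if i == 1 and j == 1)  # parts both machines process
    b = sum(1 for i, j in zip(x, y) if i == 1 and j == 0)
    c = sum(1 for i, j in zip(x, y) if i == 0 and j == 1)
    d = sum(1 for i, j in zip(x, y) if i == 0 and j == 0)  # parts neither processes
    return a, b, c, d

def jaccard(x, y):
    a, b, c, _ = pair_counts(x, y)
    return a / (a + b + c) if (a + b + c) else 0.0

def bub(x, y):
    # Baroni-Urbani & Buser coefficient: also rewards joint absences via sqrt(a*d)
    a, b, c, d = pair_counts(x, y)
    root = math.sqrt(a * d)
    return (root + a) / (root + a + b + c) if (root + a + b + c) else 0.0

m1 = [1, 1, 0, 0, 1, 0]   # machine 1 processes parts 1, 2, 5
m2 = [1, 0, 0, 0, 1, 0]   # machine 2 processes parts 1, 5
print(jaccard(m1, m2), bub(m1, m2))
```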

Details

Integrated Manufacturing Systems, vol. 14 no. 2
Type: Research Article
ISSN: 0957-6061

Article
Publication date: 6 May 2014

Jin Zhang and Marcia Lei Zeng

Abstract

Purpose

The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures.

Design/methodology/approach

In the proposed similarity measure, the nodes of the two hierarchical structures are projected onto a two-dimensional space, and both the structural similarity and the subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which structural similarity impacts the overall similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts.
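
A minimal sketch of the adjustable blend described above, assuming a simple linear weighting between structural similarity and subject similarity; the node projection and the component similarity formulas are not reproduced here and the parameter value is invented.

```python
# Hypothetical sketch: a parameter alpha controls how much structural similarity
# (vs. subject similarity of nodes) contributes to the overall hierarchy similarity.
def hierarchy_similarity(structural_sim: float, subject_sim: float, alpha: float = 0.5) -> float:
    """Blend structural and subject similarity; alpha = 0 ignores structure,
    alpha = 1 uses structure only."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * structural_sim + (1.0 - alpha) * subject_sim

print(hierarchy_similarity(0.8, 0.6, alpha=0.3))   # structure weighted lightly
```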

Findings

The evaluation shows strong correlations between the results from the new method and the results from the experts. It suggests that the similarity method achieved satisfactory results.

Practical implications

Hierarchical structures that are found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar.

Originality/value

Both the structural similarity and the subject similarity of nodes were considered in the proposed similarity method, and the extent to which structural similarity impacts the overall similarity can be adjusted. In addition, a new evaluation method for hierarchical structure similarity was presented.

Details

Journal of Documentation, vol. 70 no. 3
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 21 June 2021

Bufei Xing, Haonan Yin, Zhijun Yan and Jiachen Wang

Abstract

Purpose

The purpose of this paper is to propose a new approach to retrieve similar questions in online health communities to improve the efficiency of health information retrieval and sharing.

Design/methodology/approach

This paper proposes a hybrid approach that combines domain knowledge similarity and topic similarity to retrieve similar questions in online health communities. The domain knowledge similarity evaluates the domain distance between different questions, and the topic similarity measures questions' relationship based on the extracted latent topics.
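
An illustrative sketch of a hybrid score of this kind, blending a domain-knowledge similarity with a topic similarity computed over latent-topic distributions. The weighting, the cosine comparison and the toy distributions are assumptions, not the authors' actual measures.

```python
# Illustrative sketch only: weighted combination of a domain-knowledge similarity
# and a topic similarity over latent-topic distributions.
import numpy as np

def topic_similarity(p: np.ndarray, q: np.ndarray) -> float:
    """Cosine similarity between two questions' latent-topic distributions."""
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))

def hybrid_similarity(domain_sim: float, topics_a: np.ndarray, topics_b: np.ndarray,
                      w: float = 0.5) -> float:
    return w * domain_sim + (1.0 - w) * topic_similarity(topics_a, topics_b)

q1 = np.array([0.7, 0.2, 0.1])   # topic distribution of question 1
q2 = np.array([0.6, 0.3, 0.1])
print(hybrid_similarity(domain_sim=0.8, topics_a=q1, topics_b=q2, w=0.4))
```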

Findings

The experiment results show that the proposed method outperforms the baseline methods.

Originality/value

This method overcomes the problem of word mismatch and considers the named entities included in questions, which most existing studies do not.

Details

International Journal of Crowd Science, vol. 5 no. 2
Type: Research Article
ISSN: 2398-7294

Article
Publication date: 11 July 2019

M. Priya and Aswani Kumar Ch.

Abstract

Purpose

The purpose of this paper is to merge ontologies so as to remove redundancy and improve storage efficiency. The number of ontologies developed in recent years is noticeably high. With the availability of these ontologies, the needed information can be readily obtained, but the presence of comparably varied ontologies raises the problem of redundant work and of merging data. Assessment of the existing ontologies exposes the presence of superfluous information; hence, ontology merging is the only solution. Existing ontology merging methods focus only on highly relevant classes and instances, whereas somewhat relevant classes and instances are simply dropped, even though they may also be useful or relevant to the given domain. In this paper, we propose a new method called hybrid semantic similarity measure (HSSM)-based ontology merging using formal concept analysis (FCA) and a semantic similarity measure.

Design/methodology/approach

The HSSM categorizes relevancy into three classes of classes and instances, namely highly relevant, moderately relevant and least relevant. To achieve high efficiency in merging, HSSM performs both the FCA part and the semantic similarity part.
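
A minimal sketch of the three-level relevancy bucketing the abstract names, assuming simple thresholds on a combined similarity score; the thresholds and candidate scores are invented for illustration.

```python
# Minimal sketch, assuming simple thresholds: bucket candidate classes/instances
# into the three relevancy levels named in the abstract. Thresholds are invented.
def relevancy_level(score: float, high: float = 0.75, moderate: float = 0.4) -> str:
    if score >= high:
        return "highly relevant"
    if score >= moderate:
        return "moderately relevant"
    return "least relevant"

candidates = {"Author": 0.91, "Publication": 0.55, "CoffeeBreak": 0.12}
merged_plan = {name: relevancy_level(s) for name, s in candidates.items()}
print(merged_plan)
```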

Findings

The experimental results showed that HSSM produced better results than existing algorithms in terms of similarity distance and time. An inconsistency check can also be performed for the dissimilar classes and instances within an ontology. The output ontology will have a set of highly relevant and moderately relevant classes and instances, as well as a few least relevant classes and instances, which will eventually lead to an exhaustive ontology for the particular domain.

Practical implications

In this paper, the HSSM method is proposed and used to merge academic social network ontologies; it is observed to be an extremely powerful methodology compared with earlier studies. The HSSM approach can be applied to various domain ontologies and may offer a new perspective to researchers.

Originality/value

To the best of the authors' knowledge, HSSM has not been applied to ontology merging in any previous study.

Details

Library Hi Tech, vol. 38 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 15 February 2021

Zhongjun Tang, Tingting Wang, Junfu Cui, Zhongya Han and Bo He

Abstract

Purpose

Because of their short life cycles and greatly fluctuating total sales volumes (TSV), it is difficult to accumulate enough sales data and mine an attribute set reflecting the common needs of all consumers for an experiential product with a short life cycle (EPSLC). Methods for predicting the TSV of long-life-cycle products may not be suitable for EPSLCs. Furthermore, point prediction cannot obtain satisfactory results because the information available before production is inadequate. Thus, this paper aims to propose and verify a novel interval prediction method (IPM).

Design/methodology/approach

Because interval prediction can satisfy the requirements of preproduction investment decision-making, interval prediction was adopted, and the prediction problem was converted into a classification problem. The classification was designed by comparing similarities in attribute relationship patterns between a new EPSLC and existing product groups. The product introduction can be written or obtained before production and was therefore used as the primary information source. The IPM was verified using data on crime movies released in China from 2013 to 2017.
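
A hedged sketch of the classification step described above: a new product is assigned to the TSV-interval group whose attribute-relationship pattern it most resembles. The pattern vectors, group labels and cosine comparison are illustrative assumptions, not the paper's actual mined patterns.

```python
# Hedged sketch: classify a new product into one of three TSV-interval groups by
# comparing its attribute-relationship pattern with each group's pattern.
import numpy as np

def predict_tsv_interval(new_pattern: np.ndarray, group_patterns: dict) -> str:
    """Return the label of the group whose pattern is most similar to the new product."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(group_patterns, key=lambda g: cosine(new_pattern, group_patterns[g]))

groups = {
    "low TSV interval":    np.array([0.9, 0.1, 0.2]),
    "medium TSV interval": np.array([0.4, 0.6, 0.3]),
    "high TSV interval":   np.array([0.1, 0.8, 0.9]),
}
print(predict_tsv_interval(np.array([0.2, 0.7, 0.8]), groups))
```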

Findings

The IPM is valid. It uses the product introduction as input, classifies existing products into three groups with different TSV intervals, mines attribute relationship patterns using content and association analyses, and compares similarities in attribute relationship patterns to predict the TSV interval of a new EPSLC before production.

Originality/value

Unlike other studies, the IPM uses the product introduction to mine attribute relationship patterns and compares similarities in those patterns to predict interval values. It has strong applicability in terms of data content and structure and may enable rolling prediction.

Article
Publication date: 21 May 2018

Dongmei Han, Wen Wang, Suyuan Luo, Weiguo Fan and Songxin Wang

Abstract

Purpose

This paper aims to apply the vector space model (VSM)-PCR model to compute the similarity of fault zone ontology semantics, which verifies the feasibility and effectiveness of the VSM-PCR method in the uncertainty mapping of ontologies.

Design/methodology/approach

The authors first define the concept of an uncertainty ontology and then propose the ontology mapping method. The proposed method fully considers the properties of the ontology when measuring the similarity of concepts. It expands the single VSM of concept meaning or instance set to a three-dimensional "meaning, properties, instance" VSM and uses membership degree or correlation to express the level of uncertainty.
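
An illustrative sketch of a combination-weighted, three-dimensional VSM of the kind described above: cosine similarity is computed separately over "meaning", "properties" and "instance" vectors and blended with weights. The weights and toy vectors are invented, not the paper's VSM-PCR parameters.

```python
# Illustrative sketch: per-dimension cosine similarity over the three VSM dimensions,
# blended with assumed weights.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def concept_similarity(c1: dict, c2: dict, weights=(0.4, 0.3, 0.3)) -> float:
    dims = ("meaning", "properties", "instance")
    return sum(w * cosine(c1[d], c2[d]) for w, d in zip(weights, dims))

a = {"meaning": np.array([1.0, 0.2]), "properties": np.array([0.5, 0.5]),
     "instance": np.array([0.0, 1.0])}
b = {"meaning": np.array([0.9, 0.3]), "properties": np.array([0.4, 0.6]),
     "instance": np.array([0.1, 0.9])}
print(round(concept_similarity(a, b), 3))
```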

Findings

The method provides relatively better accuracy, which verifies the feasibility and effectiveness of the VSM-PCR method in treating the uncertainty mapping of ontologies.

Research limitations/implications

The future work will focus on exploring the similarity measure and combinational methods in every dimension.

Originality/value

This paper presents an uncertainty mapping method for ontology concepts based on a three-dimensional combination-weighted VSM, namely VSM-PCR. It expands the single VSM of concept meaning or instance set to a three-dimensional "meaning, properties, instance" VSM and uses membership degree or correlation to express the degree of uncertainty, resulting in a three-dimensional VSM. The authors finally provide an example to verify the feasibility and effectiveness of the VSM-PCR method in treating the uncertainty mapping of ontologies.

Details

Information Discovery and Delivery, vol. 46 no. 2
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 28 February 2023

Meike Huber, Dhruv Agarwal and Robert H. Schmitt

Abstract

Purpose

The determination of the measurement uncertainty is relevant to all measurement processes. In production engineering, the measurement uncertainty needs to be known to avoid erroneous decisions. However, its determination involves high effort because of the expertise and expenditure needed to model measurement processes. Once a measurement model is developed, it cannot necessarily be used for any other measurement process. To make an existing model usable for other measurement processes, and thus to reduce the effort of determining the measurement uncertainty, a procedure for the migration of measurement models has to be developed.

Design/methodology/approach

This paper presents an approach to migrate measurement models from an old process to a new, "similar" process. In this approach, the authors first define the "similarity" of two processes mathematically and then use it to give a first estimate of the measurement uncertainty of the similar measurement process and to develop different learning strategies. A trained machine-learning model is then migrated to a similar measurement process without having to perform an equally large set of experiments.
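
A hedged sketch of the general idea of model migration: a model trained on plentiful data from an existing process is updated with only a few samples from a similar new process rather than re-trained from scratch. The linear model, the synthetic data and the update loop are placeholders, not the authors' measurement model or learning strategies.

```python
# Hedged sketch: migrate a model trained on an old measurement process to a similar
# new process using only a few new samples. Data and model are placeholders.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

# Plentiful data from the old, well-characterised process
X_old = rng.normal(size=(500, 3))
y_old = X_old @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=500)

# Only a handful of measurements from the similar new process (slightly shifted behaviour)
X_new = rng.normal(size=(20, 3))
y_new = X_new @ np.array([1.1, -1.9, 0.6]) + rng.normal(scale=0.1, size=20)

model = SGDRegressor(max_iter=1000, tol=1e-4, random_state=0)
model.fit(X_old, y_old)                 # learn on the old process
for _ in range(50):                     # migrate: a few cheap update passes on new data
    model.partial_fit(X_new, y_new)

print(model.coef_)
```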

Findings

The authors' findings show that the proposed similarity assessment and model migration strategy can reduce the effort of measurement uncertainty determination. They show that the method can be applied to a real pair of similar measurement processes, i.e. two computed tomography scans, and that, when the proposed method is applied, a valid uncertainty estimate and a valid model can be built even with less data, i.e. less effort.

Originality/value

The proposed strategy can be applied to any two measurement processes showing a particular “similarity” and thus reduces the effort in estimating measurement uncertainties and finding valid measurement models.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 10
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 17 December 2021

Farouq Alhourani, Jean Essila and Bernie Farkas

Abstract

Purpose

The purpose of this paper is to develop an efficient and effective preventive maintenance (PM) plan that considers machines’ maintenance needs in addition to their reliability factor.

Design/methodology/approach

Similarity coefficient method in group technology (GT) philosophy is used. Machines’ reliability factor is considered to develop virtual machine cells based on their need for maintenance according to the type of failures they encounter.

Findings

Using the similarity coefficient method in GT philosophy for PM planning results in grouping machines based on their common failures and maintenance needs. Using machines' reliability factor makes the plan more efficient, since machines will be maintained at the same time intervals and when their maintenance is due. This helps to schedule a standard and efficient maintenance process in which maintenance materials, tools and labor are scheduled accordingly.

Practical implications

The proposed procedure will assist maintenance managers in developing efficient and effective PM plans. These maintenance plans provide better inventory management for the required maintenance materials and tools by using the developed virtual machine cells.

Originality/value

This paper presents a new procedure for implementing PM using the similarity coefficient method in GT. A new similarity coefficient equation that considers machine reliability is developed, along with a clustering algorithm that calculates the similarity between machine groups and forms virtual machine cells. A numerical example adopted from the literature is solved to demonstrate the proposed heuristic method.
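
A hypothetical sketch of how a failure-based similarity might be combined with machine reliability, in the spirit of the coefficient described above. This is not the paper's equation: the Jaccard-style failure overlap, the reliability weighting and the example data are all invented for illustration.

```python
# Hypothetical sketch only: a Jaccard-style similarity over shared failure types,
# scaled by how close two machines' reliability factors are. NOT the paper's equation.
def failure_similarity(failures_a: set, failures_b: set) -> float:
    union = failures_a | failures_b
    return len(failures_a & failures_b) / len(union) if union else 0.0

def reliability_weighted_similarity(failures_a: set, failures_b: set,
                                    rel_a: float, rel_b: float) -> float:
    # Machines with similar reliability (hence similar maintenance intervals) score higher;
    # reliabilities are assumed to lie in [0, 1].
    return failure_similarity(failures_a, failures_b) * (1.0 - abs(rel_a - rel_b))

m1 = ({"bearing wear", "belt slip"}, 0.92)
m2 = ({"bearing wear", "overheating"}, 0.88)
print(round(reliability_weighted_similarity(m1[0], m2[0], m1[1], m2[1]), 3))
```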

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 1
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 31 October 2023

Hong Zhou, Binwei Gao, Shilong Tang, Bing Li and Shuyu Wang

Abstract

Purpose

The number of construction dispute cases has maintained a high growth trend in recent years. The effective exploration and management of construction contract risk can directly promote the overall performance of the project life cycle. Missing clauses may result in a failure to match the standard contract, and if the contract modified by the owner omits key clauses, potential disputes may lead to contractors paying substantial compensation. The identification of missing clauses in construction project contracts has therefore relied heavily on manual review, which is inefficient and highly dependent on personnel experience. Existing intelligent tools only support contract query and storage, so it is urgent to raise the level of intelligence in contract clause management. Therefore, this paper aims to propose an intelligent method to detect missing clauses in construction project contracts based on natural language processing (NLP) and deep learning technology.

Design/methodology/approach

A complete classification scheme for contract clauses is designed based on NLP. First, construction contract texts are pre-processed and converted from unstructured natural language into structured digital vector form. Following the initial categorization, a multi-label classification of long-text construction contract clauses is designed to preliminarily identify whether clause labels are missing. After the multi-label clause missing detection, the authors implement a clause similarity algorithm by creatively integrating the image-detection-inspired MatchPyramid model with BERT to identify missing substantive content in the contract clauses.
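
A greatly simplified sketch of the similarity-matching step: a clause from a standard template is compared against the clauses of a reviewed contract, and a low best-match score flags a potentially missing clause. This uses an off-the-shelf sentence-embedding model purely for illustration; it is not the paper's MatchPyramid + BERT architecture, and the model name, example clauses and threshold are assumptions.

```python
# Simplified sketch: flag a template clause as potentially missing when no clause in
# the reviewed contract is sufficiently similar to it. Not the paper's architecture.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

template_clause = "The contractor shall remedy any defects notified within the defects liability period."
contract_clauses = [
    "Payment shall be made within 30 days of invoice receipt.",
    "The works shall be completed by the date stated in the appendix.",
]

template_emb = model.encode(template_clause, convert_to_tensor=True)
clause_embs = model.encode(contract_clauses, convert_to_tensor=True)
scores = util.cos_sim(template_emb, clause_embs)[0]

if float(scores.max()) < 0.6:          # assumed threshold
    print("Template clause appears to be missing from the contract.")
else:
    print("Best match score:", float(scores.max()))
```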

Findings

A total of 1,322 construction project contracts were tested. Results showed that the accuracy of the multi-label classification can reach 93%, the accuracy of similarity matching can reach 83%, and the recall rate and mean F1 of both can exceed 0.7. The experimental results verify, to some extent, the feasibility of intelligently detecting contract risk through the NLP-based method.

Originality/value

NLP is adept at recognizing textual content and has shown promising results in some contract processing applications. However, the approaches most commonly used for risk detection in construction contract clauses are rule-based and encounter challenges when handling intricate and lengthy engineering contracts. This paper introduces a deep-learning-based NLP technique that reduces manual intervention and can autonomously identify and tag types of contractual deficiencies, aligning with the evolving complexities anticipated in future construction contracts. Moreover, this method achieves recognition of extended contract clause texts. Finally, the approach is versatile: users simply need to adjust parameters such as segmentation according to the language category to detect omissions in contract clauses in diverse languages.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

1–10 of over 55,000