Search results

1–10 of over 59,000
Article
Publication date: 9 September 2014

Maayan Zhitomirsky-Geffet and Judit Bar-Ilan

Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal…

Abstract

Purpose

Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains through their relations.

Design/methodology/approach

Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities alone cannot fully integrate the ontologies' semantics without also matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. Several unification rules for ontological relations were devised, based on ontological reference rules and on lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain, and the ontologies were then unified through these pairs of similar relations. The authors observe that the same rules can also be applied to reveal contradictory relations in different ontologies.
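The unification step described above can be sketched with a single toy rule: a relation from one ontology is added to another when its subject, relation name and object each lexically entail a counterpart. The triple encoding, the SYNONYMS table and the one entailment rule are invented for illustration; the paper's rules also draw on ontological reference rules and textual entailment.

```python
# Hypothetical sketch of rule-based relation unification between two
# ontologies (not the authors' code). An ontology is modeled as a set of
# (subject, relation, object) triples; SYNONYMS stands in for the
# lexical-entailment resources the paper relies on.

SYNONYMS = {"car": {"automobile"}, "automobile": {"car"}}

def entails(a, b):
    """A term lexically entails another if identical or a known synonym."""
    return a == b or b in SYNONYMS.get(a, set())

def unify(onto_a, onto_b):
    """Extend onto_a with relations from onto_b whose subject, relation
    and object each entail a counterpart already in onto_a."""
    unified = set(onto_a)
    for (s2, r2, o2) in onto_b:
        for (s1, r1, o1) in onto_a:
            if entails(s2, s1) and r2 == r1 and entails(o2, o1):
                unified.add((s2, r2, o2))
    return unified
```

Relations with no entailed counterpart (e.g. an unrelated subject) are left out, which is also how contradictory relations would surface for inspection.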

Findings

To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's good potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.

Research limitations/implications

This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology has yet to be fully automated and tested on a larger dataset in future research.

Practical implications

This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.

Originality/value

To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.

Details

Aslib Journal of Information Management, vol. 66 no. 5
Type: Research Article
ISSN: 2050-3806

Keywords

Article
Publication date: 2 January 2019

Ke Zhang, Hao Gui, Zhifeng Luo and Danyang Li

Laser navigation without a reflector does not require setup of reflector markers at the scene and thus has the advantages of free path setting and flexible change. This technology…

Abstract

Purpose

Laser navigation without a reflector does not require setting up reflector markers at the scene and thus offers free path setting and flexible path changes. This technology has attracted wide attention in recent years and shows great potential in the field of automatic logistics, including building maps and locating in real time according to the environment. This paper aims to focus on the application of feature matching for map building.

Design/methodology/approach

First, an improved linear binary relation algorithm was proposed to calculate the local similarity of the feature line segments, and the matching-degree matrix of feature line segments between two adjacent maps was established. Rough matching of the two maps was then performed, yielding both the initial rotation matrix and the translation vector for the adjacent-map matching. To refine the rotation matrix, a region search optimization algorithm was proposed, which took the initial rotation matrix as the starting point and searched step by step in the direction of decreasing error-objective function until the error sequence became non-monotonic. Finally, a random-walk method was proposed to optimize the translation vector by iterating until the error-objective function reached its minimum.
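The two refinement stages above can be sketched in one dimension for clarity. The error objective, its minimizer and all constants below are invented; the real method operates on a 2-D rotation matrix and translation vector over laser-scan feature lines.

```python
import random

# Toy sketch of the two optimization stages: hill-climbing on the rotation
# until the error stops decreasing, then a random walk on the translation.
# match_error is a stand-in objective with its minimum at theta=0.3, tx=5.0.

def match_error(theta, tx):
    return (theta - 0.3) ** 2 + (tx - 5.0) ** 2

def refine_rotation(theta0, step=0.01):
    """Hill-climb from the initial rotation until the error sequence
    stops decreasing (the 'non-monotonic' stopping rule above)."""
    theta = theta0
    while True:
        best = min((theta - step, theta + step),
                   key=lambda t: match_error(t, 5.0))
        if match_error(best, 5.0) >= match_error(theta, 5.0):
            return theta
        theta = best

def refine_translation(theta, tx0, iters=2000, sigma=0.1, seed=0):
    """Random walk on the translation, keeping a step only if it
    lowers the error objective."""
    rng = random.Random(seed)
    tx = tx0
    for _ in range(iters):
        cand = tx + rng.gauss(0.0, sigma)
        if match_error(theta, cand) < match_error(theta, tx):
            tx = cand
    return tx
```

Running `refine_translation(refine_rotation(0.0), 0.0)` drives both parameters toward the objective's minimum, mirroring the rotation-then-translation order of the paper.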

Findings

The experimental results show that the final matching error was kept within 10 mm after both rotation and translation optimization. The map-matching and optimization algorithm proposed in this paper can accurately realize feature matching of a laser navigation map and essentially meets the real-time navigation and positioning requirements of an automated guided robot.

Originality/value

A linear binary relation algorithm was proposed, and the local similarity between line segments was calculated on the basis of the binary relation. The hill-climbing region search algorithm and the random-walk algorithm were proposed to optimize the rotation matrix and the translation vector, respectively. The algorithm has been applied in industrial production.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 1
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 31 October 2018

Sangwan Kim

The purpose of this paper is to investigate whether revenue-expense matching is inversely associated with cost of capital and information asymmetry, respectively, in the equity…

Abstract

Purpose

The purpose of this paper is to investigate whether revenue-expense matching is inversely associated with cost of capital and information asymmetry, respectively, in the equity markets.

Design/methodology/approach

This paper uses a firm-specific measure of revenue-expense matching consistent with Dichev and Tang (2008). To obtain a proxy for the cost of equity, this paper uses the average ex ante implied cost of capital estimate calculated from analysts' forecast data, based on the Feltham–Ohlson residual income valuation framework. In additional tests, this paper uses the probability of informed trades (PIN) as a proxy for information asymmetry among equity investors. This paper employs both OLS and fractional logit regression models to test its main predictions.
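A Dichev-and-Tang-style firm-specific matching measure can be sketched as the R^2 from regressing a firm's current revenues on its past, current and future expenses. The toy data and variable names below are illustrative only, not the paper's sample or exact specification.

```python
import numpy as np

# Hedged sketch: matching = R^2 of revenue_t on expense_{t-1}, expense_t
# and expense_{t+1} over one firm's time series. Higher R^2 means
# contemporaneous revenues and expenses line up more closely.

def matching_r2(revenue, expense):
    revenue = np.asarray(revenue, dtype=float)
    expense = np.asarray(expense, dtype=float)
    y = revenue[1:-1]                  # drop endpoints so lead/lag exist
    X = np.column_stack([
        np.ones(len(y)),
        expense[:-2],                  # lagged expenses
        expense[1:-1],                 # contemporaneous expenses
        expense[2:],                   # leading expenses
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot
```

A firm whose revenues are an exact affine function of contemporaneous expenses scores near 1; mismatched series score lower.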

Findings

This paper documents that firms with high revenue-expense matching enjoy a lower cost of capital, supporting the direct impact of high matching on cost of capital by increasing the precision of public information signals. Further, matching of contemporaneous revenues and expenses is inversely associated with information asymmetry, suggesting that the indirect impact of high matching on cost of capital through its impact on information asymmetry is also plausible.

Originality/value

Although an extensive body of literature has established a link between various disclosure/earnings properties and cost of capital, this research is the first to establish a link between matching and cost of capital. This paper fills the void in the literature by showing that revenue-expense matching – a fundamental property of accounting earnings – affects equity investors’ required rate of returns.

Details

Managerial Finance, vol. 44 no. 11
Type: Research Article
ISSN: 0307-4358

Keywords

Article
Publication date: 7 July 2014

Maayan Zhitomirsky-Geffet and Eden Shalom Erez

Ontologies are defined as consensual formal conceptualisation of shared knowledge. However, the explicit overlap between diverse ontologies is usually very low since they are…

Abstract

Purpose

Ontologies are defined as a consensual formal conceptualisation of shared knowledge. However, the explicit overlap between diverse ontologies is usually very low, since they are typically constructed by different experts. Hence, the purpose of this paper is to suggest exploiting the "wisdom of crowds" to assess the maximal potential for inter-ontology agreement on controversial domains.

Design/methodology/approach

The authors propose a scheme in which independent ontology users can explicitly express their opinions on a specified set of ontologies. The collected user opinions are then employed as features for a machine classification algorithm to distinguish the consensual ontological relations from the controversial ones. In addition, the authors devised new evaluation methods to measure the reliability and accuracy of the presented scheme.
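A minimal sketch of the crowd-annotation idea above, assuming each ontological relation carries a vector of binary user opinions; the fixed agreement threshold stands in for the paper's trained classifier, and the example relations are invented.

```python
# Each relation maps to a list of user opinions (1 = agree, 0 = disagree).
# A relation is labeled consensual when the agreement ratio reaches the
# threshold; otherwise it is controversial. Threshold is illustrative.

def classify(relation_opinions, threshold=0.8):
    labels = {}
    for relation, opinions in relation_opinions.items():
        agreement = sum(opinions) / len(opinions)
        labels[relation] = ("consensual" if agreement >= threshold
                            else "controversial")
    return labels
```

In the paper the opinion vectors feed a learned classifier rather than a hand-set threshold, but the input/output shape is the same.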

Findings

The accuracy of the relation classification was high (90 per cent), as was the reliability of the user agreement annotations (over 90 per cent). These results indicate a fair ability of the scheme to learn the maximal set of consensual relations from the specified set of diverse ontologies.

Research limitations/implications

The data sets and the group of participants in our experiments were of limited size and thus the presented results are promising but cannot be generalised at this stage of research.

Practical implications

A diversity of opinions expressed by different ontologies has to be resolved in order to digitise many domains of knowledge (e.g. cultural heritage, folklore, medicine, economy, religion, history, art). This work presents a methodology to formally represent this diverse knowledge in a rich semantic scheme where there is a need to distinguish between the commonly shared and the controversial relations.

Originality/value

To the best of the authors' knowledge, this is the first proposal to consider crowd-based evaluation and classification of ontological relations to maximise inter-ontology agreement.

Details

Online Information Review, vol. 38 no. 5
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 29 May 2009

Ling Zhang, Ting Nie and Yongtai Luo

With the development of China's economy, more and more Chinese researchers in HR field try to explore suitable policies and practices from China's realities. Researchers have…


Abstract

Purpose

With the development of China's economy, more and more Chinese researchers in the HR field try to derive suitable policies and practices from China's realities. Researchers have spent considerable effort identifying means of using human resource management practices to utilize human capital effectively. At the same time, it is well recognized that organizational justice plays a critical role in the effective management of employees' attitudes and behaviors. The purpose of this paper is to demonstrate a framework for matching organizational justice and employment mode.

Design/methodology/approach

A quantitative research method is used in this study. Based on a literature review of organizational justice, HR architecture, social exchange and related topics, the study seeks to identify the relations between organizational justice and employment mode.

Findings

The study integrates these two seemingly disparate streams of research and puts forward a framework for matching organizational justice and employment mode. Different groups of employees are managed differently and may require different organizational justice styles, and these styles should be consistent with the objectives and psychological contracts underlying the different employment modes.

Originality/value

The study seeks to match organizational justice strategies with employment modes; it is an attempt to use organizational justice to manage different employee groups from a contingency and deployment perspective.

Details

Journal of Technology Management in China, vol. 4 no. 2
Type: Research Article
ISSN: 1746-8779

Keywords

Article
Publication date: 2 June 2020

Zhongxiang Zhou, Liang Ji, Rong Xiong and Yue Wang

In robot programming by demonstration (PbD) of small parts assembly tasks, the accuracy of parts poses estimated by vision-based techniques in demonstration stage is far from…

Abstract

Purpose

In robot programming by demonstration (PbD) of small parts assembly tasks, the accuracy of parts poses estimated by vision-based techniques in demonstration stage is far from enough to ensure a successful execution. This paper aims to develop an inference method to improve the accuracy of poses and assembly relations between parts by integrating visual observation with computer-aided design (CAD) model.

Design/methodology/approach

In this paper, the authors propose a spatial information inference method called probabilistic assembly graph with optional CAD model, abbreviated as PAGC*, to achieve this task. An assembly relation extraction method from the CAD model is then designed, in which the various assembly relation descriptions in the CAD model are reduced to two fundamental relations: collinear and coplanar. Relation similarity, distance similarity and rotation similarity are adopted as the criteria for matching similar parts between the CAD model and the observation. The knowledge of a part in the CAD model is used to correct that of the corresponding part in the observation. Maximum likelihood estimation is used to infer the accurate poses and assembly relations based on the probabilistic assembly graph.
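The part-matching criterion above can be sketched as a weighted combination of the three similarities; the weights, data layout and similarity values below are invented for illustration and are not the paper's formulation.

```python
# Each candidate CAD/observation pairing carries three similarities in
# [0, 1]; a weighted sum picks the best-matching observed part.

def match_score(sims, w_rel=0.4, w_dist=0.3, w_rot=0.3):
    """Combine relation, distance and rotation similarity into one score."""
    return (w_rel * sims["relation"]
            + w_dist * sims["distance"]
            + w_rot * sims["rotation"])

def best_match(candidates):
    """Pick the observed part with the highest combined similarity."""
    return max(candidates, key=lambda c: match_score(c["sims"]))
```

The selected pairing is what lets CAD knowledge correct the corresponding observed part's pose in the inference step.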

Findings

In the experiments, both simulated data and real-world data are applied to evaluate the performance of the PAGC* model. The experimental results show the superiority of PAGC* in accuracy compared with assembly graph (AG) and probabilistic assembly graph without CAD model (PAG).

Originality/value

The paper provides a new approach for obtaining the accurate pose of each part in the demonstration stage of the robot PbD system. By integrating information from visual observation with prior knowledge from the CAD model, PAGC* ensures success in the execution stage of the PbD system.

Details

Assembly Automation, vol. 40 no. 5
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 9 February 2015

Bhaskar Sinha, Somnath Chandra and Megha Garg

The purpose of this explorative research study is to focus on the implementation of semantic Web technology on agriculture domain of e-governance data. The study contributes to an…


Abstract

Purpose

The purpose of this explorative research study is to focus on the implementation of semantic Web technology on the agriculture domain of e-governance data. The study contributes to an understanding of the problems and difficulties in implementations over the unstructured and unformatted datasets of a multilingual, local-language-based electronic dictionary (IndoWordNet).

Design/methodology/approach

The approach proceeds from conceptual and logical design to the realization of agriculture-based terms and terminology extracted from the linked multilingual IndoWordNet, while maintaining support for the World Wide Web Consortium (W3C) semantic Web technology standards, to generate ontology and uniform Unicode structured datasets.

Findings

The findings reveal partial support for the extraction of terms, relations and concepts when linking to IndoWordNet, resulting in SynSets, lexical relations of words and relations among them. This helped in the generation of ontology, hierarchical modeling and the creation of structured metadata datasets.

Research limitations/implications

IndoWordNet has limitations, as it is not a fully revised version owing to India's diverse cultural base, and the new version is yet to be released. As mentioned in Section 5, the implications of these ideas and experiments will support further exploration and better applications using such a wordnet.

Practical implications

Language developer tools and frameworks were used to process tagged, annotated raw data into intermediate results, which serve as a source for the generation of ontology and dynamic metadata.

Social implications

The results are expected to be applied to other e-governance applications and to improve the use of applications in social and government departments.

Originality/value

The authors have worked out experimental facts and raw source datasets, obtaining satisfactory results such as SynSets, sense counts, semantic and lexical relations, and class concept hierarchies, which helped in developing an ontology of the domain of interest and hence in creating dynamic metadata that can be used globally to support various applications.

Details

Journal of Knowledge Management, vol. 19 no. 1
Type: Research Article
ISSN: 1367-3270

Keywords

Article
Publication date: 14 September 2010

Volker Gruhn and Ralf Laue

The purpose of this paper is to present a new heuristic approach for finding errors and possible improvements in business process models.


Abstract

Purpose

The purpose of this paper is to present a new heuristic approach for finding errors and possible improvements in business process models.

Design/methodology/approach

First, the paper translates the information included in a model into a set of Prolog facts. It then searches for patterns that are related to a violation of the soundness property, indicate bad modeling style, or otherwise suggest that the model should be improved. To validate this approach, the paper analyzes a repository of almost 1,000 business process models. For this purpose, three different model-checkers that explore the state space of all possible executions of a model are used, and their results are compared with the results of this heuristic approach.
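The pattern search above can be sketched in Python rather than Prolog (the real tool encodes models as Prolog facts). One classic unsound pattern: branches opened by an XOR split are synchronized by an AND join, which deadlocks because only one branch is ever activated. The model encoding is invented, and for simplicity only direct successors and predecessors are checked.

```python
# nodes: {node_id: kind}; edges: list of (from, to) pairs.

def deadlock_candidates(nodes, edges):
    """Return (split, join) pairs where an XOR split directly feeds
    every predecessor of an AND join with more than one predecessor --
    a heuristic deadlock pattern, not a full soundness check."""
    succ, pred = {}, {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
        pred.setdefault(b, set()).add(a)
    return [
        (s, j)
        for s, kind in nodes.items() if kind == "xor_split"
        for j, jkind in nodes.items()
        if jkind == "and_join"
        and len(pred.get(j, set())) > 1
        and pred[j] <= succ.get(s, set())
    ]
```

Because the check is a pattern match over facts, it runs in polynomial time and never explores the execution state space, which is why the heuristic avoids state-space explosion.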

Findings

The paper finds that the heuristic approach identifies violations of the soundness property almost as accurately as model-checkers. However, unlike these tools, the approach never ran into state-space explosion problems. Furthermore, the heuristic approach can also detect patterns of bad modeling style, which can help to improve the quality of models.

Practical implications

Heuristic checks can run in the background while the modeler works on the model. In this way, feedback about possible modeling errors can be provided instantly. This feedback can be used to correct possible problems immediately.

Originality/value

Current Prolog‐based validation tools check mainly for syntactical correctness and consistency requirements. This approach adds one more perspective by also detecting control‐flow errors (like deadlocks) and even pragmatic issues.

Details

Business Process Management Journal, vol. 16 no. 5
Type: Research Article
ISSN: 1463-7154

Keywords

Article
Publication date: 15 June 2012

Shohei Ohsawa, Toshiyuki Amagasa and Hiroyuki Kitagawa

The purpose of this paper is to improve the performance of reasoning and querying over large-scale Resource Description Framework (RDF) data. When processing RDF(S…

Abstract

Purpose

The purpose of this paper is to improve the performance of reasoning and querying over large-scale Resource Description Framework (RDF) data. When RDF(S) data are processed, RDFS entailment is performed, which often generates a large number of additional triples and causes poor performance. To deal with large-scale RDF data, it is important to develop a scheme that enables large RDF data to be processed efficiently.

Design/methodology/approach

The authors propose RDF packages, a space-efficient format for RDF data. In an RDF package, a set of triples of the same class, or of triples having the same predicate, is grouped into a dedicated node named a Package. Any RDF data can be represented using RDF packages, and vice versa.
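The grouping idea behind RDF packages can be sketched as follows, using an invented dictionary encoding (the paper defines its own node format): packing collapses triples that share a predicate into one entry, and unpacking restores the original triples losslessly.

```python
from collections import defaultdict

# Triples sharing a predicate are collapsed into one "package" that lists
# only subject/object pairs, so the predicate is stored once per group.

def pack_by_predicate(triples):
    """Group (s, p, o) triples into {predicate: [(s, o), ...]} packages."""
    packages = defaultdict(list)
    for s, p, o in triples:
        packages[p].append((s, o))
    return dict(packages)

def unpack(packages):
    """Recover the original triple set from the packages."""
    return {(s, p, o) for p, pairs in packages.items() for s, o in pairs}
```

The round trip is exact, which mirrors the paper's claim that any RDF data can be represented as packages and vice versa; the space saving grows with the number of triples sharing each predicate.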

Findings

It is found that using RDF packages can significantly reduce the size of RDF data, even after RDFS entailment. The authors experimentally evaluate the performance of the proposed scheme in terms of triple size, reasoning speed, and querying speed.

Research limitations/implications

The proposed scheme is useful in processing RDF(S) data, but it needs further development to deal with an ontological language such as OWL.

Originality/value

An important feature of RDF packages is that, when performing RDFS reasoning, there is no need to modify either the reasoning rules or the reasoning engine, whereas other related schemes require the reasoning rules or the reasoning engine to be modified.

Details

International Journal of Web Information Systems, vol. 8 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 10 December 2019

Xiaoming Zhang, Mingming Meng, Xiaoling Sun and Yu Bai

With the advent of the era of Big Data, the scale of knowledge graphs (KG) in various domains is growing rapidly; they hold a huge amount of knowledge that surely benefits the…

Abstract

Purpose

With the advent of the era of Big Data, the scale of knowledge graphs (KG) in various domains is growing rapidly; they hold a huge amount of knowledge that surely benefits question answering (QA) research. However, a KG, which consists of entities and relations, is structurally inconsistent with a natural language query, so QA systems based on KGs still face difficulties. The purpose of this paper is to propose a method to answer domain-specific questions based on a KG, facilitating information queries over a domain KG.

Design/methodology/approach

The authors propose a method, FactQA, to answer factual questions about a specific domain. A series of logical rules is designed to transform factual questions into triples, in order to resolve the structural inconsistency between the user's question and the domain knowledge. Query expansion strategies and filtering strategies are then proposed at two levels (i.e. words and triples in the question). For matching the question with the domain knowledge, not only the similarity values between the words in the question and the resources in the domain knowledge but also the tag information of these words is considered; the tag information is obtained by parsing the question with Stanford CoreNLP. In this paper, a KG in the metallic materials domain is used to illustrate the FactQA method.
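The question-to-triple transformation above can be sketched with one toy template rule that maps a factual question to a query triple containing a variable "?x". The single rule and the example vocabulary are invented for illustration; the paper's rule set and domain KG are far richer.

```python
import re

# Each rule pairs a question pattern with a builder that produces a
# (subject, predicate, object) query triple; "?x" marks the unknown.

RULES = [
    (re.compile(r"^What is the (?P<prop>[\w ]+) of (?P<ent>[\w ]+)\?$"),
     lambda m: (m.group("ent"), m.group("prop"), "?x")),
]

def question_to_triple(question):
    """Apply the first matching rule; return None if no rule fires."""
    for pattern, build in RULES:
        m = pattern.match(question)
        if m:
            return build(m)
    return None
```

The resulting triple is what the expansion and filtering strategies then match, word by word and triple by triple, against the domain KG.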

Findings

The designed logical rules are stable over time for transforming factual questions into triples. Additionally, filtering the synonym expansion results of the words in the question improves the expansion quality of the question's triple representation. The tag information of the words in the question is considered in the data matching process, which helps to filter out wrong matches.

Originality/value

Although FactQA is proposed for domain-specific QA, it can also be applied to domains other than metallic materials. For a question that cannot be answered, FactQA generates a new related question to answer, providing the user with as much of the information they probably need as possible. FactQA could thus facilitate the user's information queries over the emerging KG.

Details

Data Technologies and Applications, vol. 54 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords
