Search results
1 – 10 of 438
Gerd Hübscher, Verena Geist, Dagmar Auer, Nicole Hübscher and Josef Küng
Abstract
Purpose
Knowledge- and communication-intensive domains still long for better support of creativity that also considers legal requirements, compliance rules and administrative tasks, because current systems focus either on knowledge representation or on business process management. The purpose of this paper is to discuss the authors' model of integrated knowledge and business process representation and its presentation to users.
Design/methodology/approach
The authors follow a design science approach in the environment of patent prosecution, which is characterized by a highly standardized, legally prescribed process and individual knowledge work. The research thus draws on knowledge work, BPM, graph-based knowledge representation and user interface design. The authors iteratively designed and built a model and a prototype. To evaluate the approach, they used analytical proof of concept, real-world test scenarios and case studies in real-world settings, where they conducted observations and open interviews.
Findings
The authors designed a model and implemented a prototype for evolving and storing static and dynamic aspects of knowledge. The proposed solution leverages the flexibility of a graph-based model to enable open, continuously evolving user-centered processes as well as pre-defined ones. The authors further propose a user interface concept which helps users benefit from the richness of the model while providing sufficient guidance.
Originality/value
The balanced integration of the data and task perspectives distinguishes the model significantly from other approaches such as BPM or knowledge graphs. The authors further provide a sophisticated user interface design, which allows users to use the graph-based knowledge representation effectively and efficiently in their daily work.
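The integration of the data and task perspectives described above can be pictured as a single property graph holding both kinds of nodes. The following is a minimal, hypothetical sketch, not the authors' actual model or API; all class and relation names are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an integrated knowledge/process graph: "data" nodes
# carry domain knowledge, "task" nodes carry process steps, and labelled
# edges connect the two perspectives in one structure.

@dataclass
class Node:
    node_id: str
    kind: str                      # "data" or "task"
    props: dict = field(default_factory=dict)

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source_id, label, target_id)

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def link(self, src, label, dst):
        self.edges.append((src, label, dst))

    def tasks_for(self, data_id):
        """Tasks attached to a piece of knowledge -- the process view."""
        return [self.nodes[s] for s, lbl, t in self.edges
                if t == data_id and lbl == "works_on"
                and self.nodes[s].kind == "task"]

g = Graph()
g.add_node(Node("claim-1", "data", {"text": "independent claim"}))
g.add_node(Node("draft-claims", "task", {"status": "open"}))
g.link("draft-claims", "works_on", "claim-1")
print([t.node_id for t in g.tasks_for("claim-1")])  # ['draft-claims']
```

Because both perspectives live in one graph, a query can start from either side, which is the flexibility the abstract attributes to the graph-based model.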
Zhongxiang Zhou, Liang Ji, Rong Xiong and Yue Wang
Abstract
Purpose
In robot programming by demonstration (PbD) of small-parts assembly tasks, the accuracy of part poses estimated by vision-based techniques in the demonstration stage is far from sufficient to ensure successful execution. This paper aims to develop an inference method that improves the accuracy of poses and assembly relations between parts by integrating visual observation with the computer-aided design (CAD) model.
Design/methodology/approach
In this paper, the authors propose a spatial information inference method called probabilistic assembly graph with optional CAD model, abbreviated as PAGC*, to achieve this task. An assembly relation extraction method for CAD models is then designed, in which the various assembly relation descriptions in a CAD model are reduced to two fundamental relations: collinear and coplanar. Relation similarity, distance similarity and rotation similarity are adopted as the criteria for matching corresponding parts between the CAD model and the observation. The knowledge of a part in the CAD model is used to correct that of the corresponding part in the observation, and maximum likelihood estimation is used to infer the accurate poses and assembly relations based on the probabilistic assembly graph.
Findings
In the experiments, both simulated and real-world data are used to evaluate the performance of the PAGC* model. The results show the superiority of PAGC* in accuracy compared with the assembly graph (AG) and the probabilistic assembly graph without CAD model (PAG).
Originality/value
The paper provides a new approach for obtaining an accurate pose for each part in the demonstration stage of the robot PbD system. By integrating visual observation with prior knowledge from the CAD model, PAGC* ensures success in the execution stage of the PbD system.
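The matching idea in the abstract above, combining relation, distance and rotation similarity into one score between a CAD part and an observed part, can be sketched roughly as follows. The individual formulas, the equal weights and the scale constant are assumptions made for illustration, not the paper's definitions.

```python
import math

# Toy combination of the three similarity criteria named in the abstract.
# All formulas and weights here are invented placeholders.

def distance_similarity(p_cad, p_obs, scale=0.05):
    d = math.dist(p_cad, p_obs)          # Euclidean distance between positions
    return math.exp(-d / scale)          # 1.0 when positions coincide

def rotation_similarity(a_cad, a_obs):
    diff = abs(a_cad - a_obs) % (2 * math.pi)
    diff = min(diff, 2 * math.pi - diff) # smallest angle between orientations
    return 1.0 - diff / math.pi

def relation_similarity(rels_cad, rels_obs):
    # Jaccard overlap of the {"collinear", "coplanar"} relation sets
    if not rels_cad and not rels_obs:
        return 1.0
    inter = len(set(rels_cad) & set(rels_obs))
    union = len(set(rels_cad) | set(rels_obs))
    return inter / union

def match_score(cad, obs, w=(1 / 3, 1 / 3, 1 / 3)):
    return (w[0] * relation_similarity(cad["rels"], obs["rels"])
            + w[1] * distance_similarity(cad["pos"], obs["pos"])
            + w[2] * rotation_similarity(cad["rot"], obs["rot"]))

cad_part = {"pos": (0.0, 0.0, 0.0), "rot": 0.0, "rels": {"coplanar"}}
obs_part = {"pos": (0.0, 0.0, 0.0), "rot": 0.0, "rels": {"coplanar"}}
print(round(match_score(cad_part, obs_part), 2))  # 1.0
```

In the paper's setting, such a score would drive the assignment of CAD parts to observed parts before likelihood maximization refines the poses.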
Maren Parnas Gulnes, Ahmet Soylu and Dumitru Roman
Abstract
Purpose
Neuroscience data are spread across a variety of sources, typically provisioned through ad-hoc and non-standard approaches and formats, and often have no connection to the related data sources. This makes it difficult for researchers to understand, integrate and reuse brain-related data. The aim of this study is to show that a graph-based approach offers an effective means of representing, analysing and accessing brain-related data, which are highly interconnected, evolving over time and often needed in combination.
Design/methodology/approach
The authors present an approach for organising brain-related data in a graph model. The approach is exemplified in the case of a unique data set of quantitative neuroanatomical data about the murine basal ganglia, a group of nuclei in the brain essential for processing information related to movement. Specifically, the murine basal ganglia data set is modelled as a graph, integrated with relevant data from third-party repositories, published through a Web-based user interface and API, and analysed from exploratory and confirmatory perspectives using popular graph algorithms to extract new insights.
Findings
The evaluation of the graph model and the results of the graph data analysis and usability study of the user interface suggest that graph-based data management in the neuroscience domain is a promising approach, since it enables integration of various disparate data sources and improves understanding and usability of data.
Originality/value
The study provides a practical and generic approach for representing, integrating, analysing and provisioning brain-related data and a set of software tools to support the proposed approach.
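The exploratory graph analysis mentioned above can be illustrated with a tiny, made-up example: brain structures as nodes, "contains"/"projects_to" edges, and a simple degree count standing in for the centrality measures a real analysis would use. The edge list is illustrative, not the study's data.

```python
from collections import defaultdict

# Toy brain-region graph; the degree count below is a crude stand-in for
# the graph algorithms applied in the study.

edges = [
    ("basal_ganglia", "contains", "striatum"),
    ("basal_ganglia", "contains", "globus_pallidus"),
    ("striatum", "projects_to", "globus_pallidus"),
    ("cortex", "projects_to", "striatum"),
]

degree = defaultdict(int)
for src, _rel, dst in edges:
    degree[src] += 1
    degree[dst] += 1

# Most connected node -- an exploratory hint at structurally central regions.
hub = max(degree, key=degree.get)
print(hub, degree[hub])  # striatum 3
```

The point of the graph model is that integration with third-party repositories is just more edges, after which the same algorithms apply unchanged.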
Xuhui Li, Liuyan Liu, Xiaoguang Wang, Yiwen Li, Qingfeng Wu and Tieyun Qian
Abstract
Purpose
The purpose of this paper is to propose a graph-based representation approach for evolutionary knowledge under big data circumstances, aiming to gradually build conceptual models from data.
Design/methodology/approach
A semantic data model named meaning graph (MGraph) is introduced to represent knowledge concepts and organize the knowledge instances in a graph-based knowledge base. MGraph uses directed acyclic graph-like types as concept schemas to specify the structural features of knowledge with intention variety. Several specialization mechanisms are also proposed to enable knowledge evolution. Based on MGraph, a paradigm for modelling evolutionary concept schemas is introduced, and a scenario on video semantics modeling is presented in detail.
Findings
MGraph suits the evolutionary features of knowledge derived from big data and lays the foundation for building a knowledge base under big data circumstances.
Originality/value
The representation approach based on MGraph can effectively and coherently address the major issues of representing evolutionary knowledge from big data. The new approach is promising for building a big knowledge base.
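The schema-evolution idea, concept schemas that can be specialized without invalidating existing instances, can be sketched very loosely as follows. This mirrors only the general mechanism described in the abstract; the class design, the attribute-set semantics and the video example are all assumptions.

```python
# Hypothetical sketch: a concept schema is a type whose attributes accumulate
# down a specialization chain; instances of a child schema still conform to
# the parent, so old knowledge survives schema evolution.

class Schema:
    def __init__(self, name, attrs, parent=None):
        self.name = name
        self.parent = parent
        # A specialized schema inherits all parent attributes.
        self.attrs = set(attrs) | (parent.attrs if parent else set())

    def conforms(self, instance):
        return self.attrs <= instance.keys()

video = Schema("Video", {"title", "duration"})
# Later evolution step: a specialized concept gains a new attribute.
annotated = Schema("AnnotatedVideo", {"annotations"}, parent=video)

clip = {"title": "intro", "duration": 12, "annotations": ["scene-1"]}
print(video.conforms(clip), annotated.conforms(clip))  # True True
```

An instance of the specialized schema remains a valid instance of the original one, which is the property that lets the conceptual model grow gradually from data.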
Xuhui Li, Yanqiu Wu, Xiaoguang Wang, Tieyun Qian and Liang Hong
Abstract
Purpose
The purpose of this paper is to explore a semantics representation framework for narrative images, conforming to the image-interpretation process.
Design/methodology/approach
This paper explores the essential features of semantics evolution in the process of narrative image interpretation. It proposes a novel semantics representation framework, ESImage (evolution semantics of image), for narrative images. ESImage adopts a hierarchical architecture to progressively organize the semantic information in images, enabling evolutionary interpretation with the support of a graph-based semantics data model. The study also demonstrates the feasibility of the framework by addressing typical semantics representation issues in the scenario of the Dunhuang fresco.
Findings
The process of image interpretation mainly concerns three issues: bottom-up description, multi-faceted semantics representation and top-down semantics complementation. ESImage can provide a comprehensive solution for narrative image semantics representation by addressing these issues through the semantics evolution mechanisms of the graph-based semantics data model.
Research limitations/implications
ESImage needs to be combined with machine learning to meet the requirements of automatic annotation and semantics interpretation of large-scale image resources.
Originality/value
This paper characterizes the gradual interpretation of narrative images and discusses the major issues in its semantics representation. It also proposes the semantic framework ESImage, which deploys a flexible and sound mechanism to represent the semantic information of narrative images.
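The hierarchical, progressively evolving record the abstract describes can be pictured as layers of semantics that grow over time. The layer names and the Dunhuang-style example below are invented for illustration and do not reflect ESImage's actual schema.

```python
# Loose sketch: three semantic layers for one narrative image, from low-level
# description up to interpretation, with a helper that lets any layer evolve.

fresco = {
    "description": {"objects": ["deer", "river", "king"]},   # bottom-up
    "facets": {"scene": "deer rescues a drowning man"},      # multi-faceted
    "interpretation": {"theme": "karma and betrayal"},       # top-down
}

def add_layer_fact(image, layer, key, value):
    # Evolution step: semantics accumulate without rewriting earlier layers.
    image.setdefault(layer, {})[key] = value
    return image

add_layer_fact(fresco, "interpretation", "source", "Jataka tale")
print(sorted(fresco["interpretation"]))  # ['source', 'theme']
```

Each layer refines the one below it, which is the bottom-up/top-down interplay the abstract identifies as the core of narrative image interpretation.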
Kerstin Altmanninger, Martina Seidl and Manuel Wimmer
Abstract
Purpose
The purpose of this paper is to provide a feature‐based characterization of version control systems (VCSs), providing an overview about the state‐of‐the‐art of versioning systems dedicated to modeling artifacts.
Design/methodology/approach
Based on a literature study of existing approaches, a description of the features of versioning systems is established. Special focus is placed on three-way merging, an integral component of optimistic versioning. This characterization is applied to current model versioning systems, allowing challenges in this research area to be derived.
Findings
The results of the evaluation show that several challenges need to be addressed in future developments of VCSs and merging tools in order to allow the parallel development of model artifacts.
Practical implications
Making model‐driven engineering (MDE) a success requires supporting the parallel development of model artifacts as is done nowadays for text‐based artifacts. Therefore, model versioning capabilities are a must for leveraging MDE in practice.
Originality/value
The paper gives a comprehensive overview of collaboration features of VCSs for software engineering artifacts in general, discusses the state-of-the-art of systems for model artifacts and, finally, lists urgent challenges which have to be considered in future model versioning systems for realizing MDE in practice.
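Three-way merging, the operation the survey singles out, can be sketched over a flat dictionary of model-element properties. This is a minimal illustration of the general algorithm, not any particular tool's implementation; real model merging must also handle structural changes the flat sketch ignores.

```python
# Minimal three-way merge: for each property, a side that changed the base
# value wins; if both sides changed it differently, we record a conflict
# instead of choosing silently.

def three_way_merge(base, left, right):
    merged, conflicts = {}, []
    for key in base.keys() | left.keys() | right.keys():
        b, l, r = base.get(key), left.get(key), right.get(key)
        if l == r:                 # same on both sides (incl. both unchanged)
            merged[key] = l
        elif l == b:               # only right changed
            merged[key] = r
        elif r == b:               # only left changed
            merged[key] = l
        else:                      # both changed, differently -> conflict
            conflicts.append(key)
            merged[key] = b        # keep base; a real tool would ask the user
    return merged, conflicts

base = {"name": "Account", "abstract": False}
left = {"name": "UserAccount", "abstract": False}
right = {"name": "Account", "abstract": True}
merged, conflicts = three_way_merge(base, left, right)
print(merged["name"], merged["abstract"], conflicts)  # UserAccount True []
```

The hard part for model artifacts, as the survey argues, is that "the same property" is itself non-trivial to identify once elements can be moved, split or renamed.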
Azra Nazir, Roohie Naaz Mir and Shaima Qureshi
Abstract
Purpose
Natural languages have a fundamental quality of suppleness that makes it possible to present a single idea in plenty of different ways. This feature is often exploited in the academic world, leading to the theft of work referred to as plagiarism. Many approaches have been put forward to detect such cases based on various text features and grammatical structures of languages. However, there remains considerable scope for improvement in detecting intelligent plagiarism.
Design/methodology/approach
To realize this, the paper introduces a hybrid model that detects intelligent plagiarism in three stages: (1) clustering; (2) vector formulation within each cluster based on semantic roles, normalization and similarity index calculation; and (3) summary generation using an encoder-decoder. An effective weighting scheme, based on K-means and calculated over the synonym set of each term, is introduced to select the terms used to build vectors. Only if the value calculated in the previous step exceeds a predefined threshold is the next semantic argument analyzed. When the similarity score for two documents is beyond the threshold, a short summary of the plagiarized documents is created.
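The synonym-aware vector comparison at the heart of the pipeline can be sketched as follows. The tiny synonym table, the cosine measure and the threshold are all invented placeholders; the paper's actual scheme uses K-means-derived weights over synonym sets.

```python
import math
from collections import Counter

# Hedged sketch: map terms to a canonical synonym before vectorization, so
# paraphrased wording lands on the same axis; flag document pairs whose
# cosine similarity exceeds a threshold.

SYNONYMS = {"car": "vehicle", "automobile": "vehicle", "quick": "fast"}

def vectorize(text):
    terms = [SYNONYMS.get(w, w) for w in text.lower().split()]
    return Counter(terms)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_plagiarized(doc1, doc2, threshold=0.8):
    return cosine(vectorize(doc1), vectorize(doc2)) >= threshold

print(is_plagiarized("the quick car stopped",
                     "the fast automobile stopped"))  # True
```

Because "quick car" and "fast automobile" normalize to the same terms, a purely literal comparison that would miss the paraphrase now scores it as identical, which is the connotation-detection idea in miniature.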
Findings
Experimental results show that the method detects the connotation and concealment used in idea plagiarism, in addition to detecting literal plagiarism.
Originality/value
The proposed model can help academics stay updated by providing summaries of relevant articles, and it would help curb the practice of plagiarism, which is infesting the academic community at an unprecedented pace. The model will also accelerate the reviewing of academic documents, aiding the speedy publishing of research articles.
Yi-Hung Liu, Xiaolong Song and Sheng-Fong Chen
Abstract
Purpose
Whether automatically generated summaries of health social media can aid users in managing their diseases appropriately is an important question. The purpose of this paper is to introduce a novel text summarization approach for acquiring the most informative summaries from online patient posts accurately and effectively.
Design/methodology/approach
The data sets of diabetes and HIV posts were collected from two online disease forums, respectively. The proposed summarizer uses a graph-based method to generate summaries by considering social network features, text sentiment and sentence features. Representative health-related summaries were identified, and summarization performance as well as user judgments were analyzed.
Findings
The findings show that scoring sentences without using all of the incorporated features decreases summarization performance compared with the classic summarization method and comparison approaches. The proposed summarizer significantly outperformed the comparison baseline.
Originality/value
This study contributes to the literature on health knowledge management by analyzing patients’ experiences and opinions through the health summarization model. The research additionally develops a new mindset to design abstractive summarization weighting schemes from the health user-generated content.
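A graph-based extractive summarizer of the general kind described above can be sketched in a few lines: sentences are nodes, word-overlap similarity gives edge weights, and a sentence's score is its total connection strength. This is only the generic skeleton; the paper's summarizer additionally blends in sentiment, social-network and sentence features, which are omitted here.

```python
# Generic graph-based extractive summarization sketch (TextRank-style degree
# scoring); the example posts are invented, not the study's data.

def overlap(s1, s2):
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / max(len(w1 | w2), 1)  # Jaccard word overlap

def summarize(sentences, top_k=1):
    scores = []
    for i, s in enumerate(sentences):
        # A sentence central to the discussion overlaps many others.
        score = sum(overlap(s, t) for j, t in enumerate(sentences) if j != i)
        scores.append((score, i))
    picked = sorted(scores, reverse=True)[:top_k]
    # Restore original order for readability.
    return [sentences[i] for _, i in sorted(picked, key=lambda p: p[1])]

posts = [
    "insulin dose adjusted after meals",
    "adjusted insulin dose helps after heavy meals",
    "the weather was nice today",
]
print(summarize(posts))
```

The off-topic sentence scores near zero and is dropped, while a sentence echoed by others is kept, which is the intuition behind awarding sentences by their graph connectivity.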
Yishan Liu, Wenming Cao and Guitao Cao
Abstract
Purpose
Session-based recommendation aims to predict the user's next preference based on the user's recent activities. Although most existing studies consider the global characteristics of items, they learn these characteristics from only a single connection relationship, which cannot fully capture the complex transition relationships between items. We believe that learning multiple relationships between items in sessions can improve both the performance of session recommendation tasks and the scalability of recommendation models. At the same time, high-quality global features of items help to uncover users' potential common preferences.
Design/methodology/approach
This work proposes a session-based recommendation method with a multi-relation global context–enhanced network to capture these global transition relationships. Specifically, we construct a multi-relation global item graph from a group of sessions, use a graded attention mechanism to learn each type of connection relation independently and obtain the global feature of an item according to the multi-relation weights.
Findings
We conducted experiments on three benchmark datasets. The results show that our proposed model is superior to existing state-of-the-art methods, which verifies its effectiveness.
Originality/value
First, we construct a multi-relation global item graph to learn the complex transition relations of an item's global context and effectively mine potential associations between items across different sessions. Second, our model improves scalability by obtaining high-quality global item features, enabling some previously unconsidered items to reach the candidate list.
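The per-relation weighting idea can be sketched as a softmax over relation-type scores followed by a weighted sum of neighbour features. The relation names, the hand-fixed scores (learned parameters in the real model) and the two-dimensional features are all invented for illustration.

```python
import math

# Toy multi-relation aggregation: an item's global feature is a weighted sum
# of neighbour features, with one weight per relation type.

RELATION_WEIGHT = {"co_occurs": 2.0, "same_session": 1.0}  # stand-ins for learned scores

def softmax(xs):
    m = max(xs)                                # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aggregate(neighbours):
    """neighbours: list of (relation, feature_vector) pairs."""
    weights = softmax([RELATION_WEIGHT[r] for r, _ in neighbours])
    dim = len(neighbours[0][1])
    out = [0.0] * dim
    for w, (_, vec) in zip(weights, neighbours):
        for k in range(dim):
            out[k] += w * vec[k]
    return out

feat = aggregate([("co_occurs", [1.0, 0.0]), ("same_session", [0.0, 1.0])])
print([round(x, 2) for x in feat])
```

Because each relation type carries its own weight, a strong co-occurrence neighbour contributes more to the global feature than a weak same-session one, which is the effect the graded attention mechanism is meant to learn automatically.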
Xiaoming Zhang, Mingming Meng, Xiaoling Sun and Yu Bai
Abstract
Purpose
With the advent of the era of Big Data, the scale of knowledge graphs (KGs) in various domains is growing rapidly, and the huge amount of knowledge they hold surely benefits question answering (QA) research. However, a KG, which is constituted of entities and relations, is structurally inconsistent with natural language queries, so KG-based QA systems still face difficulties. The purpose of this paper is to propose a method for answering domain-specific questions based on a KG, providing convenience for information queries over domain KGs.
Design/methodology/approach
The authors propose a method, FactQA, to answer factual questions about a specific domain. A series of logical rules is designed to transform factual questions into triples, in order to resolve the structural inconsistency between the user's question and the domain knowledge. Query expansion strategies and filtering strategies are then proposed at two levels (i.e. the words and the triples in the question). For matching the question with domain knowledge, not only the similarity values between the words in the question and the resources in the domain knowledge but also the tag information of these words is considered; the tag information is obtained by parsing the question with Stanford CoreNLP. In this paper, a KG in the metallic materials domain is used to illustrate the FactQA method.
Findings
The designed logical rules are stable over time for transforming factual questions into triples. Additionally, after filtering the synonym-expansion results of the words in the question, the expansion quality of the question's triple representation is improved. The tag information of the words in the question is considered during data matching, which helps to filter out wrong matches.
Originality/value
Although FactQA is proposed for domain-specific QA, it can also be applied to domains other than metallic materials. For a question that cannot be answered, FactQA generates a new, related question to answer instead, providing the user with as much of the information they probably need as possible. FactQA could thus facilitate users' information queries over emerging KGs.
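The rule-based question-to-triple transformation described above can be sketched with a single pattern. The rule, the regular expression and the unanswerable-question fallback below are invented illustrations of the general idea, not FactQA's actual rule set.

```python
import re

# One illustrative rule mapping a factual question onto a
# (subject, predicate, object-variable) triple for KG lookup.

RULES = [
    # "What is the <property> of <entity>?"  ->  (<entity>, <property>, ?x)
    (re.compile(r"what is the (?P<prop>[\w ]+) of (?P<ent>[\w ]+)\?", re.I),
     lambda m: (m.group("ent").strip(), m.group("prop").strip(), "?x")),
]

def question_to_triple(question):
    for pattern, build in RULES:
        m = pattern.match(question)
        if m:
            return build(m)
    return None  # no rule fired; FactQA would suggest a related question

print(question_to_triple("What is the melting point of titanium?"))
# ('titanium', 'melting point', '?x')
```

The resulting triple pattern is then matched against the domain KG, with synonym expansion and tag filtering improving the match between the question's words and the KG's resources.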