Search results

1 – 10 of 447
Article
Publication date: 19 August 2021

Jacques Chabin, Cédric Eichler, Mirian Halfeld Ferrari and Nicolas Hiot

Graph rewriting concerns the technique of transforming a graph; it is thus natural to conceive its application in the evolution of graph databases. This paper aims to propose a…

Abstract

Purpose

Graph rewriting concerns the technique of transforming a graph; it is thus natural to conceive its application in the evolution of graph databases. This paper aims to propose a two-step framework where rewriting rules formalize instance or schema changes, ensuring the graph's consistency with respect to constraints, and updates are managed by ensuring rule applicability through the generation of side effects: new updates which guarantee that rule application conditions hold.
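The two-step idea (generate side effects first, then apply the rewriting rule) can be illustrated with a minimal, hypothetical sketch on a toy set of RDF-like triples. The rule format, predicate names and side-effect policy below are invented for illustration and are not the SetUp formalism itself.

```python
# Hypothetical toy illustration of rule application with generated side effects;
# names and the rule format are made up, not the SetUp formalism.

# A graph as a set of (subject, predicate, object) triples.
graph = {
    ("alice", "rdf:type", "Person"),
    ("bob", "worksFor", "acme"),          # bob is not yet typed as Person
}

def apply_rule_with_side_effects(graph, rule):
    """Two-step update: (1) generate side effects so the rule's
    application condition holds, (2) apply the rewriting itself."""
    condition, rewrite = rule
    side_effects = set()
    # Step 1: for every subject the rewrite targets, add the missing
    # condition triple (here: a required typing triple) as a side effect.
    for s, p, o in list(graph):
        if p == rewrite["on_predicate"] and condition(s) not in graph:
            side_effects.add(condition(s))
    graph |= side_effects
    # Step 2: apply the rewriting rule (add the derived triple).
    additions = {
        (s, rewrite["add_predicate"], rewrite["add_object"])
        for s, p, o in graph if p == rewrite["on_predicate"]
    }
    graph |= additions
    return graph, side_effects

# Rule: anyone with a worksFor edge must be typed Person (condition),
# and then gets an "employmentStatus employed" triple (rewrite).
rule = (lambda s: (s, "rdf:type", "Person"),
        {"on_predicate": "worksFor",
         "add_predicate": "employmentStatus",
         "add_object": "employed"})

graph, effects = apply_rule_with_side_effects(graph, rule)
print("generated side effects:", effects)   # bob gets typed as Person
```

Here the side effect is the typing triple that makes the rule applicable to "bob"; the optimisation claimed by the paper concerns avoiding such triples when they are superfluous.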

Design/methodology/approach

This paper proposes Schema Evolution Through UPdates, optimized version (SetUpOPT), a theoretical and applied framework for the management of resource description framework (RDF)/S database evolution on the basis of graph rewriting rules. The framework is an improvement of SetUp which avoids the computation of superfluous side effects and proposes, via SetUpoptND, a flexible and extensible package of solutions to deal with non-determinism.

Findings

This paper turns graph rewriting into a practical and useful application which ensures consistent evolution of RDF databases. It introduces an optimised approach for dealing with side effects and a flexible and customizable way of dealing with non-determinism. Experimental evaluation of SetUpoptND demonstrates the importance of the proposed optimisations, as they significantly reduce side-effect generation and limit data degradation.

Originality/value

SetUp's originality lies in the use of graph rewriting techniques under the closed world assumption to set up an updating system which preserves database consistency. Efficiency is ensured by avoiding the generation of superfluous side effects. Flexibility is guaranteed by offering different solutions for non-determinism and allowing the integration of customized choice functions.

Details

International Journal of Web Information Systems, vol. 17 no. 6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 1 November 2021

Maren Parnas Gulnes, Ahmet Soylu and Dumitru Roman

Neuroscience data are spread across a variety of sources, typically provisioned through ad-hoc and non-standard approaches and formats and often have no connection to the related…

Abstract

Purpose

Neuroscience data are spread across a variety of sources, typically provisioned through ad-hoc and non-standard approaches and formats and often have no connection to the related data sources. These issues make it difficult for researchers to understand, integrate and reuse brain-related data. The aim of this study is to show that a graph-based approach offers an effective means for representing, analysing and accessing brain-related data, which is highly interconnected, evolving over time and often needed in combination.

Design/methodology/approach

The authors present an approach for organising brain-related data in a graph model. The approach is exemplified in the case of a unique data set of quantitative neuroanatomical data about the murine basal ganglia, a group of nuclei in the brain essential for processing information related to movement. Specifically, the murine basal ganglia data set is modelled as a graph, integrated with relevant data from third-party repositories, published through a Web-based user interface and API, and analysed from exploratory and confirmatory perspectives using popular graph algorithms to extract new insights.
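As a rough, hypothetical sketch of what such a graph organisation and analysis can look like (the node names, relations and algorithm choice below are invented and do not reproduce the authors' schema or the murine basal ganglia data set):

```python
# Hypothetical sketch: brain regions, cell types and source documents in one graph.
# Node and relation names are invented for illustration; the real data model differs.
import networkx as nx

g = nx.Graph()
# Regions of a toy basal ganglia model and related entities.
g.add_node("STN", kind="region", full_name="subthalamic nucleus")
g.add_node("GPe", kind="region", full_name="globus pallidus externa")
g.add_node("PV+ neuron", kind="cell_type")
g.add_node("doi:10.1000/example", kind="publication")   # third-party record

# Edges link quantitative observations to their provenance.
g.add_edge("STN", "PV+ neuron", relation="contains", count=12500)
g.add_edge("GPe", "PV+ neuron", relation="contains", count=34000)
g.add_edge("STN", "doi:10.1000/example", relation="reported_in")

# Exploratory analysis with standard graph algorithms (the exact choice of
# algorithms here is illustrative, not the paper's).
centrality = nx.degree_centrality(g)
most_connected = max(centrality, key=centrality.get)
print("most connected node:", most_connected)
for u, v, data in g.edges(data=True):
    if data.get("relation") == "contains":
        print(f"{u} -- {v}: {data['count']} cells")
```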

Findings

The evaluation of the graph model and the results of the graph data analysis and usability study of the user interface suggest that graph-based data management in the neuroscience domain is a promising approach, since it enables integration of various disparate data sources and improves understanding and usability of data.

Originality/value

The study provides a practical and generic approach for representing, integrating, analysing and provisioning brain-related data and a set of software tools to support the proposed approach.

Details

Data Technologies and Applications, vol. 56 no. 3
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 13 July 2015

Eleni Papadonikolaki, Ruben Vrijhoef and Hans Wamelink

The purpose of this paper is to propose a methodology to integrate the construction Supply Chain (SC) through the application of Building Information Modeling (BIM) and Supply…


Abstract

Purpose

The purpose of this paper is to propose a methodology to integrate the construction Supply Chain (SC) through the application of Building Information Modeling (BIM) and Supply Chain Management (SCM). It features a renovation case as a proof-of-concept.

Design/methodology/approach

After an analysis of the relevant gaps in the literature, the research followed a modeling approach. The proposed model merged product, process and organizational models in a graph-based model to represent and analyze a BIM-based SCM project.
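A hedged sketch of the underlying idea, merging product, process and organizational views into a single graph so that cross-layer questions become traversals, might look as follows; all node and relation names are invented and do not reproduce the authors' model.

```python
# Illustrative only: one graph with product, process and organization layers,
# so cross-layer queries ("who supplies what for which task") become traversals.
import networkx as nx

g = nx.DiGraph()
# Product layer (BIM elements), process layer (tasks), organization layer (actors).
g.add_node("wall_W01", layer="product")
g.add_node("install_windows", layer="process")
g.add_node("window_supplier", layer="organization")
g.add_node("main_contractor", layer="organization")

# Cross-layer relations carry the supply-chain information flows.
g.add_edge("install_windows", "wall_W01", relation="modifies")
g.add_edge("window_supplier", "install_windows", relation="delivers_for")
g.add_edge("main_contractor", "window_supplier", relation="subcontracts")

# Example query: which organizations are (directly or indirectly) involved
# with a given BIM element?
involved = [
    n for n in g.nodes
    if g.nodes[n]["layer"] == "organization" and nx.has_path(g, n, "wall_W01")
]
print(involved)   # ['window_supplier', 'main_contractor']
```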

Findings

Presently, the information flows of the construction SC are vague. BIM is an aspiring integrator of information flows for construction. The proposed model for SC integration with BIM offers an approach to identify the project complexities in relation to organizational structures, roles and interactions, and to integrate the industry.

Practical implications

Currently, BIM-enabled SCM is not very widely applied in the industry. However, the authors report the increasing interest of most construction stakeholders in applying the two together, having acknowledged the benefits of the individual approaches. Since this combination is quite rare, the research uses a retrospective real-world case study of an SC project with an imaginary application of BIM.

Originality/value

Thus far, there is no formal model to represent the interactions of the SC actors along with BIM. The unique combination of a product and a process model, i.e. BIM, with an organizational model aims at integrating the information flows of the SC. The proposed model aims at analyzing and supporting BIM-enabled SCM in Architecture, Engineering and Construction.

Open Access
Article
Publication date: 6 September 2021

Gerd Hübscher, Verena Geist, Dagmar Auer, Nicole Hübscher and Josef Küng

Knowledge- and communication-intensive domains still long for a better support of creativity that considers legal requirements, compliance rules and administrative tasks as well…


Abstract

Purpose

Knowledge- and communication-intensive domains still long for a better support of creativity that considers legal requirements, compliance rules and administrative tasks as well, because current systems focus either on knowledge representation or business process management. The purpose of this paper is to discuss our model of integrated knowledge and business process representation and its presentation to users.

Design/methodology/approach

The authors follow a design science approach in the environment of patent prosecution, which is characterized by a highly standardized, legally prescribed process and individual knowledge work. Thus, the research is based on knowledge work, BPM, graph-based knowledge representation and user interface design. The authors iteratively designed and built a model and a prototype. To evaluate the approach, the authors used analytical proof of concept, real-world test scenarios and case studies in real-world settings, where the authors conducted observations and open interviews.

Findings

The authors designed a model and implemented a prototype for evolving and storing static and dynamic aspects of knowledge. The proposed solution leverages the flexibility of a graph-based model to enable not only open, continuously developing user-centered processes but also pre-defined ones. The authors further propose a user interface concept which helps users benefit from the richness of the model while providing sufficient guidance.
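A minimal, hypothetical sketch of such an integrated representation, with knowledge objects and task nodes held in one property graph, is shown below; the labels, properties and patent-prosecution examples are invented, and the authors' model is considerably richer.

```python
# Hypothetical sketch: knowledge objects and process tasks in one property graph,
# so the data perspective and the task perspective stay connected.
# Labels and properties are invented; the authors' model differs.
import networkx as nx

kg = nx.DiGraph()
# Static knowledge (here: patent-prosecution style artefacts).
kg.add_node("claim_1", type="knowledge", text="A device comprising ...")
kg.add_node("prior_art_42", type="knowledge", source="EP1234567")
# Dynamic aspects: tasks, some pre-defined, some created ad hoc by the user.
kg.add_node("assess_novelty", type="task", state="open", predefined=True)
kg.add_node("draft_response", type="task", state="open", predefined=False)

kg.add_edge("assess_novelty", "claim_1", relation="examines")
kg.add_edge("assess_novelty", "prior_art_42", relation="compares_with")
kg.add_edge("draft_response", "assess_novelty", relation="follows")

# The task perspective can be derived from the same graph at any time.
open_tasks = [n for n, d in kg.nodes(data=True)
              if d.get("type") == "task" and d.get("state") == "open"]
print(open_tasks)
```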

Originality/value

The balanced integration of the data and task perspectives distinguishes the model significantly from other approaches such as BPM or knowledge graphs. The authors further provide a sophisticated user interface design, which allows users to effectively and efficiently use the graph-based knowledge representation in their daily work.

Details

International Journal of Web Information Systems, vol. 17 no. 6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 9 July 2022

Riju Bhattacharya, Naresh Kumar Nagwani and Sarsij Tripathi

Social networking platforms are increasingly using follower–followee link prediction in an effort to expand the number of their users. It facilitates the discovery of previously…

Abstract

Purpose

Social networking platforms are increasingly using follower–followee link prediction (FFLP) in an effort to expand the number of their users. It facilitates the discovery of previously unidentified individuals and can be employed to determine the relationships among the nodes in a social network. As the number of users increases, choosing the appropriate person to follow becomes crucial. A hybrid model employing an ensemble learning algorithm for FFLP (HMELA) is proposed to recommend the formation of new follower links in large networks.

Design/methodology/approach

HMELA includes fundamental classification techniques for treating link prediction as a binary classification problem. The data sets are represented using a variety of machine-learning-friendly hybrid graph features. The HMELA is evaluated using six real-world social network data sets.
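The abstract leaves the exact features and learners open; the following hedged sketch shows the general recipe (topological features per candidate pair, an ensemble classifier, AUC evaluation) on a small benchmark graph. The feature set, learners and sampling scheme are assumptions, not the exact HMELA configuration.

```python
# Illustrative pipeline only: graph features + ensemble classifier for link
# prediction, evaluated by AUC. Not the exact HMELA feature set or learners.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

g = nx.karate_club_graph()          # small benchmark graph

def pair_features(graph, u, v):
    """Classic topological features for a candidate edge (u, v)."""
    cn = len(list(nx.common_neighbors(graph, u, v)))
    jc = next(nx.jaccard_coefficient(graph, [(u, v)]))[2]
    aa = next(nx.adamic_adar_index(graph, [(u, v)]))[2]
    return [cn, jc, aa, graph.degree[u] + graph.degree[v]]

# Positive examples: existing edges; negatives: sampled non-edges.
# (A real evaluation would hide test edges when computing features;
# this sketch skips that for brevity.)
rng = np.random.default_rng(0)
pos = list(g.edges())
non_edges = list(nx.non_edges(g))
neg = [non_edges[i] for i in rng.choice(len(non_edges), size=len(pos), replace=False)]

X = np.array([pair_features(g, u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Simple soft-voting ensemble standing in for the paper's ensemble learner.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```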

Findings

The first set of experiments used exploratory data analysis on a digraph to produce a balanced matrix. The second set of experiments compared the benchmark and hybrid features on the data sets. This was followed by using benchmark classifiers and ensemble learning methods. The experiments show that the proposed HMELA method predicts missing links better than the other methods.

Practical implications

A hybrid model for link prediction is proposed in this paper. The suggested HMELA model uses AUC scores to evaluate the prediction of new future links. The proposed approach facilitates comprehension and insight into the domain of link prediction. This work is aimed primarily at academics, practitioners and others involved in the field of social networks. The model is also effective in product recommendation and in recommending new friends and users on social networks.

Originality/value

The outcome on six benchmark data sets revealed that when the HMELA strategy was applied to all of the selected data sets, the area under the curve (AUC) scores were greater than when individual techniques were applied to the same data sets. Using the HMELA technique, the maximum AUC score on the Facebook data set increased by 10.3 percentage points, from 0.8449 to 0.9479. There was also an 8.53 per cent increase in accuracy on the Net Science, Karate Club and USAir data sets. As a result, the HMELA strategy outperforms every other strategy tested in the study.

Details

Data Technologies and Applications, vol. 57 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 30 January 2019

Mehmet Yalcinkaya and Vishal Singh

The purpose of this paper is to describe the technical features, underlying concepts and implementation details of a novel Building Information Modeling (BIM)-integrated…

Abstract

Purpose

The purpose of this paper is to describe the technical features, underlying concepts and implementation details of a novel Building Information Modeling (BIM)-integrated, graph-based platform developed to support BIM for facilities management through a usability-driven visual representation of the construction operations building information exchange (COBie) spreadsheet data.
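As a loose illustration of the underlying data transformation (not the VisualCOBie platform itself), flat COBie-style rows can be turned into a navigable graph; the sheets and columns below are heavily simplified and partly invented.

```python
# Hypothetical sketch: flat COBie-style rows turned into a navigable graph.
# The sheet/column selection is simplified; the real COBie schema and the
# VisualCOBie platform are far richer.
import networkx as nx

# A few rows as they might appear in COBie "Space" and "Component" sheets.
spaces = [{"Name": "Room_101", "FloorName": "Level_1"}]
components = [
    {"Name": "AHU_01", "Space": "Room_101", "TypeName": "AirHandlingUnit"},
    {"Name": "VAV_07", "Space": "Room_101", "TypeName": "VAVBox"},
]

g = nx.DiGraph()
for row in spaces:
    g.add_node(row["Name"], sheet="Space")
    g.add_node(row["FloorName"], sheet="Floor")
    g.add_edge(row["FloorName"], row["Name"], relation="contains")
for row in components:
    g.add_node(row["Name"], sheet="Component", type=row["TypeName"])
    g.add_edge(row["Space"], row["Name"], relation="contains")

# Visual navigation then becomes graph traversal instead of spreadsheet scanning:
# everything reachable from a floor, for instance.
print(list(nx.descendants(g, "Level_1")))   # Room_101, AHU_01, VAV_07 (order may vary)
```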

Design/methodology/approach

This paper is based on the iterative steps of design thinking and agile software development methodology. The conceptual development of the VisualCOBie platform is based on the Gestalt principles of visual perception to facilitate usability and comprehension of the COBie data.

Findings

The paper demonstrates that the Gestalt principles of visual perception provide a suitable conceptual and implementable basis for improving the usability and comprehension of COBie spreadsheets. The implemented BIM-integrated, graph-based VisualCOBie platform supports visual navigation and dynamic search, reducing the cognitive load of the large spreadsheets that are common in facilities management software.

Research limitations/implications

The usability, visual search and dependencies-based search of VisualCOBie can potentially transform how we implement and use facilities and information management systems in construction, where large spreadsheets are frequently used in conjunction with BIM and other tools. VisualCOBie also provides a usability-based step towards BIM for facilities management.

Originality/value

The VisualCOBie approach provides a novel user interface and information management platform. This paper may also foster a potential paradigm shift in our approach to the representation and use of information exchange standards such as COBie, which are required to facilitate the research and practice on BIM for facilities management.

Details

Facilities, vol. 37 no. 7/8
Type: Research Article
ISSN: 0263-2772

Keywords

Article
Publication date: 14 December 2021

Deepak S. Uplaonkar, Virupakshappa and Nagabhushan Patil

The purpose of this study is to develop a hybrid algorithm for segmenting tumor from ultrasound images of the liver.

Abstract

Purpose

The purpose of this study is to develop a hybrid algorithm for segmenting tumor from ultrasound images of the liver.

Design/methodology/approach

After collecting the ultrasound images, contrast-limited adaptive histogram equalization (CLAHE) is applied as a preprocessing step to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, together with a local ternary pattern combined with selective level set approaches.
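As a hedged sketch of this preprocessing-then-clustering pipeline, the snippet below applies OpenCV's CLAHE and a plain fuzzy C-means loop as a simplified stand-in for ARKFCM; the adaptive regularization, local ternary pattern and selective level set steps of the paper are not reproduced, and the file name and cluster-selection heuristic are placeholders.

```python
# Illustrative pipeline only: CLAHE preprocessing followed by plain fuzzy C-means
# clustering as a simplified stand-in for ARKFCM. The adaptive regularization,
# local ternary pattern and selective level set steps of the paper are omitted.
import numpy as np
import cv2

def clahe_enhance(gray_u8, clip=2.0, tiles=(8, 8)):
    """Contrast-limited adaptive histogram equalization on an 8-bit image."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return clahe.apply(gray_u8)

def fuzzy_c_means(pixels, c=2, m=2.0, iters=50, seed=0):
    """Standard FCM on a flat array of intensities; returns memberships and centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, pixels.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ pixels) / um.sum(axis=1)
        dist = np.abs(pixels[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))    # standard FCM membership update
        u /= u.sum(axis=0)
    return u, centers

# Usage (the path is a placeholder, not a file from the study):
img = cv2.imread("liver_ultrasound.png", cv2.IMREAD_GRAYSCALE)
if img is not None:
    enhanced = clahe_enhance(img)
    u, centers = fuzzy_c_means(enhanced.astype(float).ravel())
    tumor_cluster = int(np.argmax(centers))  # heuristic: assume brighter cluster is the lesion
    mask = (np.argmax(u, axis=0) == tumor_cluster).reshape(img.shape)
    cv2.imwrite("tumor_mask.png", (mask * 255).astype(np.uint8))
```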

Findings

The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images with lower computation cost. The proposed segmentation algorithm is compared with the existing algorithms and ground truth values in terms of Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms.

Practical implications

In the experimental analysis, the proposed ARKFCM with enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm. In particular, the proposed algorithm showed a 3.11% improvement in Dice coefficient compared to the graph-based algorithm.

Originality/value

The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing a selective level set model and local ternary pattern within the ARKFCM algorithm. In this research, the proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details and optimality in finding the threshold value, which effectively reduces the computational cost.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 12 June 2019

Xuhui Li, Yanqiu Wu, Xiaoguang Wang, Tieyun Qian and Liang Hong

The purpose of this paper is to explore a semantics representation framework for narrative images, conforming to the image-interpretation process.

Abstract

Purpose

The purpose of this paper is to explore a semantics representation framework for narrative images, conforming to the image-interpretation process.

Design/methodology/approach

This paper explores the essential features of semantics evolution in the process of narrative image interpretation. It proposes a novel semantics representation framework, ESImage (evolution semantics of image), for narrative images. ESImage adopts a hierarchical architecture to progressively organize the semantic information in images, enabling evolutionary interpretation with the support of a graph-based semantics data model. The study also shows the feasibility of this framework by addressing typical semantics representation issues in the scenario of the Dunhuang fresco.
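A minimal, hypothetical sketch of such a hierarchical, graph-based organisation of image semantics follows; the layer names, entities and relations are invented and do not reproduce the ESImage model.

```python
# Hypothetical sketch of a layered semantic graph for a narrative image;
# layer names, entities and relations are invented, not the ESImage model itself.
import networkx as nx

sem = nx.DiGraph()
# Bottom-up description layer: regions detected or annotated in the image.
sem.add_node("region_7", layer="description", appearance="seated figure, halo")
# Semantic layer: entities the regions are interpreted as.
sem.add_node("bodhisattva", layer="entity")
sem.add_edge("region_7", "bodhisattva", relation="depicts")
# Top-down narrative layer complements the lower layers with story context.
sem.add_node("jataka_tale_3", layer="narrative")
sem.add_edge("bodhisattva", "jataka_tale_3", relation="participates_in")

# Interpretation proceeds by traversing upward from the description layer.
print(list(nx.descendants(sem, "region_7")))   # bodhisattva, jataka_tale_3
```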

Findings

The process of image interpretation mainly concerns three issues: bottom-up description, multi-faceted semantics representation and top-down semantics complementation. ESImage can provide a comprehensive solution for narrative image semantics representation by addressing these major issues based on the semantics evolution mechanisms of the graph-based semantics data model.

Research limitations/implications

ESImage needs to be combined with machine learning to meet the requirements of automatic annotation and semantics interpretation of large-scale image resources.

Originality/value

This paper sorts out the characteristics of the gradual interpretation of narrative images and discusses the major issues in its semantics representation. It also proposes the semantic framework ESImage, which deploys a flexible and sound mechanism to represent the semantic information of narrative images.

Details

The Electronic Library, vol. 37 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 5 July 2021

Xuhui Li, Liuyan Liu, Xiaoguang Wang, Yiwen Li, Qingfeng Wu and Tieyun Qian

The purpose of this paper is to propose a graph-based representation approach for evolutionary knowledge under the big data circumstance, aiming to gradually build conceptual…

Abstract

Purpose

The purpose of this paper is to propose a graph-based representation approach for evolutionary knowledge under the big data circumstance, aiming to gradually build conceptual models from data.

Design/methodology/approach

A semantic data model named meaning graph (MGraph) is introduced to represent knowledge concepts and organize the knowledge instances in a graph-based knowledge base. MGraph uses directed acyclic graph-like types as concept schemas to specify the structural features of knowledge with intention variety. It also provides several specialization mechanisms to enable knowledge evolution. Based on MGraph, a paradigm is introduced to model evolutionary concept schemas, and a scenario on video semantics modeling is presented in detail.
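A rough sketch of a DAG-like concept schema with a simple specialization step is given below; the type names and the specialization rule are invented and stand in for, rather than reproduce, MGraph's mechanisms.

```python
# Hypothetical sketch of a DAG-like concept schema with a specialization step;
# type names and the specialization rule are invented, not MGraph's actual mechanisms.
import networkx as nx

schema = nx.DiGraph()
# A concept schema as a directed acyclic graph of typed components.
schema.add_edge("VideoSegment", "Shot", relation="has_part")
schema.add_edge("Shot", "Keyframe", relation="has_part")

def specialize(schema, general, specific):
    """Derive a more specific concept that inherits the general concept's structure."""
    spec = schema.copy()
    spec.add_edge(specific, general, relation="specializes")
    for _, part, data in schema.out_edges(general, data=True):
        if data.get("relation") == "has_part":
            spec.add_edge(specific, part, relation="has_part")
    return spec

# Knowledge evolution: a new, more specific schema is added without
# rewriting the existing one.
evolved = specialize(schema, "VideoSegment", "InterviewSegment")
print(list(evolved.out_edges("InterviewSegment", data=True)))
```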

Findings

MGraph is well suited to the evolutionary features of knowledge representation from big data and lays the foundation for building a knowledge base under the big data circumstance.

Originality/value

The representation approach based on MGraph can effectively and coherently address the major issues of evolutionary knowledge from big data. The new approach is promising in building a big knowledge base.

Details

The Electronic Library, vol. 39 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 14 March 2008

S.E. Kruck, Faye Teer and William A. Christian

The purpose of this paper is to describe a new software tool that graphically depicts analysis of visitor traffic. This new tool is the graph‐based server log analysis program…

Abstract

Purpose

The purpose of this paper is to describe a new software tool that graphically depicts analysis of visitor traffic. This new tool is the graph‐based server log analysis program (GSLAP).

Design/methodology/approach

Discovering hidden and meaningful information about web users' patterns of usage is critical to the optimization of a web server. The authors designed and developed GSLAP. This paper presents an example of GSLAP in the context of an analysis of the web site of a small fictitious company. Also included is a review of current literature that supports the graphical display of data as a cognitive aid to understanding it.
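A hypothetical sketch in the spirit of graph-based log analysis (not GSLAP's actual implementation) follows: referrer-to-page transitions from a toy access log become a weighted directed graph whose edges expose the usage patterns a textual log hides. The log lines, site name and regular expression are invented for illustration.

```python
# Hypothetical sketch of graph-based server log analysis in the spirit of GSLAP
# (not its actual implementation): turn referrer -> page transitions into a
# weighted directed graph and report the busiest paths.
import re
from collections import Counter
import networkx as nx

# Two combined-log-format lines for a fictitious small company site.
LOG_LINES = [
    '1.2.3.4 - - [14/Mar/2008:10:00:01 +0000] "GET /products.html HTTP/1.1" 200 512 "http://example.com/index.html" "Mozilla/5.0"',
    '1.2.3.4 - - [14/Mar/2008:10:00:09 +0000] "GET /contact.html HTTP/1.1" 200 256 "http://example.com/products.html" "Mozilla/5.0"',
]
LOG_RE = re.compile(r'"GET (?P<page>\S+) HTTP/[\d.]+" \d+ \d+ "(?P<referrer>[^"]*)"')

transitions = Counter()
for line in LOG_LINES:
    m = LOG_RE.search(line)
    if m and m.group("referrer") not in ("", "-"):
        ref_page = m.group("referrer").split("example.com")[-1]
        transitions[(ref_page, m.group("page"))] += 1

g = nx.DiGraph()
for (src, dst), count in transitions.items():
    g.add_edge(src, dst, visits=count)

# The graph view makes usage patterns visible that a textual log hides.
for src, dst, data in g.edges(data=True):
    print(f"{src} -> {dst}: {data['visits']} visit(s)")
```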

Findings

GSLAP is shown to provide a visual server log analysis that is a great improvement on the textual server log.

Research limitations/implications

The benefits of the output from GSLAP are compared with the typical textual output.

Originality/value

The paper describes a software tool that aids the analysis of usage patterns in web traffic.

Details

Industrial Management & Data Systems, vol. 108 no. 2
Type: Research Article
ISSN: 0263-5577

Keywords
