Search results

11 – 20 of over 31,000 results
Article
Publication date: 6 September 2023

Antonio Llanes, Baldomero Imbernón Tudela, Manuel Curado and Jesús Soto

Abstract

Purpose

The authors review the main concepts of graphs, present the implemented algorithm and explain the different techniques applied to the graph to achieve an efficient execution of the algorithm, both in terms of the multiple cores available in today's processors and of massive data parallelism, obtained by parallelizing the algorithm for execution through CUDA on GPUs.

Design/methodology/approach

In this work, the authors approach the graph isomorphism problem from a point of view that has received little attention so far: the application of parallelism and high-performance computing (HPC) techniques to the detection of isomorphism between graphs.
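The abstract does not detail the algorithm itself, but the parallel decomposition it alludes to can be illustrated with a minimal sketch: a brute-force isomorphism check whose candidate vertex mappings are distributed across CPU cores with Python's multiprocessing. All names here are illustrative, and a real HPC or CUDA implementation would prune the search space far more aggressively.

```python
# Minimal sketch, assuming small graphs given as adjacency matrices:
# test the factorially many candidate vertex mappings in parallel and
# stop at the first one that preserves adjacency.
from itertools import permutations
from multiprocessing import Pool

def mapping_preserves_adjacency(args):
    adj_a, adj_b, perm = args
    n = len(adj_a)
    return all(adj_a[i][j] == adj_b[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def are_isomorphic(adj_a, adj_b, workers=4):
    if len(adj_a) != len(adj_b):
        return False
    tasks = ((adj_a, adj_b, p) for p in permutations(range(len(adj_a))))
    with Pool(workers) as pool:  # fan candidate mappings out to the cores
        return any(pool.imap_unordered(mapping_preserves_adjacency, tasks,
                                       chunksize=256))
```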

Findings

The results obtained give compelling reasons for more in-depth study of HPC techniques in this field, since gains of up to a 722x speedup are achieved in the most favorable scenarios, with an average speedup of 454x.

Originality/value

The paper is new and original.

Details

Engineering Computations, vol. 40 no. 7/8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 12 September 2023

Wenjing Wu, Caifeng Wen, Qi Yuan, Qiulan Chen and Yunzhong Cao

Abstract

Purpose

Learning from safety accidents and sharing safety knowledge have become an important part of accident prevention and of improving construction safety management. Because unstructured data are difficult to reuse in the construction industry, the knowledge they contain is hard to apply directly to safety analysis. The purpose of this paper is to explore the construction of a construction safety knowledge representation model and a safety accident knowledge graph through deep learning methods, to extract construction safety knowledge entities through a BERT-BiLSTM-CRF model and to propose a data–knowledge–services data management model.

Design/methodology/approach

The ontology model for knowledge representation of construction safety accidents is constructed by integrating entity relations and logic evolution. Then, a database of safety incidents in the architecture, engineering and construction (AEC) industry is established based on collected construction safety incident reports and related dispute cases. The construction method of the construction safety accident knowledge graph is studied, and the precision of the BERT-BiLSTM-CRF algorithm in information extraction is verified through comparative experiments. Finally, a safety accident report is used as an example to construct the AEC domain construction safety accident knowledge graph (AEC-KG), which provides a visual knowledge query service and verifies the operability of the knowledge management approach.
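The abstract names the extraction model but not its configuration. A minimal architectural sketch of a BERT-BiLSTM-CRF tagger, assuming the Hugging Face transformers and pytorch-crf packages, might look as follows; the checkpoint name, hidden size and tag set are illustrative assumptions, not the paper's settings.

```python
# Sketch of a BERT-BiLSTM-CRF named-entity tagger: BERT supplies contextual
# token embeddings, a BiLSTM re-encodes them, a linear layer emits per-token
# tag scores and a CRF layer models tag-to-tag transitions.
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF

class BertBiLstmCrf(nn.Module):
    def __init__(self, num_tags, lstm_hidden=256,
                 bert_name="bert-base-chinese"):  # assumed checkpoint
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids,
                           attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.lstm(hidden)
        emissions = self.emit(hidden)
        mask = attention_mask.bool()
        if tags is not None:              # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: best paths
```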

Findings

The experimental results show that the combined BERT-BiLSTM-CRF algorithm has a precision of 84.52%, a recall of 92.35%, and an F1 value of 88.26% in named entity recognition from the AEC domain database. The construction safety knowledge representation model and safety incident knowledge graph realize knowledge visualization.

Originality/value

The proposed framework provides a new knowledge management approach to improve practitioners' safety management and also enriches the application scenarios of knowledge graphs. On the one hand, it innovatively proposes a data application method and a knowledge management method for safety accident reports that integrate entity relationships and matter evolution logic. On the other hand, a legal adjudication dimension is innovatively added to the knowledge graph in the construction safety field as the basis for post-incident disposal measures, providing a reference for safety managers' decision-making in all aspects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 5 October 2022

Michael DeBellis and Biswanath Dutta

Abstract

Purpose

The purpose of this paper is to describe the CODO ontology (COviD-19 Ontology) that captures epidemiological data about the COVID-19 pandemic in a knowledge graph that follows the FAIR principles. This study took information from spreadsheets and integrated it into a knowledge graph that could be queried with SPARQL and visualized with the Gruff tool in AllegroGraph.

Design/methodology/approach

The knowledge graph was designed with the Web Ontology Language. The methodology was a hybrid approach integrating the YAMO methodology for ontology design with Agile methods to define the iterations and the approach to requirements, testing and implementation.
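As an illustration of the kind of SPARQL access the study describes, the sketch below queries a local export of such a knowledge graph with rdflib. The file name, namespace URI and property names are placeholders chosen to show the query pattern, not confirmed terms from the published CODO ontology.

```python
# Hypothetical sketch: load a Turtle export of the knowledge graph and ask
# for patients together with their diagnosis status.
from rdflib import Graph

g = Graph()
g.parse("codo-kg.ttl", format="turtle")  # assumed local export of the graph

QUERY = """
PREFIX codo: <http://example.org/codo#>
SELECT ?patient ?status WHERE {
    ?patient a codo:Patient ;
             codo:hasDiagnosisStatus ?status .
} LIMIT 10
"""
for row in g.query(QUERY):
    print(row.patient, row.status)
```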

Findings

The hybrid approach demonstrated that Agile can bring the same benefits to knowledge graph projects as it has to other projects. The two-person team went from an ontology to a large knowledge graph of approximately five million triples in a few months. The authors gathered useful real-world experience on how to most effectively transform “from strings to things.”

Originality/value

This study is the only FAIR model (to the best of the authors’ knowledge) to address epidemiology data for the COVID-19 pandemic. It also brought to light several practical issues that generalize to other studies wishing to go from an ontology to a large knowledge graph. This study is one of the first studies to document how the Agile approach can be used for knowledge graph development.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 7 February 2023

Riju Bhattacharya, Naresh Kumar Nagwani and Sarsij Tripathi

Abstract

Purpose

A community demonstrates the unique qualities and relationships among its members that distinguish it from other communities within a network, and network analysis relies heavily on community detection. Beyond the traditional spectral clustering and statistical inference methods, deep learning techniques for community detection have grown in popularity because of their ease of processing high-dimensional network data. Graph convolutional neural networks (GCNNs) have received much attention recently and have developed into a promising and ubiquitous method for directly detecting communities on graphs. Inspired by the promising results of graph convolutional networks (GCNs) in analyzing graph-structured data, a novel community graph convolutional network (CommunityGCN) is proposed as a semi-supervised node classification model and compared with recent baseline methods: graph attention network (GAT), a GCN-based technique for unsupervised community detection and Markov random fields combined with a graph convolutional network (MRFasGCN).

Design/methodology/approach

This work presents a method for identifying communities that combines the notion of node classification via message passing with the architecture of a semi-supervised graph neural network. Six benchmark datasets, namely Cora, CiteSeer, ACM, Karate, IMDB and Facebook, were used in the experimentation.

Findings

In the first set of experiments, the scaled normalized average matrix of all neighbors' features, including the node's own, was obtained, followed by the weighted average matrix of low-dimensional nodes. In the second set of experiments, the averaged weighted matrix was forwarded to a two-layer GCN, and the activation function for predicting the node class was applied. The results demonstrate that node classification with GCN can improve the performance of identifying communities on graph datasets.
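The propagation described here matches the standard two-layer GCN computation: symmetric normalization of the adjacency matrix with self-loops, followed by two weighted aggregation steps with a ReLU in between. A minimal NumPy sketch (with illustrative weight shapes, not the paper's CommunityGCN itself) is:

```python
# Two-layer GCN forward pass: logits = A_norm @ relu(A_norm @ X @ W1) @ W2,
# where A_norm = D^-1/2 (A + I) D^-1/2 includes each node's own features.
import numpy as np

def normalize_adjacency(adj):
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(adj, features, w1, w2):
    a_norm = normalize_adjacency(adj)
    hidden = np.maximum(a_norm @ features @ w1, 0.0)  # layer 1 + ReLU
    return a_norm @ hidden @ w2                       # per-node class scores
```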

Originality/value

The experiments reveal that the CommunityGCN approach gives better results, with accuracy, normalized mutual information, F1 and modularity scores of 91.26, 79.9, 92.58 and 70.5 per cent, respectively, for detecting communities in the graph network, well above the range of 55.7–87.07 per cent reported in previous literature. It is therefore concluded that combining GCN with node classification models improves accuracy.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 31 August 2022

Guoquan Zhang, Yaohui Wang, Jian He and Yi Xiong

Abstract

Purpose

Composite cellular structures have wide application in advanced engineering fields due to their high specific stiffness and strength. As an emerging technology, continuous fiber-reinforced polymer additive manufacturing provides a cost-effective solution for fabricating composite cellular structures with complex designs. However, the corresponding path planning methods are case-specific and have not considered any manufacturing constraints. This study aims to develop a generally applicable path planning method to fill the above research gap.

Design/methodology/approach

This study proposes a path planning method based on graph theory, yielding an infill toolpath that minimizes fiber cutting frequency, printing time and total turning angle. More specifically, the cellular structure design is first converted to a graph. Then, the graph is modified to admit an Eulerian path by adding an optimal set of extra edges, determined through integer linear programming. Finally, the toolpath with the minimum total turning angle is obtained with a constrained Euler path search algorithm.
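The graph stage can be sketched with networkx as below. Note the simplifications: the paper selects the duplicated edges with integer linear programming and then minimizes the total turning angle, whereas this sketch pairs odd-degree nodes greedily and ignores turning angle; it also assumes a connected cell graph.

```python
# Sketch: duplicate edges until the cell graph admits an Eulerian path,
# then walk that path as one continuous fiber toolpath (no fiber cuts).
import networkx as nx

def make_traversable(g):
    g = nx.MultiGraph(g)
    odd = [n for n, deg in g.degree() if deg % 2 == 1]
    while len(odd) > 2:            # an Eulerian path tolerates two odd nodes
        u = odd.pop()
        v = min(odd, key=lambda x: nx.shortest_path_length(g, u, x))
        odd.remove(v)
        for a, b in nx.utils.pairwise(nx.shortest_path(g, u, v)):
            g.add_edge(a, b)       # retrace an existing wall segment
    return g

def continuous_toolpath(g):
    return list(nx.eulerian_path(make_traversable(g)))  # printed segments
```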

Findings

The effectiveness of the proposed method is validated through the fabrication of both periodic and nonperiodic composite cellular structures, i.e. triangular unit cell-based, Voronoi diagram-based and topology optimized structures. The proposed method provides the basis for manufacturing planar thin-walled cellular structures of continuous fiber-reinforced polymer (CFRP). Moreover, it shows a notable improvement over the existing method: the fiber cutting frequency, printing time and total turning angle are reduced by up to 88.7%, 52.6% and 65.5%, respectively.

Originality/value

A generally applicable path planning method is developed to generate continuous toolpaths for fabricating cellular structures in CFRP-additive manufacturing, which is an emerging technology. More importantly, manufacturing constraints such as fiber cutting frequency, printing time and total turning angle of fibers are considered within the process planning for the first time.

Details

Rapid Prototyping Journal, vol. 29 no. 2
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 25 October 2022

Samir Sellami and Nacer Eddine Zarour

Abstract

Purpose

Massive amounts of data, manifesting in various forms, are being produced on the Web every minute and becoming the new standard. Exploring these information sources distributed in different Web segments in a unified way is becoming a core task for a variety of users’ and companies’ scenarios. However, knowledge creation and exploration from distributed Web data sources is a challenging task. Several data integration conflicts need to be resolved and the knowledge needs to be visualized in an intuitive manner. The purpose of this paper is to extend the authors’ previous integration works to address semantic knowledge exploration of enterprise data combined with heterogeneous social and linked Web data sources.

Design/methodology/approach

The authors synthesize information in the form of a knowledge graph to resolve interoperability conflicts at integration time. They begin by describing KGMap, a mapping model for leveraging knowledge graphs to bridge heterogeneous relational, social and linked web data sources. The mapping model relies on semantic similarity measures to connect the knowledge graph schema with the sources' metadata elements. Then, based on KGMap, this paper proposes KeyFSI, a keyword-based semantic search engine. KeyFSI provides a responsive faceted navigating Web user interface designed to facilitate the exploration and visualization of embedded data behind the knowledge graph. The authors implemented their approach for a business enterprise data exploration scenario where inputs are retrieved on the fly from a local customer relationship management database combined with the DBpedia endpoint and the Facebook Web application programming interface (API).
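The mapping step can be illustrated with a small stand-in: score each source metadata element against the knowledge graph schema terms and keep the best match above a threshold. KGMap relies on semantic similarity measures; the string-ratio measure below is only a placeholder for one of those, and all names are illustrative.

```python
# Illustrative schema matching: map each metadata field to its most similar
# schema term. Swap `similarity` for an embedding- or WordNet-based measure
# to make the matching semantic rather than purely lexical.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_metadata_to_schema(metadata_fields, schema_terms, threshold=0.35):
    mapping = {}
    for field in metadata_fields:
        term, score = max(((t, similarity(field, t)) for t in schema_terms),
                          key=lambda pair: pair[1])
        if score >= threshold:
            mapping[field] = term
    return mapping
```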

Findings

The authors conducted an empirical study to test the effectiveness of their approach using different similarity measures. The observed results showed better efficiency when using a semantic similarity measure. In addition, a usability evaluation was conducted to compare KeyFSI features with recent knowledge exploration systems. The obtained results demonstrate the added value and usability of the contributed approach.

Originality/value

Most state-of-the-art interfaces allow users to browse one Web segment at a time. The originality of this paper lies in proposing a cost-effective virtual on-demand knowledge creation approach, a method that enables organizations to explore valuable knowledge across multiple Web segments simultaneously. In addition, the responsive components implemented in KeyFSI allow the interface to adequately handle the uncertainty imposed by the nature of Web information, thereby providing a better user experience.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 28 November 2022

Anuraj Mohan, Karthika P.V., Parvathi Sankar, K. Maya Manohar and Amala Peter

Abstract

Purpose

Money laundering is the process of concealing unlawfully obtained funds by presenting them as coming from a legitimate source. Criminals use crypto money laundering to hide the illicit origin of funds using a variety of methods. The most simplified form of bitcoin money laundering leans hard on the fact that transactions made in cryptocurrencies are pseudonymous, but open data gives more power to investigators and enables the crowdsourcing of forensic analysis. To curb these illegal activities, various rules, policies and technologies exist, collectively known as anti-money laundering (AML) tools. When properly implemented, AML restrictions reduce the negative effects of illegal economic activity while also promoting financial market integrity and stability, but they bear high costs for institutions. The purpose of this work is to motivate the opportunity to reconcile the cause of safety with that of financial inclusion, bearing in mind the limitations of the available data. The authors use the Elliptic dataset; to the best of the authors' knowledge, this is the largest labelled transaction dataset publicly available in any cryptocurrency.

Design/methodology/approach

AML in bitcoin can be modelled as a node classification task in dynamic networks. In this work, a graph convolutional decision forest is introduced, which combines the strengths of an evolving graph convolutional network with those of a deep neural decision forest (DNDF). This model is used to classify the unknown transactions in the Elliptic dataset. Additionally, applying knowledge distillation (KD) to the proposed approach gives the best results among all the techniques evaluated.
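The decision-forest component builds on the "soft" routing of deep neural decision forests, which can be sketched compactly: each inner node is a sigmoid split, and a sample's class distribution is the routing-probability-weighted mix of the leaf distributions. This sketch covers only one such tree, not the evolving GCN it is combined with, and its sizes are illustrative.

```python
# One soft decision tree from a deep neural decision forest: the linear layer
# scores all inner nodes at once, and routing probabilities are propagated
# level by level down to the leaves.
import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    def __init__(self, in_features, depth=3, num_classes=2):
        super().__init__()
        self.depth = depth
        num_leaves = 2 ** depth
        self.splits = nn.Linear(in_features, num_leaves - 1)  # inner nodes
        self.leaf_logits = nn.Parameter(torch.zeros(num_leaves, num_classes))

    def forward(self, x):
        d = torch.sigmoid(self.splits(x))   # P(route right) per inner node
        mu = x.new_ones(x.shape[0], 1)      # routing probability, root = 1
        idx = 0
        for level in range(self.depth):
            n = 2 ** level
            dec = d[:, idx:idx + n]         # this level's inner nodes
            mu = torch.stack([mu * (1 - dec), mu * dec],
                             dim=2).reshape(x.shape[0], 2 * n)
            idx += n
        # mix the leaves' class distributions by routing probability
        return mu @ torch.softmax(self.leaf_logits, dim=1)
```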

Findings

This work demonstrates the importance of combining dynamic graph learning with ensemble feature learning. The results show the superiority of the proposed model in classifying the illicit transactions in the Elliptic dataset. Experiments also show that the results can be further improved when the system is fine-tuned with a KD framework.

Originality/value

Existing works used either ensemble learning or dynamic graph learning to tackle the problem of AML in bitcoin. The proposed model provides a novel way to combine the power of random forests with dynamic graph learning methods. Furthermore, the work also demonstrates the advantage of KD in improving the performance of the whole system.

Details

Data Technologies and Applications, vol. 57 no. 3
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 12 August 2022

Alex Riensche, Jordan Severson, Reza Yavari, Nicholas L. Piercy, Kevin D. Cole and Prahalada Rao

Abstract

Purpose

The purpose of this paper is to develop, apply and validate a mesh-free graph theory–based approach for rapid thermal modeling of the directed energy deposition (DED) additive manufacturing (AM) process.

Design/methodology/approach

In this study, the authors develop a novel mesh-free graph theory–based approach to predict the thermal history of the DED process. Subsequently, they validate the predicted temperature trends using experimental temperature data for DED of titanium alloy (Ti-6Al-4V) parts. Temperature trends were tracked by embedding thermocouples in the substrate. The DED process was simulated using the graph theory approach, and the thermal history predictions were validated against the thermocouple data.
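The core mesh-free idea can be sketched briefly: sample nodes in the part, connect near neighbours into a weighted graph and diffuse heat through the eigendecomposition of the graph Laplacian. The constants and neighbourhood choices below are illustrative, not the calibrated gain factors from the paper.

```python
# Sketch of graph-based heat diffusion: T(t) = V exp(-a * eigvals * t) V^T T(0),
# with a k-nearest-neighbour graph standing in for the FE mesh.
import numpy as np
from scipy.spatial import cKDTree

def laplacian_from_points(points, k=6):
    n = len(points)
    w = np.zeros((n, n))
    dists, idxs = cKDTree(points).query(points, k=k + 1)  # self + k nearest
    for i in range(n):
        for dist, j in zip(dists[i][1:], idxs[i][1:]):
            w[i, j] = w[j, i] = np.exp(-dist ** 2)        # Gaussian weight
    return np.diag(w.sum(axis=1)) - w

def diffuse(temps, laplacian, alpha=1e-3, t=1.0):
    vals, vecs = np.linalg.eigh(laplacian)                # spectral form
    return vecs @ (np.exp(-alpha * vals * t) * (vecs.T @ temps))
```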

Findings

The temperature trends predicted by the graph theory approach have a mean absolute percentage error of approximately 11% and a root mean square error of 23°C when compared to the experimental data. Moreover, the graph theory simulation was obtained within 4 min using desktop computing resources, which is less than the build time of 25 min. By comparison, a finite element–based model required 136 min to converge to a similar level of error.

Research limitations/implications

This study uses data from fixed thermocouples when printing thin-wall DED parts. In the future, the authors will incorporate infrared thermal camera data from large parts.

Practical implications

The DED process is particularly valuable for near-net shape manufacturing, repair and remanufacturing applications. However, DED parts are often afflicted with flaws, such as cracking and distortion. In DED, flaw formation is largely governed by the intensity and spatial distribution of heat in the part during the process, often referred to as the thermal history. Accordingly, fast and accurate thermal models to predict the thermal history are necessary to understand and preclude flaw formation.

Originality/value

This paper presents a new mesh-free computational thermal modeling approach based on graph theory (network science) and applies it to DED. The approach eschews the tedious and computationally demanding meshing aspect of finite element modeling and allows rapid simulation of the thermal history in additive manufacturing. Although graph theory has been applied to thermal modeling of laser powder bed fusion (LPBF), there are distinct phenomenological differences between DED and LPBF that necessitate substantial modifications to the graph theory approach.

Details

Rapid Prototyping Journal, vol. 29 no. 2
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications, and their quality directly impacts model performance. Uncertainty inherently affects experimental measurements and is often missing from the available data sets because of its estimation cost. For similar reasons, experiments are very few compared to other data sources. Discarding experiments because of missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
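The link-prediction step can be illustrated with a TransE-style scorer, used here purely as a stand-in for whichever embedding model the authors trained: candidate uncertainty values are ranked by the plausibility of the triple (experiment, hasUncertainty, value). The entity names and random embeddings are placeholders.

```python
# Rank candidate uncertainty classes for an experiment by TransE score:
# a triple (h, r, t) is plausible when ||h + r - t|| is small.
import numpy as np

rng = np.random.default_rng(0)
DIM = 32
entity_emb = {e: rng.normal(size=DIM)
              for e in ["exp_42", "unc_low", "unc_medium", "unc_high"]}
relation_emb = {"hasUncertainty": rng.normal(size=DIM)}

def transe_score(head, relation, tail):
    return -np.linalg.norm(entity_emb[head] + relation_emb[relation]
                           - entity_emb[tail])

candidates = ["unc_low", "unc_medium", "unc_high"]
best = max(candidates,
           key=lambda t: transe_score("exp_42", "hasUncertainty", t))
print("predicted uncertainty class:", best)
```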

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

Employing knowledge graph embedding to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. Such a contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Open Access
Article
Publication date: 13 October 2022

Neha Keshan, Kathleen Fontaine and James A. Hendler

Abstract

Purpose

This paper aims to describe InDO, the Institute Demographic Ontology, and to demonstrate the InDO-based semiautomated process for both generating and extending a knowledge graph that provides a comprehensive resource for marginalized US graduate students. The knowledge graph currently consists of instances related to the semistructured National Science Foundation Survey of Earned Doctorates (NSF SED) 2019 analysis report data tables. These tables contain summary statistics of an institute's doctoral recipients based on a variety of demographics. Incorporating institute Wikidata links ultimately produces a table of unique, clearly readable data.

Design/methodology/approach

The authors use a customized semantic extract, transform and loader (SETLr) script to ingest data from the 2019 US doctoral-granting institute tables and the preprocessed NSF SED Tables 1, 3, 4 and 9. The generated InDO knowledge graph is evaluated using two methods. First, the authors compare the SPARQL results of competency questions run over both the semiautomatically and the manually generated graphs. Second, the authors expand the questions to provide a better picture of an institute's doctoral-recipient demographics within study fields.
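The first evaluation method amounts to checking that the same competency question returns the same answers on both graphs, which can be sketched with rdflib as below. The file names and the InDO property name are hypothetical placeholders, not terms from the published ontology.

```python
# Run one competency question against both graphs and compare answer sets.
from rdflib import Graph

QUESTION = """
PREFIX indo: <http://example.org/indo#>
SELECT ?institute ?count WHERE {
    ?institute indo:numberOfDoctoralRecipients ?count .
}
"""

def answers(path):
    g = Graph()
    g.parse(path, format="turtle")
    return {tuple(row) for row in g.query(QUESTION)}

# identical answer sets suggest the semiautomated graph matches the manual one
assert answers("indo_semiauto.ttl") == answers("indo_manual.ttl")
```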

Findings

With some preprocessing and restructuring of the highly interlinked NSF SED tables into a more parsable format, the required knowledge graph can be built using a semiautomated process.

Originality/value

The InDO knowledge graph allows the integration of US doctoral-granting institutes' demographic data based on the NSF SED data tables and its presentation in machine-readable form using a new semiautomated methodology.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084
