Search results

1–10 of over 15,000
Article
Publication date: 7 February 2023

Riju Bhattacharya, Naresh Kumar Nagwani and Sarsij Tripathi

Abstract

Purpose

A community demonstrates the unique qualities and relationships between its members that distinguish it from other communities within a network. Network analysis relies heavily on community detection. Alongside the traditional spectral clustering and statistical inference methods, deep learning techniques for community detection have grown in popularity because they handle high-dimensional network data with ease. Graph convolutional neural networks (GCNNs) have received much attention recently and have developed into a promising and widely used method for detecting communities directly on graphs. Inspired by the promising results of graph convolutional networks (GCNs) in analyzing graph-structured data, a novel community graph convolutional network (CommunityGCN) is proposed as a semi-supervised node classification model and compared with recent baseline methods: the graph attention network (GAT), a GCN-based technique for unsupervised community detection and Markov random fields combined with a graph convolutional network (MRFasGCN).

Design/methodology/approach

This work presents the method for identifying communities that combines the notion of node classification via message passing with the architecture of a semi-supervised graph neural network. Six benchmark datasets, namely, Cora, CiteSeer, ACM, Karate, IMDB and Facebook, have been used in the experimentation.

Findings

In the first set of experiments, the scaled normalized average matrix of all neighbors' features, including the node itself, was obtained, followed by the weighted average matrix of low-dimensional nodes. In the second set of experiments, the weighted average matrix was fed to a two-layer GCN and an activation function was applied to predict the node class. The results demonstrate that node classification with GCN can improve the performance of identifying communities on graph datasets.
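The propagation described above can be sketched as a minimal two-layer GCN forward pass: neighbor features are averaged through a symmetrically normalized adjacency matrix and passed through two weight layers with a softmax output. This is a generic NumPy illustration; the function names, dimensions and toy graph are hypothetical, not the authors' implementation.

```python
import numpy as np

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: Z = softmax(A_hat @ relu(A_hat @ X @ W1) @ W2)."""
    # Symmetrically normalized adjacency with self-loops: A_hat = D^-1/2 (A+I) D^-1/2
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H = np.maximum(A_hat @ X @ W1, 0.0)       # layer 1: aggregate + ReLU
    logits = A_hat @ H @ W2                   # layer 2: aggregate again
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # per-node class probabilities

# Toy graph: 4 nodes, 3 input features, 2 classes
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
Z = gcn_forward(A, X, rng.normal(size=(3, 8)), rng.normal(size=(8, 2)))
```

Each row of `Z` is a probability distribution over the classes for one node; the most probable class would be taken as the node's community label.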

Originality/value

The experiment reveals that the CommunityGCN approach has given better results, with accuracy, normalized mutual information, F1 and modularity scores of 91.26, 79.9, 92.58 and 70.5 per cent, respectively, for detecting communities in the graph network, which is much greater than the range of 55.7–87.07 per cent reported in previous literature. Thus, it is concluded that combining GCN with node classification models improves accuracy.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 4 October 2021

Guang-Yih Sheu and Chang-Yu Li

Abstract

Purpose

In a classroom, a support vector machine model with a linear kernel, a neural network and the k-nearest neighbors algorithm failed to detect simulated money laundering accounts generated from the Panama Papers data set of the Offshore Leaks database. This study aims to resolve this failure.

Design/methodology/approach

A graph attention network with three modules is built as a new money laundering detection tool. A feature extraction module encodes the input data into a weighted graph structure, in which directed edges and their end vertices denote financial transactions. Each directed edge carries weights storing the frequency of money transactions and other significant features. Social network metrics serve as node features characterizing an account's role in a money laundering typology. A graph attention module implements a self-attention mechanism for highlighting target nodes. A classification module then filters out such targets using the biased rectified linear unit function.
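The graph attention module's self-attention step can be illustrated with a minimal single-head computation of attention coefficients over a toy transaction graph. This is a generic GAT-style sketch: the names, toy graph and the LeakyReLU slope of 0.2 are assumptions, not the authors' code.

```python
import numpy as np

def attention_scores(A, H, W, a):
    """Single-head graph attention: softmax over each node's neighbors (incl. itself)."""
    N = A.shape[0]
    Wh = H @ W                                 # projected node features
    e = np.full((N, N), -np.inf)               # -inf masks non-neighbors out of softmax
    mask = (A + np.eye(N)) > 0                 # attend to neighbors and self
    for i in range(N):
        for j in range(N):
            if mask[i, j]:
                # e_ij = LeakyReLU(a . [Wh_i || Wh_j])
                z = np.concatenate([Wh[i], Wh[j]]) @ a
                e[i, j] = z if z > 0 else 0.2 * z
    e = e - e.max(axis=1, keepdims=True)
    alpha = np.exp(e)
    return alpha / alpha.sum(axis=1, keepdims=True)  # attention coefficients

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-account toy graph
alpha = attention_scores(A, rng.normal(size=(3, 4)), rng.normal(size=(4, 4)),
                         rng.normal(size=8))
```

Row `i` of `alpha` gives how strongly account `i` attends to each neighbor; high-attention nodes are the "highlighted" targets passed on to the classification module.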

Findings

As a result of highlighting nodes with a self-attention mechanism, the proposed graph attention network outperforms a naïve Bayes classifier, the random forest method and a support vector machine model with a radial kernel in detecting money laundering accounts. The naïve Bayes classifier produces the second most accurate classifications.

Originality/value

This paper develops a new money laundering detection tool that outperforms existing methods. The new tool produces more accurate detections of money laundering, gives more reliable warnings of money laundering accounts or links, and processes financial transaction records efficiently regardless of their volume.

Details

Journal of Money Laundering Control, vol. 25 no. 3
Type: Research Article
ISSN: 1368-5201

Book part
Publication date: 25 July 2008

Martin J. Conyon and Mark R. Muldoon

Abstract

In this chapter we investigate the ownership and control of UK firms using contemporary methods from computational graph theory. Specifically, we analyze a ‘small-world’ model of ownership and control. A small-world is a network whose actors are linked by a short chain of acquaintances (short path lengths), but at the same time have a strongly overlapping circle of friends (high clustering). We simulate a set of corporate worlds using an ensemble of random graphs introduced by Chung and Lu (2002a, 2002b). We find that the corporate governance network structures analyzed here are more clustered (‘clubby’) than would be predicted by the random-graph model. Path lengths, though, are generally not shorter than expected. In addition, we investigate the role of financial institutions: potentially important conduits creating connectivity in corporate networks. We find such institutions give rise to systematically different network topologies.
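The Chung–Lu ensemble referenced above joins each pair of vertices independently with probability proportional to the product of their expected-degree weights. The following is a minimal sketch, with a hypothetical weight vector, not the chapter's ownership data.

```python
import numpy as np

def chung_lu(weights, rng):
    """Sample a Chung-Lu random graph: P(edge i~j) = min(w_i * w_j / sum(w), 1)."""
    w = np.asarray(weights, dtype=float)
    n, s = len(w), w.sum()
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(w[i] * w[j] / s, 1.0):
                A[i, j] = A[j, i] = 1   # undirected, no self-loops
    return A

# Hypothetical expected-degree sequence for 8 "firms"
rng = np.random.default_rng(42)
A = chung_lu([5, 4, 3, 3, 2, 2, 1, 1], rng)
```

Comparing the clustering coefficient and average path length of an observed network against an ensemble of such samples is what lets one say a corporate network is more "clubby" than chance predicts.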

Details

Network Strategy
Type: Book
ISBN: 978-0-7623-1442-3

Article
Publication date: 3 August 2012

Andrew Adamatzky and Theresa Schubert

Abstract

Purpose

The purpose of this paper is to develop experimental laboratory biological techniques for approximating principal transport networks, optimizing transport links and developing optimal solutions to current transport problems. It also aims to study how the slime mould Physarum polycephalum approximates autobahn networks in Germany.

Design/methodology/approach

The paper considers the 21 most populous urban areas in Germany. Each area is represented by a source of nutrients placed on the slime mould's growing substrate at the position corresponding to that area. At the beginning of each experiment, the slime mould is inoculated in the Berlin area. The slime mould exhibits foraging behaviour and spans the sources of nutrients (which represent urban areas) with a network of protoplasmic tubes (which approximate vehicular transport networks). The study analyzes the structure of the transport networks developed by the slime mould and compares it with families of known proximity graphs. It also imitates the slime mould's response to a simulated disaster by placing sources of chemo-repellents at the positions of nuclear power plants.
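The comparison with proximity graphs can be illustrated with one standard family, the Gabriel graph, in which two sites are joined if and only if no third site lies inside the disc having their connecting segment as diameter. This is a generic sketch; the coordinates are hypothetical, not the paper's city data.

```python
import numpy as np

def gabriel_edges(points):
    """Gabriel graph: connect i,j iff no third point falls inside the disc on diameter ij."""
    P = np.asarray(points, dtype=float)
    n = len(P)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d_ij2 = ((P[i] - P[j]) ** 2).sum()
            ok = True
            for k in range(n):
                if k in (i, j):
                    continue
                # k lies inside the disc iff d(i,k)^2 + d(j,k)^2 < d(i,j)^2
                if ((P[i] - P[k]) ** 2).sum() + ((P[j] - P[k]) ** 2).sum() < d_ij2:
                    ok = False
                    break
            if ok:
                edges.append((i, j))
    return edges

sites = [(0, 0), (2, 0), (1, 1), (3, 2)]  # four hypothetical urban areas
edges = gabriel_edges(sites)
```

Overlaying the protoplasmic-tube network on graphs like this one (and on relative neighborhood graphs or minimum spanning trees) is how the match between biological and engineered transport links is quantified.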

Findings

It is found that the plasmodium of Physarum polycephalum develops a minimal approximation of a transport network spanning the urban areas. The Physarum-developed network matches the autobahn network very well. The high degree of similarity is preserved even when high-demand constraints are placed on the repeatability of links in the experiments. Physarum approximates almost all major transport links. In response to a sudden disaster spreading gradually from its epicentre, the Physarum transport networks react by abandoning transport links in the disaster zone, enhancing those not directly affected, sprouting massively from the epicentre and increasing scouting activity in regions distant from the epicentre.

Originality/value

Experimental methods and computer analysis techniques presented in the paper lay the foundation for novel biological laboratory approaches to imitating and forecasting socio-economic developments.

Details

Kybernetes, vol. 41 no. 7/8
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 28 October 2020

Francesco Rouhana and Dima Jawad

Abstract

Purpose

This paper aims to present a novel approach for assessing the resilience of transportation road infrastructure against different failure scenarios based on the topological properties of the network. The approach is implemented in the context of developing countries where data scarcity is the norm, taking the capital city of Beirut as a case study.

Design/methodology/approach

The approach is based on graph theory concepts and uses spatial data and an urban network analysis toolbox to estimate resilience under random and rank-ordered failure scenarios. The quantitative approach statistically models the topological graph properties, centralities and appropriate resilience metrics.

Findings

The research approach provides a unique insight into the network configuration in terms of resilience against failures. The road network of Beirut, with an average nodal degree of three, turns out to act more like a random graph when exposed to failures. The topological parameters and the connectivity and density indices of the network decline under disruption, revealing their complete dependence on the state of the nodes. The Beirut random network responds similarly to random and targeted removals. Critical network components are highlighted by the approach.
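The contrast between random and rank-ordered (targeted) node removal can be sketched by tracking the largest connected component of a toy graph after removals. This is a generic illustration: the hub-and-cluster graph below is hypothetical, not the Beirut network.

```python
import numpy as np
from collections import deque

def largest_component_fraction(A, removed):
    """Fraction of surviving nodes in the largest connected component after removals."""
    n = A.shape[0]
    alive = [v for v in range(n) if v not in removed]
    if not alive:
        return 0.0
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        comp, q = 0, deque([start])
        while q:                      # breadth-first search within one component
            u = q.popleft()
            comp += 1
            for v in range(n):
                if A[u, v] and v not in seen and v not in removed:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best / len(alive)

# Hypothetical 6-node road graph: node 0 is a hub joining two clusters
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 3), (1, 2), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1
random_frac = largest_component_fraction(A, removed={5})    # random failure
targeted_frac = largest_component_fraction(A, removed={0})  # rank-ordered hub removal
```

Removing a peripheral node leaves the network connected, while removing the hub fragments it; plotting this fraction against the number of removed nodes is a standard topological resilience curve.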

Research limitations/implications

The approach is limited to a specific undirected, weighted graph of Beirut, a context in which the capacity to collect and process the necessary data is limited.

Practical implications

Decision-makers are better able to direct and optimize resources by prioritizing the critical network components, thereby reducing failure-induced downtime in functionality.

Originality/value

The resilience of the Beirut transportation network is quantified uniquely through graph theory under various node removal modes.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 12 no. 4
Type: Research Article
ISSN: 1759-5908

Book part
Publication date: 18 July 2016

Christopher J. Quinn, Matthew J. Quinn, Alan D. Olinsky and John T. Quinn

Abstract

Online social networks are increasingly important venues for businesses to promote their products and image. However, information propagation in online social networks is significantly more complicated compared to traditional transmission media such as newspaper, radio, and television. In this chapter, we will discuss research on modeling and forecasting diffusion of virally marketed content in social networks. Important aspects include the content and its presentation, the network topology, and transmission dynamics. Theoretical models, algorithms, and case studies of viral marketing will be explored.

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78635-534-8

Article
Publication date: 9 July 2022

Riju Bhattacharya, Naresh Kumar Nagwani and Sarsij Tripathi

Abstract

Purpose

Social networking platforms increasingly use follower–followee link prediction (FFLP) to expand their user base. FFLP facilitates the discovery of previously unidentified individuals and can be employed to determine the relationships among the nodes in a social network. As the number of users grows, choosing the appropriate person to follow becomes crucial. A hybrid model employing an ensemble learning algorithm for FFLP (HMELA) is proposed to suggest the formation of new follower links in large networks.

Design/methodology/approach

HMELA includes fundamental classification techniques, treating link prediction as a binary classification problem. The data sets are represented using a variety of machine-learning-friendly hybrid graph features. HMELA is evaluated on six real-world social network data sets.

Findings

The first set of experiments used exploratory data analysis on a digraph to produce a balanced matrix. The second set compared the benchmark and hybrid features on the data sets, followed by benchmark classifiers and ensemble learning methods. The experiments show that the proposed HMELA method predicts missing links better than other methods.
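Treating link prediction as binary classification over graph features, scored by AUC, can be sketched as follows. This is a generic illustration using common-neighbour and Jaccard features with a pairwise AUC estimate; the toy graph, candidate pairs and labels are hypothetical, not the HMELA pipeline.

```python
import numpy as np

def pair_features(A, i, j):
    """Hybrid graph features for a candidate link: (common neighbours, Jaccard)."""
    Ni, Nj = set(np.flatnonzero(A[i])), set(np.flatnonzero(A[j]))
    cn = len(Ni & Nj)
    union = len(Ni | Nj)
    return cn, cn / union if union else 0.0

def auc(scores, labels):
    """AUC as the probability a positive pair scores above a negative pair."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 5-node social graph and candidate follower links
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]])
pairs = [(1, 3), (2, 3), (1, 4)]
labels = [1, 1, 0]                                     # hypothetical ground truth
scores = [pair_features(A, i, j)[1] for i, j in pairs]  # rank by Jaccard score
```

In a full pipeline, such feature vectors would feed an ensemble classifier and the held-out AUC would be the figure of merit, as in the reported comparisons.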

Practical implications

A hybrid model for link prediction is proposed in this paper. The suggested HMELA model uses AUC scores to predict new future links. The proposed approach facilitates comprehension of, and insight into, the domain of link prediction. This work is aimed at academics, practitioners and others involved in the field of social networks. The model is also effective for product recommendation and for recommending new friends and users on social networks.

Originality/value

The outcomes on six benchmark data sets revealed that, when the HMELA strategy was applied to all of the selected data sets, the area under the curve (AUC) scores were greater than when individual techniques were applied to the same data sets. Using the HMELA technique, the maximum AUC score on the Facebook data set increased by 10.3 per cent, from 0.8449 to 0.9479. There was also an 8.53 per cent increase in accuracy on the Net Science, Karate Club and USAir data sets. As a result, the HMELA strategy outperforms every other strategy tested in the study.

Details

Data Technologies and Applications, vol. 57 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 27 April 2022

Milind Tiwari, Jamie Ferrill and Vishal Mehrotra

Abstract

Purpose

This paper advocates the use of graph database platforms to investigate networks of illicit companies identified in money laundering schemes. It explains the setup of the data structure to investigate a network of illicit companies identified in cases of money laundering schemes and presents its key application in practice. Grounded in the technology acceptance model (TAM), this paper aims to present key operationalisations and theoretical considerations for effectively driving and facilitating its wider adoption among a range of stakeholders focused on anti-money laundering solutions.

Design/methodology/approach

This paper explores the benefits of adopting graph databases and critiques their limitations by drawing on primary data collection processes that have been undertaken to derive a network topology. Such representation on a graph database platform provides the opportunity to uncover hidden relationships critical for combatting illicit activities such as money laundering.

Findings

The move to adopt a graph database for storing information related to corporate entities will aid investigators, journalists and other stakeholders in the identification of hidden links among entities to deter activities of corruption and money laundering.

Research limitations/implications

This paper does not display the nodal data, as it is framed as background on how graph databases can be used in practice.

Originality/value

To the best of the authors’ knowledge, no studies in the past have considered companies from multiple cases in the same graph network and attempted to investigate the links between them. The advocation for such an approach has significant implications for future studies.

Details

Journal of Money Laundering Control, vol. 26 no. 3
Type: Research Article
ISSN: 1368-5201

Article
Publication date: 9 November 2015

Teodor Sommestad and Fredrik Sandström

Abstract

Purpose

The purpose of this paper is to test the practical utility of attack graph analysis. Attack graphs have been proposed as a viable solution to many problems in computer network security management. After individual vulnerabilities are identified with a vulnerability scanner, an attack graph can relate the individual vulnerabilities to the possibility of an attack and subsequently analyze and predict which privileges attackers could obtain through multi-step attacks (in which multiple vulnerabilities are exploited in sequence).
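The multi-step reachability idea behind attack graphs can be sketched as a breadth-first traversal over exploit edges, each mapping a privilege the attacker holds to a privilege it grants. This is a generic illustration; the privilege names and exploits are hypothetical, not MulVAL's model.

```python
from collections import deque

def reachable_privileges(start, exploits):
    """Multi-step attack reachability: close the starting privilege under exploit edges."""
    held = {start}
    q = deque([start])
    while q:
        priv = q.popleft()
        for pre, post in exploits:
            if pre == priv and post not in held:
                held.add(post)
                q.append(post)
    return held

# Hypothetical exploit edges: (privilege required, privilege gained)
exploits = [
    ("internet", "user@web"),    # e.g. remote code execution on the web server
    ("user@web", "user@db"),     # lateral movement to the database host
    ("user@db", "root@db"),      # local privilege escalation
    ("user@mail", "root@mail"),  # unreachable: nothing grants user@mail
]
compromised = reachable_privileges("internet", exploits)
```

The set of reachable privileges is the analysis's prediction of what an attacker can obtain through chained exploits; the paper tests how well such predictions match what human attackers actually achieve.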

Design/methodology/approach

The attack graph tool MulVAL was fed information from the vulnerability scanner Nexpose and network topology information for 8 fictitious organizations containing 199 machines in total. Two teams of attackers attempted to infiltrate these networks over the course of two days and reported which machines they compromised and which attack paths they attempted. Their reports are compared with the predictions of the attack graph analysis.

Findings

The prediction accuracy of the attack graph analysis was poor. Attackers were more than three times as likely to compromise a host predicted as impossible to compromise as a host predicted as possible to compromise. Furthermore, 29 per cent of the hosts predicted as impossible to compromise were compromised during the two days. The inaccuracy of the vulnerability scanner and MulVAL's interpretation of vulnerability information are the primary reasons for the poor prediction accuracy.

Originality/value

Although considerable research contributions have been made to the development of attack graphs, and several analysis methods have been proposed using attack graphs, the extant literature does not describe any tests of their accuracy under realistic conditions.

Details

Information & Computer Security, vol. 23 no. 5
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 8 July 2022

Chuanming Yu, Zhengang Zhang, Lu An and Gang Li

Abstract

Purpose

In recent years, knowledge graph completion has gained increasing research focus and shown significant improvements. However, most existing models use only the structures of knowledge graph triples when obtaining the entity and relationship representations, while the integration of the entity description and the knowledge graph network structure has been ignored. This paper aims to investigate how to leverage both the entity description and the network structure to enhance knowledge graph completion with high generalization ability across different datasets.

Design/methodology/approach

The authors propose an entity-description augmented knowledge graph completion model (EDA-KGC), which incorporates the entity description and the network structure. It consists of three modules: representation initialization, deep interaction and reasoning. The representation initialization module utilizes entity descriptions to obtain pre-trained representations of entities. The deep interaction module acquires the features of the deep interaction between entities and relationships. The reasoning module performs matrix manipulations with the deep interaction feature vector and the entity representation matrix, thus obtaining the probability distribution over target entities. The authors conduct intensive experiments on the FB15K, WN18, FB15K-237 and WN18RR data sets to validate the effect of the proposed model.
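The reasoning module's final step, turning a deep-interaction feature vector and the entity representation matrix into a probability distribution over candidate target entities, can be sketched as a matrix-vector product followed by a softmax. This is a generic illustration; the dimensions and names are hypothetical, not the EDA-KGC implementation.

```python
import numpy as np

def score_targets(interaction_vec, entity_matrix):
    """Softmax over dot-product scores of all candidate tail entities."""
    logits = entity_matrix @ interaction_vec        # one score per entity
    e = np.exp(logits - logits.max())               # stable softmax
    return e / e.sum()                              # probability distribution

rng = np.random.default_rng(7)
E = rng.normal(size=(10, 16))  # 10 entity embeddings, 16 dims (hypothetical)
h = rng.normal(size=16)        # deep-interaction feature for a (head, relation) query
p = score_targets(h, E)        # probability of each entity being the missing tail
```

Ranking entities by `p` and checking where the true tail lands is how completion metrics such as mean rank and Hits@N are computed.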

Findings

The experiments demonstrate that the proposed model outperforms the traditional structure-based knowledge graph completion model and the entity-description-enhanced knowledge graph completion model. The experiments also suggest that the model has greater feasibility in different scenarios such as sparse data, dynamic entities and limited training epochs. The study shows that the integration of entity description and network structure can significantly increase the effect of the knowledge graph completion task.

Originality/value

The research has significant reference value for completing missing information in knowledge graphs and improving their application in information retrieval, question answering and other fields.

Details

Aslib Journal of Information Management, vol. 75 no. 3
Type: Research Article
ISSN: 2050-3806
