Search results

1 – 10 of over 27000
Article
Publication date: 9 November 2015

Teodor Sommestad and Fredrik Sandström

The purpose of this paper is to test the practical utility of attack graph analysis. Attack graphs have been proposed as a viable solution to many problems in computer network…

Abstract

Purpose

The purpose of this paper is to test the practical utility of attack graph analysis. Attack graphs have been proposed as a viable solution to many problems in computer network security management. After individual vulnerabilities are identified with a vulnerability scanner, an attack graph can relate the individual vulnerabilities to the possibility of an attack and subsequently analyze and predict which privileges attackers could obtain through multi-step attacks (in which multiple vulnerabilities are exploited in sequence).

Design/methodology/approach

The attack graph tool, MulVAL, was fed information from the vulnerability scanner Nexpose and network topology information from 8 fictitious organizations containing 199 machines. Two teams of attackers attempted to infiltrate these networks over the course of two days and reported which machines they compromised and which attack paths they attempted to use. Their reports are compared to the predictions of the attack graph analysis.
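As a rough illustration of the multi-step reachability idea behind attack graph analysis (not MulVAL's actual Datalog-based reasoning), a host can be modelled as compromisable when an already-controlled host can reach it over the network and it carries an exploitable vulnerability. The hostnames and topology below are invented:

```python
from collections import deque

reachable_from = {          # network topology: host -> hosts it can connect to
    "internet": ["web"],
    "web": ["app", "db"],
    "app": ["db"],
    "db": [],
}
exploitable = {"web", "app"}  # hosts with a remotely exploitable vulnerability

def compromised_hosts(entry="internet"):
    """BFS: a host is compromised if an already-controlled host can reach it
    and it carries an exploitable vulnerability."""
    owned, queue = {entry}, deque([entry])
    while queue:
        src = queue.popleft()
        for dst in reachable_from[src]:
            if dst in exploitable and dst not in owned:
                owned.add(dst)
                queue.append(dst)
    return owned - {entry}

print(sorted(compromised_hosts()))  # ['app', 'web'] -- "db" predicted safe
```

The paper's finding is precisely that such predictions can fail in practice, e.g. when the vulnerability scan feeding `exploitable` is incomplete.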

Findings

The prediction accuracy of the attack graph analysis was poor. Attackers were more than three times as likely to compromise a host predicted as impossible to compromise as a host predicted as possible to compromise. Furthermore, 29 per cent of the hosts predicted as impossible to compromise were compromised during the two days. The inaccuracy of the vulnerability scanner and MulVAL's interpretation of vulnerability information are the primary reasons for the poor prediction accuracy.

Originality/value

Although considerable research contributions have been made to the development of attack graphs, and several analysis methods have been proposed using attack graphs, the extant literature does not describe any tests of their accuracy under realistic conditions.

Details

Information & Computer Security, vol. 23 no. 5
Type: Research Article
ISSN: 2056-4961

Keywords

Article
Publication date: 1 November 2021

Maren Parnas Gulnes, Ahmet Soylu and Dumitru Roman

Neuroscience data are spread across a variety of sources, typically provisioned through ad-hoc and non-standard approaches and formats and often have no connection to the related…

Abstract

Purpose

Neuroscience data are spread across a variety of sources, typically provisioned through ad hoc, non-standard approaches and formats, and often have no connection to related data sources. This makes it difficult for researchers to understand, integrate and reuse brain-related data. The aim of this study is to show that a graph-based approach offers an effective means for representing, analysing and accessing brain-related data, which are highly interconnected, evolve over time and are often needed in combination.

Design/methodology/approach

The authors present an approach for organising brain-related data in a graph model. The approach is exemplified with a unique data set of quantitative neuroanatomical data about the murine basal ganglia, a group of nuclei in the brain essential for processing information related to movement. Specifically, the murine basal ganglia data set is modelled as a graph, integrated with relevant data from third-party repositories, published through a Web-based user interface and API, and analysed from exploratory and confirmatory perspectives using popular graph algorithms to extract new insights.
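A minimal property-graph sketch of the idea of connecting brain-related entities through typed relationships; the node names and relations below are illustrative only, not the paper's actual model or data set:

```python
nodes = {
    "caudoputamen": {"type": "region", "group": "basal ganglia"},
    "globus_pallidus": {"type": "region", "group": "basal ganglia"},
    "neuron_type_A": {"type": "cell_type"},
}
edges = [  # (source, relation, target)
    ("neuron_type_A", "LOCATED_IN", "caudoputamen"),
    ("caudoputamen", "PROJECTS_TO", "globus_pallidus"),
]

def neighbors(node, relation=None):
    """Return targets of outgoing edges, optionally filtered by relation type."""
    return [dst for src, rel, dst in edges
            if src == node and (relation is None or rel == relation)]

print(neighbors("caudoputamen"))  # ['globus_pallidus']
```

A graph database (the paper uses a graph model served through a Web API) generalises this pattern, but the core advantage, traversing typed links between otherwise disconnected records, is the same.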

Findings

The evaluation of the graph model and the results of the graph data analysis and usability study of the user interface suggest that graph-based data management in the neuroscience domain is a promising approach, since it enables integration of various disparate data sources and improves understanding and usability of data.

Originality/value

The study provides a practical and generic approach for representing, integrating, analysing and provisioning brain-related data and a set of software tools to support the proposed approach.

Details

Data Technologies and Applications, vol. 56 no. 3
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 21 February 2022

Fatemeh Khozaei Ravari, Ahmad Sanusi Hassan, Muhammad Hafeez Abdul Nasir and Mohsen Mohammad Taheri

The study's main objective is to evaluate the morphological developments in the characteristics of the spatial configurations of the residential layouts in Kerman, Iran, in…

Abstract

Purpose

The study's main objective is to evaluate the morphological developments in the spatial configurations of residential layouts in Kerman, Iran, and to examine their impact on the level of visual privacy through the spectrum of permeability and wayfinding in space syntax analysis.

Design/methodology/approach

In this paper, plan graph analysis is used to measure the syntactic properties of seven topological residential architecture plans in Kerman, Iran, built from the 1970s to the 2010s. The methodology develops mathematical measures to quantify permeability and uses visibility graph analysis (VGA) simulation to assess wayfinding.
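A sketch of one basic space-syntax quantity on a plan graph: mean depth, the average number of steps from one room to every other, which underlies the integration measure (lower mean depth means a more integrated space). The toy layout below is invented, not one of the seven Kerman plans:

```python
from collections import deque

plan = {  # rooms as nodes, door connections as undirected edges
    "entrance": ["hall"],
    "hall": ["entrance", "living", "kitchen"],
    "living": ["hall", "bedroom"],
    "kitchen": ["hall"],
    "bedroom": ["living"],
}

def mean_depth(root):
    """Average shortest-path (step) depth from one room to all others."""
    depth, queue = {root: 0}, deque([root])
    while queue:
        room = queue.popleft()
        for nxt in plan[room]:
            if nxt not in depth:
                depth[nxt] = depth[room] + 1
                queue.append(nxt)
    return sum(depth.values()) / (len(plan) - 1)

print(mean_depth("hall"))     # 1.25 -- the most integrated space
print(mean_depth("bedroom"))  # 2.25 -- the most segregated space
```

Deeper, more segregated rooms are exactly where space syntax locates private domestic functions, which is the mechanism the findings below describe.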

Findings

The findings reveal that the residential layouts of Iranian houses have become less integrated over decades of design development from the 1970s to the 2010s. The reduction in spatial integration corresponds to increased segregation, allowing for enhanced visual privacy. The study demonstrates that, even with the constraints on the scale of the house and the reduction in the number of nodes evident in the design of the modern residential layout, an efficient level of visual privacy is still achievable with regard to the standards demanded by the local culture.

Originality/value

The study examines the development in residential spatial configuration and building scale on visual privacy through a proposed methodology based on the level of permeability and wayfinding measured as a combined effect using the space syntax analysis and visual accessibility.

Details

International Journal of Building Pathology and Adaptation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-4708

Keywords

Article
Publication date: 4 January 2018

Varinder Singh and Pravin M. Singru

The purpose of this paper is to propose the use of graph theoretic structural modeling for assessing the possible reduction in complexity of the work flow procedures in an…

Abstract

Purpose

The purpose of this paper is to propose the use of graph theoretic structural modeling for assessing the possible reduction in complexity of the workflow procedures in an organization due to lean initiatives. A tool to assess the impact of a lean initiative on the complexity of the system at an early stage of decision making is proposed.

Design/methodology/approach

First, the permanent function-based graph theoretic structural model has been applied to understand the complex structure of the manufacturing system under consideration. The model systematically breaks the system into sub-graphs that identify all the cycles of interaction among the subsystems in the organization. The physical interpretation of existing quantitative methods linked to the graph theoretic methodology, namely two types of coefficients of dissimilarity, has been used to evolve new measures of organizational complexity. The new methods have been deployed to study the impact of different lean initiatives on complexity reduction in a case industrial organization.
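In permanent function-based modelling, the complexity index is the matrix permanent, which is like a determinant but with every term added (no sign alternation), so no interaction term cancels out. A brute-force sketch, fine for the small subsystem matrices typical of this approach; the 3x3 matrix below is illustrative, not taken from the case organization:

```python
import math
from itertools import permutations

def permanent(m):
    """Permanent of a square matrix: sum over all permutations of products
    of entries, with no sign alternation (unlike the determinant)."""
    n = len(m)
    return sum(math.prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Diagonal entries: subsystem characteristics; off-diagonal: interactions.
vpm = [
    [3, 1, 0],
    [1, 4, 2],
    [0, 1, 5],
]
print(permanent(vpm))  # 71
```

Because every interaction contributes positively, removing an interaction through a lean initiative (zeroing an off-diagonal entry) can only lower the index, which is what makes it usable as a before/after complexity comparison.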

Findings

The usefulness and the application of new proposed measures of complexity have been demonstrated with the help of three cases of lean initiatives in an industrial organization. The new measures of complexity have been proposed as a credible tool for studying the lean initiatives and their implications.

Research limitations/implications

The paper may lead many researchers to use the proposed tool to model different cases of lean manufacturing and pave a new direction for future research in lean manufacturing.

Practical implications

The paper demonstrates the application of new tools through cases and the tool may be used by practitioners of lean philosophy or total quality management to model and investigate their decisions.

Originality/value

The proposed measures of complexity are an entirely new addition to the toolbox of graph theoretic structural modeling and have the potential to be adopted by practical decision makers to steer their organizations through such decisions before costly interruptions to manufacturing systems are tried on the ground.

Details

Journal of Manufacturing Technology Management, vol. 29 no. 2
Type: Research Article
ISSN: 1741-038X

Keywords

Article
Publication date: 7 July 2020

Adam B. Turner, Stephen McCombie and Allon J. Uhlmann

The purpose of this paper is to investigate available forensic data on the Bitcoin blockchain to identify typical transaction patterns of ransomware attacks. Specifically, the…

Abstract

Purpose

The purpose of this paper is to investigate available forensic data on the Bitcoin blockchain to identify typical transaction patterns of ransomware attacks. Specifically, the authors explore how distinct these patterns are and their potential value for intelligence exploitation in support of countering ransomware attacks.

Design/methodology/approach

The authors created an analytic framework – the Ransomware–Bitcoin Intelligence–Forensic Continuum framework – to search for transaction patterns in the blockchain records of actual ransomware attacks. Data on a number of different ransomware Bitcoin addresses were extracted to populate the framework via the WalletExplorer.com programming interface. These data were then assembled into a representation of the target network for pattern analysis on the input (cash-in) and output (cash-out) side of the ransomware seed addresses. Different graph algorithms were applied to these networks, and the results were compared to a “control” network derived from a Bitcoin charity.
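A toy sketch of the cash-in/cash-out framing around a ransomware "seed" address: many small inbound payments (victims), few larger outbound consolidations. Addresses and amounts are invented; a real analysis would pull transaction records from a source such as the WalletExplorer.com interface:

```python
transfers = [  # (sender, receiver, BTC amount)
    ("victim_1", "seed", 0.5),   # cash-in side: ransom payments
    ("victim_2", "seed", 0.5),
    ("victim_3", "seed", 0.5),
    ("seed", "mixer_a", 1.0),    # cash-out side: layering and exchange
    ("seed", "exchange", 0.5),
]

def degree_profile(address):
    """In-/out-degree and flow totals for one address in the transfer graph."""
    cash_in = [v for s, d, v in transfers if d == address]
    cash_out = [v for s, d, v in transfers if s == address]
    return {"in_degree": len(cash_in), "in_total": sum(cash_in),
            "out_degree": len(cash_out), "out_total": sum(cash_out)}

print(degree_profile("seed"))
# {'in_degree': 3, 'in_total': 1.5, 'out_degree': 2, 'out_total': 1.5}
```

The paper's point is that such profiles only become diagnostic when compared over time and against a control network, since a charity address also shows a many-in/few-out shape.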

Findings

The findings show discernible patterns on both the input and output sides of the ransomware graphs. However, on the input side these patterns are not easily distinguishable from those associated with the charity Bitcoin address, although the collection profile over time is more volatile than that of the charity address. On the output side, ransomware patterns differ from those associated with charity addresses, as the attackers’ cash-out tactics are quite different from the way charities mobilise their donations. The authors further argue that an application of graph machine learning provides a basis for future analysis and data refinement.

Research limitations/implications

Limitations are evident in the sample size of data taken on ransomware campaigns and the “control” subject. Further analysis of additional ransomware campaigns and “control” subjects over time would help refine and validate the preliminary observations in this paper. Future research will also benefit from the application of more powerful computing resources and analytics platforms that scale with the amount of data being collected.

Originality/value

This research contributes to the maturity of the field by analysing ransomware-Bitcoin behaviour using the Ransomware–Bitcoin Intelligence–Forensic Continuum. By combining several different techniques to discerning patterns of ransomware activity on the Bitcoin network, it provides insight into whether a ransomware attack is occurring and could be used to trigger alerts to seek additional evidence of attack, or could corroborate other information in the system.

Details

Journal of Money Laundering Control, vol. 23 no. 3
Type: Research Article
ISSN: 1368-5201

Keywords

Article
Publication date: 10 October 2007

A. Kaveh and K. Koohestani

This paper seeks to present an efficient algorithm for the formation of null basis for finite element model discretized as rectangular bending elements. The bases obtained by this…

Abstract

Purpose

This paper seeks to present an efficient algorithm for the formation of a null basis for finite element models discretized into rectangular bending elements. The bases obtained by this algorithm correspond to highly sparse and narrowly banded flexibility matrices, and such bases can be considered an efficient tool for the optimal analysis of structures.

Design/methodology/approach

In the present method, two graphs are associated with the finite element mesh: an “interface graph” and an “associate digraph”. The underlying subgraphs of the self‐equilibrating systems (SESs) (null vectors) are obtained by graph theoretical approaches, forming a null basis. Applying unit loads (moments) at the end of the generator of each subgraph yields the corresponding null vector.

Findings

In the present hybrid method, graph theory is used for the formation of null vectors as far as possible, and then an algebraic method is used to find the complementary part of the null basis.

Originality/value

This hybrid approach makes the use of the pure force method in finite element analysis feasible. A simplified version of the algorithm is also presented, in which the SESs for weighted graphs are obtained using an analytical approach. Thus, the formation of null bases is achieved with the least amount of algebraic operations, resulting in substantial savings in computational time and storage.

Details

Engineering Computations, vol. 24 no. 7
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 7 May 2021

Rafael Sousa Lima, André Luiz Marques Serrano, Joshua Onome Imoniana and César Medeiros Cupertino

This study aims to understand how forensic accountants can analyse bank transactions suspected of being involved with money laundering crimes in Brazil through social network…

Abstract

Purpose

This study aims to understand how forensic accountants can analyse bank transactions suspected of being involved with money laundering crimes in Brazil through social network analysis (SNA).

Design/methodology/approach

The methodological approach taken in this study was exploratory. This study cleaned and debugged bank statements from criminal investigations in Brazil using computational algorithms. Graphs were then constructed and matched against money laundering regulations.
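Two of the typologies such screening looks for can be sketched as simple rules over cleaned statement rows: smurfing (deposits kept just under a reporting threshold) and pooling accounts (one beneficiary collecting from many originators). The threshold, margin and account names below are invented simplifications, not actual Brazilian (COAF) reporting rules:

```python
REPORTING_THRESHOLD = 10_000.0

statements = [  # (originator, beneficiary, amount)
    ("acct_a", "pool", 9_500.0),
    ("acct_b", "pool", 9_800.0),
    ("acct_c", "pool", 9_900.0),
    ("pool", "offshore", 29_000.0),
]

def smurfing_candidates(rows, margin=0.1):
    """Deposits just under the reporting threshold (within `margin` of it)."""
    low = REPORTING_THRESHOLD * (1 - margin)
    return [r for r in rows if low <= r[2] < REPORTING_THRESHOLD]

def pooling_accounts(rows, min_sources=3):
    """Beneficiaries receiving from many distinct originators."""
    sources = {}
    for src, dst, _ in rows:
        sources.setdefault(dst, set()).add(src)
    return [dst for dst, srcs in sources.items() if len(srcs) >= min_sources]

print(len(smurfing_candidates(statements)), pooling_accounts(statements))
# 3 ['pool']
```

Rendering the same rows as a directed graph, as the study does, makes these structures visible at a glance rather than only through rules.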

Findings

The findings indicated that graph techniques yield a range of useful information that helps identify typical banking transaction patterns (pooling accounts, strawmen, smurfing) used to conceal or disguise the movement of illicit resources, enhancing the visual aspects of financial analysis.

Research limitations/implications

The research found limitations in the data sets, with reduced identification of originators and beneficiaries compared with other investigations in Brazil. Furthermore, to preserve restricted information and keep data confidential, the data sets used in the research were not made available.

Practical implications

Law enforcement agencies and financial intelligence units can apply the graph-based techniques cited in this research to strengthen anti-money laundering activities. The results, grounded in analytical approaches, may offer a source of data for regulators and academia in future research.

Originality/value

This study created data sets using real-life bank statements from two investigations under the jurisdiction of the Brazilian Federal Justice, bringing real-data perspectives into academic research. The study uses SNA, a popular approach in several areas of knowledge.

Details

Journal of Money Laundering Control, vol. 25 no. 1
Type: Research Article
ISSN: 1368-5201

Keywords

Article
Publication date: 23 March 2021

Ulya Bayram, Runia Roy, Aqil Assalil and Lamia BenHiba

The COVID-19 pandemic has sparked a remarkable volume of research literature, and scientists are increasingly in need of intelligent tools to cut through the noise and uncover…

Abstract

Purpose

The COVID-19 pandemic has sparked a remarkable volume of research literature, and scientists increasingly need intelligent tools to cut through the noise and uncover relevant research directions. In response, the authors propose a novel framework in which they develop a weighted semantic graph model to compress the research studies efficiently. The authors also present two analyses on this graph that propose alternative ways to uncover additional aspects of COVID-19 research.

Design/methodology/approach

The authors construct the semantic graph using state-of-the-art natural language processing (NLP) techniques on COVID-19 publication texts (>100,000 texts). Next, the authors conduct an evolutionary analysis to capture the changes in COVID-19 research across time. Finally, the authors apply a link prediction study to detect novel COVID-19 research directions that are so far undiscovered.
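The link-prediction step can be illustrated with the simplest classical baseline, common-neighbours scoring: two terms not yet linked in the graph are more likely to become linked if they share many neighbours. The paper's framework uses deep learning rather than this heuristic, and the mini co-occurrence graph below is invented:

```python
graph = {  # undirected term co-occurrence graph (adjacency sets)
    "ACE2": {"spike protein", "lung tissue"},
    "spike protein": {"ACE2", "vaccine", "antibody"},
    "antibody": {"spike protein", "vaccine"},
    "vaccine": {"spike protein", "antibody"},
    "lung tissue": {"ACE2"},
}

def common_neighbour_score(u, v):
    """Number of shared neighbours: a simple link-prediction score."""
    return len(graph[u] & graph[v])

# "ACE2" and "antibody" are not yet linked but share "spike protein":
print(common_neighbour_score("ACE2", "antibody"))  # 1
```

Ranking all unlinked pairs by such a score (or by a learned model, as here) yields candidate "undiscovered" connections for scientists to examine.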

Findings

Findings reveal the success of the semantic graph in capturing scientific knowledge and its evolution. Meanwhile, the prediction experiments achieve 79% accuracy in returning intelligible links, showing the reliability of the methods for predicting novel connections that could help scientists discover potential new directions.

Originality/value

To the authors’ knowledge, this is the first study to propose a holistic framework that encodes scientific knowledge in a semantic graph, demonstrates an evolutionary examination of past and ongoing research, and offers scientists tools to generate new hypotheses and research directions through predictive modeling and deep machine learning techniques.

Details

Online Information Review, vol. 45 no. 4
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 1 March 2000

Paul Mather, Alan Ramsay and Adam Steen

This paper investigates the use of graphs, selection of variables to graph and construction of graphs in prospectuses issued by Australian companies making their initial public…


Abstract

This paper investigates the use of graphs, the selection of variables to graph and the construction of graphs in prospectuses issued by Australian companies making their initial public offering (IPO) of shares to the Australian capital market. The paper formulates and tests hypotheses concerning selectivity in the use of graphs and distortion in the construction of graphs presented in IPO prospectuses, as well as providing descriptive evidence about the use of graphs in such prospectuses.

Results show that firms enjoying improving profit performance are significantly more likely to include graphs of key financial variables in their prospectuses than firms suffering deteriorating profit performance. Thus, similar to studies of graphs in annual reports, evidence of selectivity in the inclusion of graphs is found. No significant relationship is found between performance on the variable being graphed and distortion in the construction of the graph. When the graphs are split between those covering key financial variables and other variables, a significant relationship is found in both categories. For graphs of other variables, a significant positive association is found between performance and distortion. However, the relationship for key financial variables is in the opposite direction to that suggested by impression management.

Further analysis identifies significant sub‐period differences in selectivity and distortion, consistent with the view that the major regulatory and institutional changes outlined in the paper reduced the extent of selectivity and graphical distortion in the post‐1991 period. As far as the authors are aware, this is the first study reported in the literature to investigate the use of graphs in prospectuses. The results also have policy implications for the regulatory authority in Australia.
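The financial-graphics literature commonly quantifies distortion with the graph discrepancy index (GDI): the percentage by which the drawn change in a graph exaggerates the change in the underlying data. The paper may operationalise distortion differently, and the bar heights and profit figures below are invented:

```python
def gdi(first_value, last_value, first_height, last_height):
    """Graph discrepancy index: percentage exaggeration of the data trend.
    0 means faithful; positive means the graph overstates the change."""
    data_change = (last_value - first_value) / abs(first_value)
    graph_change = (last_height - first_height) / abs(first_height)
    return (graph_change / data_change - 1) * 100

# Profit doubled (100 -> 200) but the bars tripled in height (20mm -> 60mm):
print(round(gdi(100, 200, 20, 60)))  # 100 -- the graph overstates growth
```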

Details

Accounting, Auditing & Accountability Journal, vol. 13 no. 1
Type: Research Article
ISSN: 0951-3574

Keywords

Article
Publication date: 4 November 2020

Pachayappan Murugaiyan and Venkatesakumar Ramakrishnan

Little attention has been paid to restructuring existing massive amounts of literature data such that evidence-based meaningful inferences and networks be drawn therefrom. This…


Abstract

Purpose

Little attention has been paid to restructuring existing massive amounts of literature data so that evidence-based, meaningful inferences and networks can be drawn from them. This paper aims to structure extant literature data into a network and to demonstrate, using the graph visualization and manipulation tool Gephi, how to obtain an evidence-based literature review.

Design/methodology/approach

The main objective of this paper is to propose a methodology for structuring existing literature data into a network. This network is examined through certain graph theory metrics to uncover evidence-based research insights from the existing huge amounts of literature data. From the list of metrics, this study considers degree centrality, closeness centrality and betweenness centrality to comprehend the information available in the literature pool.
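Two of the three metrics can be sketched directly on a small undirected network; the keyword co-occurrence graph below is invented, and the paper computes these metrics in Gephi rather than by hand:

```python
from collections import deque

net = {  # keyword co-occurrence network (adjacency sets)
    "lean": {"complexity", "simulation", "quality"},
    "complexity": {"lean", "simulation"},
    "simulation": {"lean", "complexity"},
    "quality": {"lean"},
}

def degree_centrality(node):
    """Fraction of the other nodes directly connected to `node`."""
    return len(net[node]) / (len(net) - 1)

def closeness_centrality(node):
    """Inverse of the average shortest-path distance to all other nodes."""
    dist, queue = {node: 0}, deque([node])
    while queue:
        cur = queue.popleft()
        for nxt in net[cur]:
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return (len(net) - 1) / sum(d for n, d in dist.items() if n != node)

print(degree_centrality("lean"))        # 1.0 -- connected to every other node
print(closeness_centrality("quality"))  # 0.6 -- peripheral in the network
```

High-centrality nodes in such a network point to the concepts around which a literature pool is organised, which is how the methodology surfaces candidate review questions.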

Findings

There is a significant amount of literature on any given research problem, and approaching this massive volume of literature data to find an appropriate research problem is a complicated process. The proposed methodology and metrics enable the extraction of appropriate and relevant information from huge quantities of literature data. The methodology is validated against three different review-question scenarios, and the results are reported.

Research limitations/implications

The proposed methodology requires considerable manual effort to structure the literature data.

Practical implications

This paper enables researchers in any domain to systematically extract and visualize meaningful and evidence-based insights from existing literature.

Originality/value

The procedure for converting literature data into a network representation is not documented in the existing literature. The paper lays down the procedure to structure literature data into a network.

Details

Journal of Modelling in Management, vol. 17 no. 1
Type: Research Article
ISSN: 1746-5664

Keywords
