Search results

1 – 10 of over 5000
Article
Publication date: 5 March 2018

Baidyanath Biswas and Arunabha Mukhopadhyay

Malicious attackers frequently breach information systems by exploiting disclosed software vulnerabilities. Knowledge of these vulnerabilities over time is essential when organisations decide…

Abstract

Purpose

Malicious attackers frequently breach information systems by exploiting disclosed software vulnerabilities. Knowledge of these vulnerabilities over time is essential when organisations decide which software products to use. The purpose of this paper is to propose a novel G-RAM framework for business organisations to assess and mitigate risks arising out of software vulnerabilities.

Design/methodology/approach

The G-RAM risk assessment module uses generalised autoregressive conditional heteroskedasticity (GARCH) to model vulnerability growth. Using National Vulnerability Database data spanning 1999-2016, the authors estimate the model parameters and validate the prediction accuracy. Next, the G-RAM risk mitigation module designs an optimal software portfolio using Markowitz's mean-variance optimisation for a given IT budget and risk preference.
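To make the assessment module concrete, the following is a minimal sketch of fitting a GARCH(1,1) model to the growth of vulnerability disclosure counts using the Python `arch` package. The synthetic monthly series and the log-growth transformation are assumptions for illustration, not the authors' actual pipeline, which draws on NVD data for 1999-2016.

```python
# Hypothetical sketch: fit a GARCH(1,1) to the growth of monthly
# vulnerability disclosure counts, in the spirit of the G-RAM
# assessment module. The data here are synthetic stand-ins.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(42)
monthly_counts = rng.poisson(lam=400, size=216).astype(float)
growth = 100 * np.diff(np.log(monthly_counts))  # percentage growth series

model = arch_model(growth, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())

# One-step-ahead conditional variance forecast: the "risk" signal.
forecast = result.forecast(horizon=1)
print(forecast.variance.iloc[-1])
```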

Findings

Based on an empirical analysis, this study establishes that vulnerability growth follows a non-linear, time-dependent, heteroskedastic pattern. Further, efficient software combinations are proposed that optimise correlated risk. The study also reports empirical evidence of a shift in the efficient frontier of software configurations over time.
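The efficient-frontier result can be illustrated with the standard Markowitz machinery. Below is a hedged sketch computing minimum-variance weights for a target return over three hypothetical software "assets", using the textbook closed form with equality constraints only (short positions allowed); all returns and covariances are invented, not figures from the paper.

```python
# Hypothetical sketch: Markowitz mean-variance selection over software
# "assets", where risk is the (e.g. GARCH-estimated) vulnerability
# volatility. All numbers below are invented for illustration.
import numpy as np

mu = np.array([0.08, 0.05, 0.11])          # expected utility per product
cov = np.array([[0.10, 0.02, 0.04],        # correlated vulnerability risk
                [0.02, 0.06, 0.01],
                [0.04, 0.01, 0.15]])

def min_variance_weights(mu, cov, target_return):
    """Closed-form minimum-variance weights for a target return
    (equality constraints only, shorting allowed)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    lam = (c - b * target_return) / (a * c - b * b)
    gam = (a * target_return - b) / (a * c - b * b)
    return inv @ (lam * ones + gam * mu)

w = min_variance_weights(mu, cov, target_return=0.07)
print("weights:", w.round(3), "risk:", float(w @ cov @ w) ** 0.5)
```

Sweeping `target_return` over a range traces out the efficient frontier the study reports shifting over time.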

Research limitations/implications

The existing assumption that residuals are independent and identically distributed after fitting a vulnerability growth function is shown to be incorrect. This study applies the GARCH technique to measure volatility clustering and mean reversion. The risk (or volatility), represented by the instantaneous variance, depends both on the immediately preceding variance and on the unconditional variance of the entire vulnerability growth process.

Practical implications

The volatility-based estimation of vulnerability growth serves as a risk assessment mechanism, and the portfolio analysis acts as a risk mitigation activity. Results from this study can inform the patch-management cycle needed for each software product – individual or group patching. G-RAM also ranks software into a 2×2 risk-return matrix to ensure that correlated risk is diversified. Finally, the paper helps business firms decide what to purchase and what to avoid.

Originality/value

Contrary to the existing techniques which either analyse with statistical distributions or linear econometric methods, this study establishes that vulnerability growth follows a non-linear, time-dependent, heteroskedastic pattern. The paper also links software risk assessment to IT governance and strategic business objectives. To the authors’ knowledge, this is the first study in IT security to examine and forecast volatility, and further design risk-optimal software portfolios.

Details

Journal of Enterprise Information Management, vol. 31 no. 2
Type: Research Article
ISSN: 1741-0398

Keywords

Article
Publication date: 15 September 2023

Richard G. Mathieu and Alan E. Turovlin

Cyber risk has significantly increased over the past twenty years. In many organizations, data and operations are managed through a complex technology stack underpinned by an…

Abstract

Purpose

Cyber risk has significantly increased over the past twenty years. In many organizations, data and operations are managed through a complex technology stack underpinned by an Enterprise Resource Planning (ERP) system such as Systemanalyse Programmentwicklung (SAP). The ERP environment by itself can be overwhelming for a typical ERP Manager; coupled with the cybersecurity issues that increasingly arise, it creates periods of intense time pressure, stress and workload, increasing risk to the organization. This paper aims to identify a pragmatic approach to prioritizing vulnerabilities for the ERP Manager.

Design/methodology/approach

Applying attention-based theory, a pragmatic approach is developed to prioritize an organization’s response to the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) vulnerabilities using a Classification and Regression Tree (CART).
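As a rough illustration of the approach, the following sketch trains a CART model with scikit-learn on invented NVD-style features (CVSS base score, exploit availability, exposure). The feature set, labels and data are assumptions for illustration, not the authors' actual model, which is derived from the NVD itself.

```python
# Hypothetical sketch: a CART model over NVD-style records. The
# features, labels and data below are invented; in practice they
# would be derived from NVD/CVSS fields.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 10, n),     # CVSS base score
    rng.integers(0, 2, n),     # exploit code publicly available?
    rng.integers(0, 2, n),     # affects an internet-facing service?
])
# Toy label: "patch first" when severe and exposed.
y = ((X[:, 0] > 7) & (X[:, 2] == 1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["cvss", "exploit", "exposed"]))
```

The printed tree gives the kind of interpretable prioritization rule an ERP Manager could act on directly.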

Findings

Applying a classification and regression tree (CART) to the National Institute of Standards and Technology's National Vulnerability Database identifies a prioritization that is unavailable within NIST's own categorization.

Practical implications

The ERP Manager occupies a role at the intersection of technology, functionality, centralized control and organizational data. Without CART, vulnerability handling defaults to a reactive approach, subject to overwhelming situations caused by intense time pressure, stress and workload.

Originality/value

To the best of the authors' knowledge, this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere. CART has not previously been applied to prioritizing cybersecurity vulnerabilities.

Details

Information & Computer Security, vol. 31 no. 5
Type: Research Article
ISSN: 2056-4961

Keywords

Article
Publication date: 12 October 2010

Dimitrios Patsos, Sarandis Mitropoulos and Christos Douligeris

The paper proposes looking at the automation of the incident response (IR) process, through formal, systematic and standardized methods for collection, normalization and…

Abstract

Purpose

The paper proposes looking at the automation of the incident response (IR) process, through formal, systematic and standardized methods for collection, normalization and correlation of security data (i.e. vulnerability, exploit and intrusion detection information).

Design/methodology/approach

The paper proposes the incident response intelligence system (IRIS) that models the context of discovered vulnerabilities, calculates their significance, finds and analyzes potential exploit code and defines the necessary intrusion detection signatures that combat possible attacks, using standardized techniques. It presents the IRIS architecture and operations, as well as the implementation issues.
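The correlation step that IRIS automates can be sketched as a join of vulnerability, exploit and intrusion detection records on a shared identifier such as a CVE ID. The record contents below are invented for illustration; the paper's system works over standardized security data feeds.

```python
# Hypothetical sketch of the correlation step IRIS automates: joining
# vulnerability, exploit and IDS-signature records on a shared CVE ID.
from collections import defaultdict

vulns = [{"cve": "CVE-2010-0001", "cvss": 9.3}]
exploits = [{"cve": "CVE-2010-0001", "exploit": "poc.py"}]
signatures = [{"cve": "CVE-2010-0001", "sid": 15906}]

incident_context = defaultdict(dict)
for source, records in [("vuln", vulns), ("exploit", exploits),
                        ("ids", signatures)]:
    for rec in records:
        incident_context[rec["cve"]][source] = rec

for cve, ctx in incident_context.items():
    print(cve, "->", sorted(ctx))   # which sources corroborate this CVE
```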

Findings

The paper presents detailed evaluation results obtained from real‐world application scenarios, including a survey of the users' experience, to highlight the contribution of IRIS in the area of IR.

Originality/value

The paper introduces IRIS, a system that provides detailed security information during the entire lifecycle of a security incident and facilitates decision support by providing possible attack and response paths, while deciding on the significance and magnitude of an attack with a standardized method.

Details

Information Management & Computer Security, vol. 18 no. 4
Type: Research Article
ISSN: 0968-5227

Keywords

Article
Publication date: 1 September 2005

Andrew Blyth and Paula Thomas

One of the problems facing systems administrators and security auditors is that a security test/audit can generate a vast quantity of information that needs to be stored, analysed…

Abstract

Purpose

One of the problems facing systems administrators and security auditors is that a security test/audit can generate a vast quantity of information that needs to be stored, analysed and cross‐referenced for later use. The current state‐of‐the‐art in security audit tools does not allow information from multiple different tools to be shared and integrated. This paper aims to develop an Extensible Markup Language (XML)‐based architecture that is capable of encoding information from a variety of disparate heterogeneous sources and then unifying and integrating it into a single SQL database schema.

Design/methodology/approach

The paper demonstrates how, through the application of the architecture, large quantities of security related information can be captured within a single database schema. This database can then be used to ensure that systems are conforming to an organisation's network security policy.
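A minimal sketch of the unification idea follows: parse XML-encoded tool output and load it into a single SQL schema. The XML layout and table definition here are assumptions for illustration, not the paper's actual architecture or schema.

```python
# Hypothetical sketch: encode scanner output as XML and unify it into
# one SQL schema, as the paper's architecture does at larger scale.
import sqlite3
import xml.etree.ElementTree as ET

scan_xml = """
<scan tool="examplescanner">
  <finding host="10.0.0.5" port="443" id="CVE-2005-0001" severity="high"/>
</scan>
"""

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE finding
              (tool TEXT, host TEXT, port INTEGER, cve TEXT, severity TEXT)""")

root = ET.fromstring(scan_xml)
for f in root.iter("finding"):
    db.execute("INSERT INTO finding VALUES (?, ?, ?, ?, ?)",
               (root.get("tool"), f.get("host"), int(f.get("port")),
                f.get("id"), f.get("severity")))

for row in db.execute("SELECT host, cve, severity FROM finding"):
    print(row)
```

With every tool's output mapped into the same schema, conformance to a network security policy can be checked with ordinary SQL queries.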

Findings

This type of data integration and data unification within a vulnerability assessment/security audit is currently not possible; this leads to confusion and omissions in the security audit process.

Originality/value

This paper develops a data integration and unification architecture that will allow data from multiple vulnerability assessment tools to be integrated into a single unified picture of the security state of a network of interconnected computer systems.

Details

Information Management & Computer Security, vol. 13 no. 4
Type: Research Article
ISSN: 0968-5227

Keywords

Article
Publication date: 6 June 2008

Edson dos Santos Moreira, Luciana Andréia Fondazzi Martimiano, Antonio José dos Santos Brandão and Mauro César Bernardes

This paper aims to show the difficulties involved in dealing with the quantity and diversity of security information and its lack of semantics. It seeks to propose the use of ontologies…


Abstract

Purpose

This paper aims to show the difficulties involved in dealing with the quantity and diversity of security information and its lack of semantics. It proposes the use of ontologies to tackle the problem.

Design/methodology/approach

The paper describes the general methodology to create security ontologies and illustrates the case with the design and validation of two ontologies: system vulnerabilities and security incidents.
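A small sketch of what such an ontology fragment might look like, built here with the Python `rdflib` library; the class and property names are invented for illustration and are not the paper's actual ontologies.

```python
# Hypothetical sketch: a fragment of a security ontology in rdflib.
# A vulnerability is a class, an incident is a class, and an
# "exploits" property links incidents to vulnerabilities.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

SEC = Namespace("http://example.org/security#")
g = Graph()
g.bind("sec", SEC)

g.add((SEC.Vulnerability, RDF.type, RDFS.Class))
g.add((SEC.SecurityIncident, RDF.type, RDFS.Class))
g.add((SEC.exploits, RDF.type, RDF.Property))
g.add((SEC.exploits, RDFS.domain, SEC.SecurityIncident))
g.add((SEC.exploits, RDFS.range, SEC.Vulnerability))

# An instance, e.g. mapped from a security tool's output (Snort-style).
g.add((SEC.incident42, RDF.type, SEC.SecurityIncident))
g.add((SEC.incident42, SEC.exploits, SEC["CVE-2008-0001"]))
g.add((SEC.incident42, RDFS.label, Literal("IDS alert mapped to a CVE")))

print(g.serialize(format="turtle"))
```

Because the vocabulary is shared, data structured this way by low-level management tools remains interpretable by higher-level governance tools, which is the portability/reusability property the paper demonstrates.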

Findings

Two examples of ontologies, one related to system vulnerabilities and the other related to security incidents (designed to illustrate this proposal), are described. The portability/reusability property is demonstrated, showing that information structured at lower levels (by security management tools and people) can be successfully used and understood at higher levels (by security governance tools and people).

Research limitations/implications

Work in the area of managing privacy policies, risk assessment and mitigation management, as well as CRM, business alignment and business intelligence, could be greatly eased by using an ontology to properly define the concepts involved in the area.

Practical implications

Ontologies can facilitate the interoperability among different security tools, creating a unique way to represent security data and allow the security data from any security tool (for instance, Snort) to be mapped into an ontology, such as the security incident one described in the paper. An example showing how the two ontologies could be plugged into a high level decision‐making system is described at the end.

Originality/value

Although several previous papers examined the value of using ontologies to represent security information, this one looks at their properties for a possible integrated use of management and governance tools.

Details

Information Management & Computer Security, vol. 16 no. 2
Type: Research Article
ISSN: 0968-5227

Keywords

Article
Publication date: 11 October 2011

Hannes Holm, Teodor Sommestad, Jonas Almroth and Mats Persson

The purpose of this paper is to evaluate if automated vulnerability scanning accurately identifies vulnerabilities in computer networks and if this accuracy is contingent on the…


Abstract

Purpose

The purpose of this paper is to evaluate if automated vulnerability scanning accurately identifies vulnerabilities in computer networks and if this accuracy is contingent on the platforms used.

Design/methodology/approach

Both qualitative comparisons of functionality and quantitative comparisons of false positives and false negatives are made for seven different scanners. The quantitative assessment includes data from both authenticated and unauthenticated scans. Experiments were conducted on a computer network of 28 hosts with various operating systems, services and vulnerabilities. This network was set up by a team of security researchers and professionals.
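The quantitative comparison can be sketched as set arithmetic between a scanner's reported findings and the known ground truth of the test network. The CVE identifiers below are invented, not results from the study.

```python
# Hypothetical sketch: false positives and false negatives of a
# scanner's findings against a known ground truth.
def scanner_accuracy(reported, ground_truth):
    reported, ground_truth = set(reported), set(ground_truth)
    tp = reported & ground_truth
    fp = reported - ground_truth      # flagged but not actually present
    fn = ground_truth - reported      # present but missed by the scanner
    return len(tp), len(fp), len(fn)

truth = {"CVE-2011-0001", "CVE-2011-0002", "CVE-2011-0003"}
authenticated_scan = {"CVE-2011-0001", "CVE-2011-0002", "CVE-2011-0009"}

tp, fp, fn = scanner_accuracy(authenticated_scan, truth)
print(f"TP={tp} FP={fp} FN={fn}")   # here: TP=2 FP=1 FN=1
```

Running the same comparison for authenticated and unauthenticated scans of each scanner yields the study's accuracy contrast.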

Findings

The data collected in this study show that authenticated vulnerability scanning is usable. However, automated scanning is not able to accurately identify all vulnerabilities present in computer networks. Also, scans of hosts running Windows are more accurate than scans of hosts running Linux.

Research limitations/implications

This paper focuses on the direct output of automated scans with respect to the vulnerabilities they identify. Areas such as how to interpret the results assessed by each scanner (e.g. regarding remediation guidelines) or aggregating information about individual vulnerabilities into risk measures are out of scope.

Practical implications

This paper describes how well automated vulnerability scanners perform when it comes to identifying security issues in a network. The findings suggest that a vulnerability scanner is a usable tool to have in your security toolbox, given that user credentials are available for the hosts in your network. However, manual effort is needed to complement automated scanning in order to achieve satisfactory accuracy regarding network security problems.

Originality/value

Previous studies have focused on the qualitative aspects of vulnerability assessment. This study presents a quantitative evaluation of seven of the most popular vulnerability scanners available on the market.

Details

Information Management & Computer Security, vol. 19 no. 4
Type: Research Article
ISSN: 0968-5227

Keywords

Article
Publication date: 16 October 2023

Miguel Calvo and Marta Beltrán

This paper aims to propose a new method to derive custom dynamic cyber risk metrics based on the well-known Goal, Question, Metric (GQM) approach. A framework that complements it…

Abstract

Purpose

This paper aims to propose a new method to derive custom dynamic cyber risk metrics based on the well-known Goal, Question, Metric (GQM) approach. A framework that complements it and makes it much easier to use is also proposed. Both the method and the framework have been validated within two challenging application domains: continuous risk assessment within a smart farm and risk-based adaptive security to reconfigure a Web application firewall.

Design/methodology/approach

The authors have identified a problem and provided motivation. They have developed their theory and engineered a new method and a complementary framework. They have demonstrated that the proposed method and framework work, validating them in two real use cases.

Findings

The GQM method, often applied within the software quality field, is a good basis for a method to define new tailored cyber risk metrics that meet the requirements of current application domains. A comprehensive framework that formalises possible goals and questions, translated into potential measurements, can greatly facilitate the use of this method.
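A minimal sketch of the goal-question-metric refinement as a data structure follows; the goal, question and metric names are invented and are not drawn from the paper's framework catalogue.

```python
# Hypothetical sketch: GQM-style refinement for a cyber risk metric.
# Goals refine into questions, questions into concrete measurements.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    metrics: list[str] = field(default_factory=list)


@dataclass
class Goal:
    purpose: str
    questions: list[Question] = field(default_factory=list)


goal = Goal(
    purpose="Assess exposure of the web application firewall",
    questions=[Question(
        text="How likely is a bypass in the current configuration?",
        metrics=["blocked_requests_ratio", "rule_set_age_days"],
    )],
)

for q in goal.questions:
    print(goal.purpose, "->", q.text, "->", q.metrics)
```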

Originality/value

The proposed method enables the application of the GQM approach to cyber risk measurement. The proposed framework allows new cyber risk metrics to be inferred by choosing between suggested goals and questions and measuring the relevant elements of probability and impact. The authors' approach proves generic and flexible enough to allow very different organisations with heterogeneous requirements to derive tailored metrics useful for their particular risk management processes.

Details

Information & Computer Security, vol. 32 no. 2
Type: Research Article
ISSN: 2056-4961

Keywords

Article
Publication date: 13 March 2017

Hannes Holm and Teodor Sommestad

It is often argued that the increased automation and availability of offensive cyber tools has decreased the skill and knowledge required by attackers. Some say that all it takes…

Abstract

Purpose

It is often argued that the increased automation and availability of offensive cyber tools has decreased the skill and knowledge required by attackers. Some say that all it takes to succeed with an attack is to follow some instructions and push some buttons. This paper aims to test this idea empirically through live exploits and vulnerable machines in a cyber range.

Design/methodology/approach

The experiment involved 204 vulnerable machines in a cyber range. Exploits were chosen based on the results of automated vulnerability scanning. Each exploit was executed following a set of carefully planned actions that enabled reliable tests. A total of 1,223 exploitation attempts were performed.

Findings

A mere eight of the 1,223 exploitation attempts succeeded, a success rate of roughly 0.7 per cent. All of these involved the same exploit module (ms08_067_netapi). It is concluded that server-side attacks are still too complicated for novices who lack the skill or knowledge to tune their attacks.

Originality/value

This paper presents the largest conducted test of exploit effectiveness to date. It also presents a sound method for reliable tests of exploit effectiveness (or system vulnerability).

Details

Information & Computer Security, vol. 25 no. 1
Type: Research Article
ISSN: 2056-4961

Keywords

Article
Publication date: 16 August 2022

Nisha TN and Mugdha Shailendra Kulkarni

The purpose of the study is to confirm the fact that in information security, the human factor is considered a key carrier of the majority of attacks that an information…

Abstract

Purpose

The purpose of the study is to confirm the fact that in information security, the human factor is considered a key carrier of the majority of attacks that an information system faces. Banking and other financial services are always among the most attractive targets for cyber attackers. Blind phishing or spear phishing is still one of the major contributors to all malicious activities in the e-banking sector. All the counter-mechanisms, therefore, revolve around how security-aware the customers are. To fool these mechanisms, attacks are becoming smarter and are searching for methods where human involvement diminishes to zero. Zero-click attacks are one big leap attackers are taking: they remove the requirement of human involvement in initiating attacks and move toward an era of unassisted attacks. Even though standard procedures and protocols are built into the banking system, they fail to detect these attacks, resulting in significant losses.

Design/methodology/approach

This paper presents a conceptual review of this emerging security concept and its implications for the e-banking sector. The methodology uses review papers, articles and white papers to derive a theoretical model. A detailed analysis of unassisted attacks from 2010 to 2022 is considered.

Findings

This research deliberates on the methodologies of zero-click attacks and gives a detailed analysis of attack vectors and their exploits. It also identifies the likely attacks on e-banking that these vulnerabilities can trigger.

Originality/value

The key contribution is toward the early detection of zero-click attacks, suggesting countermeasures and reducing both the likelihood of these attacks and their financial impact.

Details

Journal of Financial Crime, vol. 30 no. 5
Type: Research Article
ISSN: 1359-0790

Keywords

Article
Publication date: 4 September 2007

Klaus Möller

Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing…

Abstract

Purpose

Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex in both organisational and technical areas. This also extends into the area of security and incident response, where established academic computer security incident response teams (CSIRTs) face new challenges arising from the use of grids. This paper aims to outline some of the organisational and technical challenges encountered by the German academic CSIRT, DFN‐CERT, while extending and adapting its services to grid environments during the D‐Grid project.

Design/methodology/approach

Most national research and education networks (NRENs) already have computer security incident response teams to respond to security incidents involving computers connected to the networks. This paper considers how one established NREN CSIRT is dealing with the new challenges arising from grid computing.

Findings

The paper finds that the D‐Grid Initiative is an ongoing project and that the establishment of CSIRT services for grids is still at an early stage. Establishing communication channels to the various grid communities, as well as gaining knowledge about grid software, has required DFN‐CERT to make changes, even though the basic principles of CSIRT operation remain the same.

Originality/value

The D‐Grid project aims to establish a common grid infrastructure that can be used by other scientific domains. The project consists of six community projects and one integration project (DGI – D‐Grid Integration). The DGI project will develop the basic infrastructure, while the community projects will build on this infrastructure and enhance it for the specific needs of their research areas. At the initial stage of the DGI project, the idea of a central CSIRT for all grids in Germany was seen as an advantage over having a CSIRT for each grid project, which would have replicated efforts and thus wasted resources. This paper gives an overview about the organisational and technical challenges and experiences DFN‐CERT has encountered while setting up a CSIRT for the D‐Grid communities.

Details

Campus-Wide Information Systems, vol. 24 no. 4
Type: Research Article
ISSN: 1065-0741

Keywords
