Search results

1 – 10 of over 8000
Article
Publication date: 5 March 2018

Baidyanath Biswas and Arunabha Mukhopadhyay

Abstract

Purpose

Malicious attackers frequently breach information systems by exploiting disclosed software vulnerabilities. Knowledge of how these vulnerabilities evolve over time is essential when organisations decide which software products to use. The purpose of this paper is to propose a novel G-RAM framework for business organisations to assess and mitigate the risks arising from software vulnerabilities.

Design/methodology/approach

The G-RAM risk assessment module uses GARCH to model vulnerability growth. Using data from the National Vulnerability Database spanning 1999-2016, the authors estimate the model parameters and validate the prediction accuracy. Next, the G-RAM risk mitigation module designs an optimal software portfolio using Markowitz’s mean-variance optimisation for a given IT budget and risk preference.
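
As a rough illustration of the assessment step, the sketch below fits a GARCH(1,1) model to a monthly NVD disclosure-count series using the Python arch package; the file name, column name and growth-rate transform are illustrative assumptions, not the authors’ exact pipeline.

# Minimal sketch of the assessment step: fit a GARCH(1,1) model to a
# monthly NVD disclosure-count series with the Python `arch` package.
# The file name, column name and growth-rate transform are illustrative
# assumptions, not the authors' exact pipeline.
import pandas as pd
from arch import arch_model

counts = pd.read_csv("nvd_monthly_counts.csv", index_col=0, parse_dates=True)
growth = counts["disclosures"].pct_change().dropna() * 100  # growth rate, in %

model = arch_model(growth, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())

# One-step-ahead conditional variance: the volatility-based risk estimate
forecast = result.forecast(horizon=1)
print(forecast.variance.iloc[-1])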

Findings

Based on an empirical analysis, this study establishes that vulnerability growth follows a non-linear, time-dependent, heteroskedastic pattern. Further, efficient software combinations are proposed that optimise correlated risk. The study also reports empirical evidence of a shift in the efficient frontier of software configurations over time.

Research limitations/implications

The existing assumption of independent and identically distributed residuals after vulnerability function fitting is shown to be incorrect. This study applies the GARCH technique to measure volatility clustering and mean reversion. The risk (or volatility) represented by the instantaneous variance depends on the immediately preceding variance, as well as on the unconditional variance of the entire vulnerability growth process.
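
Read as a standard GARCH(1,1) specification (a sketch consistent with the description above, not necessarily the authors’ exact parameterisation), the conditional variance is

\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
\qquad \omega > 0,\ \alpha, \beta \ge 0,\ \alpha + \beta < 1,

where \varepsilon_{t-1} is the previous period’s shock and \bar{\sigma}^2 = \omega/(1 - \alpha - \beta) is the unconditional variance of the whole process.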

Practical implications

The volatility-based estimation of vulnerability growth is a risk assessment mechanism; the portfolio analysis then acts as a risk mitigation activity. Results from this study can inform the patch management cycle needed for each software product – individual or group patching. G-RAM also ranks software into a 2×2 risk-return matrix to ensure that correlated risk is diversified. Finally, the paper helps business firms decide what to purchase and what to avoid.

Originality/value

Contrary to existing techniques, which rely on either statistical distributions or linear econometric methods, this study establishes that vulnerability growth follows a non-linear, time-dependent, heteroskedastic pattern. The paper also links software risk assessment to IT governance and strategic business objectives. To the authors’ knowledge, this is the first study in IT security to examine and forecast vulnerability volatility and, further, to design risk-optimal software portfolios.

Details

Journal of Enterprise Information Management, vol. 31 no. 2
Type: Research Article
ISSN: 1741-0398

Keywords

Article
Publication date: 22 October 2019

Navneet Bhatt, Adarsh Anand and Deepti Aggrawal

Abstract

Purpose

The purpose of this paper is to provide a mathematical framework to optimally allocate resources required for the discovery of vulnerabilities pertaining to different severity risk levels.

Design/methodology/approach

Different sets of optimization problems have been formulated and, using a dynamic programming approach, a sequence of recursive functions has been constructed for the optimal allocation of resources used to discover vulnerabilities of different severity scores. The Mozilla Thunderbird email client data set has been used for the empirical evaluation, working with vulnerabilities of different severities.
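
As a hedged sketch of the idea (not the authors’ model), the snippet below allocates a discrete budget across severity classes by a dynamic-programming recursion; the diminishing-returns gain functions are hypothetical stand-ins for the paper’s vulnerability discovery functions.

# Hedged sketch of the dynamic-programming idea: allocate a discrete budget
# across severity classes to maximise total expected discoveries. The
# diminishing-returns gain functions are hypothetical stand-ins for the
# paper's vulnerability discovery functions.
import math
from functools import lru_cache

BUDGET = 100  # total resources, in discrete units
GAINS = {     # expected discoveries per units spent (illustrative only)
    "critical": lambda x: 40 * math.log1p(x),
    "high":     lambda x: 30 * math.log1p(x),
    "medium":   lambda x: 20 * math.log1p(x),
    "low":      lambda x: 10 * math.log1p(x),
}
CLASSES = list(GAINS)

@lru_cache(maxsize=None)
def best(i: int, budget: int) -> float:
    """Maximum total gain allocating `budget` units among classes i..end."""
    if i == len(CLASSES):
        return 0.0
    gain = GAINS[CLASSES[i]]
    return max(gain(x) + best(i + 1, budget - x) for x in range(budget + 1))

print(f"optimal total gain: {best(0, BUDGET):.2f}")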

Findings

Given the impact associated with a vulnerability, critical and high severity vulnerabilities must be patched promptly, and hence a larger amount of funds has to be allocated to their discovery. Nevertheless, a low or medium risk vulnerability might also get exploited, so its discovery is as crucial as that of higher severity vulnerabilities. The framework provides a diversified allocation of funds as per the requirements of a software manager and aims at significantly improving vulnerability discovery.

Practical implications

The findings of this research may enable software managers to adequately assign resources for managing vulnerability discovery. They may also help in estimating the funds required for bug bounty programs that compensate security reporters, based on the potential number of vulnerabilities present in the software.

Originality/value

Much attention has been focused on vulnerability discovery modeling and the risk associated with security flaws. But, to the best of the authors’ knowledge, no study incorporates the optimal allocation of resources with respect to vulnerabilities of different severity scores. Hence, this paper provides a building block for future research.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 6/7
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 1 March 2013

Ting Chen, Xiao‐song Zhang, Xu Xiao, Yue Wu, Chun‐xiang Xu and Hong‐tian Zhao

Abstract

Purpose

Software vulnerabilities have long been the greatest threat to the software industry. Many detection techniques have been developed to address this kind of issue, such as fuzzing, but fuzz testing alone is not good enough: fuzzing only alters the program’s input randomly and does not consider the basic semantics of the target software. The purpose of this paper is to introduce a new vulnerability exploration system, called “SEVE”, that explores the target software more deeply and generates more test cases with greater accuracy.

Design/methodology/approach

Symbolic execution is the core technique of SEVE. The user supplies a standard input, and the SEVE system records the execution path, alters its critical branches, and generates a completely different test case that makes the software under test execute a different path. In this way, potential bugs or defects, and even exploitable vulnerabilities, can be discovered. To alleviate path explosion, the authors propose a heuristic method and function abstraction, which improve the performance of SEVE even further.
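
A toy illustration of the branch-negation step at the heart of this approach (not the authors’ SEVE implementation) can be written with the z3 solver: collect the path condition from one concrete run, negate the last branch and solve for an input that drives execution down the untaken path.

# Toy illustration of the branch-negation step at the core of symbolic
# execution (not the authors' SEVE implementation). Negate the last branch
# of a recorded path condition and solve for an input that drives the
# program down the untaken path.
from z3 import And, Int, Not, Solver, sat

x = Int("x")  # symbolic stand-in for one input field

# Suppose the recorded run took the branches (x > 10) and then (x < 100)
path_condition = [x > 10, x < 100]

# Keep the prefix, flip the final branch
flipped = And(*path_condition[:-1], Not(path_condition[-1]))

solver = Solver()
solver.add(flipped)
if solver.check() == sat:
    print("new test case: x =", solver.model()[x])  # some x >= 100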

Findings

The authors evaluated the SEVE system to record critical data about its efficiency and performance, testing real-world vulnerabilities that affect file-input programs. The results show that SEVE not only re-creates the discovery of these vulnerabilities, but does so at a higher performance level than traditional techniques.

Originality/value

The paper proposes a new vulnerability exploration system, called “SEVE”, that explores the target software and generates test cases automatically, along with a heuristic method and function abstraction to handle path explosion.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Book part
Publication date: 8 July 2010

Charles Cullinan, Steve G. Sutton and Vicky Arnold

Abstract

During the past decade, enterprise resource planning (ERP) system implementations have grown exponentially, first within large and then within small- and medium-sized enterprises. Contemporary implementations, often through application service providers (ASPs), increase already existing pressures to adopt the embedded “best practices” that have been incorporated into the ERP software. The result is the rapid spread of generic business processes enabled through one of only a handful of leading ERP packages. This chapter draws on the extant research on biodiversity, with its focus on the negative effects of monoculture strategies – that is, the focus on a single crop (system) versus a diversity of crops (systems). The biodiversity research establishes a clear pattern of deleterious effects resulting from the vulnerabilities of monoculture strategies. These patterns are mirrored in the ERP environment, where vulnerabilities loom from the diminution of diverse business processes, limited adaptability to business environment changes given technology-driven/enabled processes, and increased susceptibility to widespread parasite damage through cyber-attacks. The implications of the study raise questions as to the sustainability of accounting systems, the business environment, and society as a whole given the rapid implementation of sterilized business processes and uniformly vulnerable enterprise software.

Details

Advances in Accounting Behavioral Research
Type: Book
ISBN: 978-0-85724-137-5

Book part
Publication date: 29 May 2023

Divya Nair and Neeta Mhavan

Abstract

A zero-day vulnerability is a complimentary ticket for attackers to gain entry into a network. Thus, there is a need to devise appropriate threat detection systems and to establish an innovative, safe solution that prevents unauthorised intrusions and defends the various components of cybersecurity. We present a survey of recent Intrusion Detection Systems (IDS) for detecting zero-day vulnerabilities along the following dimensions: types of cyber-attacks, datasets used and kinds of network detection systems.

Purpose: The study focuses on presenting an exhaustive review of the effectiveness of recent IDS with respect to zero-day vulnerabilities.

Methodology: Systematic exploration was done at IEEE, Elsevier, Springer, RAID, ESORICS, Google Scholar, and other relevant platforms for studies published in English between 2015 and 2021, using keywords and combinations of relevant terms.

Findings: It is possible to train IDS for zero-day attacks. Existing IDS have strengths that make them capable of effectively detecting zero-day attacks, but they display certain limitations that reduce their credibility. Novel strategies like deep learning, machine learning, fuzzing, runtime verification and Hidden Markov Models can be used to design IDS that detect malicious traffic.

Implication: This paper explores and highlights the advantages and limitations of existing IDS, enabling the selection of the best possible IDS to protect a system. Moreover, the comparison between signature-based and anomaly-based IDS suggests that one viable approach to accurately detecting zero-day vulnerabilities would be the integration of a hybrid mechanism.

Details

Smart Analytics, Artificial Intelligence and Sustainable Performance Management in a Global Digitalised Economy
Type: Book
ISBN: 978-1-80382-555-7

Keywords

Article
Publication date: 6 June 2016

Zhengbiao Han, Shuiqing Huang, Huan Li and Ni Ren

Abstract

Purpose

This paper uses the GB/T20984-2007 multiplicative method to assess the information security risk of a typical digital library in compliance with the principles of ISO 27000. The purpose of this paper is to test the feasibility of this method and provide suggestions for improving the information security of the digital library.

Design/methodology/approach

This paper adopts convenience sampling to select respondents. Assets are assessed by analyzing digital library-related business and functions through a questionnaire that collects data to determine asset types and the importance of asset attributes. A five-point Likert scale questionnaire is used to identify threat possibility and its influence on the assets. The 12 respondents include directors and senior network technicians from the editorial department, comic library, children’s library, counseling department and the learning promotion centre. Three different Guttman scale questionnaires, tool testing and on-site inspection are combined to identify and assess vulnerabilities: separate Guttman scale questionnaires were prepared for management personnel, technical personnel and general librarians. In all, 15 management librarians, 7 technical librarians and 72 ordinary librarians answered the vulnerability questionnaire. On-site inspection was conducted on the basis of the 11 control domains of ISO 27002. Vulnerabilities were scanned using the remote security evaluation system NSFOCUS; the scan covered ten IP sections and a total of 81 hosts.
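
For illustration, a multiplicative risk calculation in the spirit of GB/T20984-2007 might look like the sketch below; the plain product, the 1-5 scales and the level bands are assumptions for illustration, not the standard’s exact formula.

# Illustrative sketch of a multiplicative risk calculation in the spirit
# of GB/T20984-2007: risk grows with asset value, threat likelihood and
# vulnerability severity. The plain product, 1-5 scales and level bands
# are assumptions for illustration, not the standard's exact formula.
def risk_score(asset_value: int, threat_likelihood: int, vuln_severity: int) -> int:
    """All inputs on a 1-5 scale; the product ranges from 1 to 125."""
    for v in (asset_value, threat_likelihood, vuln_severity):
        if not 1 <= v <= 5:
            raise ValueError("scores must be on a 1-5 scale")
    return asset_value * threat_likelihood * vuln_severity

def risk_level(score: int) -> str:
    # Hypothetical banding of the product into risk levels
    if score >= 80: return "very high"
    if score >= 48: return "high"
    if score >= 20: return "medium"
    return "low"

print(risk_level(risk_score(asset_value=4, threat_likelihood=4, vuln_severity=4)))  # high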

Findings

Overall, 2,792 risk scores were obtained. Among them, 282 items (10.1 per cent of the total) reached the high risk level and 2 (0.1 per cent) reached the very high risk level. High-risk items involved 26 threat types (44.1 per cent of all threat types) and 13 vulnerability types (22.1 per cent of all vulnerability types). The evaluation revealed that this digital library faces seven major hidden dangers to its information security. The assessment results were well accepted by staff members of the digital library, which testifies to the applicability of this method to a Chinese digital library.

Research limitations/implications

This paper is only a case study of a typical Chinese digital library using a digital library information security assessment method. More case-based explorations are necessary to prove the feasibility of the assessment strategy proposed in this study.

Originality/value

Based on a review of recent literature, the authors found that very few researchers have made efforts to develop methods for calculating indicators for digital library information security risk assessment. On the basis of ISO 27000 and other related information security standards, this case study proposes an operable method of digital library information security risk assessment and uses it to assess the information security of a typical Chinese digital library. The study can offer insights for formulating a digital library information security risk assessment scale.

Details

The Electronic Library, vol. 34 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 15 September 2023

Richard G. Mathieu and Alan E. Turovlin

Abstract

Purpose

Cyber risk has increased significantly over the past 20 years. In many organizations, data and operations are managed through a complex technology stack underpinned by an Enterprise Resource Planning (ERP) system such as SAP (Systemanalyse Programmentwicklung). The ERP environment by itself can be overwhelming for a typical ERP Manager; coupled with the cybersecurity issues that arise, it creates periods of intense time pressure, stress and workload, increasing risk to the organization. This paper aims to identify a pragmatic approach to prioritizing vulnerabilities for the ERP Manager.

Design/methodology/approach

Applying attention-based theory, a pragmatic approach is developed to prioritize an organization’s response to the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) vulnerabilities using a Classification and Regression Tree (CART).
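
A minimal sketch of such a CART step, using scikit-learn’s DecisionTreeClassifier on hypothetical CVSS-style features extracted from the NVD (the file, feature set and label are illustrative assumptions, not the authors’ data):

# Minimal sketch of the CART step with scikit-learn: a decision tree over
# hypothetical CVSS-style features extracted from the NVD. The file,
# feature set and "patch_first" label are illustrative assumptions, not
# the authors' data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("nvd_cvss_features.csv")  # hypothetical NVD extract
X = df[["base_score", "exploitability_score", "impact_score"]]
y = df["patch_first"]  # hypothetical binary priority label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # a CART-style tree
tree.fit(X_train, y_train)

print(f"holdout accuracy: {tree.score(X_test, y_test):.2f}")
print(export_text(tree, feature_names=list(X.columns)))  # readable priority rules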

Findings

Applying CART to the NIST National Vulnerability Database identifies a prioritization that is unavailable within NIST’s own categorization.

Practical implications

The ERP Manager’s role sits at the intersection of technology, functionality, centralized control and organizational data. Without CART, vulnerability handling is left to a reactive approach, subject to overwhelming situations caused by intense time pressure, stress and workload.

Originality/value

To the best of the authors’ knowledge, this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere. CART has not previously been applied to prioritizing cybersecurity vulnerabilities.

Details

Information & Computer Security, vol. 31 no. 5
Type: Research Article
ISSN: 2056-4961

Keywords

Article
Publication date: 1 June 2012

Teodor Sommestad, Hannes Holm and Mathias Ekstedt

Abstract

Purpose

The purpose of this paper is to identify the importance of the factors that influence the success rate of remote arbitrary code execution attacks, that is, attacks which use software vulnerabilities to execute the attacker’s own code on targeted machines. Both attacks against servers and attacks against clients are studied.

Design/methodology/approach

The success rates of attacks are assessed for 24 scenarios: 16 for server-side attacks and eight for client-side attacks. The assessments are elicited from domain experts and synthesized using Cooke’s classical method, an established method for weighting experts’ judgments. The variables included in the study were selected based on the literature, a pilot study and interviews with domain experts.
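
In simplified form, Cooke’s classical method weights each expert by calibration times information, zeroes out experts below a calibration cutoff, and pools estimates linearly; the sketch below uses made-up scores and estimates purely for illustration.

# Simplified sketch of Cooke's classical method: each expert is weighted
# by calibration x information, experts below a calibration cutoff get
# zero weight, and point estimates are pooled linearly. All numbers are
# made up for illustration.
experts = {
    # name: (calibration, information, estimated success rate in %)
    "A": (0.60, 1.2, 55.0),
    "B": (0.05, 2.0, 30.0),  # informative but poorly calibrated
    "C": (0.45, 0.9, 62.0),
}
ALPHA = 0.10  # calibration cutoff

raw = {name: (cal * info if cal >= ALPHA else 0.0)
       for name, (cal, info, _) in experts.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

pooled = sum(weights[name] * est for name, (_, _, est) in experts.items())
print(f"pooled success-rate estimate: {pooled:.1f}%")  # ~57.5%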

Findings

Depending on the scenario in question, the expected success rate varies between 15 and 67 percent for server‐side attacks and between 43 and 67 percent for client‐side attacks. Based on these scenarios, the influence of different protective measures is identified.

Practical implications

The results of this study offer guidance to decision makers on how to best secure their assets against remote code execution attacks. These results also indicate the overall risk posed by this type of attack.

Originality/value

Attacks that use software vulnerabilities to execute code on targeted machines are common and pose a serious risk to most enterprises. However, there are no quantitative data on how difficult such attacks are to execute or on how effective security measures are against them. The paper provides such data using a structured technique to combine expert judgments.

Expert briefing
Publication date: 24 March 2021

Microsoft attributed the first hack exploiting these flaws to a Chinese state-sponsored group. These flaws were subsequently leaked online, and organisations that have failed to…

Details

DOI: 10.1108/OXAN-DB260397

ISSN: 2633-304X

Keywords

Article
Publication date: 1 March 2013

Zhi Liu, Xiaosong Zhang, Yue Wu and Ting Chen

Abstract

Purpose

The purpose of this paper is to propose an approach to detect Indirect Memory-Corruption Exploits (IMCEs), which are often caused by integer conversion errors, at runtime on binary code. Real-world attacks were evaluated in the experiments.

Design/methodology/approach

Current dynamic analyses detect attacks by enforcing low-level policies, which can only catch control-flow hijacking attacks. The proposed approach detects IMCEs by enforcing a high-level policy using dynamic taint analysis. Unlike low-level policies enforced at the instruction level, the authors’ policy is imposed on memory operation routines. The authors implemented a fine-grained taint analysis system with accurate taint propagation for detection.
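
A toy model of the underlying taint-tracking idea (in Python, far removed from the authors’ binary-level system): a taint tag is carried from untrusted input through an integer conversion, and the high-level policy is checked at the memory-allocation routine rather than at each instruction.

# Toy model of the taint-tracking idea in Python, far removed from the
# authors' binary-level system: a taint tag is carried from untrusted
# input through an integer conversion, and the high-level policy is
# checked at the memory-allocation routine, not at each instruction.
class Tainted:
    def __init__(self, value: int, tainted: bool = True):
        self.value, self.tainted = value, tainted

def to_int32(x: int) -> int:
    """Simulate a C signed 32-bit conversion (may wrap negative)."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

def checked_alloc(size: Tainted) -> bytearray:
    """High-level policy: refuse a tainted size that an integer
    conversion has made negative (the IMCE pattern)."""
    if size.tainted and size.value < 0:
        raise RuntimeError("blocked: tainted, wrapped size reached allocator")
    return bytearray(size.value)

user_len = Tainted(2**31 + 16)  # attacker-controlled length, tainted
converted = Tainted(to_int32(user_len.value), user_len.tainted)  # wraps negative
try:
    checked_alloc(converted)
except RuntimeError as err:
    print(err)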

Findings

Conversion errors are common and most of them are legitimate. Taint analysis with a high-level policy can accurately block IMCEs but produces false positives. Proper design of the data structures that maintain taint tags can greatly reduce overhead.

Originality/value

This paper proposes an approach to block IMCEs with high-level policy enforcement using taint analysis. It has very few false negatives, though it still causes some false positives. The authors made several implementation contributions to strengthen accuracy and performance.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords
