Search results

1 – 10 of over 8000
Article
Publication date: 22 November 2011

Helen Kapodistria, Sarandis Mitropoulos and Christos Douligeris


Abstract

Purpose

The purpose of this paper is to introduce a new tool that detects, prevents and records common web attacks, which mainly result in information leakage from web applications, using pattern recognition. It is a cross-platform application: it depends on neither the operating system nor the web server. It offers a flexible attack search engine that scans HTTP requests and responses while a web page is served, without affecting web server performance.

Design/methodology/approach

The paper starts with a study of the best-known web vulnerabilities and the ways they can be exploited. It then focuses on the web attacks based on input validation, which are the ones the new tool detects through pattern recognition. The tool acts as a proxy server with a simple GUI for administration purposes. Patterns can be detected in both HTTP requests and responses in an extensible and manageable way.
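
The abstract does not give the tool's pattern syntax; purely as an illustration of the kind of pattern-based scanning it describes, a minimal sketch follows (the pattern set, the scan_payload helper and the example payloads are assumptions, not the authors' implementation).

```python
import re

# Hypothetical attack patterns of the kind an input-validation scanner might use;
# real web application firewall rule sets are far more extensive.
ATTACK_PATTERNS = {
    "sql_injection": re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "xss": re.compile(r"<\s*script[^>]*>", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def scan_payload(payload):
    """Return the names of all attack patterns found in an HTTP request or response body."""
    return [name for name, pattern in ATTACK_PATTERNS.items() if pattern.search(payload)]

# A request parameter carrying a classic SQL injection probe.
print(scan_payload("username=admin' OR 1=1 --"))                         # ['sql_injection']
# Scanning responses as well is what allows stored XSS to be caught on the way out.
print(scan_payload("<p>comment:</p><script>document.cookie</script>"))   # ['xss']
```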

Findings

The new tool was compared to dotDefender, a commercial web application firewall, and ModSecurity, a widely used open-source application firewall, using over 200 attack patterns. The new tool produced satisfactory results, with a high success rate in every attack category examined. Results for stored XSS could not be obtained for the other tools, since they are unable to search for and detect such attacks in HTTP responses. Because the new tool is highly extensible, it lends itself well to future work.

Originality/value

This paper introduces a new web server plug-in, which offers some advanced web application firewall features together with a flexible attack search engine that scans HTTP requests and responses. By scanning HTTP responses, attacks such as stored XSS can be detected, a capability not found in other web application firewalls.

Details

Information Management & Computer Security, vol. 19 no. 5
Type: Research Article
ISSN: 0968-5227


Article
Publication date: 4 April 2008

C.I. Ezeife, Jingyu Dong and A.K. Aggarwal


Abstract

Purpose

The purpose of this paper is to propose a web intrusion detection system (IDS), SensorWebIDS, which applies data mining and both anomaly and misuse intrusion detection to the web environment.

Design/methodology/approach

SensorWebIDS has three main components: the network sensor, which extracts parameters from real-time network traffic; the log digger, which extracts parameters from web log files; and the audit engine, which analyzes all web request parameters for intrusion detection. To combat web intrusions such as buffer-overflow attacks, SensorWebIDS applies the empirical rule of standard deviation (δ) theory, that 99.7 percent of data lie within 3δ of the mean, to calculate the maximum plausible length of each input parameter. Association rule mining is employed to mine frequent parameter lists and their sequential order in order to identify intrusions.
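
As an illustration only of the statistical rule described above, the sketch below derives a per-parameter maximum-length threshold as the mean plus three standard deviations; the function name and training data are hypothetical, not SensorWebIDS code.

```python
from statistics import mean, stdev

def max_length_threshold(observed_lengths):
    """Estimate the maximum legitimate length of an input parameter as mean + 3 * sigma.

    By the empirical rule, roughly 99.7 per cent of normally distributed values fall
    within three standard deviations of the mean, so a much longer value (for example,
    a buffer-overflow payload) is flagged as suspicious.
    """
    return mean(observed_lengths) + 3 * stdev(observed_lengths)

# Lengths of a 'username' parameter observed in clean training traffic (illustrative data).
training_lengths = [8, 10, 9, 12, 7, 11, 10, 9]
threshold = max_length_threshold(training_lengths)

incoming_value = "A" * 500  # an unusually long submission
if len(incoming_value) > threshold:
    print(f"possible buffer overflow: length {len(incoming_value)} exceeds {threshold:.1f}")
```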

Findings

Experiments show that the proposed system has a higher detection rate for web intrusions than SNORT and ModSecurity for intrusion classes such as cross-site scripting, SQL injection, session hijacking, cookie poisoning, denial of service, buffer overflow and probe attacks.

Research limitations/implications

Future work may extend the system to detect intrusions implanted with hacking tools rather than through straight HTTP requests, or embedded in non-basic resources such as multimedia files; track illegal web users by their prior web-access sequences; implement minimum and maximum values for integer data; and automate the pre-processing of training data so that it is clean and free of intrusions, ensuring accurate detection results.

Practical implications

Web service security, as a branch of network security, is becoming more important as more business and social activities move online.

Originality/value

Existing network IDSs are not directly applicable to web intrusion detection, because they mostly operate at the lower (network/transport) layers of the network model, while web services run at the higher (application) layer. The proposed SensorWebIDS detects XSS and SQL injection attacks through signatures, while other types of attacks are detected using association rule mining and statistics to compute frequent parameter list orders and their maximum value lengths.

Details

International Journal of Web Information Systems, vol. 4 no. 1
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 1 February 2006

Yang Xiang and Wanlei Zhou


Abstract

In the last few years, a number of highly publicized Distributed Denial of Service (DDoS) attacks against high-profile government and commercial websites have made people aware of the importance of providing data and service security to users. A DDoS attack is an availability attack, characterized by an explicit attempt by an attacker to prevent legitimate users of a service from using the desired resources. This paper introduces the vulnerability of web applications to DDoS attacks and presents an active distributed defense system that deploys a mixture of sub-systems to protect web applications from such attacks. Simulation experiments show that the system is effective: it can defend web applications against attacks, avoid overall network congestion and provide more resources to legitimate web users.

Details

International Journal of Web Information Systems, vol. 2 no. 1
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 10 November 2014

Ammar Alazab, Michael Hobbs, Jemal Abawajy, Ansam Khraisat and Mamoun Alazab


Abstract

Purpose

The purpose of this paper is to mitigate vulnerabilities in web applications; security detection and prevention are the most important mechanisms for doing so. However, most existing research focuses on how to prevent an attack at the web application layer, with less work dedicated to setting up a response action if a possible attack occurs.

Design/methodology/approach

The paper proposes a combination of a Signature-based Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS), namely the Intelligent Intrusion Detection and Prevention System (IIDPS).

Findings

Evaluation of the new system showed better results in terms of detection efficiency and false alarm rate. This demonstrates the value of a direct response action in an intrusion detection system.

Research limitations/implications

The main limitation is the limited data available for evaluation.

Originality/value

The contributions of this paper are fourfold: first, it addresses the problem of web application vulnerabilities; second, it proposes a combination of an SIDS and an AIDS, namely the IIDPS; third, it presents a novel approach that connects the IIDPS to a response action using fuzzy logic; fourth, it uses risk assessment to determine an appropriate response action for each attack event. The combined system provides better performance for the intrusion detection system and makes detection and prevention more effective.
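
The abstract does not describe the fuzzy rule base; the sketch below is only a stand-in for the idea of combining a signature verdict with an anomaly score, assessing risk and selecting a response action. All thresholds, levels and actions are assumptions, not the IIDPS design.

```python
def assess_risk(signature_match, anomaly_score):
    """Combine a signature verdict (SIDS) with an anomaly score (AIDS) into a coarse risk level.

    A real fuzzy-logic engine would use membership functions and a rule base;
    simple thresholds stand in for that machinery here.
    """
    if signature_match and anomaly_score > 0.7:
        return "high"
    if signature_match or anomaly_score > 0.5:
        return "medium"
    return "low"

# Hypothetical mapping from assessed risk to a direct response action.
RESPONSE_ACTIONS = {
    "high": "block the source IP and terminate the session",
    "medium": "challenge the request and alert the administrator",
    "low": "log the event only",
}

risk = assess_risk(signature_match=True, anomaly_score=0.82)
print(risk, "->", RESPONSE_ACTIONS[risk])  # high -> block the source IP and terminate the session
```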

Details

Information Management & Computer Security, vol. 22 no. 5
Type: Research Article
ISSN: 0968-5227


Article
Publication date: 24 August 2012

Ruxia Ma, Xiaofeng Meng and Zhongyuan Wang


Abstract

Purpose

The Web is the largest repository of information. Personal information is usually scattered across various pages of different websites, and search engines have made it easier to find. An attacker may collect a user's scattered information via search engines and infer private information from it. The authors call this kind of privacy attack a “Privacy Inference Attack via Search Engines”. The purpose of this paper is to provide a user-side automatic detection service that detects the risk of privacy leakage before personal information is published.

Design/methodology/approach

In this paper, the authors propose a user-side automatic detection service. In it, they construct a user information correlation (UICA) graph to model the associations between pieces of user information returned by search engines. The privacy inference attack is mapped onto the decision problem of searching for the privacy-inferring path with maximal probability in the UICA graph, which is proved to be a nondeterministic polynomial time (NP)-complete problem by a two-step reduction. A Privacy Leakage Detection Probability (PLD-Probability) algorithm is proposed to find the privacy-inferring path: it combines two significant factors that influence the vertices' probabilities in the UICA graph and uses a greedy algorithm to find the path.
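
The UICA graph below and its edge probabilities are invented for illustration; the sketch shows only the general shape of a greedy search for a high-probability inferring path, not the paper's PLD-Probability algorithm.

```python
# Each vertex is a piece of user information; edges carry the (assumed) probability
# that one item can be correlated to the next via search-engine results.
uica_graph = {
    "name": {"employer": 0.9, "city": 0.6},
    "employer": {"work_email": 0.8},
    "city": {"home_address": 0.4},
    "work_email": {"phone_number": 0.7},
    "home_address": {},
    "phone_number": {},
}

def greedy_inferring_path(graph, start):
    """Greedily extend the path along the highest-probability edge at each step."""
    path, prob, current = [start], 1.0, start
    while graph.get(current):
        nxt, edge_prob = max(graph[current].items(), key=lambda item: item[1])
        path.append(nxt)
        prob *= edge_prob  # path probability as a product of edge probabilities
        current = nxt
    return path, prob

path, prob = greedy_inferring_path(uica_graph, "name")
print(path, round(prob, 3))  # ['name', 'employer', 'work_email', 'phone_number'] 0.504
```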

Findings

The authors reveal that the privacy inference attack via search engines is a serious problem in real life. In this paper, a user-side automatic detection service is proposed to detect the risk of privacy inference. The authors carry out three kinds of experiments to evaluate the seriousness of the privacy leakage problem and the performance of the proposed methods. The results show that the algorithm behind the service is reasonable and effective.

Originality/value

The paper introduces a new family of privacy attacks on the Web, the privacy inference attack via search engines, and presents a model describing the process and principles of such attacks. A user-side automatic detection service is proposed to detect privacy inference before personal information is published. Within this service, the authors propose a Privacy Leakage Detection Probability (PLD-Probability) algorithm. Extensive experiments show these methods to be reasonable and effective.

Details

International Journal of Web Information Systems, vol. 8 no. 3
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 5 October 2015

Vijaya Geeta Dharmavaram


Abstract

Purpose

The purpose of the paper is to assess the precautionary measures adopted by popular websites in India and thus find out how vulnerable Indian Web users are to this form of attack. Today almost all work, including monetary transactions, is done through the Internet. This holds true even for developing countries like India, making secure browsing a necessity. However, an attack called “clickjacking” can help Internet scammers carry out fraudulent tasks. Even though researchers have proposed different techniques to counter this threat, how effectively they are deployed in practice remains an open question.

Design/methodology/approach

To carry out the study, the top 100 Indian and global websites in India were identified and divided into static and dynamic websites based on the level of interaction they offer to users. These websites were checked to see whether they offer any basic protection against clickjacking and, if so, which defence technique is used. A comparison between Indian and global websites shows where India stands in terms of providing security.
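
The abstract does not name the specific checks, but the standard anti-clickjacking defences a site can advertise are the X-Frame-Options header and the frame-ancestors directive of Content-Security-Policy; a minimal check of that kind might look like the following sketch (using the Python requests library; the site list is illustrative, not the study's).

```python
import requests

def clickjacking_defences(url):
    """Report which standard anti-clickjacking defences a site advertises in its response headers."""
    headers = requests.get(url, timeout=10).headers
    found = []
    if "X-Frame-Options" in headers:
        found.append(f"X-Frame-Options: {headers['X-Frame-Options']}")
    if "frame-ancestors" in headers.get("Content-Security-Policy", ""):
        found.append("Content-Security-Policy frame-ancestors directive")
    return found

# Illustrative usage against arbitrary sites.
for site in ["https://example.com", "https://example.org"]:
    defences = clickjacking_defences(site)
    print(site, "->", defences if defences else "no framing protection advertised")
```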

Findings

The results show that 86 per cent of Indian websites offer no protection against clickjacking, in contrast to 51 per cent of global websites. For dynamic websites, only 18 per cent of Indian websites offer some form of protection, compared to 63 per cent of global websites. This is alarming, as dynamic websites such as social networking and banking sites are the likely targets of clickjacking, with serious consequences such as identity and monetary theft.

Originality/value

This paper presents the vulnerability of Indian websites to clickjacking, which has not been addressed before. It will help create awareness among Indian Web developers as well as the general public, so that precautionary measures can be adopted.

Details

Journal of Money Laundering Control, vol. 18 no. 4
Type: Research Article
ISSN: 1368-5201


Article
Publication date: 21 March 2023

Abel Yeboah-Ofori and Francisca Afua Opoku-Boateng


Abstract

Purpose

Various organizations have evolved their landscapes to improve business processes, increase production speed and reduce distribution costs, and have integrated their Internet-facing systems with small and medium-sized enterprises (SMEs) and third-party vendors to improve business growth and increase global market share, alongside changing organizational requirements and business process collaborations. The benefits include reduced production costs, online services, online payments, product distribution channels and delivery in a supply chain environment. However, the integration has led to an exponential increase in cybercrimes, with adversaries using various attack methods to penetrate and exploit organizational networks. Identifying the attack vectors in the event of a cyberattack is therefore essential for mitigating cybercrimes effectively. Yet the elusive nature of cybercrimes makes it challenging to detect and predict threat probabilities and their cascading impact in an evolving organizational landscape, leading to malware, ransomware, data theft and denial-of-service attacks, among others. The paper explores the cybercrime threat landscape, considers the impact of the attacks and identifies mitigating measures to improve security controls in an evolving organizational landscape.

Design/methodology/approach

The approach follows two main cybercrime framework design principles that focus on existing attack detection phases and proposes a cybercrime mitigation framework (CCMF) that uses detect, assess, analyze, evaluate and respond phases and sub-phases to reduce the attack surface. The methods and implementation processes were derived by identifying an organizational goal, attack vectors and the threat landscape, identifying attacks and models, and validating the framework against standards to improve security. The novel contribution of this paper is threefold: first, the authors explore the existing threat landscape and the various cybercrimes, models and methods that adversaries deploy against organizations; second, they propose a threat model required for mitigating the risk factors; finally, they recommend control mechanisms in line with security standards to improve security.
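
Purely as a reading aid, the sketch below arranges the CCMF phase names from the abstract as a linear pipeline; the function bodies and data are placeholders, not the framework's actual sub-phases.

```python
# Placeholder phase implementations: each returns an enriched context dictionary.
def detect(ctx):   return {**ctx, "indicators": ["anomalous outbound traffic"]}
def assess(ctx):   return {**ctx, "assets_at_risk": ["customer database"]}
def analyze(ctx):  return {**ctx, "attack_vector": "compromised third-party vendor"}
def evaluate(ctx): return {**ctx, "severity": "high"}
def respond(ctx):  return {**ctx, "actions": ["isolate vendor connection", "notify incident response"]}

def run_ccmf(event):
    """Pass an event through the detect -> assess -> analyze -> evaluate -> respond phases."""
    ctx = {"event": event}
    for phase in (detect, assess, analyze, evaluate, respond):
        ctx = phase(ctx)
    return ctx

print(run_ccmf("suspicious supply-chain traffic")["actions"])
```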

Findings

The results show that cybercrimes can be mitigated by using the CCMF to detect, assess, analyze, evaluate and respond to them, improving security in an evolving organizational threat landscape.

Research limitations/implications

The paper does not differentiate between large organizations and SMEs. The challenges facing the evolving organizational threat landscape include vulnerabilities introduced by the integration of various network nodes. Factors influencing these vulnerabilities include inadequate threat intelligence gathering, a lack of third-party auditing and inadequate control mechanisms, leading to various manipulations, exploitations, exfiltrations and obfuscations.

Practical implications

Attack methods are applied to a case study to evaluate the model against the design principles. Inadequate cyber threat intelligence (CTI) gathering, inadequate attack modeling and security misconfigurations are some of the key factors with practical implications for mitigating cybercrimes.

Social implications

There are no social implications; however, cybercrimes have severe consequences for organizations and third-party vendors that integrate their network systems, leading to legal and reputational damage.

Originality/value

The paper's originality lies in considering the mitigation of cybercrimes in an evolving organizational landscape, which requires strategic, tactical and operational management and which current practice addresses inadequately, using the proposed framework's detect, assess, analyze, evaluate and respond phases and sub-phases to reduce the attack surface.

Details

Continuity & Resilience Review, vol. 5 no. 1
Type: Research Article
ISSN: 2516-7502


Article
Publication date: 14 June 2013

Tran Tri Dang and Tran Khanh Dang


Abstract

Purpose

The purpose of this paper is to propose novel information visualization and interaction techniques to help security administrators analyze past web form submissions, with the goals of searching for, inspecting, verifying and understanding malicious submissions.

Design/methodology/approach

The authors utilize well‐known visual design principles in the techniques to support the analysis process. They also implement a prototype and use it to investigate simulated normal and malicious web submissions.

Findings

The techniques can increase analysts' efficiency by displaying large amounts of information at a time, help analysts detect certain kinds of anomalies, and support the analyzing process via provided interaction capabilities.

Research limitations/implications

Due to resource constraints, the authors experimented on simulated data only, not real data.

Practical implications

The techniques can be used to investigate past web form submissions, which is a first step in analyzing and understanding the current security situation and attackers' skills. The knowledge gained from this process can be used to plan an effective future defence strategy, e.g. by improving or fine-tuning the attack signatures of an automatic intrusion detection system.

Originality/value

The visualization and interaction designs constitute the first visual analysis technique for the security investigation of web form submissions.

Details

International Journal of Web Information Systems, vol. 9 no. 2
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 9 March 2015

Eugene Ferry, John O Raw and Kevin Curran


Abstract

Purpose

The interoperability of cloud data between web applications and mobile devices has vastly improved over recent years. The popularity of social media, smartphones and cloud-based web services has contributed to the level of integration that can be achieved between applications. This paper investigates the potential security issues of OAuth, an authorisation framework for granting third-party applications revocable access to user data. OAuth has rapidly become an interim de facto standard for protecting access to web API data; vendors implemented OAuth before the open standard was officially published. To evaluate whether the OAuth 2.0 specification is truly ready for industry application, an entire OAuth client-server environment was developed and validated against the specification's threat model. The research also included analysing the security features of several popular OAuth-integrated websites and comparing them to the threat model. High-impact exploits leading to account hijacking were identified in a number of major online publications. It is hypothesised that the OAuth 2.0 specification can be a secure authorisation mechanism when implemented correctly.

Design/methodology/approach

To analyse the security of OAuth implementations in industry, a list of the 50 most popular websites in Ireland was retrieved from the statistical website Alexa (Noureddine and Bashroush, 2011). Each site was analysed to identify whether it utilised OAuth; out of the 50 sites, 21 were identified with OAuth support. Each vulnerability in the threat model was then tested against each OAuth-enabled site. To test the robustness of the OAuth framework, an entire OAuth environment was required. The proposed solution was composed of three parts: a client application, an authorisation server and a resource server. The client application needed to consume OAuth-enabled services. The authorisation server had to manage access to the resource server. The resource server had to expose data from the database based on the authorisation the user was granted by the authorisation server. It was decided that the client application would consume emails from Google’s Gmail API. The authorisation and resource servers were modelled around a basic task-tracking web application, and the client application also consumed task data from the developed resource server. The client application also supported Single Sign-On for Google and Facebook, as well as a developed identity provider, “MyTasks”. The authorisation server delegated authorisation to the client application and stored cryptographic information for each access grant. The resource server validated the supplied access token via public-key cryptography and returned the requested data.
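
The abstract states that the resource server validated access tokens via public-key cryptography; a minimal sketch of that step, using the PyJWT library as a stand-in (the audience value, scope format and helper names are assumptions, not the developed system's code), might look like this.

```python
import jwt  # PyJWT, used here only as a stand-in for the validation step described above
from jwt import InvalidTokenError

def validate_access_token(token, public_key):
    """Check a signed access token with the authorisation server's public key before releasing data."""
    try:
        # 'mytasks-api' is a hypothetical audience value for the developed resource server.
        return jwt.decode(token, public_key, algorithms=["RS256"], audience="mytasks-api")
    except InvalidTokenError:
        return None  # token is expired, forged or malformed, so the request is rejected

# Illustrative flow inside the resource server (names are assumptions):
# claims = validate_access_token(bearer_token, AUTH_SERVER_PUBLIC_KEY)
# if claims and "tasks:read" in claims.get("scope", "").split():
#     return tasks_for(claims["sub"])
```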

Findings

Two sites out of the 21 were found to be susceptible to some form of attack, meaning that 10.5 per cent were vulnerable. In total, 18 per cent of the world’s 50 most popular sites were in the list of 21 OAuth-enabled sites. The OAuth 2.0 specification is still very much in its infancy but, when implemented correctly, it can provide a relatively secure and interoperable authentication delegation mechanism. The IETF is currently addressing issues and expansions in its working drafts. Once a strict level of conformity is achieved between vendors and vulnerabilities are mitigated, the framework is likely to change the way we access data on the web and other devices.

Originality/value

OAuth is flexible, in that it offers extensions to support varying situations and existing technologies. A disadvantage of this flexibility is that new extensions typically bring new security exploits. Members of the IETF OAuth Working Group are constantly refining the draft specifications and identifying new threats to the expanding functionality. OAuth provides a flexible authentication mechanism to protect and delegate access to APIs. It solves the problem of password re-use across multiple accounts and stops users from having to disclose their credentials to third parties. Filtering access to information by scope and giving users the option to revoke access at any point gives them control of their data. OAuth does raise security concerns, such as defying phishing education, but there will always be security issues with any authentication technology. Although several high-impact vulnerabilities were identified in industry, the developed solution supports the hypothesis that a secure OAuth environment can be built when the specification is implemented correctly. Developers must conform to the defined specification and are responsible for validating their implementation against the given threat model. OAuth is an evolving authorisation framework; it is still in its infancy, and much work needs to be done in the specification to achieve stricter validation and vendor conformity. Vendor implementations need to become better aligned in order to provide a rich and truly interoperable authorisation mechanism. Once these issues are resolved, OAuth will be on track to become the definitive authentication standard on the web.

Details

Information & Computer Security, vol. 23 no. 1
Type: Research Article
ISSN: 2056-4961


Article
Publication date: 29 March 2013

Tran Khanh Dang and Tran Tri Dang


Abstract

Purpose

By reviewing different information visualization techniques for securing web information systems, this paper aims to provide a foundation for further studies of the topic. Another purpose of the paper is to identify directions in which extensive research is still lacking, thereby encouraging more investigation.

Design/methodology/approach

The related techniques are classified first by their locations in the web information systems architecture: client side, server side, and application side. Then the techniques in each category are further classified based on attributes specific to that category.

Findings

Although there is much research on information visualization for securing the web browser user interface and server-side systems, there are very few studies of the same techniques on the web application side.

Originality/value

This paper is the first published paper to review extensively the information visualization techniques for securing web information systems. The classification used here offers a framework for further studies as well as in-depth investigations.

Details

International Journal of Web Information Systems, vol. 9 no. 1
Type: Research Article
ISSN: 1744-0084

