Search results

1 – 10 of over 8000
Article
Publication date: 22 November 2011

Helen Kapodistria, Sarandis Mitropoulos and Christos Douligeris

Abstract

Purpose

The purpose of this paper is to introduce a new tool which detects, prevents and records common web attacks that mainly result in web application information leakage, using pattern recognition. It is a cross-platform application, that is, it is neither OS-dependent nor web-server-dependent. It offers a flexible attack search engine, which scans HTTP requests and responses while a web page is being served, without affecting the web server's performance.

Design/methodology/approach

The paper starts with a study of the best-known web vulnerabilities and the ways they can be exploited. It then focuses on the web attacks based on input validation, which are the ones the new tool detects through pattern recognition. The tool acts as a proxy server with a simple GUI for administration purposes. Patterns can be detected in both HTTP requests and responses in an extensible and manageable way.
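
As a rough illustration of the pattern-recognition idea described above, the sketch below scans request and response bodies against a small set of regular-expression signatures; the pattern set, function name and example strings are hypothetical, not taken from the paper:

```python
import re

# Hypothetical signature set; a real deployment would use a larger, managed list.
ATTACK_PATTERNS = {
    "xss": re.compile(r"<\s*script[^>]*>|on\w+\s*=", re.IGNORECASE),
    "sql_injection": re.compile(r"\bunion\b\s+\bselect\b|'\s*or\s+'1'\s*=\s*'1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./|\.\.\\"),
}

def scan(text: str) -> list[str]:
    """Return the names of all attack patterns found in an HTTP request or response body."""
    return [name for name, pattern in ATTACK_PATTERNS.items() if pattern.search(text)]

# Scanning both directions, as described above: requests for reflected attacks,
# responses for stored XSS that only appears when the page is served.
request_hits = scan("GET /search?q=<script>alert(1)</script> HTTP/1.1")
response_hits = scan("<html><body>Hi <script>steal(document.cookie)</script></body></html>")
print(request_hits, response_hits)  # ['xss'] ['xss']
```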

Findings

The new tool was compared to dotDefender, a commercial web application firewall, and ModSecurity, a widely used open-source application firewall, using over 200 attack patterns. The new tool performed satisfactorily in every attack category examined, with a high percentage of success. Results for stored XSS could not be compared, since the other tools are unable to search for and detect it in HTTP responses. Because the new tool is highly extensible, it leaves considerable room for future work.

Originality/value

This paper introduces a new web server plug-in, which has some advanced web application firewall features and a flexible attack search engine that scans HTTP requests and responses. By scanning HTTP responses, attacks such as stored XSS can be detected, a feature not found in other web application firewalls.

Details

Information Management & Computer Security, vol. 19 no. 5
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 4 April 2008

C.I. Ezeife, Jingyu Dong and A.K. Aggarwal

Abstract

Purpose

The purpose of this paper is to propose a web intrusion detection system (IDS), SensorWebIDS, which applies data mining and both anomaly and misuse intrusion detection to the web environment.

Design/methodology/approach

SensorWebIDS has three main components: the network sensor for extracting parameters from real-time network traffic, the log digger for extracting parameters from web log files, and the audit engine for analyzing all web request parameters for intrusion detection. To combat web intrusions such as buffer-overflow attacks, SensorWebIDS utilizes an algorithm based on the empirical rule of standard deviation (σ) theory, whereby 99.7 percent of data lie within 3σ of the mean, to calculate the maximum possible value length of input parameters. An association rule mining technique is employed to mine frequent parameter lists and their sequential order in order to identify intrusions.
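
The 3σ threshold mentioned above can be illustrated with a minimal sketch; the training lengths and parameter name are hypothetical, and this shows only the empirical-rule idea, not the authors' implementation:

```python
from statistics import mean, stdev

# Hypothetical training data: observed lengths of a "username" parameter in clean traffic.
training_lengths = [8, 10, 7, 12, 9, 11, 10, 8, 9, 13]

mu = mean(training_lengths)
sigma = stdev(training_lengths)

# Empirical rule: ~99.7% of values lie within mean ± 3*sigma, so anything longer
# than mean + 3*sigma is flagged as a possible buffer-overflow attempt.
max_allowed = mu + 3 * sigma

def is_suspicious(value: str) -> bool:
    return len(value) > max_allowed

print(round(max_allowed, 1))     # ~15.4 for this toy sample
print(is_suspicious("alice"))    # False
print(is_suspicious("A" * 500))  # True: grossly over-long input
```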

Findings

Experiments show that the proposed system has a higher detection rate for web intrusions than SNORT and ModSecurity for classes of web intrusions such as cross-site scripting, SQL injection, session hijacking, cookie poisoning, denial of service, buffer overflow, and probe attacks.

Research limitations/implications

Future work may extend the system to detect intrusions launched with hacking tools rather than through straight HTTP requests, or embedded in non-basic resources such as multimedia files; track illegal web users by their prior web-access sequences; implement minimum and maximum values for integer data; and automate the pre-processing of training data so that it is clean and free of intrusions, for accurate detection results.

Practical implications

Web service security, as a branch of network security, is becoming more important as more business and social activities are moved online to the web.

Originality/value

Existing network IDSs are not directly applicable to web intrusion detection, because these IDSs mostly sit at the lower (network/transport) layers of the network model, while web services run at the higher (application) layer. The proposed SensorWebIDS detects XSS and SQL injection attacks through signatures, while other types of attacks are detected using association rule mining and statistics to compute frequent parameter list orders and their maximum value lengths.

Details

International Journal of Web Information Systems, vol. 4 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 10 November 2014

Ammar Alazab, Michael Hobbs, Jemal Abawajy, Ansam Khraisat and Mamoun Alazab

Abstract

Purpose

The purpose of this paper is to mitigate vulnerabilities in web applications. Security detection and prevention are the most important mechanisms for this; however, most existing research focuses on how to prevent an attack at the web application layer, with less work dedicated to setting up a response action if a possible attack happens.

Design/methodology/approach

The paper proposes a combination of a Signature-based Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS), namely, the Intelligent Intrusion Detection and Prevention System (IIDPS).

Findings

After evaluating the new system, better results were obtained in terms of detection efficiency and false alarm rate. This demonstrates the value of a direct response action in an intrusion detection system.

Research limitations/implications

The evaluation was limited by the available data.

Originality/value

The contributions of this paper are, first, to address the problem of web application vulnerabilities; second, to propose a combination of an SIDS and an AIDS, namely, the IIDPS; third, to present a novel approach that connects the IIDPS to a response action using fuzzy logic; and fourth, to use risk assessment to determine an appropriate response action for each attack event. The combined system provides better performance for the intrusion detection system and makes detection and prevention more effective.
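
As a loose illustration of connecting detection to a response action, the sketch below combines a signature check and a toy anomaly score into a risk value and maps it to an action. The abstract does not give the IIDPS's fuzzy membership functions or rules, so crisp thresholds stand in for them here; all rules, names and thresholds are hypothetical:

```python
import re

# Hypothetical signature rules (the SIDS part).
SIGNATURES = [re.compile(r"<\s*script", re.I), re.compile(r"union\s+select", re.I)]

def signature_match(request: str) -> bool:
    return any(sig.search(request) for sig in SIGNATURES)

def anomaly_score(request: str, expected_len: int = 40) -> float:
    """Toy AIDS part: score in [0, 1] growing with how far the request exceeds an expected length."""
    return max(0.0, min(1.0, (len(request) - expected_len) / expected_len))

def response_action(request: str) -> str:
    """Fuzzy-style mapping from assessed risk to a response action (crisp thresholds as placeholders)."""
    risk = 1.0 if signature_match(request) else anomaly_score(request)
    if risk > 0.8:
        return "block and alert administrator"
    if risk > 0.4:
        return "log and challenge the client"
    return "allow"

print(response_action("GET /item?id=1"))                        # allow
print(response_action("GET /item?id=1 UNION SELECT password"))  # block and alert administrator
```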

Details

Information Management & Computer Security, vol. 22 no. 5
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 6 March 2017

Kushal Anjaria and Arun Mishra

Abstract

Purpose

No computing architecture can be designed with complete confidentiality; as a result, at some point it may leak information. It is therefore important to decide on a leakage threshold for a computing architecture. To prevent leakage beyond the predefined threshold, quantitative analysis is helpful. This paper aims to provide a method to quantify information leakage in service-oriented architecture (SOA)-based Web services.

Design/methodology/approach

To visualize the dynamic binding of SOA components, the orchestration of components is modeled first. This modeling helps to quantify, information-theoretically, the information leakage in SOA-based Web services. The paper then considers the non-interference policy in a global way to quantify information leakage: it considers not only variables which interfere with security-sensitive content but also other architectural parameters. To illustrate the attacker's ability, a strong threat model is proposed in the paper.
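
One common information-theoretic reading of such quantification is the mutual information between the secret and what an attacker can observe; the sketch below illustrates that idea on a made-up joint distribution and is not the paper's flight-booking model:

```python
from collections import Counter
from math import log2

# Hypothetical joint samples of (secret, observable) pairs for one Web service interaction.
samples = [("gold", "slow"), ("gold", "slow"), ("gold", "fast"),
           ("basic", "fast"), ("basic", "fast"), ("basic", "slow")]

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

secrets = Counter(s for s, _ in samples)
observations = Counter(o for _, o in samples)
joint = Counter(samples)

# Leakage = I(S; O) = H(S) + H(O) - H(S, O); 0 bits means non-interference holds.
leakage_bits = entropy(secrets) + entropy(observations) - entropy(joint)
print(f"{leakage_bits:.3f} bits leaked, "
      f"{100 * leakage_bits / entropy(secrets):.2f}% of the secret's entropy")
```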

Findings

The paper finds that information leakage can be quantified in SOA-based Web services by considering information theory and the parameters that interfere with security-sensitive content. A hypothetical case-study scenario of a flight-ticket-booking Web service is considered, in which a leakage of 18.89 per cent of the information is calculated.

Originality/value

The paper shows that it is practically possible to quantify information leakage in SOA-based Web services. When modeling SOA-based Web services, this will help architects identify parameters which may cause the leakage of secret content.

Details

Kybernetes, vol. 46 no. 3
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 October 2015

Vijaya Geeta Dharmavaram

Abstract

Purpose

The purpose of the paper is to assess the precautionary measures adopted by popular websites in India and, thus, find out how vulnerable Indian Web users are to this form of attack. Today almost all work is done through the Internet, including monetary transactions. This holds true even for developing countries like India, making secure browsing a necessity. However, an attack called "clickjacking" can help Internet scammers carry out fraudulent tasks. Even though researchers have proposed different techniques to counter this threat, it remains an open question how effectively they are deployed in practice.

Design/methodology/approach

To carry out the study, the top 100 Indian and global websites in India were identified and divided into static and dynamic websites based on the level of interaction they offer to users. These websites were checked to see whether they offer any basic protection against clickjacking and, if so, which defence technique is used. A comparison between Indian and global websites shows where India stands in terms of providing security.
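
A basic check of the kind implied above can be sketched as follows, assuming the Python `requests` library and looking only at the two standard anti-framing response headers (frame-busting JavaScript and the paper's exact test procedure are not covered):

```python
import requests

def clickjacking_defences(url: str) -> list[str]:
    """Report which standard header-based anti-framing defences a site sends with its home page."""
    headers = requests.get(url, timeout=10).headers
    found = []
    if "X-Frame-Options" in headers:
        found.append(f"X-Frame-Options: {headers['X-Frame-Options']}")
    csp = headers.get("Content-Security-Policy", "")
    if "frame-ancestors" in csp.lower():
        found.append("CSP frame-ancestors directive")
    return found

# A site returning an empty list offers no header-based protection and would
# count toward the "no protection" percentages reported in the findings below.
print(clickjacking_defences("https://example.com"))
```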

Findings

The results show that 86 per cent of Indian websites offer no protection against clickjacking, in contrast to 51 per cent of global websites. It is also observed that in the case of dynamic websites, only 18 per cent of Indian websites offer some form of protection, when compared to 63 per cent of global websites. This is quite alarming, as dynamic websites such as social networking and banking websites are the likely candidates for clickjacking, resulting in serious consequences such as identity and monetary theft.

Originality/value

In this paper, the vulnerability of Indian websites to clickjacking is presented, which has not been addressed before. This will help create awareness among Indian Web developers as well as the general public, so that precautionary measures can be adopted.

Details

Journal of Money Laundering Control, vol. 18 no. 4
Type: Research Article
ISSN: 1368-5201

Article
Publication date: 29 March 2013

Tran Khanh Dang and Tran Tri Dang

Abstract

Purpose

By reviewing different information visualization techniques for securing web information systems, this paper aims to provide a foundation for further studies of the same topic. Another purpose of the paper is to discover directions in which there is a lack of extensive research, thereby encouraging more investigations.

Design/methodology/approach

The related techniques are classified first by their locations in the web information systems architecture: client side, server side, and application side. Then the techniques in each category are further classified based on attributes specific to that category.

Findings

Although there is much research on information visualization for securing the web browser user interface and server-side systems, there are very few studies of the same techniques on the web application side.

Originality/value

This paper is the first published paper to extensively review information visualization techniques for securing web information systems. The classification used here offers a framework for further studies as well as in-depth investigations.

Details

International Journal of Web Information Systems, vol. 9 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 13 April 2010

Riaan J. Rudman

Abstract

Purpose

The purpose of this paper is to identify and investigate the security issues an organisation operating in the “new” online environment is exposed to through Web 2.0 applications, with specific focus on unauthorised access (encompassing hackers). The study aims to recommend possible safeguards to mitigate these incremental risks to an acceptable level.

Design/methodology/approach

An extensive literature review was performed to obtain an understanding of the technologies driving Web 2.0 applications. Thereafter, the technologies were mapped against Control Objectives for Information and Related Technology (CobiT) and Trust Service Principles and Criteria and associated control objectives relating to security risks, specifically to hacker risks. These objectives were used to identify relevant risks and formulate appropriate internal control measures.

Findings

The findings show that every organisation, technology and application is unique and the safeguards depend on the nature of the organisation, information at stake, degree of vulnerability and risks. A comprehensive security program, including a multi‐layer technological, as well as an administrative component, should be implemented. User training on acceptable practices should also be conducted.

Originality/value

Obtaining an understanding of Web 2.0 and Web 2.0 security is important: Web 2.0 is a new, poorly understood technology, and with the growing mobility of users the potential attack surface increases and should be managed. The paper will help organisations, information repository managers, information technology (IT) professionals, librarians, and internal and external auditors to understand the "new" risks relating to unauthorised access, which previously did not exist in an online environment, and will assist the development of a framework to limit the most significant risks.

Details

The Electronic Library, vol. 28 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 14 June 2013

Tran Tri Dang and Tran Khanh Dang

Abstract

Purpose

The purpose of this paper is to propose novel information visualization and interaction techniques to help security administrators analyze past web form submissions, with the goals of searching for, inspecting, verifying, and understanding malicious submissions.

Design/methodology/approach

The authors utilize well‐known visual design principles in the techniques to support the analysis process. They also implement a prototype and use it to investigate simulated normal and malicious web submissions.

Findings

The techniques can increase analysts' efficiency by displaying large amounts of information at a time, help analysts detect certain kinds of anomalies, and support the analysis process via the provided interaction capabilities.

Research limitations/implications

Due to resource constraints, the authors experimented on simulated data only, not real data.

Practical implications

The techniques can be used to investigate past web form submissions, which is a first step in analysing and understanding the current security situation and attackers' skills. The knowledge gained from this process can be used to plan for effective future defence strategy, e.g. by improving/fine‐tuning the attack signatures of an automatic intrusion detection system.

Originality/value

The visualization and interaction designs are the first visual analysis techniques for the security investigation of web form submissions.

Details

International Journal of Web Information Systems, vol. 9 no. 2
Type: Research Article
ISSN: 1744-0084

Open Access
Book part
Publication date: 4 June 2021

Julia Slupska and Leonie Maria Tanczer

Abstract

Technology-facilitated abuse, so-called “tech abuse,” through phones, trackers, and other emerging innovations, has a substantial impact on the nature of intimate partner violence (IPV). The current chapter examines the risks and harms posed to IPV victims/survivors from the burgeoning Internet of Things (IoT) environment. IoT systems are understood as “smart” devices such as conventional household appliances that are connected to the internet. Interdependencies between different products together with the devices' enhanced functionalities offer opportunities for coercion and control. Across the chapter, we use the example of IoT to showcase how and why tech abuse is a socio-technological issue and requires not only human-centered (i.e., societal) but also cybersecurity (i.e., technical) responses. We apply the method of “threat modeling,” which is a process used to investigate potential cybersecurity attacks, to shift the conventional technical focus from the risks to systems toward risks to people. Through the analysis of a smart lock, we highlight insufficiently designed IoT privacy and security features and uncover how seemingly neutral design decisions can constrain, shape, and facilitate coercive and controlling behaviors.

Details

The Emerald International Handbook of Technology-Facilitated Violence and Abuse
Type: Book
ISBN: 978-1-83982-849-2

Article
Publication date: 6 June 2016

Oluyinka Aderemi Adewumi and Ayobami Andronicus Akinyelu

Abstract

Purpose

Phishing is one of the major challenges faced by the world of e-commerce today. Owing to phishing attacks, billions of dollars have been lost by many companies and individuals. The global impact of phishing attacks will continue to increase, and thus more efficient phishing detection techniques are required. The purpose of this paper is to investigate and report on the use of a nature-inspired machine learning (ML) approach in the classification of phishing e-mails.

Design/methodology/approach

ML-based techniques have been shown to be efficient in detecting phishing attacks. In this paper, the firefly algorithm (FFA) is integrated with a support vector machine (SVM), with the primary aim of developing an improved phishing e-mail classifier (known as FFA_SVM) capable of accurately detecting new phishing patterns as they occur. From a data set consisting of 4,000 phishing and ham e-mails, a set of features suitable for phishing e-mail detection was extracted and used to construct the hybrid classifier.
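
The abstract does not state exactly how the FFA and SVM are integrated; one plausible reading is using the FFA to search the SVM's hyperparameter space. The sketch below illustrates that interpretation on synthetic data and is not the authors' FFA_SVM:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for extracted phishing/ham e-mail features.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

def fitness(params):
    """Cross-validated accuracy of an SVM with the candidate (log10 C, log10 gamma)."""
    c, g = 10.0 ** params[0], 10.0 ** params[1]
    return cross_val_score(SVC(C=c, gamma=g), X, y, cv=3).mean()

# Bare-bones firefly algorithm: each firefly moves toward brighter (fitter) ones.
rng = np.random.default_rng(0)
fireflies = rng.uniform(-3, 3, size=(8, 2))          # positions = (log10 C, log10 gamma)
brightness = np.array([fitness(f) for f in fireflies])
beta0, absorption, alpha = 1.0, 0.5, 0.2

for _ in range(10):
    for i in range(len(fireflies)):
        for j in range(len(fireflies)):
            if brightness[j] > brightness[i]:
                r2 = np.sum((fireflies[i] - fireflies[j]) ** 2)
                beta = beta0 * np.exp(-absorption * r2)
                fireflies[i] += beta * (fireflies[j] - fireflies[i]) + alpha * rng.normal(size=2)
                brightness[i] = fitness(fireflies[i])

best = fireflies[np.argmax(brightness)]
print("best (log10 C, log10 gamma):", best, "cross-validated accuracy:", brightness.max())
```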

Findings

The FFA_SVM was applied to a data set consisting of up to 4,000 phishing and ham e-mails. Simulation experiments were performed to evaluate and compare the performance of the classifier. The tests yielded a classification accuracy of 99.94 percent, a false positive rate of 0.06 percent and a false negative rate of 0.04 percent.

Originality/value

To the best of the authors' knowledge, the hybrid algorithm has not previously been applied, as in this work, to the classification and detection of phishing e-mails.

Details

Kybernetes, vol. 45 no. 6
Type: Research Article
ISSN: 0368-492X
