Search results

1 – 10 of over 9000
Article
Publication date: 1 March 2001

Bhupesh Kothari and Mark Claypool


Abstract

The World Wide Web has experienced phenomenal growth over the past few years, placing heavy load on Web servers. Today's Web servers also process an increasing number of requests for dynamic pages, making server load even more critical. The performance of Web servers delivering static pages is well studied and well understood; however, there has been little analytic or empirical study of the performance of Web servers delivering dynamic pages. This paper focuses on experimentally measuring and analyzing the performance of three dynamic Web page generation technologies: CGI, FastCGI and Servlets. We present experimental results for Web server performance under CGI, FastCGI and Servlets, then develop a multivariate linear regression model and predict Web server performance under some typical dynamic requests. We find that CGI and FastCGI perform effectively the same under most low-level benchmarks, while Servlets perform noticeably worse. Our regression model shows the same deficiency in Servlets' performance under typical dynamic Web page requests.
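The regression step can be sketched in a few lines. The features (request payload size, number of database queries) and all the data below are illustrative assumptions, not the paper's measurements.

```python
# Minimal multivariate linear regression sketch: predict server latency
# from request features. Data and feature names are invented for illustration.

def fit_linear(X, y):
    """Least squares via the normal equations (X'X) b = X'y, solved with
    Gaussian elimination. Each row of X starts with a 1 for the intercept."""
    n = len(X[0])
    # Build the n x n system A b = c.
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    # Back substitution.
    b = [0.0] * n
    for i in reversed(range(n)):
        b[i] = (c[i] - sum(A[i][k] * b[k] for k in range(i + 1, n))) / A[i][i]
    return b

# Hypothetical training data: [1, payload_kb, db_queries] -> latency_ms.
X = [[1, 1, 0], [1, 2, 1], [1, 4, 2], [1, 8, 3], [1, 3, 1]]
y = [10.0, 21.0, 40.0, 75.0, 29.0]
coef = fit_linear(X, y)
pred = sum(b * x for b, x in zip(coef, [1, 5, 2]))  # predict a new request
```

Once fitted, the coefficients give the marginal latency cost of each feature, which is how such a model can be used to compare page-generation technologies.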

Details

Internet Research, vol. 11 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 22 November 2011

Shusuke Okamoto and Masaki Kohana


Abstract

Purpose

The purpose of this paper is to propose a load distribution technique for a web server. It utilizes Web Workers, a new feature of JavaScript.

Design/methodology/approach

The authors have been implementing a web-based MORPG as an interactive, real-time web application; previously, the web server alone was responsible for manipulating the behavior of all the game characters. As more users logged in, the workload on the server increased. Hence, the authors implemented a technique whereby the CPU load of the server is distributed among the clients.

Findings

The authors found that a caching mechanism is useful for exploiting client-side calculation, since caching suppresses the accompanying increase in communication load. A performance evaluation reveals that the technique decreases CGI latency on both low-end and high-end servers, with average latency reduced to 59.5 percent of that of the original system.
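The caching idea can be sketched as follows; the class and key names are hypothetical stand-ins for the game's client-server calls, not the authors' implementation.

```python
# Sketch of client-side caching that suppresses communication load: server
# responses are cached by request key, so repeated requests add no network
# traffic. Names and data are illustrative.

class CachingClient:
    def __init__(self, fetch):
        self._fetch = fetch          # function standing in for a server call
        self._cache = {}
        self.network_calls = 0

    def get(self, key):
        if key not in self._cache:   # only hit the server on a cache miss
            self.network_calls += 1
            self._cache[key] = self._fetch(key)
        return self._cache[key]

client = CachingClient(fetch=lambda key: f"state:{key}")
results = [client.get(k) for k in ["npc1", "npc2", "npc1", "npc1"]]
# Four requests, but only two reach the server.
```

The saved round trips are exactly the "suppressed" communication load the abstract refers to.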

Originality/value

Web Workers allow scripts to execute while keeping the page responsive in a web browser, and are intended to improve the user experience. This technique instead uses Web Workers to let a web server distribute its load to its clients.

Details

International Journal of Web Information Systems, vol. 7 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 7 November 2008

Rui Zhou


Abstract

Purpose

The aim of this research is to enable web-based tracking and guiding by integrating location-awareness with the World Wide Web, so that users can use various location-based applications without installing extra software.

Design/methodology/approach

The concept of web-based tracking and guiding is introduced, and the relevant issues are discussed regarding location-aware web systems, location determination, location-dependent content query and personalized presentation. The framework of the web-based tracking and guiding system, the Web-Based Guide, is proposed, and its prototypical implementation is presented. The main design principles are use of existing web technologies, use of cheap and widely available devices, a general-purpose and lightweight client side, and good scalability.

Findings

The paper presents the general-purpose and modular framework of the Web-Based Guide, which consists of the Location Server, the Content Server, the Guiding Web Server, and clients that are standard web browsers extended with the Location Control. With such a framework, location-based applications can offer their services on the web.

Research limitations/implications

The performance of the system should be evaluated and improved, in particular the number of concurrent sessions the system can sustain and the workload on the system in tracking mode.

Originality/value

The paper proposes a framework for personalized tracking and guiding systems on the web, which can be used in campuses, museums, national parks and so on.

Details

Campus-Wide Information Systems, vol. 25 no. 5
Type: Research Article
ISSN: 1065-0741

Article
Publication date: 22 November 2011

Helen Kapodistria, Sarandis Mitropoulos and Christos Douligeris


Abstract

Purpose

The purpose of this paper is to introduce a new tool which uses pattern recognition to detect, prevent and record common web attacks that mainly result in information leakage from web applications. It is a cross-platform application: it depends on neither the OS nor the web server. It offers a flexible attack search engine, which scans HTTP requests and responses while a page is served, without affecting web server performance.

Design/methodology/approach

The paper starts with a study of the best-known web vulnerabilities and the ways they can be exploited. It then focuses on web attacks based on input validation, which are the ones the new tool detects through pattern recognition. The tool acts as a proxy server with a simple GUI for administration purposes. Patterns can be detected in both HTTP requests and responses in an extensible and manageable way.
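A minimal sketch of pattern-based scanning of HTTP traffic follows; the two patterns are simplified examples invented for illustration, not the tool's actual rule set.

```python
import re

# Illustrative attack signatures applied to raw HTTP requests or responses.
# Real rule sets are far larger; these two only demonstrate the mechanism.
ATTACK_PATTERNS = {
    "xss": re.compile(r"<script\b", re.IGNORECASE),
    "sql_injection": re.compile(r"('|%27)(\s|%20)*(or|union)\b", re.IGNORECASE),
}

def scan(http_message):
    """Return the names of all attack patterns matching the message."""
    return [name for name, pattern in ATTACK_PATTERNS.items()
            if pattern.search(http_message)]

req = "GET /item?id=1'%20OR%201=1 HTTP/1.1"  # classic SQL injection probe
```

Because the same `scan` runs over responses as well as requests, the mechanism can also flag stored XSS on the way out, which is the feature the Findings section highlights.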

Findings

The new tool was compared to dotDefender, a commercial web application firewall, and ModSecurity, a widely used open source application firewall, using over 200 attack patterns. The new tool performed satisfactorily in every attack category examined, with a high success rate. Results for stored XSS could not be compared, since the other tools cannot search for and detect such attacks in HTTP responses. The new tool's extensibility makes further work on it possible.

Originality/value

This paper introduces a new web server plug-in with advanced web application firewall features and a flexible attack search engine that scans HTTP requests and responses. By scanning HTTP responses, attacks such as stored XSS can be detected, a feature not found in other web application firewalls.

Details

Information Management & Computer Security, vol. 19 no. 5
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 13 April 2012

Ka I. Pun, Yain Whar Si and Kin Chan Pau


Abstract

Purpose

Intensive traffic often occurs in web-enabled business processes hosted by travel industry and government portals. An extreme case of intensive traffic is a flash crowd situation, when the number of web users spikes within a short time due to unexpected events such as political unrest or extreme weather. As a result, the servers hosting these business processes can no longer handle the overwhelming service requests. To alleviate this problem, process engineers usually analyze audit trail data collected from the application server and reengineer their business processes to withstand unexpected surges in visitors. However, such analysis reveals the performance of the application server only from an internal perspective. This paper aims to investigate this issue.

Design/methodology/approach

This paper proposes an approach for analyzing key performance indicators of traffic intensive web‐enabled business processes from audit trail data, web server logs, and stress testing logs.

Findings

The key performance indicators identified in the study's approach can be used to understand the behavior of traffic intensive web‐enabled business processes and the underlying factors that affect the stability of the web server.
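Two such indicators, request rate and high-percentile latency, can be sketched from parsed log entries; the `(timestamp, latency)` fields below are illustrative stand-ins for real web server log lines, not the paper's indicator set.

```python
# Sketch of key performance indicators derived from web server log entries.
# Entries are (timestamp_seconds, latency_ms) pairs, invented for illustration.

def kpis(entries):
    """Compute request rate and 95th-percentile latency from log entries."""
    times = sorted(t for t, _ in entries)
    span = max(times[-1] - times[0], 1)        # avoid division by zero
    latencies = sorted(l for _, l in entries)
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return {"requests_per_sec": len(entries) / span, "p95_latency_ms": p95}

log = [(0, 12), (1, 15), (2, 220), (3, 14), (4, 13)]
stats = kpis(log)
```

A sudden rise in request rate with a simultaneous jump in tail latency is the kind of signature a flash crowd leaves in such indicators.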

Originality/value

The proposed analysis also provides an internal as well as an external view of the performance. Moreover, the calculated key performance indicators can be used by the process engineers for locating potential bottlenecks, reengineering business processes, and implementing contingency measures for traffic intensive situations.

Details

Business Process Management Journal, vol. 18 no. 2
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 1 October 2002

Jody Condit Fagan


Abstract

Server-side include (SSI) codes allow Webmasters to insert content into their Web pages on-the-fly without programming knowledge. Used effectively, these codes can dramatically decrease the time spent maintaining a medium-sized or large Web site. Most Web servers provide server-side functionality to some extent; a few allow great flexibility, with if-then statements and the ability to set variables. This article describes the functionality of SSI, how to enable the codes on a Web server, and a step-by-step process for implementing them. Examples of their use on a large academic library's Web site are included for illustration.
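The include mechanism can be illustrated with a toy expander; real servers (for example Apache's mod_include) process these directives natively, so the sketch below only mimics the standard `<!--#include virtual="..." -->` syntax with invented fragment names.

```python
import re

# Toy SSI expander: replaces each include directive with the named fragment.
# Real servers do this natively; this only demonstrates the directive syntax.

INCLUDE = re.compile(r'<!--#include virtual="([^"]+)"\s*-->')

def expand_ssi(page, fragments):
    """Substitute each SSI include directive with its fragment's content."""
    return INCLUDE.sub(lambda m: fragments.get(m.group(1), ""), page)

fragments = {"/inc/nav.html": "<nav>Home | Catalog</nav>"}
page = '<body><!--#include virtual="/inc/nav.html" --><p>Hello</p></body>'
html = expand_ssi(page, fragments)
```

The maintenance saving the article describes comes from exactly this substitution: a navigation bar edited once in one fragment updates every page that includes it.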

Details

The Electronic Library, vol. 20 no. 5
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 October 2006

Gi Woong Yun, Jay Ford, Robert P. Hawkins, Suzanne Pingree, Fiona McTavish, David Gustafson and Haile Berhe


Abstract

Purpose

This paper seeks to discuss measurement units by comparing internet use with traditional media use, and to understand internet use from the perspective of traditional media use.

Design/methodology/approach

The benefits and shortcomings of two log file types are examined carefully and exhaustively. Client-side and server-side log files are analyzed and compared using the proposed units of analysis.

Findings

Server-side session time calculation was remarkably reliable and valid, based on its high correlation with the client-side time calculation. The analysis revealed that server-side log file session time measurement is more promising than the researchers had previously speculated.
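Server-side session time calculation can be sketched with the common gap-timeout heuristic; the 30-minute threshold is a conventional assumption in web log analysis, not necessarily the paper's choice.

```python
# Sketch of server-side session time calculation: a gap between hits longer
# than a fixed timeout starts a new session. Timestamps are illustrative.

def session_times(hit_times, timeout=1800):
    """Split one user's sorted hit timestamps (seconds) into sessions and
    return each session's duration in seconds."""
    sessions, start, prev = [], hit_times[0], hit_times[0]
    for t in hit_times[1:]:
        if t - prev > timeout:          # gap too long: close the session
            sessions.append(prev - start)
            start = t
        prev = t
    sessions.append(prev - start)
    return sessions

hits = [0, 60, 300, 5000, 5060]  # a 4700-second gap splits two sessions
```

Comparing durations computed this way against client-side timings is the kind of triangulation the paper's validity check rests on.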

Practical implications

The ability to identify each individual user, together with few caching problems, was a strong advantage for the analysis. The web design implementations and web log analysis scheme used here are recommended for future web log analysis research.

Originality/value

This paper examined the validity of client-side and server-side web log data. Based on the triangulation of the two datasets, research designs and analysis schemes could be recommended.

Details

Internet Research, vol. 16 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 20 December 2007

Darcy Benoit and André Trudel


Abstract

Purpose

To measure the exact size of the world wide web (i.e. a census). The measure used is the number of publicly accessible web servers on port 80.

Design/methodology/approach

Every IP address on the internet is queried for the presence of a web server.

Findings

The census found 18,560,257 web servers.

Research limitations/implications

Any web servers hidden behind a firewall, or that did not respond within a reasonable amount of time (20 seconds), were not counted by the census.
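The probe itself can be sketched as a timed TCP connection attempt to port 80; the function below is an illustrative reconstruction under the stated 20-second limit, not the authors' code.

```python
import socket

# Sketch of the census probe: try to open a TCP connection to port 80 and
# treat success as "a web server may be present". Hosts that refuse, are
# firewalled, or exceed the timeout are not counted.

def has_web_server(host, port=80, timeout=20.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Scaled across every IPv4 address, this is the exhaustive sweep that distinguishes a census from the sampled surveys the paper contrasts itself with.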

Practical implications

Whenever a server is found, we download and store a copy of its homepage. The resulting database of homepages is a historical snapshot of the web which will be mined for information in the future.

Originality/value

Past web surveys performed by various research groups were only estimates of the size of the web. This is the first time its size has been exactly measured.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 March 1999

Schubert Foo, Peng Chor Leong, Siu Cheung Hui and Shigong Liu


Abstract

The study outlines a number of security requirements that are typical of a host of Web-based applications, using a case study of a real-life online Web-based customer support system. It then proposes a security solution that employs a combination of Web server security measures and cryptographic techniques. The Web server security measures include the formulation and implementation of a policy for server physical security, configuration control, user access control and regular Web server log checks. Login passwords, in conjunction with public key cryptographic techniques and random nonces, are used to achieve user authentication, guard against replay attacks, and ensure non-repudiation of system usage by users. These techniques, together with the use of session keys, allow data integrity and confidentiality of the customer support system to be enforced. Furthermore, a number of security guidelines were observed in the implementation of the relevant software to further ensure the safety of the system.
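The nonce-based replay protection can be sketched as a challenge-response exchange. Note the paper combines passwords with public-key techniques; this sketch substitutes a password-keyed HMAC for those operations, purely to show the role of the nonce.

```python
import hashlib
import hmac
import secrets

# Sketch of nonce-based replay protection. A password-keyed HMAC stands in
# for the paper's public-key operations; the nonce logic is the point here.

def issue_nonce():
    return secrets.token_hex(16)          # fresh, unpredictable per login

def client_response(password, nonce):
    return hmac.new(password.encode(), nonce.encode(), hashlib.sha256).hexdigest()

def server_verify(password, nonce, response, used_nonces):
    if nonce in used_nonces:              # replayed message: reject
        return False
    used_nonces.add(nonce)
    expected = hmac.new(password.encode(), nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

used = set()
nonce = issue_nonce()
resp = client_response("s3cret", nonce)
ok = server_verify("s3cret", nonce, resp, used)        # accepted once
replayed = server_verify("s3cret", nonce, resp, used)  # rejected: nonce reused
```

Because every login binds the credential proof to a single-use random value, capturing one exchange gives an attacker nothing to replay.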

Details

Information Management & Computer Security, vol. 7 no. 1
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 4 April 2008

C.I. Ezeife, Jingyu Dong and A.K. Aggarwal


Abstract

Purpose

The purpose of this paper is to propose a web intrusion detection system (IDS), SensorWebIDS, which applies data mining with both anomaly and misuse intrusion detection in a web environment.

Design/methodology/approach

SensorWebIDS has three main components: the network sensor, for extracting parameters from real-time network traffic; the log digger, for extracting parameters from web log files; and the audit engine, for analyzing all web request parameters for intrusion detection. To combat web intrusions such as buffer overflow attacks, SensorWebIDS utilizes an algorithm based on the empirical rule of standard deviation (σ), namely that 99.7 percent of data lie within 3σ of the mean, to calculate the maximum possible value length of input parameters. An association rule mining technique is employed to mine frequent parameter lists and their sequential order in order to identify intrusions.
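The 3σ length bound can be sketched directly; the training lengths below are invented for illustration, not data from the paper.

```python
import statistics

# Sketch of the 3-sigma length bound: from training lengths of one request
# parameter, flag any value longer than mean + 3 * standard deviation.
# Under the empirical rule, ~99.7% of benign lengths fall below this bound.

def max_length_bound(training_lengths):
    mean = statistics.mean(training_lengths)
    sd = statistics.pstdev(training_lengths)
    return mean + 3 * sd

lengths = [8, 10, 9, 11, 12, 10]  # observed lengths of a form parameter
bound = max_length_bound(lengths)

def suspicious(value, bound):
    return len(value) > bound
```

An oversized input, such as a buffer overflow payload, lands far above the bound and is flagged, while ordinary values pass.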

Findings

Experiments show that the proposed system has a higher detection rate for web intrusions than Snort and ModSecurity across such classes of web intrusions as cross-site scripting, SQL injection, session hijacking, cookie poisoning, denial of service, buffer overflow, and probe attacks.

Research limitations/implications

Future work may extend the system to detect intrusions implanted with hacking tools rather than through straight HTTP requests, or embedded in non-basic resources such as multimedia files; track illegal web users by their prior web-access sequences; implement minimum and maximum values for integer data; and automate the pre-processing of training data so that it is clean and free of intrusions, for accurate detection results.

Practical implications

Web service security, as a branch of network security, is becoming more important as more business and social activities are moved online to the web.

Originality/value

Existing network IDSs are not directly applicable to web intrusion detection, because they mostly sit at the lower (network/transport) levels of the network model, while web services run at the higher (application) level. The proposed SensorWebIDS detects XSS and SQL injection attacks through signatures, while other types of attacks are detected using association rule mining and statistics to compute frequent parameter list orders and their maximum value lengths.

Details

International Journal of Web Information Systems, vol. 4 no. 1
Type: Research Article
ISSN: 1744-0084
