Search results

1 – 10 of over 13,000
Article
Publication date: 1 December 2003

Kristin Eschenfelder


Abstract

This paper takes a social shaping of technology approach to identify and explain sources of conflict in the design or enhancement of corporate Web sites. Data from a multi‐case field study show how Web site classification schemes embedded in Web site design elements created intra‐organizational conflicts because the schemes could not equally accommodate different sub‐units' customer requirements. Interview data demonstrate Web managers' perceptions that Web classification schemes privileged certain sets of customer needs, and Web managers' actions to shape the design of classification schemes to satisfy their perceived customer needs. Data analysis identified three design elements of Web sites associated with sub‐unit conflict: classification categories, templates and tool bars, and database entities and attributes.

Details

Information Technology & People, vol. 16 no. 4
Type: Research Article
ISSN: 0959-3845


Article
Publication date: 1 October 2003

Mike Thelwall, Liwen Vaughan, Viv Cothey, Xuemei Li and Alastair G. Smith


Abstract

The use of the Web by academic researchers is discipline‐dependent and highly variable. It is increasingly central for sharing information, disseminating results and publicising research projects. This pilot study seeks to identify the subjects that have the most impact on the Web, and to look for national differences in online subject visibility. The highest impact sites were from computing, but there were major national differences in the impact of engineering and technology sites. Another difference was that Taiwan had more high impact non‐academic sites hosted by universities. As a pilot study, the classification process itself was also investigated and the problems of applying subject classification to academic Web sites discussed. The study draws out a number of issues in this regard that have no simple solutions, and points to the need to interpret the results with caution.

Details

Online Information Review, vol. 27 no. 5
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 24 May 2011

Asmita Shukla, Narendra K. Sharma and Sanjeev Swami


Abstract

Purpose

Web sites are the first point of interaction in the virtual environment, and information and entertainment are the most important tenets of web sites. Thus, it becomes important to know how much information and entertainment is adequate and appropriate for a web site. The purpose of this paper is to classify 43 web sites into information and entertainment profiles.

Design/methodology/approach

The sites were selected from two Indian rating web sites and from engineering students. From the pool of selected web sites, the present study classified 43 web sites on information and entertainment profiles. The web site profile comprised informativeness, organisation of information elements, entertainment properties and organisation of entertainment elements. The classification was done by three independent judges.

Findings

The results revealed that out of 43 web sites, eight were high on both information and entertainment profiles, 15 were high on information and low on entertainment profiles, six were low on information and high on entertainment profiles and 14 were low on both information and entertainment profiles.

Practical implications

Marketers may take cues from the classified web sites and design their own sites so that the content meets their goals and satisfies users, filtering out content that is irrelevant to their business and incorporating what is essential.

Originality/value

This study provides guidelines regarding the information and/or entertainment aspects which should be stronger in information and/or entertainment‐oriented web sites to attract users. The present study targets the marketers who should prioritise web site features depending upon the needs of their target group.

Details

Journal of Advances in Management Research, vol. 8 no. 1
Type: Research Article
ISSN: 0972-7981


Article
Publication date: 7 November 2016

Mehdi Dadkhah, Shahaboddin Shamshirband and Ainuddin Wahid Abdul Wahab


Abstract

Purpose

This paper aims to present a hybrid approach based on classification algorithms that is capable of identifying different types of phishing pages. In this approach, after eliminating features that do not play an important role in identifying phishing attacks, and after adding a technique that searches for the page title in a search engine, the approach gains the capability of identifying journal phishing and phishing pages embedded in legitimate sites.

Design/methodology/approach

The paper's hybrid approach to identifying phishing web sites consists of four basic sections. Phishing web sites and journal phishing attacks are identified by two separately selected classification algorithms. Phishing attacks embedded in legitimate web sites are identified by searching for the page title in a search engine and returning the result. To make identification more accurate, a blacklist is used alongside the proposed approach, and, finally, a decision table judges whether the web site in question is phishing or legitimate.
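The four signals in this design (two classifier verdicts, a page-title search check and a blacklist lookup) can be sketched as a small decision-table-style combiner. The rule ordering and names below are illustrative assumptions, not the paper's actual decision table:

```python
# Hypothetical sketch of a decision-table combiner in the spirit of the
# described approach: two classifier verdicts, a page-title search check,
# and a blacklist lookup are merged into a final call.

def combine_verdicts(clf_phishing: bool, clf_journal: bool,
                     title_found_elsewhere: bool, blacklisted: bool) -> str:
    """Return 'phishing' or 'legitimate' from the four signals.

    The rule ordering is an assumption: a blacklist hit is decisive,
    and otherwise any positive signal marks the page as phishing.
    """
    if blacklisted:
        return "phishing"
    if clf_phishing or clf_journal:
        return "phishing"
    # A page whose title resolves to a different site in a search engine
    # is treated as phishing embedded in a legitimate site.
    if title_found_elsewhere:
        return "phishing"
    return "legitimate"
```

Here any single positive signal is treated as decisive; a real decision table could weight the signals against each other instead.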

Findings

In this paper, a hybrid approach based on classification algorithms for identifying phishing web sites is presented, with the ability to identify a new type of phishing attack known as journal phishing. The approach considers the most commonly used features, adds new features for identifying these attacks, and eliminates features that play no role in the identification process; it avoids the problems of previous techniques and can identify journal phishing as well.

Originality/value

The major advantage of this technique is that it considers all of the possible and effective features for identifying phishing attacks while eliminating the unused features of previous techniques; in comparison with other similar techniques, it can also identify journal phishing attacks and phishing pages embedded in legitimate sites.

Details

The Electronic Library, vol. 34 no. 6
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 1 November 2005

Mohamed Hammami, Youssef Chahir and Liming Chen


Abstract

Along with the ever-growing Web comes the proliferation of objectionable content, such as sex, violence and racism. We need efficient tools for classifying and filtering undesirable web content. In this paper, we investigate this problem through WebGuard, our automatic machine-learning-based pornographic website classification and filtering system. As the Internet becomes ever more visual and multimedia-rich, as exemplified by pornographic websites, we focus our attention on the use of skin-color-related visual content-based analysis, along with textual and structural content-based analysis, for improving pornographic website filtering. While most commercial filtering products on the marketplace are mainly based on textual content-based analysis, such as indicative keyword detection or manually collected blacklist checking, the originality of our work resides in the addition of structural and visual content-based analysis to the classical textual content-based analysis, along with several major data mining techniques for learning and classifying. Tested on a testbed of 400 websites, including 200 adult sites and 200 non-pornographic ones, WebGuard, our Web filtering engine, scored a 96.1% classification accuracy rate when only textual and structural content-based analysis was used, and a 97.4% rate when skin-color-related visual content-based analysis was used in addition. Further experiments on a blacklist of 12,311 adult websites manually collected and classified by the French Ministry of Education showed that WebGuard scored 87.82% classification accuracy using only textual and structural content-based analysis, and 95.62% when the visual content-based analysis was used in addition. The basic framework of WebGuard can be applied to other website categorization problems which combine, as most do today, textual and visual content.
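Combining a skin-color visual feature with a textual keyword feature, as WebGuard does, can be illustrated with a minimal sketch. The RGB skin rule, keyword set and feature layout below are invented heuristics for illustration, not WebGuard's actual model:

```python
# Illustrative sketch: a skin-pixel ratio alongside a textual keyword
# count, in the spirit of WebGuard's combined analysis. The RGB rule is
# a rough, commonly used heuristic; the keyword set is invented.

def is_skin(r: int, g: int, b: int) -> bool:
    """Rough RGB skin-color test (assumed heuristic, not WebGuard's)."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

def skin_ratio(pixels) -> float:
    """Fraction of (r, g, b) tuples classified as skin."""
    if not pixels:
        return 0.0
    return sum(is_skin(*p) for p in pixels) / len(pixels)

def page_features(text: str, pixels, keywords=frozenset({"adult", "xxx"})):
    """Feature vector [keyword hits, skin ratio] for a downstream classifier."""
    hits = sum(word in keywords for word in text.lower().split())
    return [hits, skin_ratio(pixels)]
```

A data mining classifier (decision tree, neural network, etc.) would then be trained on such combined feature vectors.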

Details

International Journal of Web Information Systems, vol. 1 no. 4
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 16 October 2009

A.C.M. Fong, S.C. Hui and P.Y. Lee


Abstract

Purpose

With the proliferation of objectionable world wide web (WWW or web) materials such as pornography and violence, there is an increasing need for effective web content filtering tools to protect unsuspecting users from the harmful effect of such materials. This paper aims to discuss this issue.

Design/methodology/approach

Using pornographic web materials as a case study, the authors have developed an effective filtering solution that uses machine intelligence to perform offline web page classification into allowed and disallowed web pages.

Findings

The results are stored in a database for fast online retrieval whenever access to a web page is requested.

Practical implications

The separation between offline classification and online filtering ensures fast blocking decisions are made from the user's viewpoint.
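The offline-classification/online-lookup split described here can be sketched as follows; the database schema, example URLs and stub classifier are assumptions for illustration, standing in for the machine-intelligence component:

```python
import sqlite3

# Sketch of the offline-classification / online-lookup split: pages are
# classified ahead of time and verdicts stored, so the online filter
# performs only one indexed database lookup per request. The schema and
# the stub classifier are illustrative assumptions.

def classify(url: str) -> str:
    """Stub offline classifier (a real system would use ML here)."""
    return "disallowed" if "badsite" in url else "allowed"

def build_verdict_db(urls):
    """Offline phase: classify every known URL and store the verdicts."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE verdicts (url TEXT PRIMARY KEY, verdict TEXT)")
    db.executemany("INSERT INTO verdicts VALUES (?, ?)",
                   [(u, classify(u)) for u in urls])
    return db

def is_blocked(db, url: str) -> bool:
    """Online phase: a single lookup, no classification at request time."""
    row = db.execute("SELECT verdict FROM verdicts WHERE url = ?",
                     (url,)).fetchone()
    return row is not None and row[0] == "disallowed"
```

Because the online path is a single indexed lookup, blocking decisions stay fast from the user's viewpoint no matter how expensive the offline classifier is.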

Originality/value

There is an urgent and continued need for effective measures against the proliferation of objectionable materials on the web. In this paper, the authors describe a possible solution in the form of a complete working system. Future research will focus on adding appropriate modules to tackle other types of objectionable materials than the type described. The basic framework, however, should be applicable to a wide range of materials.

Details

Kybernetes, vol. 38 no. 9
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 1 August 2005

Ming Yin, Dion Hoe‐lian Goh, Ee‐Peng Lim and Aixin Sun


Abstract

A web site usually contains a large number of concept entities, each consisting of one or more web pages connected by hyperlinks. In order to discover these concept entities for more expressive web site queries and other applications, the web unit mining problem has been proposed. Web unit mining aims to determine the web pages that constitute a concept entity and to classify concept entities into categories. Nevertheless, the performance of an existing web unit mining algorithm, iWUM, suffers because it may create more than one web unit (incomplete web units) from a single concept entity. This paper presents two methods to solve this problem. The first introduces a more effective web fragment construction method so as to reduce later classification errors. The second incorporates site‐specific knowledge to discover and handle incomplete web units. Experiments show that incomplete web units can be removed and that overall accuracy is significantly improved, especially on the precision and F1 measures.
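Treating a web unit as a set of hyperlink-connected pages can be sketched as a connected-components grouping. This is a deliberate simplification, assumed for illustration; iWUM additionally builds web fragments and classifies the resulting units:

```python
# Sketch: grouping pages into candidate web units as connected components
# of the (undirected) hyperlink graph. This simplifies the web unit
# mining problem for illustration only.

def web_units(links):
    """links: dict page -> iterable of linked pages. Returns a list of sets."""
    graph = {p: set(ns) for p, ns in links.items()}
    # Make the graph undirected and include link targets as nodes.
    for p, ns in links.items():
        for n in ns:
            graph.setdefault(n, set()).add(p)
            graph[p].add(n)
    seen, units = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, unit = [start], set()
        while stack:                      # depth-first traversal
            page = stack.pop()
            if page in unit:
                continue
            unit.add(page)
            stack.extend(graph[page] - unit)
        seen |= unit
        units.append(unit)
    return units
```

For example, `web_units({"a": ["b"], "c": []})` yields one unit `{"a", "b"}` and a singleton `{"c"}`.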

Details

International Journal of Web Information Systems, vol. 1 no. 3
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 1 March 2000

Christian Bauer and Arno Scharl


Abstract

Describes an approach to automatically classify and evaluate publicly accessible World Wide Web sites. The suggested methodology is equally valuable for analyzing the content and hypertext structures of commercial, educational and non‐profit organizations. Outlines a research methodology for model building and validation and defines the most relevant attributes of such a process. A set of operational criteria for classifying Web sites is developed. The introduced software tool supports the automated gathering of these parameters, and thereby assures the necessary “critical mass” of empirical data. Based on the preprocessed information, a multi‐methodological approach is chosen that comprises statistical clustering, textual analysis, supervised and unsupervised neural networks, and manual classification for validation purposes.
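The statistical clustering step of such a multi-methodological approach could look like the following minimal k-means sketch over per-site feature vectors (e.g. page count and links per page). The features, distance metric and parameters are illustrative assumptions, and the neural-network and manual steps are omitted:

```python
import random

# Minimal k-means sketch for clustering Web sites by numeric feature
# tuples. Plain Euclidean distance and a fixed iteration count are
# simplifying assumptions for illustration.

def kmeans(points, k, iters=20, seed=0):
    """Cluster feature tuples into k groups; returns (centroids, clusters)."""
    rng = random.Random(seed)
    centroids = [tuple(c) for c in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl
                     else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters
```

On well-separated site profiles this converges to the natural groups regardless of the random initialisation.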

Details

Internet Research, vol. 10 no. 1
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 21 November 2008

Mohamed Hammami, Radhouane Guermazi and Abdelmajid Ben Hamadou


Abstract

Purpose

The growth of the web and the increasing number of documents electronically available have been paralleled by the emergence of harmful web page content, such as pornography, violence and racism. This emergence makes it necessary to provide filtering systems designed to secure internet access. Most such systems deal mainly with adult content and focus on blocking pornography, marginalizing violence. The purpose of this paper is to propose a violent web content detection and filtering system which uses textual and structural content‐based analysis.

Design/methodology/approach

The violent web content detection and filtering system uses textual and structural content‐based analysis based on a violent keyword dictionary. The paper focuses on the keyword dictionary preparation, and presents a comparative study of different data mining techniques to block violent content web pages.
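A keyword-dictionary scoring pass of the kind this textual analysis relies on can be sketched minimally. The dictionary, weights and threshold below are invented for illustration, and a simple threshold rule stands in for the learned decision trees the paper compares:

```python
# Illustrative sketch of keyword-dictionary scoring for violent content
# detection. Dictionary entries, weights and the threshold are invented;
# the paper's learned decision trees are replaced by a threshold rule.

VIOLENT_KEYWORDS = {"gore": 3.0, "massacre": 3.0, "fight": 1.0, "weapon": 1.5}

def violence_score(text: str) -> float:
    """Sum of dictionary weights over the page's words."""
    return sum(VIOLENT_KEYWORDS.get(w, 0.0) for w in text.lower().split())

def is_violent(text: str, threshold: float = 2.5) -> bool:
    """Flag a page whose weighted keyword score reaches the threshold."""
    return violence_score(text) >= threshold
```

In the paper's setting, the discriminative keywords and their weights are found automatically rather than hand-picked as here.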

Findings

The solution presented in this paper showed its effectiveness by scoring an 89 per cent classification accuracy rate on its test data set.

Research limitations/implications

Many future work directions can be considered. This paper analyzed only the web page's textual and structural content; an additional analysis of the visual content is one direction for future work. Future research is also underway to develop effective filtering tools for other types of harmful web pages, such as racist content.

Originality/value

The paper's major contributions are, first, the study and comparison of several decision tree building algorithms for building a violent web classifier based on textual and structural content‐based analysis to improve web filtering; and, second, easing laborious dictionary building by automatically finding discriminative indicative keywords.

Details

International Journal of Web Information Systems, vol. 4 no. 4
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 1 May 1999

Deon Nel, Raymond van Niekerk, Jean‐Paul Berthon and Tony Davies


Abstract

This paper investigates the structure of commercial Web sites, and then attempts to analyse various patterns that emerge which may be of future use as guidelines to businesses that intend to establish a Web presence. Key to the understanding of these patterns is a clearer grasp of the implications of human interaction with the new medium. The focus is on an experiential construct, namely flow, and how this might vary by Web site, and on using this to begin to unravel the secrets of good commercial Web site design and its implications for business.

Details

Internet Research, vol. 9 no. 2
Type: Research Article
ISSN: 1066-2243

