Search results

1 – 10 of over 3000
Article
Publication date: 1 November 2006

Reijo Savolainen and Jarkko Kari

Abstract

Purpose

The purpose of this paper is to specify user‐defined relevance criteria by which people select hyperlinks and pages in web searching.

Design/methodology/approach

A quantitative and qualitative analysis was undertaken of talking aloud data from nine web searches conducted about self‐generated topics.

Findings

Altogether 18 different criteria for selecting hyperlinks and web pages were found. The selection is constituted by two intertwined processes: the relevance judgment of hyperlinks and web pages by user‐defined criteria, and decision‐making concerning the acceptance or rejection of hyperlinks and web pages. The study focuses on the former process. Of the individual criteria, specificity, topicality, familiarity, and variety were used most frequently in relevance judgments. The study shows that despite the high number of individual criteria used in the judgments, a few criteria such as specificity and topicality tend to dominate. Searchers were less critical in the judgment of hyperlinks than in deciding whether the activated web pages should be consulted in more detail.

Research limitations/implications

The study is exploratory, drawing on a relatively low number of case searches.

Originality/value

The paper gives a detailed picture of the criteria used in the relevance judgments of hyperlinks and web pages. The study also discusses the specific nature of criteria used in web searching, as compared to those used in traditional online searching environments.

Details

Journal of Documentation, vol. 62 no. 6
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 1 September 2005

Lin‐Chih Chen and Cheng‐Jye Luh

Abstract

Purpose

This study aims to present a new web page recommendation system that can help users to reduce navigational time on the internet.

Design/methodology/approach

The proposed design is based on the primacy effect of browsing behavior, whereby users prefer top‐ranking items in search results. This approach is intuitive and requires no training data at all.

Findings

A user study showed that users are more satisfied with the proposed search methods than with general search engines using hot keywords. Moreover, two performance measures confirmed that the proposed search methods outperform other metasearch and search engines.

Research limitations/implications

The research has limitations and future work is planned along several directions. First, the search methods implemented are primarily based on the keyword match between the contents of web pages and the user query items. Using the semantic web to recommend concepts and items relevant to the user query might be very helpful in finding the exact contents that users want, particularly when the users do not have enough knowledge about the domains in which they are searching. Second, offering a mechanism that groups search results to improve the way search results are segmented and displayed also assists users to locate the contents they need. Finally, more user feedback is needed to fine‐tune the search parameters including α and β to improve the performance.

Practical implications

The proposed model can be used to improve the search performance of any search engine.

Originality/value

First, compared with the democratic voting procedure used by metasearch engines, search engine vector voting (SVV) enables a specific combination of search parameters, denoted as α and β, to be applied to a voted search engine, so that users can either narrow or expand their search results to meet their search preferences. Second, unlike page quality analysis, the hyperlink prediction (HLP) determines qualified pages by simply measuring their user behavior function (UBF) values, and thus takes less computing power. Finally, the advantages of HLP over statistical analysis are that it does not need training data, and it can target both multi‐site and site‐specific analysis.

Details

Internet Research, vol. 15 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 6 December 2021

Andrea Hrckova, Robert Moro, Ivan Srba and Maria Bielikova

Abstract

Purpose

Partisan news media, which often publish extremely biased, one-sided or even false news, are gaining popularity worldwide and represent a major societal issue. As the number of such media grows, automatic detection approaches are in high demand. Automatic detection relies on various indicators (e.g. content characteristics) to identify new partisan media candidates and to predict their level of partisanship. The aim of the research is to investigate in greater depth whether hyperlinks are appropriate indicators for better automatic partisan news media detection.

Design/methodology/approach

The authors utilized hyperlink network analysis to study the hyperlinks of partisan and mainstream media. The dataset involved the hyperlinks of 18 mainstream media and 15 partisan media in Slovakia and the Czech Republic. More than 171 million domain pairs of inbound and outbound hyperlinks of the selected online news media were collected with the Ahrefs tool and analyzed and visualized with the Gephi software. Additionally, 300 articles covering COVID-19 from both types of media were selected for content analysis of hyperlinks, to verify the reliability of the quantitative analysis and to provide a more detailed picture.

Findings

The authors conclude that hyperlinks are reliable indicators of media affinity and that linking patterns could contribute to partisan news detection. In particular, incoming links with the dofollow attribute are reliable indicators for assessing the type of media, as partisan media rarely receive dofollow links from mainstream media. Outgoing links are less reliable indicators, as both mainstream and partisan media link to mainstream sources similarly.

Originality/value

In contrast to the extensive amount of research aiming at fake news detection within a piece of text or multimedia content (e.g. news articles, social media posts), the authors shift to characterization of the whole news media. In addition, the authors made a geographical shift from the more researched US-based media to the so far under-researched European context, particularly Central Europe. The results and conclusions can serve as a guide to deriving new features for the automatic detection of possibly partisan news media by means of artificial intelligence (AI).

Peer review

The peer review history for this article is available at the following link: https://publons.com/publon/10.1108/OIR-10-2020-0441.

Details

Online Information Review, vol. 46 no. 5
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 September 2000

Stephen G. Dykehouse and Robert T. Sigler

Abstract

This paper presents the results of two research projects designed to evaluate the extent and nature of the use of the World Wide Web by criminal justice agencies. Discussion focuses on the extent and nature of Web use by type of agency, who links to whom, and the use of the Web to disseminate information from a news‐making criminology perspective.

Details

Policing: An International Journal of Police Strategies & Management, vol. 23 no. 3
Type: Research Article
ISSN: 1363-951X

Article
Publication date: 27 November 2020

Chaoqun Wang, Zhongyi Hu, Raymond Chiong, Yukun Bao and Jiang Wu

Abstract

Purpose

The aim of this study is to propose an efficient rule extraction and integration approach for identifying phishing websites. The proposed approach can elucidate patterns of phishing websites and identify them accurately.

Design/methodology/approach

Hyperlink indicators along with URL-based features are used to build the identification model. In the proposed approach, very simple rules are first extracted based on individual features to provide meaningful and easy-to-understand rules. Then, the F-measure score is used to select high-quality rules for identifying phishing websites. To construct a reliable and promising phishing website identification model, the selected rules are integrated using a simple neural network model.
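To make the rule-selection step concrete, here is a minimal sketch (not the authors' implementation) of scoring simple single-feature rules by F-measure and keeping only the high-quality ones; the feature names, thresholds and quality cut-off are illustrative assumptions.

```python
# Illustrative sketch: score single-feature rules by F-measure and
# keep the strongest ones. Features and thresholds are hypothetical.

def f_measure(rule, samples):
    """F1 score of a rule that predicts 'phishing' (label 1) when it fires."""
    tp = sum(1 for x, label in samples if rule(x) and label == 1)
    fp = sum(1 for x, label in samples if rule(x) and label == 0)
    fn = sum(1 for x, label in samples if not rule(x) and label == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy labelled samples: (features, label), label 1 = phishing.
samples = [
    ({"url_len": 120, "ext_links": 0.9}, 1),
    ({"url_len": 30,  "ext_links": 0.1}, 0),
    ({"url_len": 95,  "ext_links": 0.8}, 1),
    ({"url_len": 25,  "ext_links": 0.2}, 0),
]

# Very simple, easy-to-understand rules over individual features.
rules = {
    "long_url":   lambda x: x["url_len"] > 75,
    "mostly_ext": lambda x: x["ext_links"] > 0.5,
}

# Keep rules whose F-measure exceeds a quality threshold; the selected
# rules would then be integrated, e.g. by a small neural network.
selected = {name: r for name, r in rules.items()
            if f_measure(r, samples) >= 0.8}
```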

Findings

Experiments conducted using self-collected and benchmark data sets show that the proposed approach outperforms 16 commonly used classifiers (including seven non–rule-based and four rule-based classifiers as well as five deep learning models) in terms of interpretability and identification performance.

Originality/value

Investigating patterns of phishing websites based on hyperlink indicators using the efficient rule-based approach is innovative. It is not only helpful for identifying phishing websites, but also beneficial for extracting simple and understandable rules.

Details

The Electronic Library, vol. 38 no. 5/6
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 February 2021

Chi-Chun Chen, Jian-Hong Wang, Hsing-Wen Wang and Jie Zhang

Abstract

Purpose

This research proposes an innovative fault-tolerant media content list management technology applied to the smart robot domain.

Design/methodology/approach

A fault‐tolerant Content List Management Unit (CLMU) for real‐time streaming systems, focusing on smart robot claw machines, is proposed to synchronize and manage the hyperlinks stored on media servers. The fault‐tolerant mechanism is realized by a self‐healing method. A media server can exchange hyperlinks within the network through the CLMU mechanism.

Findings

Internet users can access the current multimedia information, and the multimedia information list can be rearranged appropriately. Furthermore, the service of the proposed multimedia system remains uninterrupted even when the master media server fails: one of the slave media servers activates the Content List Service (CLS) of the proposed CLMU and replaces the failed master media server.
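The self-healing failover idea described above can be sketched roughly as follows; this is a hypothetical illustration, not the authors' design, and the server names and heartbeat timeout are assumptions.

```python
# Hypothetical failover sketch: slave servers watch the master's
# heartbeat, and the first healthy slave takes over the Content List
# Service (CLS) when the master stops responding. Names and the
# timeout value are illustrative, not from the article.

import time

class MediaServer:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.serving_cls = False

    def heartbeat(self):
        """Called periodically by a live server."""
        self.last_heartbeat = time.monotonic()

def elect_cls_server(master, slaves, timeout=1.5):
    """Return the server that should currently run the CLS."""
    if time.monotonic() - master.last_heartbeat < timeout:
        return master
    # Master missed its heartbeat window: promote the first slave.
    replacement = slaves[0]
    replacement.serving_cls = True
    return replacement

master = MediaServer("master")
slaves = [MediaServer("slave-1"), MediaServer("slave-2")]
master.last_heartbeat -= 5.0        # simulate a crashed master
active = elect_cls_server(master, slaves)
```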

Originality/value

The recovery time is less than 1.5 seconds. The multimedia transmission is not interrupted while any one of the media servers keeps functioning. The proposed method can serve to stabilize the system of media servers in a smart robot domain.

Article
Publication date: 20 April 2010

Tai‐Li Wang

Abstract

Purpose

The blogging phenomenon has become a primary mode of mainstream communication for the Web 2.0 era. While previous studies found that campaign web sites did not realise two‐way communication ideals, the current study investigates potential differences in communication patterns between campaign blogs and web sites during Taiwan's 2008 general election, exploring whether the blogging phenomenon can improve the process of online political communication.

Design/methodology/approach

The study used a content analysis approach, the web style analysis method, which was designed specifically for analysing web content, and applied it to an online campaign context in a different political culture, using Taiwan's general election as a case study.

Findings

Results indicated that the themes of both campaign blogs and web sites focused on “attacking opponents” rather than focusing on political policies or information on particular issues. However, campaign blogs and web sites significantly differed in all other dimensions, including structural features, functions, interactivity and appeal strategies. Overall, in terms of the online democratic ideal, campaign blogs appeared to allow more democratic, broader, deeper and easier two‐way communication models between candidates and voters or among voters.

Research limitations/implications

The current study focused on candidates' blogs and web sites and did not explore the other vast parts of the online political sphere, particularly independent or citizen‐based blogs, which play significant roles in the decentralised and participant‐networked public spheres.

Originality/value

The study illuminates the role of hyperlinks on campaign blogs. By providing a greater abundance of external links than campaign web sites, campaign blogs allowed more voters, especially younger ones, to share political information in a manner that is quite different from the traditional one‐way communication model. The paper also argues that interactivity measures should be incorporated into the web style analysis method.

Details

Online Information Review, vol. 34 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 5 July 2009

Kayvan Kousha

Abstract

Purpose

More knowledge about open access (OA) scholarly publishing on the web would be helpful for citation data mining and the development of web‐based citation indexes. Hence, the main purpose of this study is to identify common characteristics of open access publishing, which may therefore enable us to measure different aspects of e‐research on the web.

Design/methodology/approach

In the current study, five characteristics of 545 OA citing sources targeting OA research articles in four science and four social science disciplines were manually identified, including file format, hyperlinking, internet domain, language and publication year.

Findings

About 60 per cent of the OA citing sources targeting research papers were in PDF format, 30 per cent were from academic domains ending in edu and ac and 70 per cent of the citations were not hyperlinked. Moreover, 16 per cent of the OA citing sources targeting studied papers in the eight selected disciplines were in non‐English languages. Additional analyses revealed significant disciplinary differences in some studied characteristics across science and the social sciences.

Originality/value

The OA web citation network was dominated by PDF format files and non‐hyperlinked citations. This knowledge of characteristics shaping the OA citation network gives a better understanding about their potential uses for open access scholarly research.

Details

Aslib Proceedings, vol. 61 no. 4
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 6 February 2007

F. Rimbach, M. Dannenberg and U. Bleimann

Abstract

Purpose

The purpose of this paper is to examine the marketing and sales implications of page ranking techniques, in terms of how companies may use knowledge of their operation to increase the chances of attracting custom.

Design/methodology/approach

Explaining the calculation, implementation and impact of PageRank and Topic‐Sensitive PageRank is the prerequisite to recapitulating existing search engine optimization strategies and to identifying new methods for leveraging the internet for sales and marketing purposes.
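For readers unfamiliar with the calculation the paper builds on, here is a minimal PageRank power-iteration sketch; the four-page link graph and the damping factor d = 0.85 are illustrative assumptions, not taken from the article.

```python
# Minimal PageRank power iteration over a toy link graph.
# d = 0.85 is the conventional damping factor; the graph is made up.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
ranks = pagerank(graph)
```

Page C ends up with the highest rank because it receives links from A, B and D, which is the intuition search engine optimization strategies try to exploit.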

Findings

Different strategies have to be adopted to attract potential customers effectively.

Originality/value

This paper aligns the complex calculations of the two concepts to enable a comparison. The changing technology of search engines means that they are getting ever more complex – this article offers a snapshot of major developments.

Details

Internet Research, vol. 17 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 August 2002

François Bry and Michael Kraus

Abstract

While the World Wide Web (WWW or Web) is steadily expanding, electronic books (e‐books) remain a niche market. In this article, it is first postulated that specialized contents and device independence can make Web‐based e‐books compete with paper prints; and that adaptive features that can be implemented by client‐side computing are relevant for e‐books, while more complex forms of adaptation requiring server‐side computations are not. Then, enhancements of the WWW standards (specifically of XML, XHTML, of the style‐sheet languages CSS and XSL, and of the linking language XLink) are proposed for a better support of client‐side adaptation and device independent content modeling. Finally, advanced browsing functionalities desirable for e‐books as well as their implementation in the WWW context are described.

Details

The Electronic Library, vol. 20 no. 4
Type: Research Article
ISSN: 0264-0473
