Search results

1 – 7 of 7
Article
Publication date: 1 January 1996

Bracha Shapira, Peretz Shoval, Adi Raveh and Uri Hanani

Abstract

Hypertext users often experience the ‘lost in hyperspace’ problem. This study suggests a solution that restricts the amount of information made available to the user, thus allowing improved hypertext browsing. An algorithm calculates the set of hypertext nodes most relevant to the user, utilising the user profile and a data clustering technique. The result is an optimal cluster of relevant data items, custom‐tailored to each user's needs.

Details

Online and CD-ROM Review, vol. 20 no. 1
Type: Research Article
ISSN: 1353-2642

Article
Publication date: 1 November 2006

Yuval Elovici, Bracha Shapira and Adlay Meshiach

Abstract

Purpose

The purpose of this paper is to prove the ability of PRivAte Web (PRAW) – a system for private web browsing – to withstand possible attacks.

Design/methodology/approach

Attacks on the system were simulated by manipulating system variables. A privacy measure was defined to evaluate the system's capability to withstand the attacks, and the results were analysed.

Findings

It was shown that, even if the attack is optimised to provide the attacker's highest utility, the similarity between the user profile and the approximated profile remains low and does not enable an eavesdropper to derive an accurate estimation of the user profile.

Research limitations/implications

One limitation is the “cold start” problem – in the current version, an observer might detect the first transaction, which is always a real user transaction. As a remedy, the first transaction will be randomly delayed and a random number of fake transactions played before the real one (according to Tr). Another limitation is that PRAW supports only link browsing originating in search engine interactions, since this is the most common interaction on the web. It should be extended to conceal browsing to links originating in the “Favourites” list, which users tend to browse regularly (even a few times a day) for professional or personal reasons.
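The cold-start remedy described above can be sketched as follows. This is a hypothetical illustration, not the actual PRAW implementation: the function name and the bound on the number of fake transactions derived from Tr are assumptions.

```python
import random

def startup_sequence(real_transaction, tr=3, rng=None):
    """Return the opening transaction stream: a random number of fake
    transactions (bounded here by the Tr parameter, an assumption)
    played before the real one, so an observer cannot tell which of
    the opening transactions is the genuine user transaction."""
    rng = rng or random.Random()
    n_fakes = rng.randint(1, tr)              # random count of fakes
    stream = [f"fake-{i}" for i in range(n_fakes)]
    stream.append(real_transaction)           # real transaction is delayed
    return stream

print(startup_sequence("real-query", tr=3, rng=random.Random(1)))
```

In a running system the fake transactions would also be spaced out by random delays; the sketch only shows the ordering.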

Practical implications

PRAW is feasible and preserves users' privacy during web browsing. It is now undergoing commercialisation to become an off‐the‐shelf tool for privacy preservation.

Originality/value

The paper presents a practical statistical method for privacy preservation and proves that it withstands possible attacks. Methods usually proposed for this problem are not statistical but cryptography‐oriented, and are too expensive in processing time to be practical.

Details

Online Information Review, vol. 30 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 September 2005

Yuval Elovici, Chanan Glezer and Bracha Shapira

Abstract

Purpose

To propose a model of a privacy‐enhanced catalogue search system (PECSS) in an attempt to address privacy threats to consumers, who search for products and services on the world wide web.

Design/methodology/approach

The model extends an agent‐based architecture for electronic catalogue mediation by supplementing it with a privacy enhancement mechanism. This mechanism introduces fake queries into the original stream of user queries, in an attempt to reduce the similarity between the actual interests of users (“internal user profile”) and the interests as observed by potential eavesdroppers on the web (“external user profile”). A prototype was constructed to demonstrate the feasibility and effectiveness of the model.
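The fake-query mechanism described above can be sketched as follows. The glossary, function names and mixing scheme are illustrative assumptions, not the actual PECSS implementation.

```python
import random

# Hypothetical glossary of fake queries (an assumption for illustration).
FAKE_GLOSSARY = ["garden tools", "board games", "office chairs",
                 "travel mugs", "desk lamps", "yoga mats"]

def mix_stream(user_queries, fakes_per_query=5, rng=None):
    """Interleave each real query with fake ones drawn from a glossary,
    so the 'external user profile' an eavesdropper can observe diverges
    from the 'internal user profile' of actual interests."""
    rng = rng or random.Random()
    stream = []
    for q in user_queries:
        batch = [q] + [rng.choice(FAKE_GLOSSARY)
                       for _ in range(fakes_per_query)]
        rng.shuffle(batch)        # hide the real query's position
        stream.extend(batch)
    return stream

stream = mix_stream(["hiking boots", "tents"], fakes_per_query=5,
                    rng=random.Random(7))
print(len(stream))  # 12 items: 2 real queries plus 10 fakes
```

With five fakes per real query (the ratio the paper's evaluation found most effective), an eavesdropper sees six queries for every genuine one.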

Findings

The evaluation of the model indicates that, by generating five fake queries for each original user query, the user's profile is hidden most effectively from any potential eavesdropper. Future research is needed to identify the optimal glossary of fake queries for various clients. The model should also be tested against various attacks perpetrated against the mixed stream of original and fake queries (e.g. statistical clustering).

Research limitations/implications

The model's feasibility was evaluated through a prototype. It was not empirically tested against various statistical methods used by intruders to reveal the original queries.

Practical implications

The model offers a useful architecture for electronic commerce providers, internet service providers (ISPs) and individual clients who are concerned about their privacy and wish to minimise their dependence on third‐party security providers.

Originality/value

The contribution of the PECSS model stems from the fact that, as the internet gradually transforms into a non‐free service, anonymous browsing can no longer be employed to protect consumers' privacy, and other approaches should therefore be explored. Moreover, unlike other approaches, the model does not rely on the honesty of third‐party mediators and proxies, which are also exposed to the interests of the client. In addition, the proposed model is scalable, as it is installed on the user's computer.

Details

Internet Research, vol. 15 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 28 September 2010

Veronica Maidel, Peretz Shoval, Bracha Shapira and Meirav Taieb‐Maimon

Abstract

Purpose

The purpose of this paper is to describe and evaluate a new ontological content‐based filtering method for ranking the relevance of news items for readers. The method has been implemented in ePaper, a personalised electronic newspaper prototype system. The method utilises a hierarchical ontology of news: it considers common and related concepts appearing in a user's profile on the one hand, and in a news item's profile on the other, and measures the “hierarchical distances” between these concepts. On that basis it computes the similarity between item and user profiles and rank‐orders the news items according to their relevance to each user.
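Scoring items against a user profile over a hierarchical ontology can be sketched as follows. The toy ontology, the distance-based decay weighting and all names are assumptions for illustration, not the ePaper method itself.

```python
# A small hierarchical ontology: child -> parent (None marks the root).
ONTOLOGY = {
    "sports": None, "football": "sports", "tennis": "sports",
    "world cup": "football", "wimbledon": "tennis",
}

def path_to_root(concept):
    """List of concepts from a node up to the ontology root."""
    path = []
    while concept is not None:
        path.append(concept)
        concept = ONTOLOGY[concept]
    return path

def hier_distance(a, b):
    """Edges between two concepts via their lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, c in enumerate(pa):
        if c in pb:
            return i + pb.index(c)
    return None  # no common ancestor

def similarity(user_profile, item_profile, decay=0.5):
    """Sum a weight for each user/item concept pair that decays with
    hierarchical distance; identical concepts contribute 1.0, related
    concepts contribute less the farther apart they are."""
    score = 0.0
    for u in user_profile:
        for v in item_profile:
            d = hier_distance(u, v)
            if d is not None:
                score += decay ** d
    return score

print(similarity({"football"}, {"world cup"}))  # 0.5: one edge apart
```

Items are then rank-ordered for each user by this similarity score; the actual method learns weights for the hierarchical similarity parameters rather than using a fixed decay.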

Design/methodology/approach

The paper evaluates the performance of the filtering method in an experimental setting. Each participant read news items obtained from an electronic newspaper and rated their relevance. Independently, the filtering method was applied to the same items and generated, for each participant, a list of news items ranked according to relevance.

Findings

The results of the evaluations revealed that the filtering algorithm, which takes into consideration hierarchically related concepts, yielded significantly better results than a filtering method that takes only common concepts into consideration. The paper determined the best set of values (weights) for the hierarchical similarity parameters. It was also found that the quality of filtering improves as the number of items used for implicit updates of the profile increases, and that even with implicitly updated profiles it is better to start with user‐defined profiles.

Originality/value

The proposed content‐based filtering method can be used to filter not only news items but items from any domain, not only with a three‐level hierarchical ontology but with an ontology of any depth, and in any language.

Details

Online Information Review, vol. 34 no. 5
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 25 September 2009

Alexander Binun, Bracha Shapira and Yuval Elovici

Abstract

Purpose

The purpose of this paper is to present an extension to a framework based on the information structure (IS) model for combining information filtering (IF) results. The main goal of the framework is to combine the results of the different IF systems so as to maximise the expected payoff (EP) to the user. In this paper we compare three different approaches to tuning the relevance thresholds of individual IF systems that are being combined in order to maximise the EP to the user. In the first approach we set the same threshold for each of the IF systems. In the second approach the threshold of each IF system is tuned independently to maximise its own EP (“local optimisation”). In the third approach the thresholds of the IF systems are jointly tuned to maximise the EP of the combined system (“global optimisation”).
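The contrast between local and global threshold tuning can be sketched on toy data. The payoff values, delivery rule (deliver when either system's score exceeds its threshold) and all names are assumptions for illustration, not the paper's actual IS-based framework.

```python
import itertools

# Toy corpus: (score from system 1, score from system 2, is_relevant).
docs = [
    (0.9, 0.2, True), (0.8, 0.7, True), (0.3, 0.9, True),
    (0.7, 0.1, False), (0.2, 0.8, False), (0.1, 0.2, False),
]
GRID = [i / 10 for i in range(11)]   # candidate thresholds

def payoff(deliver, relevant):
    """+1 for a delivered relevant doc, -1 for a delivered irrelevant one."""
    return (1 if relevant else -1) if deliver else 0

def ep_single(system, thr):
    """Expected payoff of one IF system at a given threshold."""
    return sum(payoff(d[system] > thr, d[2]) for d in docs)

def ep_combined(t1, t2):
    """EP of the combined system: deliver if either system fires."""
    return sum(payoff(d[0] > t1 or d[1] > t2, d[2]) for d in docs)

# Local optimisation: each threshold maximises its own system's EP.
t1_loc = max(GRID, key=lambda t: ep_single(0, t))
t2_loc = max(GRID, key=lambda t: ep_single(1, t))

# Global optimisation: thresholds tuned jointly on the combined EP.
t1_glob, t2_glob = max(itertools.product(GRID, GRID),
                       key=lambda p: ep_combined(*p))

print(ep_combined(t1_loc, t2_loc) <= ep_combined(t1_glob, t2_glob))  # True
```

Because the joint grid search includes the locally tuned pair, global optimisation can never do worse than local optimisation, which mirrors the paper's finding that the third approach always outperforms the first two.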

Design/methodology/approach

An empirical evaluation is conducted to examine the performance of each approach using two IF systems based on somewhat different filtering algorithms (TFIDF, OKAPI). Experiments are run using the TREC3, TREC6, and TREC7 test collections.

Findings

The experiments reveal that, as expected, the third approach always outperforms the first and the second, and that for some user profiles, the difference is significant. However, operational goals argue against global optimisation, and the costs of meeting these operational goals are discussed.

Research limitations/implications

One limitation is the assumption of independence between the IF systems: real‐life systems usually use similar algorithms, so dependency might occur. The approach should also be examined under the assumption of dependency between the systems.

Practical implications

The main practical implications of this study lie in the empirical proof that combining filtering systems improves filtering results, and in the findings about the optimal combination methods for different user profiles. Many filtering applications exist (e.g. spam filters, news personalisation systems) that can benefit from these findings.

Originality/value

The study presents and compares the contribution of three different methods of combining filtering systems to the improvement of filtering results. It empirically shows the benefits of each method and draws important conclusions about the combination of filtering systems.

Details

Online Information Review, vol. 33 no. 5
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 August 2005

Bracha Shapira, Meirav Taieb‐Maimon and Yael Nemeth

Abstract

Purpose

Query expansion and query limitation are two known techniques for assisting users to define efficient queries. The purpose of this article is to examine the effectiveness of the two methods.

Design/methodology/approach

The research entailed an objective and subjective evaluation of the effectiveness of automatic and interactive query expansion and of two query limit options. The evaluation included both lab simulations and large‐scale user studies. The objective aspects were evaluated in lab simulations with experts judging user performance. The subjective analysis was carried out by having the participants evaluate the quality of, and express their satisfaction with, the retrieval process and its results, thus employing perceived‐value analysis.

Findings

The main findings reveal a difference between the perceived and real values of these techniques. While users expressed their satisfaction with interactive query expansion and its performance, the real‐value analysis of their performance did not show any significant difference between the retrieval modes.

Originality/value

The article evaluates the objective and subjective effectiveness of automatic and interactive query expansion and two query limit options.

Details

Online Information Review, vol. 29 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 11 November 2020

Murad A. Mithani and Ipek Kocoglu

Abstract

Purpose

The proposed theoretical model offers a systematic approach to synthesize the fragmented research on organizational crisis, disasters and extreme events.

Design/methodology/approach

This paper offers a theoretical model of organizational responses to extreme threats.

Findings

The paper explains that organizations choose between hypervigilance (freeze), exit (flight), growth (fight) and dormancy (fright) when faced with extreme threats. The authors explain how the choice between these responses is informed by the interplay between slack and routines.

Research limitations/implications

The study’s theoretical model contributes by explaining the nature of organizational responses to extreme threats and how the two underlying mechanisms, slack and routines, determine heterogeneity between organizations.

Practical implications

The authors advance four key managerial considerations: the need to distinguish between discrete and chronic threats, the critical role of hypervigilance in the face of extreme threats, the distinction between resources and routines during threat mitigation, and the recognition that organizational exit may sometimes be the most effective means for survival.

Originality/value

The novelty of this paper pertains to the authors’ use of the comparative developmental approach to incorporate insights from the study of individual responses to life-threatening events to explain organizational responses to extreme threats.

Details

Management Decision, vol. 58 no. 10
Type: Research Article
ISSN: 0025-1747
