Search results

1–10 of over 158,000
Article
Publication date: 30 July 2021

Tanvi Garg, Navid Kagalwalla, Shubha Puthran, Prathamesh Churi and Ambika Pawar


Abstract

Purpose

This paper aims to design a secure and seamless system that ensures quick sharing of health-care data in order to improve the privacy of sensitive health-care data, the efficiency of health-care infrastructure and the treatment given to patients, and to encourage the development of new health-care technologies by researchers. These objectives are achieved through the proposed system, a “privacy-aware data tagging system using role-based access control for health-care data.”

Design/methodology/approach

Health-care data must be stored and shared in such a manner that the privacy of the patient is maintained. The proposed method uses data tags to classify health-care data into color codes that signify the sensitivity of the data. It uses the ARX tool to anonymize raw health-care data and role-based access control to ensure that only authenticated persons can access the data.
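
As a rough illustration of this tag-then-gate pipeline, here is a minimal Python sketch. The tag names, ICD-10 prefixes and role table are hypothetical; the paper's actual tagging algorithm and ARX anonymization configuration are not reproduced here.

```python
# Hypothetical sketch: ICD-10 code prefixes map to sensitivity tags (color
# codes), and a role table decides who may read records carrying each tag.

# Sensitivity tags keyed by ICD-10 prefix (illustrative values only).
SENSITIVITY_BY_ICD10_PREFIX = {
    "F": "red",     # mental and behavioural disorders -> most sensitive
    "B20": "red",   # HIV disease
    "O": "orange",  # pregnancy and childbirth
    "J": "green",   # respiratory infections -> least sensitive
}

# Which tags each role may access (illustrative policy).
ALLOWED_TAGS_BY_ROLE = {
    "physician": {"red", "orange", "green"},
    "nurse": {"orange", "green"},
    "researcher": {"green"},  # researchers see only heavily anonymized data
}

def tag_record(icd10_code: str) -> str:
    """Return the sensitivity tag for an ICD-10 code (longest prefix wins)."""
    for prefix in sorted(SENSITIVITY_BY_ICD10_PREFIX, key=len, reverse=True):
        if icd10_code.startswith(prefix):
            return SENSITIVITY_BY_ICD10_PREFIX[prefix]
    return "green"

def can_access(role: str, icd10_code: str) -> bool:
    """Role-based gate: a role may read a record only if its tag is allowed."""
    return tag_record(icd10_code) in ALLOWED_TAGS_BY_ROLE.get(role, set())

print(can_access("researcher", "F32.9"))  # False: red-tagged record
print(can_access("physician", "F32.9"))   # True
```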

Findings

The system integrates the tagging and anonymizing of health-care data coupled with robust access control policies into one architecture. The paper discusses the proposed architecture, describes the algorithm used to tag health-care data, analyzes the metrics of the anonymized data against various attacks and devises a mathematical model for role-based access control.
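
The abstract does not reproduce the mathematical model itself; a flat role-based access control model is conventionally formalized along the following lines (a sketch after the standard RBAC literature, not necessarily the authors' formulation):

```latex
% Core sets and relations of a flat RBAC model (illustrative, after the
% standard RBAC literature, e.g. Sandhu et al.)
\begin{align*}
  U, R, P &\;\text{: sets of users, roles and permissions} \\
  UA &\subseteq U \times R \quad \text{(user--role assignment)} \\
  PA &\subseteq R \times P \quad \text{(role--permission assignment)} \\
  \mathit{perms}(u) &= \{\, p \in P \mid \exists r \in R:\ (u,r) \in UA \wedge (r,p) \in PA \,\}
\end{align*}
```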

Originality/value

The paper integrates three disparate topics – data tagging, anonymization and role-based access policies – into one seamless architecture. Codifying health-care data into different tags based on International Classification of Diseases 10th Revision (ICD-10) codes and applying varying levels of anonymization for each data tag, along with role-based access policies, is unique to the system and also ensures the usability of the data for research.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284


Article
Publication date: 15 January 2019

Spyros E. Polykalas and George N. Prezerakos


Abstract

Purpose

Mobile devices (smartphones, tablets, etc.) have become the de facto means of accessing the internet. While traditional Web browsing is still quite popular, significant interaction takes place via native mobile apps that can be downloaded either freely or at a cost. This has opened the door to a number of issues related to privacy protection, since the smartphone stores and processes personal data. The purpose of this paper is to examine the extent of access to personal data required by the most popular mobile apps available in the Google Play store. In addition, it examines whether the relevant procedures are in accordance with the provisions of the new EU General Data Protection Regulation (GDPR).

Design/methodology/approach

The paper examines more than a thousand mobile apps, available from the Google Play store, with respect to the extent of their requests for access to personal data. In particular, for each category in the Google Play store, the most popular apps were examined, both free and paid. In addition, the permissions required by free and paid mobile apps are compared. Furthermore, a correlation analysis is carried out, aiming to reveal any correlation between the extent of required access to personal data and the popularity and rating of each mobile app.
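
The free-versus-paid comparison and the correlation analysis described here can be sketched roughly as follows. The field names and sample values are hypothetical; the authors' actual dataset and statistical procedure are not given in the abstract.

```python
# Hypothetical sketch: compare permission counts of free vs paid apps and
# correlate permission count with popularity (installs) and rating.
from statistics import mean
from scipy.stats import pearsonr  # or spearmanr for a rank correlation

apps = [
    # (name, is_free, permissions_requested, installs, rating) -- made-up data
    ("app_a", True,  14, 5_000_000, 4.1),
    ("app_b", True,  11, 1_000_000, 3.9),
    ("app_c", False,  6,   100_000, 4.5),
    ("app_d", False,  4,    50_000, 4.4),
]

free_counts = [p for _, free, p, _, _ in apps if free]
paid_counts = [p for _, free, p, _, _ in apps if not free]
print(f"mean permissions: free={mean(free_counts):.1f}, paid={mean(paid_counts):.1f}")

# Correlation between extent of data access and popularity/rating.
perms = [p for _, _, p, _, _ in apps]
installs = [i for _, _, _, i, _ in apps]
ratings = [r for _, _, _, _, r in apps]
print("perms vs installs:", pearsonr(perms, installs))
print("perms vs rating:  ", pearsonr(perms, ratings))
```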

Findings

The findings of this paper suggest that the majority of the examined mobile apps require access to personal data to a great extent. In addition, free mobile apps request access to personal data to a greater extent than paid apps do, which strongly indicates that the business model of free mobile apps is based on the exploitation of personal data. The most popular types of access permissions are identified for both free and paid apps. In addition, important questions are raised in relation to user awareness and behavior, data minimization and purpose limitation for free and paid mobile apps.

Originality/value

In this study, the process and extent of access to personal data through mobile apps are analyzed. Although several studies have analyzed related issues in the past, the originality of this research rests mainly on the following facts: first, this work takes into account the recent EU regulation on personal data (the GDPR); second, the authors analyzed a high number of the most popular mobile apps (more than a thousand); and third, the authors compare and analyze the different approaches followed by free and paid mobile apps.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 2
Type: Research Article
ISSN: 2398-5038


Article
Publication date: 1 February 1992

Alice Robbin


Abstract

The purpose of this article is to contribute to our stock of knowledge about who uses networks, how they are used, and what contribution the networks make to advancing the scientific enterprise. Between 1985 and 1990, the Survey of Income and Program Participation (SIPP) ACCESS data facility at the University of Wisconsin‐Madison provided social scientists in the United States and elsewhere with access through the electronic networks to complex and dynamic statistical data; the 1984 SIPP is a longitudinal panel survey designed to examine economic well‐being in the United States. This article describes the conceptual framework and design of SIPP ACCESS; examines how network users communicated with the SIPP ACCESS project staff about the SIPP data; and evaluates one outcome derived from the communications, the improvement of the quality of the SIPP data. The direct and indirect benefits to social scientists of electronic networks are discussed. The author concludes with a series of policy recommendations that link both the SIPP ACCESS research network experience and an assessment of our inadequate knowledge base for evaluating how electronic networks advance the scientific enterprise to the policy initiatives of the High Performance Computing Act of 1991 (P.L. 102–194) and the related extensive recommendations embodied in Grand Challenges 1993: High Performance Computing and Communications (the FY 1993 U.S. Research and Development Program).

Details

Internet Research, vol. 2 no. 2
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 January 1995


Abstract

Each of the agencies participating in GCDIS will play a role appropriate to its agency mission and consistent with the funds available to it. Descriptions of each agency's resources follow. Each agency will implement the GCDIS at its own pace.

Details

Library Hi Tech, vol. 13 no. 1/2
Type: Research Article
ISSN: 0737-8831

Book part
Publication date: 28 June 1991

Karen Horny


Details

Library Technical Services: Operations and Management
Type: Book
ISBN: 978-1-84950-795-0

Article
Publication date: 20 June 2016

Sarath Tomy and Eric Pardede


Abstract

Purpose

The purpose of this paper is to analyse the problem of privacy disclosure by third-party applications in online social networks (OSNs) through Facebook, investigate the limitations of existing models in protecting users' privacy and propose a permission-based access control (PBAC) model, which gives users complete control over their data when accessing third-party applications.

Design/methodology/approach

A practical model based on the defined permission policies is proposed to manage users' information accessed by third-party applications and to improve user awareness in sharing sensitive information with them. This model is a combination of interfaces and internal mechanisms which can be adopted, without major structural changes, by any OSN whose architecture is similar to Facebook's in managing third-party applications. The model is implemented as a Web interface that connects with the Facebook application programming interface, and its efficacy is evaluated using test cases.
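
A minimal sketch of the permission-based gating idea follows. The app names and data categories are hypothetical, and the implemented model's Facebook-specific interfaces are considerably more involved.

```python
# Hypothetical sketch of permission-based access control (PBAC) for
# third-party apps: each data request is filtered against the user's
# explicit per-app grants, so only granted categories are ever released.
from dataclasses import dataclass, field

@dataclass
class UserGrants:
    """Data categories a user has explicitly granted to each app."""
    granted: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, app: str, category: str) -> None:
        self.granted.setdefault(app, set()).add(category)

    def filter_request(self, app: str, requested: set[str]) -> set[str]:
        """Release only the intersection of requested and granted categories."""
        allowed = requested & self.granted.get(app, set())
        denied = requested - allowed
        if denied:
            # Surfacing withheld categories raises user awareness,
            # as the model intends.
            print(f"{app}: withheld {sorted(denied)}")
        return allowed

grants = UserGrants()
grants.grant("quiz_app", "public_profile")
print(grants.filter_request("quiz_app", {"public_profile", "friend_list", "email"}))
# -> withholds friend_list and email; releases only public_profile
```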

Findings

The results show that the PBAC model can facilitate user awareness of the privacy risks of data passed on to third-party applications and allow users who are more concerned about their privacy to withhold such information from those applications.

Research limitations/implications

The study provides a basis for further research into protecting users' privacy in OSNs and thus avoiding the risks associated with its disclosure, thereby increasing users' trust in using OSNs.

Originality/value

The research has proven useful in improving user awareness of the risks associated with sharing private information on OSNs, and the practically implemented PBAC model guarantees users full privacy against unwanted disclosure of personal information to third-party applications.

Article
Publication date: 24 April 2020

Victoria L. Lemieux, Chris Rowell, Marc-David L. Seidel and Carson C. Woo


Abstract

Purpose

Distributed trust technologies, such as blockchain, propose to permit peer-to-peer transactions without trusted third parties. Yet not all implementations of such technologies fully decentralize. Information professionals make strategic choices about the level of decentralization when implementing such solutions, and many organizations are taking a hybrid (i.e. partially decentralized) approach to the implementation of distributed trust technologies. This paper conjectures that while hybrid approaches may resolve some challenges of decentralizing information governance, they also introduce others. To better understand these challenges, this paper aims first to elaborate a framework that conceptualizes a centralized–decentralized information governance continuum along three distinct dimensions: custody, ownership and right to access data. This paper then applies this framework to two illustrative blockchain case studies – a pilot Brazilian land transfer recording solution and a Canadian health data consent sharing project – to exemplify how the current transition state of blockchain pilots straddles both the old (centralized) and new (decentralized) worlds. Finally, this paper outlines the novel challenges that hybrid approaches introduce for information governance and what information professionals should do to navigate this thorny transition period. Counterintuitively, it may be much better for information professionals to embrace decentralization when implementing distributed trust technologies, as hybrid models could offer the worst of both the centralized and future decentralized worlds when consideration is given to the balance between information governance risks and new strategic business opportunities.
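
One way to make the three-dimensional continuum concrete is to score each dimension on a centralized-to-decentralized scale. The sketch below is a hypothetical illustration; the 0.0–1.0 scale and example scores are assumptions, not the authors' framework or data.

```python
# Hypothetical sketch: place a blockchain deployment on the paper's
# centralized-decentralized continuum along its three dimensions.
from dataclasses import dataclass

@dataclass
class GovernanceProfile:
    custody: float          # 0.0 fully centralized .. 1.0 fully decentralized
    ownership: float        # who legally owns the records and data
    right_to_access: float  # who may read the records and data

    def is_hybrid(self, lo: float = 0.2, hi: float = 0.8) -> bool:
        """A deployment is 'hybrid' if any dimension sits between the extremes."""
        return any(lo < v < hi
                   for v in (self.custody, self.ownership, self.right_to_access))

# e.g. a pilot that decentralizes access rights but keeps custody centralized:
pilot = GovernanceProfile(custody=0.1, ownership=0.5, right_to_access=0.9)
print(pilot.is_hybrid())  # True: straddles the old and new worlds
```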

Design/methodology/approach

This paper illustrates how blockchain is transforming organizations and societies by highlighting new strategic information governance challenges using our original analytic framework in two detailed blockchain case studies – a pilot solution in Brazil to record land transfers (Flores et al., 2018) and another in Canada to handle health data sharing consent (Hofman et al., 2018). The two case studies represent research output of the first phase of an ongoing multidisciplinary research project focused on gaining an understanding of how blockchain technology generates organizational, societal and data transformations and challenges. The analytic framework was developed inductively from a thematic synthesis of the findings of the case studies conducted under the auspices of this research project. Each case discussed in detail in this paper was chosen from among the project's case studies, as it represents a desire to move away from the old centralized world of information governance to a new decentralized one. However, each case study also represents and embodies a transition state between the old and new worlds and highlights many of the associated strategic information governance challenges.

Findings

Decentralization continues to disrupt organizations and societies. New emerging distributed trust technologies such as blockchain break the old rules with respect to the trust and authority structures of organizations and how records and data are created, managed and used. While governments and businesses around the world clearly see value in this technology to drive business efficiency, open up new market opportunities and create new forms of value, these advantages will not come without challenges. For information executives, then, the question is not if they will be disrupted, but how. Understanding the how, as discussed in this paper, provides the business know-how to leverage the innovation and transformation that decentralized trust technology enables before being leapfrogged by another organization. It requires a change of mindset to see an organization as one part of a broader ecosystem, and for those responsible for strategic information governance who successfully do so, this paper views decentralization as a strategic opportunity to design the future instead of being disrupted by it.

Research limitations/implications

This paper presents a novel analytic framework for strategic information governance challenges as we transition from a traditional world of centralized records and information management to a new decentralized world. This paper analyzes these transitions and their implications for strategic information governance along three trajectories: custody, ownership and right to access records and data, illustrating with reference to our case studies.

Practical implications

This paper predicts a large number of organizations will miss the opportunities of the new decentralized trust world, resulting in a rather major churning of organizations, as those who successfully participate in building the new model will outcompete those stuck in the old world or the extremely problematic hybrid transition state. Counterintuitively, this paper argues that it may be much less complex for information executives to embrace decentralization as fast as they can, as in some ways the hybrid model seems to offer the worst of both the centralized and future decentralized worlds with respect to information governance risks.

Social implications

This paper anticipates broader societal consequences of the predicted organization churn, in particular with respect to uncertainty about the evidence that records provide for public accountability and contractual rights and entitlements.

Originality/value

Decentralized trust technologies, such as blockchain, permit peer-to-peer transactions without trusted third parties. Of course, such radical shifts do not happen overnight. The current transition state of blockchain pilots straddles both the old and new worlds. This paper presents a theoretical framework categorizing strategic information governance challenges on a spectrum of centralized to decentralized in three primary areas: custody, ownership and right to access records and data. To illustrate how decentralized trust is transforming organizations and societies, this paper presents these strategic information governance challenges in two blockchain case studies – a pilot Brazilian land transfer recording solution and a Canadian health data consent sharing project. Drawing on the theoretical framework and case studies, this paper outlines what information executives should do to navigate this thorny transition period.

Details

Records Management Journal, vol. 30 no. 3
Type: Research Article
ISSN: 0956-5698


Article
Publication date: 1 January 1994

Wendy Treadwell and James A. Cogswell


Abstract

The University of Minnesota Libraries have established a full‐service information center to facilitate end‐user access to machine‐readable datafiles, particularly U.S. government datafiles such as the Census. The Machine Readable Data Center (MRDC), funded through a three‐year, $240,000 grant from the College Library Technology and Cooperation Grants Program (HEA Title II‐D), presents an alternative, library‐centered model for providing students, faculty, and independent researchers with direct access to machine‐readable data.

Details

Library Hi Tech, vol. 12 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 9 March 2015

Eugene Ferry, John O Raw and Kevin Curran


Abstract

Purpose

The interoperability of cloud data between web applications and mobile devices has vastly improved over recent years. The popularity of social media, smartphones and cloud-based web services has contributed to the level of integration that can be achieved between applications. This paper investigates the potential security issues of OAuth, an authorisation framework for granting third-party applications revocable access to user data. OAuth has rapidly become an interim de facto standard for protecting access to web API data, and vendors implemented OAuth before the open standard was officially published. To evaluate whether the OAuth 2.0 specification is truly ready for industry application, an entire OAuth client-server environment was developed and validated against the specification's threat model. The research also included analysing the security features of several popular OAuth-integrated websites and comparing them against the threat model. High-impact exploits leading to account hijacking were identified at a number of major online publications. It is hypothesised that the OAuth 2.0 specification can be a secure authorisation mechanism when implemented correctly.

Design/methodology/approach

To analyse the security of OAuth implementations in industry, a list of the 50 most popular websites in Ireland was retrieved from the statistical website Alexa (Noureddine and Bashroush, 2011). Each site was analysed to identify whether it utilised OAuth; of the 50 sites, 21 were identified as supporting OAuth. Each vulnerability in the threat model was then tested against each OAuth-enabled site. To test the robustness of the OAuth framework, an entire OAuth environment was required. The proposed solution is composed of three parts: a client application, an authorisation server and a resource server. The client application needed to consume OAuth-enabled services. The authorisation server had to manage access to the resource server. The resource server had to expose data from the database based on the authorisation the user was given by the authorisation server. It was decided that the client application would consume emails from Google’s Gmail API. The authorisation and resource servers were modelled around a basic task-tracking web application, and the client application also consumed task data from the developed resource server. The client application also supports Single Sign-On for Google and Facebook, as well as a developed identity provider, “MyTasks”. The authorisation server delegated authorisation to the client application and stored cryptographic information for each access grant. The resource server validated the supplied access token via public-key cryptography and returned the requested data.
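
The resource-server side of this setup can be sketched as follows: validate a bearer access token with public-key cryptography before exposing any data. The sketch uses the PyJWT library; the key file, issuer and audience values are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a resource server validating an OAuth access token
# (here a JWT signed by the authorisation server) before returning data.
import jwt  # pip install pyjwt[crypto]

def load_public_key() -> str:
    # The authorisation server publishes the key used to sign access grants;
    # here we simply read it from a local PEM file (assumed path).
    with open("authz_server_public.pem") as f:
        return f.read()

def validate_access_token(token: str) -> dict:
    """Verify signature, expiry, issuer and audience; return the claims."""
    return jwt.decode(
        token,
        key=load_public_key(),
        algorithms=["RS256"],
        issuer="https://authz.mytasks.example",  # hypothetical issuer
        audience="mytasks-resource-server",      # hypothetical audience
    )

def handle_api_request(authorization_header: str) -> dict:
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer":
        raise PermissionError("expected a Bearer token")
    # Raises jwt.InvalidTokenError on a bad signature, expiry, issuer, etc.
    claims = validate_access_token(token)
    # Data is exposed only after the token has been validated.
    return {"tasks": [], "subject": claims["sub"]}
```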

Findings

Two of the 21 sites were found to be susceptible to some form of attack, meaning that roughly one in ten was vulnerable. Nine of the world’s 50 most popular sites (18 per cent) were among the 21 OAuth-enabled sites. The OAuth 2.0 specification is still very much in its infancy, but when implemented correctly, it can provide a relatively secure and interoperable authentication delegation mechanism. The IETF is currently addressing issues and expansions in its working drafts. Once a strict level of conformity is achieved between vendors and vulnerabilities are mitigated, it is likely that the framework will change the way we access data on the web and other devices.

Originality/value

OAuth is flexible, in that it offers extensions to support varying situations and existing technologies. A disadvantage of this flexibility is that new extensions typically bring new security exploits. Members of the IETF OAuth Working Group are constantly refining the draft specifications and identifying new threats to the expanding functionality. OAuth provides a flexible authentication mechanism to protect and delegate access to APIs. It solves the problem of password re-use across multiple accounts and stops users from having to disclose their credentials to third parties. Filtering access to information by scope and giving users the option to revoke access at any point puts users in control of their data. OAuth does raise security concerns, such as defying phishing education, but there are always going to be security issues with any authentication technology. Although several high-impact vulnerabilities were identified in industry, the developed solution supports the hypothesis that a secure OAuth environment can be built when the specification is implemented correctly. Developers must conform to the defined specification and are responsible for validating their implementation against the given threat model. OAuth is an evolving authorisation framework; it is still in its infancy, and much work needs to be done in the specification to achieve stricter validation and vendor conformity. Vendor implementations need to become better aligned in order to provide a rich and truly interoperable authorisation mechanism. Once these issues are resolved, OAuth will be on track to become the definitive authentication standard on the web.

Details

Information & Computer Security, vol. 23 no. 1
Type: Research Article
ISSN: 2056-4961


Article
Publication date: 1 August 2000

Nijaz Bajgoric


Abstract

Information agility, or informational efficiency, is the major prerequisite for agile management and means eliminating inefficiencies in accessing, exchanging and disseminating all kinds of information. This paper presents a framework for implementing Web technology to enhance information access for agile management. Web-to-host access tools, a specific subset of Web technology, are used to improve and ease access to several types of information, such as legacy data, messaging systems, electronic documents and business intelligence.

Details

International Journal of Agile Management Systems, vol. 2 no. 2
Type: Research Article
ISSN: 1465-4652

