Search results
1 – 10 of over 23,000

Mortaza S. Bargh, Sunil Choenni and Ronald Meijer
Abstract
Purpose
Information dissemination has become a means of transparency for governments to enable the visions of e-government and smart government and, among other goals, to gain the trust of stakeholders such as citizens and enterprises. On the other hand, information dissemination may increase the chance of privacy breaches, which can undermine those stakeholders’ trust and thus the objectives of transparency. Moreover, fear of potential privacy breaches compels information disseminators to share minimal or no information. The purpose of this study is to address these competing demands of information dissemination, i.e. privacy versus transparency, when disseminating judicial information to gain (public) trust. Specifically, the main research questions are: What is the nature of the aforementioned “privacy–transparency” problem, and how can we approach and address this class of problems?
Design/methodology/approach
To address these questions, the authors carried out an explorative case study: they reconsidered and analyzed a number of information dissemination cases within their research center over the past 10 years, reflected upon the whole design research process, consulted peers by publishing a preliminary version of this contribution and embedded the work in an in-depth literature study on research methodologies, wicked problems and e-government topics.
Findings
The authors show that preserving privacy while disseminating information for transparency purposes is a typical wicked problem, propose an innovative designerly model called transitional action design research (TADR) to address this class of wicked problems and describe three artifacts that were designed, deployed as interventions and evaluated according to the TADR model in a judicial research organization.
Originality/value
Classifying the privacy–transparency problem in judicial settings as wicked is new, the proposed designerly model is innovative and the realized artifacts are deployed and still operational in a real setting.
Dijana Peras and Renata Mekovec
Abstract
Purpose
The purpose of this paper is to improve the understanding of cloud service users’ privacy concerns, which are anticipated to considerably hinder cloud service market growth. The researchers have explored privacy concerns from dimensions that were identified as relevant in the cloud context.
Design/methodology/approach
Content analysis was used to identify the privacy problems most often raised in previous cloud research. Multidimensional developmental theory (MDT) was used to build a conceptual model of cloud privacy concerns. A literature review was conducted to identify the privacy-related constructs used to measure privacy concerns in previous cloud research.
Findings
The paper provides systematization of recent cloud privacy research, proposal of a conceptual model of cloud privacy concerns, identification of measuring instruments that were used to measure privacy concerns in previous cloud research and identification of categories of problems that need to be addressed in future cloud research.
Originality/value
To the best of the authors’ knowledge, this paper is the first to identify the categories of privacy problems and dimensions that have not yet been measured in the cloud context. Their simultaneous examination could clarify the effects of different dimensions on the privacy concerns of cloud users. The conceptual model of cloud privacy concerns will allow cloud service providers to focus on the key cloud problems affecting users’ privacy concerns and to use the most appropriate privacy protection communication and preservation approaches.
Ruxia Ma, Xiaofeng Meng and Zhongyuan Wang
Abstract
Purpose
The Web is the largest repository of information. Personal information is usually scattered across various pages of different websites, and search engines have made it easier to find. An attacker may collect a user’s scattered information via search engines and infer private information from it. The authors call this kind of privacy attack a “Privacy Inference Attack via Search Engines”. The purpose of this paper is to provide a user‐side automatic detection service for detecting privacy leakage before personal information is published.
Design/methodology/approach
In this paper, the authors propose a user‐side automatic detection service. In this service, the authors construct a user information correlation (UICA) graph to model the associations between items of user information returned by search engines. The privacy inference attack is mapped to the decision problem of finding a privacy-inferring path with maximal probability in the UICA graph, which is proved to be a nondeterministic polynomial time (NP)‐complete problem by a two-step reduction. A Privacy Leakage Detection Probability (PLD‐Probability) algorithm is proposed to find the privacy-inferring path: it combines two significant factors that influence the vertices’ probabilities in the UICA graph and uses a greedy algorithm to find the path.
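The greedy path search described above can be sketched as follows. This is a minimal illustration of the idea, not the authors’ actual PLD‐Probability algorithm: the graph layout, vertex names and edge probabilities are all hypothetical.

```python
# Hedged sketch: greedily extend a path from a published attribute toward a
# sensitive target, always taking the locally most probable unvisited edge.
# The path probability is the product of the edge probabilities taken.

def greedy_inference_path(graph, start, target):
    """Return (path, probability) from start to target, or (None, 0.0)."""
    path, prob, current = [start], 1.0, start
    visited = {start}
    while current != target:
        candidates = [(p, v) for v, p in graph.get(current, {}).items()
                      if v not in visited]
        if not candidates:
            return None, 0.0          # dead end: no inference path found
        p, nxt = max(candidates)      # greedy choice: strongest correlation
        path.append(nxt)
        prob *= p
        visited.add(nxt)
        current = nxt
    return path, prob

# Toy correlation graph: edge weights are illustrative association probabilities.
uica = {
    "name":     {"employer": 0.9, "city": 0.6},
    "employer": {"salary_band": 0.7},
    "city":     {"home_address": 0.4},
}
print(greedy_inference_path(uica, "name", "salary_band"))
```

A high-probability path from an innocuous attribute to a sensitive one flags a potential leak before publication; the greedy choice trades optimality for speed, which is the usual motivation given the NP-completeness of the exact problem.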
Findings
The authors reveal that privacy inference attacks via search engines are very serious in real life. In this paper, a user‐side automatic detection service is proposed to detect the risk of privacy inference. The authors conduct three kinds of experiments to evaluate the seriousness of the privacy leakage problem and the performance of the proposed methods. The results show that the algorithm underlying the service is reasonable and effective.
Originality/value
The paper introduces a new family of privacy attacks on the Web, privacy inference attacks via search engines, and presents a privacy inference model to describe the process and principles of such attacks. A user‐side automatic detection service is proposed to detect privacy inference before personal information is published. Within this service, the authors propose a Privacy Leakage Detection Probability (PLD‐Probability) algorithm. Extensive experiments show that these methods are reasonable and effective.
Zongda Wu, Shigen Shen, Huxiong Li, Haiping Zhou and Dongdong Zou
Abstract
Purpose
First, the authors analyze the key problems faced in protecting digital library readers' data privacy and behavior privacy. Second, the authors introduce the characteristics of the existing approaches to privacy protection and their application limitations in protecting readers' data privacy and behavior privacy. Lastly, the authors compare the advantages and disadvantages of each kind of existing approach in terms of security, efficiency, accuracy and practicality and analyze the challenges faced in protecting digital library readers' privacy.
Design/methodology/approach
In this paper, the authors review a number of research achievements relevant to privacy protection, analyze and evaluate their application limitations for reader privacy protection in a digital library and, consequently, establish the constraints that an ideal approach to library reader privacy protection should meet, so as to provide references for follow-up research on the problem.
Findings
As a result, the authors conclude that an ideal approach to reader privacy protection should comprehensively improve the security of all kinds of readers' private information on the untrusted server side, without changing the architecture, efficiency, accuracy or practicality of the digital library system.
Originality/value
Along with the rapid development of new network technologies, such as cloud computing, the server side of a digital library is becoming more and more untrustworthy, thereby posing a serious threat to the privacy of library readers. In fact, the problem of reader privacy has become one of the important obstacles to the further development and application of digital libraries.
Abstract
This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix Corporation will be analyzed, as well as its use in “Smart” video surveillance (CCTV) systems in city centers and airports. The ethical analysis will be based on a careful analysis of current facial recognition technology, of its use in Smart CCTV systems, and of the arguments used by proponents and opponents of such systems. It will be argued that Smart CCTV, which integrates video surveillance technology and biometric technology, faces ethical problems of error, function creep and privacy. In a concluding section on policy, it will be discussed whether such problems outweigh the security value of Smart CCTV in public places.
Abstract
Purpose
Ubiquitous computing and “big data” have been widely recognized as requiring new concepts of privacy and new mechanisms to protect it. While improved concepts of privacy have been suggested, the paper argues that people acting in full conformity with those privacy norms can still infringe the privacy of others in the context of ubiquitous computing and “big data”.
Design/methodology/approach
New threats to privacy are described. Helen Nissenbaum's concept of “privacy as contextual integrity” is reviewed concerning its capability to grasp these problems. The argument is based on the assumption that the technologies work and that persons are fully informed and capable of deciding according to advanced privacy considerations.
Findings
Big data and ubiquitous computing enable privacy threats for persons whose data are only indirectly involved and even for persons about whom no data have been collected and processed. Those new problems are intrinsic to the functionality of these new technologies and need to be addressed on a social and political level. Furthermore, a concept of data minimization in terms of the quality of the data is proposed.
Originality/value
The use of personal data as a threat to the privacy of others is established. This new perspective is used to reassess and recontextualize Helen Nissenbaum's concept of privacy. Data minimization in terms of quality of data is proposed as a new concept.
Abstract
Purpose
The purpose of this paper is to solve the problem of the information privacy and security of social users. Mobile internet and social networks are ever more deeply integrated into people’s daily lives; in particular, under the combined momentum of the Internet of Things and diversified personalized services, more and more of social users’ private information is exposed to the network environment, actively or unintentionally. In addition, the large amount of social network data not only brings more benefits to network application providers but also provides motivation for malicious attackers. Therefore, in the social network environment, research on the privacy protection of user information has great theoretical and practical significance.
Design/methodology/approach
In this study, based on social network analysis and combined with the attribute reduction idea of rough set theory, generalized reduction concepts based on multi-level rough sets were proposed from the perspectives of the positive region, information entropy and knowledge granularity of rough set theory. Furthermore, the hierarchical compatible granularity space of the original information system was traversed and the corresponding attribute values were coarsened. The selected test data sets were tested, and the experimental results were analyzed.
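The coarsening step described above, replacing quasi-identifier values with coarser ones until every value combination occurs at least k times, can be sketched in a few lines. The hierarchy (decade-wide age ranges), the records and k are illustrative assumptions, not the study's actual data or granularity space.

```python
# Hedged sketch of attribute-value coarsening toward k-anonymity:
# raw quasi-identifiers are generalized, then each combination is counted.

from collections import Counter

def coarsen_age(age):
    """Generalize an exact age to a decade-wide range, e.g. 23 -> '20-29'."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def is_k_anonymous(records, k):
    """True if every quasi-identifier combination occurs at least k times."""
    counts = Counter(tuple(r) for r in records)
    return all(c >= k for c in counts.values())

raw = [(23, "F"), (27, "F"), (45, "M"), (41, "M")]
coarse = [(coarsen_age(a), g) for a, g in raw]

print(is_k_anonymous(raw, 2))     # exact ages make each record unique
print(is_k_anonymous(coarse, 2))  # coarsened table satisfies 2-anonymity
```

In the multi-level rough set setting, the generalization hierarchy would come from the hierarchical granularity space rather than a fixed decade rule, but the publish-only-if-k-anonymous check is the same.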
Findings
The results showed that the algorithm can guarantee the anonymity requirements of data publishing and improve the effect of classification modeling on anonymized data in a social network environment.
Research limitations/implications
In testing and verifying the privacy protection algorithm and scheme, their efficiency needs to be evaluated at a larger data scale; however, the data in this study are not sufficient. In follow-up research, more data will be used for testing and verification.
Practical implications
In the context of social networks, the hierarchical structure of data is introduced into rough set theory as domain knowledge, by reference to the human granulation cognitive mechanism, and rough set modeling of complex hierarchical data is studied for the hierarchical data of decision tables. The theoretical results are applied to hierarchical decision rule mining and k-anonymous privacy-preserving data mining, which enriches the connotation of rough set theory and has important theoretical and practical significance for further promoting the application of this theory. In addition, combining secure multi-party computation with rough set attribute reduction, a privacy-preserving feature selection algorithm for multi-source decision tables is proposed, which solves the privacy protection problem of feature selection in a distributed environment. It provides an effective rough set feature selection method for privacy-preserving classification mining in a distributed environment, which has practical application value for promoting the development of privacy-preserving data mining.
Originality/value
In this study, the proposed algorithm and scheme can effectively protect the privacy of social network data, ensure the availability of the social network graph structure and meet the need for both protection and sharing of user attributes and relational data.
Zhaobin Meng, Yueheng Lu and Hongyue Duan
Abstract
Purpose
The purpose of this paper is to study two issues regarding blockchain crowdsourcing. The first is to design smart contracts with lower consumption to meet the needs of blockchain crowdsourcing services, together with better interaction modes that further reduce the cost of those services. The second is to design an effective privacy protection mechanism that protects user privacy while still providing high-quality crowdsourcing services, for location-sensitive multiskilled mobile space crowdsourcing scenarios and in the face of blockchain exposure issues.
Design/methodology/approach
This paper proposes a blockchain-based privacy-preserving crowdsourcing model for multiskill mobile spaces. The model in this paper uses the zero-knowledge proof method to make the requester believe that the user is within a certain location without the user providing specific location information, thereby protecting the user’s location information and other privacy. In addition, through off-chain calculation and on-chain verification methods, gas consumption is also optimized.
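The off-chain calculation / on-chain verification split mentioned above can be illustrated with a toy stand-in: the expensive work and the zero-knowledge machinery happen off-chain, while the chain only re-checks a cheap commitment. This is not the paper's contract code; the task, the aggregation and the hash commitment are illustrative assumptions.

```python
# Hedged sketch of the off-chain compute / on-chain verify pattern:
# heavy work off-chain, cheap commitment check on-chain.

import hashlib

def off_chain_compute(task_inputs):
    """Off-chain worker: do the (stand-in) heavy computation and commit to it."""
    result = sum(task_inputs)  # placeholder for the expensive crowdsourcing task
    commitment = hashlib.sha256(
        str((sorted(task_inputs), result)).encode()).hexdigest()
    return result, commitment

def on_chain_verify(task_inputs, claimed_result, commitment):
    """On-chain check: recompute only the cheap commitment, not the task."""
    expected = hashlib.sha256(
        str((sorted(task_inputs), claimed_result)).encode()).hexdigest()
    return expected == commitment

inputs = [3, 5, 7]
result, com = off_chain_compute(inputs)
print(on_chain_verify(inputs, result, com))       # honest worker passes
print(on_chain_verify(inputs, result + 1, com))   # forged result is rejected
```

In the actual model the commitment would be a zero-knowledge location proof verified by the smart contract, but the gas saving comes from the same asymmetry: verification on-chain is far cheaper than the computation it attests to.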
Findings
This study deployed the model on Ethereum for testing and found that the privacy protection is feasible and the gas optimization substantial.
Originality/value
This study designed a mobile space crowdsourcing model based on a zero-knowledge proof privacy protection mechanism and optimized its gas consumption.
Tore Hoel and Weiqin Chen
Abstract
Purpose
Privacy is a culturally universal process; however, in the era of Big Data privacy is handled very differently in different parts of the world. This is a challenge when designing tools and approaches for the use of Educational Big Data (EBD) and learning analytics (LA) in a global market. The purpose of this paper is to explore the concept of information privacy in a cross-cultural setting to define a common point of reference for privacy engineering.
Design/methodology/approach
The paper follows a conceptual exploration approach. Conceptual work on privacy in EBD and LA in China and the west is contrasted with the general discussion of privacy in a large corpus of literature and recent research. As much of the discourse on privacy has an American or European bias, intimate knowledge of Chinese education is used to test the concept of privacy and to drive the exploration of how information privacy is perceived in different cultural and educational settings.
Findings
The findings indicate that there are problems using privacy concepts found in European and North-American theories to inform privacy engineering for a cross-cultural market in the era of Big Data. Theories based on individualism and ideas of control of private information do not capture current global digital practice. The paper discusses how a contextual and culture-aware understanding of privacy could be developed to inform privacy engineering without letting go of universally shared values. The paper concludes with questions that need further research to fully understand information privacy in education.
Originality/value
As far as the authors know, this paper is the first attempt to discuss, from a comparative and cross-cultural perspective, information privacy in an educational context in the era of Big Data. The paper presents initial explorations of a problem that needs urgent attention if the good intentions of privacy-supportive educational technologies are to be turned into more than political slogans.
Abstract
Purpose
The purpose of this paper is to analyse the problem of privacy disclosure to third party applications in online social networks (OSNs) through Facebook, investigate the limitations of the existing models in protecting users’ privacy and propose a permission-based access control (PBAC) model, which gives users complete control over their data when accessing third party applications.
Design/methodology/approach
A practical model based on defined permission policies is proposed to manage the user information accessed by third party applications and to improve user awareness in sharing sensitive information with them. The model is a combination of interfaces and internal mechanisms which can be adopted, without much structural change, by any OSN with an architecture similar to Facebook's for managing third party applications. The model, implemented as a Web interface connected to the Facebook application programming interface, is evaluated for efficacy using test cases.
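At its core, a permission-based check of this kind grants a third party application access to an attribute only if the user has explicitly permitted it. The sketch below is a minimal stand-in for that idea; the app names, attribute names and policy layout are hypothetical, not the paper's actual PBAC schema.

```python
# Hedged sketch of a permission-based access check: an app may read an
# attribute only if the user's policy explicitly grants it (default deny).

user_policy = {
    "photo_app": {"profile_name", "photos"},
    "quiz_app":  {"profile_name"},
}

def app_can_read(policy, app, attribute):
    """Return True only if the user granted `attribute` to `app`."""
    return attribute in policy.get(app, set())

print(app_can_read(user_policy, "quiz_app", "profile_name"))  # granted
print(app_can_read(user_policy, "quiz_app", "photos"))        # denied: never granted
```

The default-deny stance, unknown apps and ungranted attributes both fail the check, is what distinguishes this from Facebook's historical all-or-nothing application consent.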
Findings
The results show that the PBAC model can make users aware of the privacy risks of data passed on to third party applications and help users who are more concerned about their privacy to withhold such information from those applications.
Research limitations/implications
The study provides a basis for further research on protecting users’ privacy in OSNs and thus avoiding the associated risks, thereby increasing users’ trust in using OSNs.
Originality/value
The research has proven useful in improving user awareness of the risks associated with sharing private information on OSNs, and the practically implemented PBAC model protects user privacy from unwanted disclosure of personal information to third party applications.