Search results

1 – 10 of 738
Open Access
Article
Publication date: 9 October 2023

Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng

Abstract

Purpose

Data protection requirements have increased heavily due to rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current surveys on databases and data security consider authorization and access control only in a very general way and do not address most of today's sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.

Design/methodology/approach

This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting from existing surveys of authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements that are generally applicable to any database model. The paper then discusses and compares current database models against these requirements.

Findings

As no existing survey considers requirements for authorization and access control across different database models, the authors define their own. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.

Originality/value

This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 11 August 2020

Pantelis Chasapis, Sarandis Mitropoulos and Christos Douligeris

Abstract

The continuous development of mobile platforms provides the opportunity to integrate and improve existing applications or to introduce new features that make life better. The purpose of this paper is to investigate the use of mobile platforms in cultural institutions (museums) and to present the design and implementation of a mobile application that satisfies various design criteria. Through this application, a visitor can navigate and take a virtual tour of the museum on a smartphone. In addition, the application includes features that make visiting the museum easier. First, we present the operating parameters and the aesthetic presentation of the application, which shape usability and ease of access through the interfaces of a smartphone. We then highlight the cloud features exploited in the application. Finally, an extended evaluation of the mobile application is presented, demonstrating its high applicability and user acceptability.

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2210-8327

Open Access
Article
Publication date: 21 February 2022

Héctor Rubén Morales, Marcela Porporato and Nicolas Epelbaum

Abstract

Purpose

The technical feasibility of using Benford's law to assist internal auditors in reviewing the integrity of high-volume data sets is analysed. This study explores whether Benford's distribution applies to the set of numbers represented by the quantity of records (size) of the tables that make up a state-owned enterprise's (SOE) enterprise resource planning (ERP) relational database. The use of Benford's law streamlines the search for possible abnormalities within the ERP system's data set, increasing the ability of internal audit functions (IAFs) to detect anomalies within the database. In the SOEs of emerging economies, where groups compete for power and resources, internal auditors are better off employing analytical tests to discharge their duties without getting involved in power struggles.

Design/methodology/approach

Records from eight databases of an SOE in Argentina are used to analyse the number of records in each table over periods of three to 12 years. The case develops, step by step, the application of Benford's law to test each ERP module's records using chi-squared (χ²) and mean absolute deviation (MAD) goodness-of-fit tests.
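
To make the test concrete, the following Python sketch compares the first-digit distribution of a set of table sizes against Benford's expected frequencies using the MAD statistic. The sample counts are invented, and the conformity cutoffs are the commonly cited Nigrini ranges rather than values from this paper; this is a minimal sketch, not the authors' implementation.

```python
import math

# Benford's law: P(d) = log10(1 + 1/d) for leading digit d = 1..9
EXPECTED = [math.log10(1 + 1 / d) for d in range(1, 10)]

def first_digit(n: int) -> int:
    return int(str(abs(n))[0])

def mad(record_counts):
    """Mean absolute deviation between the observed and expected
    first-digit proportions of a set of table sizes."""
    digits = [first_digit(n) for n in record_counts if n > 0]
    observed = [digits.count(d) / len(digits) for d in range(1, 10)]
    return sum(abs(o - e) for o, e in zip(observed, EXPECTED)) / 9

# A handful of invented table sizes; per the paper's findings, a real
# test needs at least ~350 tables per database to be effective.
counts = [1823, 104, 377, 29, 1250, 86, 4410, 193, 57, 312, 998, 15]
# Nigrini's commonly cited first-digit MAD cutoffs: <= 0.006 close
# conformity, <= 0.012 acceptable, <= 0.015 marginally acceptable.
print(f"MAD = {mad(counts):.4f}")
```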

Findings

Benford's law is an adequate tool for performing integrity tests of high-volume databases. A minimum of 350 tables within each database is required for the MAD test to be effective; this threshold is higher than the 67 reported in earlier studies. Robust results are obtained for the complete ERP system and for large modules; modules with fewer than 350 tables show low conformity with Benford's law.

Research limitations/implications

This study is not about detecting fraud; it aims to help internal auditors red-flag databases that need further attention, making the most of the limited resources available in SOEs. The contribution is a simple, cheap and useful quantitative tool that internal auditors in emerging economies can employ to perform a first scan of the data contained in relational databases.

Practical implications

This paper provides a tool to test whether large amounts of data behave as expected and, if not, to pinpoint them for future investigation. It offers tests and explanations of the tool's application so that internal auditors of SOEs in emerging economies can use it, particularly those facing divergent expectations from antagonistic, powerful interest groups.

Originality/value

This study demonstrates that even with the limited information technology tools available to internal auditors, there are simple and inexpensive tests to review the integrity of high-volume databases. It also extends the literature on high-volume database integrity tests and our knowledge of the IAF in civil law countries, particularly emerging economies in Latin America.

Details

Journal of Economics, Finance and Administrative Science, vol. 27 no. 53
Type: Research Article
ISSN: 2218-0648

Open Access
Article
Publication date: 10 August 2021

Tom A.E. Aben, Wendy van der Valk, Jens K. Roehrich and Kostas Selviaridis

Abstract

Purpose

Inter-organisational governance is an important enabler for information processing, particularly in relationships undergoing digital transformation (DT) where partners depend on each other for information in decision-making. Based on information processing theory (IPT), the authors theoretically and empirically investigate how governance mechanisms address information asymmetry (uncertainty and equivocality) arising in capturing, sharing and interpreting information generated by digital technologies.

Design/methodology/approach

IPT is applied to four cases of public–private relationships in the Dutch infrastructure sector that aim to enhance the quantity and quality of information-based decision-making by implementing digital technologies. The investigated relationships are characterised by differing degrees and types of information uncertainty and equivocality. The authors build on rich data sets including archival data, observations, contract documents and interviews.

Findings

Addressing information uncertainty requires invoking contractual control and coordination. Contract clauses should be precise and incentive schemes functional in terms of information requirements. Information equivocality is best addressed by using relational governance. Identifying information requirements and reducing information uncertainty are a prerequisite for the transformation activities that organisations perform to reduce information equivocality.

Practical implications

The study offers insights into the roles of both governance mechanisms in managing information asymmetry in public–private relationships. The study uncovers key activities for gathering, sharing and transforming information when using digital technologies.

Originality/value

This study draws on IPT to study public–private relationships undergoing DT. The study links contractual control and coordination as well as relational governance mechanisms to information-processing activities that organisations deploy to reduce information uncertainty and equivocality.

Details

International Journal of Operations & Production Management, vol. 41 no. 7
Type: Research Article
ISSN: 0144-3577

Open Access
Article
Publication date: 20 August 2021

Daniel Hofer, Markus Jäger, Aya Khaled Youssef Sayed Mohamed and Josef Küng

Abstract

Purpose

Log files are a crucial source of information for computer security experts. The time domain is especially important because, in most cases, timestamps are the only links between events caused by attackers, faulty systems or simple errors and their corresponding log file entries. With the idea of storing and analyzing this log information in graph databases, a suitable model is needed to store and connect timestamps and their events. This paper aims to find and evaluate different approaches to storing timestamps in graph databases, along with their individual benefits and drawbacks.

Design/methodology/approach

We analyse three different approaches to representing and storing timestamp information in graph databases. To check the models, we set up four typical questions that are important for log file analysis and tested them against each model. During the evaluation, we used performance and other properties as metrics for how suitable each model is for representing the log files' timestamp information. In the last part, we try to improve one promising-looking model.
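
The listing does not reproduce the paper's three models, so the sketch below (in plain Python rather than a graph query language) only illustrates the kind of trade-off involved: storing the timestamp as a property of each log-event node versus decomposing it into a linked time tree. All names are invented for illustration.

```python
import time
from dataclasses import dataclass, field

# Model A (hypothetical): the timestamp is a plain property on the event node.
@dataclass
class EventNode:
    message: str
    epoch: int  # seconds since 1970-01-01 UTC

def events_between(events, start, end):
    """Range query under model A: a simple property filter."""
    return [e for e in events if start <= e.epoch <= end]

# Model B (hypothetical): the timestamp is decomposed into a year/month/day
# "time tree" of extra nodes; events attach to the day node they occurred on.
@dataclass
class DayNode:
    year: int
    month: int
    day: int
    events: list = field(default_factory=list)

def attach(tree, event):
    t = time.gmtime(event.epoch)
    key = (t.tm_year, t.tm_mon, t.tm_mday)
    tree.setdefault(key, DayNode(*key)).events.append(event)

log = [EventNode("login failed", 1_600_000_000),
       EventNode("disk error", 1_600_090_000)]
print(events_between(log, 1_599_999_999, 1_600_050_000))

time_tree = {}
for e in log:
    attach(time_tree, e)
# Model A keeps queries simple and fast; model B buys per-day grouping
# at the cost of many additional nodes and relationships.
print(sorted(time_tree))
```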

Findings

We conclude that the simplest model, using the fewest graph database-specific concepts, is also the one yielding the simplest and fastest queries.

Research limitations/implications

Limitations of this research are that only one graph database was studied and that improvements to the query engine might change future results.

Originality/value

In the study, we addressed the issue of storing timestamps in graph databases in a meaningful, practical and efficient way. The results can be used as a pattern for similar scenarios and applications.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 12 February 2020

Matthew Hanchard, Peter Merrington, Bridgette Wessels, Kathy Rogers, Michael Pidd, Simeon Yates, David Forrest, Andrew Higson, Nathan Townsend and Roderik Smits

Abstract

In this article, we discuss an innovative audience research methodology developed for the AHRC-funded “Beyond the Multiplex: Audiences for Specialised Film in English Regions” project (BtM). The project combines a computational ontology with a mixed-methods approach drawn from both the social sciences and the humanities, enabling research to be conducted both at scale and in depth, producing complex relational analyses of audiences. BtM aims to understand how we might enable a wide range of audiences to participate in a more diverse film culture, and embrace the wealth of films beyond the mainstream in order to optimise the cultural value of engaging with less familiar films. BtM collects data through a three-wave survey of film audience members’ practices, semi-structured interviews and film-elicitation groups with audience members alongside interviews with policy and industry experts, and analyses of key policy and industry documents. Bringing each of these datasets together within our ontology enables us to map relationships between them across a variety of different concerns. For instance, how cultural engagement in general relates to engagement with specialised films; how different audiences access and/or share films across different platforms and venues; how their engagement with those films enables them to make meaning and generate value; and how all of this is shaped by national and regional policy, film industry practices, and the decisions of cultural intermediaries across the fields of film production, distribution and exhibition. Alongside our analyses, the ontology enables us to produce data visualisations and a suite of analytical tools for audience development studies that stakeholders can use, ensuring the research has impact beyond the academy. This paper sets out our methodology for developing the BtM ontology, so that others may adapt it and develop their own ontologies from mixed-methods empirical data in their studies of other knowledge domains.
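
As a hedged illustration of how such an ontology can link mixed-methods data, the Python sketch below stores a few cross-dataset relationships as RDF triples with rdflib and asks one relational question of them. The namespace, classes and properties are invented; they are not the BtM ontology's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

BTM = Namespace("http://example.org/btm/")  # hypothetical namespace
g = Graph()
g.bind("btm", BTM)

# A survey respondent, a film and a venue, linked across datasets.
g.add((BTM.person42, RDF.type, BTM.AudienceMember))
g.add((BTM.film7, RDF.type, BTM.SpecialisedFilm))
g.add((BTM.person42, BTM.watched, BTM.film7))
g.add((BTM.person42, BTM.watchedAt, BTM.independentCinema3))
g.add((BTM.person42, BTM.reportedValue, Literal("felt challenged")))

# Relational question: who engaged with specialised films, and where?
q = """
SELECT ?person ?venue WHERE {
    ?person btm:watched ?film .
    ?film a btm:SpecialisedFilm .
    ?person btm:watchedAt ?venue .
}"""
for person, venue in g.query(q, initNs={"btm": BTM}):
    print(person, venue)
```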

Details

Emerald Open Research, vol. 1 no. 1
Type: Research Article
ISSN: 2631-3952

Open Access
Article
Publication date: 23 April 2020

Francesco Capone and Niccolò Innocenti

Abstract

Purpose

The purpose of this paper is to investigate the relational dynamics for innovation and, in particular, the impact of the openness of innovation process on the innovation capacity of organisations in restricted geographical contexts.

Design/methodology/approach

Through a negative binomial regression, the work analyses how the characteristics of the openness of the organisation’s innovation process in the period 2004-2010 influence the firm’s patent productivity in the following period (2011-2016).
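
A minimal sketch of this estimation strategy in Python, assuming statsmodels and synthetic data: later-period patent counts are regressed on breadth and depth of openness, with a squared depth term to allow for the tipping point reported below. Variable names and data are illustrative assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the Florence patent data (illustrative only):
# breadth = distinct external co-patenting partners in 2004-2010,
# depth = intensity of repeated ties, patents = count in 2011-2016.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "breadth": rng.poisson(5, n),
    "depth": rng.poisson(4, n),
})
mu = np.exp(0.1 * df.breadth + 0.3 * df.depth - 0.03 * df.depth ** 2)
df["patents"] = rng.poisson(mu)

# Negative binomial GLM; the squared depth term captures the
# inverted-U relation (positive effect up to a tipping point).
model = smf.glm(
    "patents ~ breadth + depth + I(depth ** 2)",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```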

Findings

The breadth of the open innovation (OI) process, here measured by the number of external network ties that an organisation establishes in developing its patents, has a positive effect on patent productivity. The depth of openness, that is, the intensity of external network ties, has an equally positive influence on innovative performance. However, after a tipping point, patent productivity tends to decrease, underlining the costs and problems of OI practices.

Research limitations/implications

This study considers only patent collaborations in the city of Florence. Therefore, it focusses on codified innovations and on a single territorial case study.

Practical implications

The results underline the importance of adopting OI practices in restricted geographical contexts (such as cities, clusters or industrial districts), though with several caveats. Simply collaborating more with others does not in itself foster an organisation's invention productivity; the evidence found here is more nuanced.

Originality/value

An original database has been created, containing all the information on patents realised in the area of Florence from 2004 until 2016, and social network analysis was applied to identify the local innovation networks.

Details

Competitiveness Review: An International Business Journal, vol. 30 no. 4
Type: Research Article
ISSN: 1059-5422

Open Access
Article
Publication date: 15 August 2022

Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng

Abstract

Purpose

Authorization and access control have been a topic of research for several decades. However, existing definitions are inconsistent and even contradict each other. Furthermore, there are numerous access control models, and even more have recently evolved to meet the challenging requirements of resource protection. This makes it hard to classify the models and to choose an appropriate one that satisfies given security needs. Therefore, this study aims to guide readers through the abundance of access control models in the current state of the art and through the opaque accumulation of terms, clarifying what they mean and how they are related.

Design/methodology/approach

This study follows a systematic literature review approach to investigate current research on access control models and presents the findings of the review. To provide a detailed understanding of the topic, the study also identified the need for an additional investigation of the terms related to the domain of authorization and access control.

Findings

The research results of this paper are a distinction between authorization and access control with respect to definitions, strategies and models, together with a classification schema. The study provides a comprehensive overview of existing models and an analysis according to the proposed five classes of access control models.

Originality/value

Based on the authors' definitions of authorization and access control along with their related terms, i.e. authorization strategy, model and policy as well as access control model and mechanism, this study gives an overview of authorization strategies and proposes a classification of access control models, providing examples for each category. In contrast to other comparative studies, it discusses more access control models, including both the conventional state-of-the-art models and novel ones. It also summarizes each of the selected literature works, focusing on those in the database system domain or those providing a survey, a classification or evaluation criteria for access control models. Additionally, the introduced categories of models are analyzed with respect to various criteria, partly selected from the standard access control system evaluation metrics of the National Institute of Standards and Technology.
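
To make the contrast between model families concrete, here is a minimal Python sketch of a role-based check next to an attribute-based check, two classes commonly distinguished in such classifications. The roles, attributes and policy are invented examples, not taken from the paper's schema.

```python
# Role-based access control (RBAC): permissions attach to roles.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "analyst": {"read"},
}

def rbac_allowed(user_roles, action):
    return any(action in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

# Attribute-based access control (ABAC): a policy evaluates attributes
# of the subject, the resource and the environment.
def abac_allowed(subject, resource, action, env):
    # Hypothetical policy: analysts may read non-confidential resources
    # in their own department during business hours.
    return (action == "read"
            and subject["department"] == resource["department"]
            and resource["classification"] != "confidential"
            and 8 <= env["hour"] < 18)

print(rbac_allowed({"analyst"}, "read"))          # True
print(abac_allowed({"department": "risk"},
                   {"department": "risk", "classification": "public"},
                   "read", {"hour": 10}))         # True
```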

Details

International Journal of Web Information Systems, vol. 18 no. 2/3
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 5 April 2023

Xinghua Shan, Zhiqiang Zhang, Fei Ning, Shida Li and Linlin Dai

Abstract

Purpose

With the yearly increase in mileage and passenger volume on China's high-speed railway, the problems of traditional paper railway tickets have become increasingly prominent, including the complexity of the business handling process, the low efficiency of ticket inspection and the high cost of usage and management. Drawing extensively on successful experiences with electronic ticket applications both domestically and internationally, this paper presents research on the key technologies and system implementation of railway electronic tickets with Chinese characteristics.

Design/methodology/approach

Research is conducted on key technologies including synchronization in distributed heterogeneous database systems, a grid-oriented passenger service record (PSR) data storage model, efficient access to massive PSR data under high-concurrency conditions, the linkage between face recognition service platforms and various terminals in large scenarios, and two-factor authentication of the e-ticket identification code based on a key and the user's identity information. Building on the key technologies and architecture of the existing ticketing system, multiple service resources are expanded and developed, such as electronic ticket clusters, PSR clusters, face recognition clusters and electronic ticket identification code clusters.
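
The paper does not publish the identification-code algorithm, but a plausible reading of two-factor authentication "based on a key and the user's identity information" is a keyed MAC over identity data. The Python sketch below illustrates that idea with HMAC-SHA256; every detail (fields, truncation, hourly validity window) is an assumption for illustration only, not the railway system's actual scheme.

```python
import hashlib
import hmac
import time

def make_code(server_key: bytes, id_number: str, train: str, seat: str) -> str:
    """Hypothetical sketch: bind the identification code to a server-side
    key (first factor) and the passenger's identity data (second factor).
    The hourly window means a code only verifies within the same hour."""
    window = int(time.time()) // 3600
    payload = f"{id_number}|{train}|{seat}|{window}".encode()
    return hmac.new(server_key, payload, hashlib.sha256).hexdigest()[:16]

def verify(server_key: bytes, code: str, id_number: str, train: str, seat: str) -> bool:
    expected = make_code(server_key, id_number, train, seat)
    return hmac.compare_digest(code, expected)

key = b"demo-secret"  # illustrative only; real keys live in an HSM or vault
code = make_code(key, "110101199001011234", "G101", "07A")
print(code, verify(key, code, "110101199001011234", "G101", "07A"))
```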

Findings

The proportion of paper tickets printed has dropped to 20%, saving more than 2 billion tickets annually since the nationwide launch of e-ticketing. The average time for passengers to pass through automatic ticket gates has decreased from 3 seconds to 1.3 seconds, significantly improving the efficiency of passenger transport organization. Meanwhile, the problems of paper ticket counterfeiting, reselling and loss have been largely eliminated.

Originality/value

E-ticketing has laid a technical foundation for the further development of railway passenger transport services towards digitalization and intelligence.

Details

Railway Sciences, vol. 2 no. 1
Type: Research Article
ISSN: 2755-0907

Open Access
Article
Publication date: 20 July 2020

Abdelghani Bakhtouchi

Abstract

With the progress of new information and communication technologies, there are more and more producers of data, and the web serves as a huge repository for all these kinds of data. Unfortunately, the available data is often unreliable: the same information appears in different sources, and data may be erroneous or incomplete. The aim of data integration systems is to offer the user a single interface for querying a number of sources. A key challenge for such systems is dealing with conflicting information from the same source or from different sources. In this paper, we present the resolution of conflicts at the instance level in two stages: reference reconciliation and data fusion. Reference reconciliation methods seek to decide whether two data descriptions refer to the same real-world entity. We define the principles of reconciliation methods and then distinguish reference reconciliation methods, first by how they use the descriptions of references and then by how they acquire knowledge, before discussing some open reconciliation issues that are the subject of current research. Data fusion, in turn, aims to merge duplicates into a single representation while resolving conflicts between the data. We first define a classification of conflicts, strategies for dealing with them and ways of implementing conflict management strategies. We then present the relational operators and data fusion techniques and, likewise, close with some open data fusion issues that are the subject of current research.
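
A minimal Python sketch of the two stages, under strongly simplified assumptions: reconciliation by a string-similarity rule on names, and fusion by preferring the most complete value. Real systems use far richer evidence; everything here is illustrative.

```python
from difflib import SequenceMatcher

def same_entity(a: str, b: str, threshold: float = 0.85) -> bool:
    """Reference reconciliation (toy rule): two descriptions refer to the
    same real-world entity if their name strings are similar enough."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def reconcile(records):
    """Group records into clusters of presumed co-references,
    comparing each record against a cluster representative."""
    clusters = []
    for rec in records:
        for cluster in clusters:
            if same_entity(rec["name"], cluster[0]["name"]):
                cluster.append(rec)
                break
        else:
            clusters.append([rec])
    return clusters

def fuse(cluster):
    """Data fusion (toy strategy): merge duplicates, resolving conflicts
    by preferring the most complete (longest non-empty) value."""
    keys = {k for rec in cluster for k in rec}
    return {k: max((rec.get(k, "") for rec in cluster), key=len) for k in keys}

sources = [
    {"name": "Acme Corp", "city": "Algiers", "phone": ""},
    {"name": "ACME Corp.", "city": "", "phone": "+213 21 00 00 00"},
    {"name": "Globex", "city": "Oran", "phone": ""},
]
print([fuse(c) for c in reconcile(sources)])
```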

Details

Applied Computing and Informatics, vol. 18 no. 3/4
Type: Research Article
ISSN: 2634-1964
