Search results
1 – 10 of 45
Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng
Abstract
Purpose
Data protection requirements have increased heavily due to rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current surveys on databases and data security consider authorization and access control only in a very general way and do not address most of today's sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.
Design/methodology/approach
This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with research on survey works on authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements that are generally applicable to any database model. The paper then discusses and compares current database models based on these requirements.
Findings
As no survey works to date consider requirements for authorization and access control across different database models, the authors define their own requirements. They then discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models against these requirements.
Originality/value
This paper focuses on authorization and access control for various database models, not concrete products. It identifies today's sophisticated, yet general, requirements from the literature and compares them with research results and the access control features of current products for the relational and NoSQL database models.
Xiaoyan Jiang, Sai Wang, Yong Liu, Bo Xia, Martin Skitmore, Madhav Nepal and Amir Naser Ghanbaripour
Abstract
Purpose
With the increasing complexity of public–private partnership (PPP) projects, the amount of data generated during construction is massive. This paper aims to develop a new information management method, based on domain ontologies of the construction industry, to cope with the risk problems involved in dealing with such data and to help manage PPP risks and share and reuse risk knowledge.
Design/methodology/approach
Risk knowledge concepts are acquired and summarized from PPP failure cases and an extensive literature review, and ontology technology is used to establish a domain framework for risk knowledge that helps manage PPP risks.
Findings
The results indicate that the risk ontology is capable of capturing key concepts and relationships involved in managing PPP risks and can be used to facilitate knowledge reuse and storage beneficial to risk management.
Research limitations/implications
The classes in the risk knowledge ontology model constructed in this research do not yet cover all the information in PPP project risks and need to be further extended. Moreover, only the framework and basic methods are developed here; constructing a working ontology model and mapping the relationship between implicit and explicit knowledge is a complicated process that requires repeated modification and evaluation before it can be implemented.
Practical implications
The ontology provides a basis for turning PPP risk information into risk knowledge, allowing project risks to be shared and communicated effectively between different project stakeholders. It also has the potential to reduce dependence on subjectivity by mining, using and storing tacit knowledge in the risk management process.
Originality/value
The apparent suitability of the nine classes of PPP risk knowledge (project model, risk type, risk occurrence stage, risk source, risk consequence, risk likelihood, risk carrier, risk management measures and risk case) is identified, and the proposed construction method and steps for a complete domain ontology for PPP risk management are unique. A combination of criteria- and task-based evaluations is also developed for assessing the PPP risk ontology for the first time.
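The nine risk-knowledge classes named above can be sketched as a minimal data model. The class names are taken from the abstract, but the attribute layout and the example values below are illustrative assumptions, not the authors' published ontology:

```python
from dataclasses import dataclass, field

# The nine classes named in the abstract; how they relate is an
# illustrative assumption, not the authors' published model.
RISK_CLASSES = [
    "ProjectModel", "RiskType", "RiskOccurrenceStage", "RiskSource",
    "RiskConsequence", "RiskLikelihood", "RiskCarrier",
    "RiskManagementMeasure", "RiskCase",
]

@dataclass
class RiskCase:
    """A concrete PPP risk case that links the other eight concepts."""
    project_model: str
    risk_type: str
    occurrence_stage: str
    source: str
    consequence: str
    likelihood: str
    carrier: str
    measures: list = field(default_factory=list)

# Hypothetical example instance
case = RiskCase(
    project_model="BOT",
    risk_type="financial",
    occurrence_stage="construction",
    source="exchange-rate volatility",
    consequence="cost overrun",
    likelihood="medium",
    carrier="private partner",
    measures=["currency hedging"],
)
```

In a working ontology these attributes would be object properties pointing to individuals of the other eight classes, which is what enables the knowledge reuse and case-based retrieval the abstract describes.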
Gustavo Candela, Nele Gabriëls, Sally Chambers, Milena Dobreva, Sarah Ames, Meghan Ferriter, Neil Fitzgerald, Victor Harbo, Katrine Hofmann, Olga Holownia, Alba Irollo, Mahendra Mahey, Eileen Manchester, Thuy-An Pham, Abigail Potter and Ellen Van Keer
Abstract
Purpose
The purpose of this study is to offer a checklist for both creating and evaluating digital collections suitable for computational use (sometimes referred to as data sets within the collections as data movement).
Design/methodology/approach
The checklist was built by synthesising and analysing relevant research literature, articles and studies, together with the issues and needs identified in an observational study. It was then tested and applied both as a tool for assessing a selection of digital collections made available by galleries, libraries, archives and museums (GLAM) institutions as a proof of concept and as a supporting tool for creating collections as data.
Findings
Over the past few years, there has been growing interest in making digital collections published by GLAM organisations available for computational use. Building on previous work, the authors defined a methodology to build a checklist for the publication of collections as data. Their evaluation showed several examples of applications that may encourage other institutions to publish their digital collections for computational use.
Originality/value
While some work exists on making digital collections available for computational use, with particular attention to data quality, planning and experimentation, to the best of the authors' knowledge none of it provides an easy-to-follow and robust checklist for publishing collection data sets in GLAM institutions. The checklist is intended to encourage small and medium-sized institutions to adopt the collections as data principles in daily workflows, following best practices and guidelines.
Tulsi Pawan Fowdur, M.A.N. Shaikh Abdoolla and Lokeshwar Doobur
Abstract
Purpose
The purpose of this paper is to perform a comparative analysis of the delay associated in running two real-time machine learning-based applications, namely, a video quality assessment (VQA) and a phishing detection application by using the edge, fog and cloud computing paradigms.
Design/methodology/approach
The VQA algorithm was developed using Android Studio and run on a mobile phone for the edge paradigm. For the fog paradigm, it was hosted on a Java server and for the cloud paradigm on the IBM and Firebase clouds. The phishing detection algorithm was embedded into a browser extension for the edge paradigm. For the fog paradigm, it was hosted on a Node.js server and for the cloud paradigm on Firebase.
Findings
For the VQA algorithm, the edge paradigm had the highest response time and the cloud paradigm the lowest, as the algorithm was computationally intensive. For the phishing detection algorithm, the edge paradigm had the lowest response time and the cloud paradigm the highest, as the algorithm had low computational complexity. Since latency was the determining factor for response time in that case, the edge paradigm provided the smallest delay because all processing was local.
Research limitations/implications
The main limitation of this work is that the experiments were performed on a small scale due to time and budget constraints.
Originality/value
A detailed analysis with real applications has been provided to show how the complexity of an application can determine the best computing paradigm on which it can be deployed.
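The trade-off the paper reports, that heavy workloads favor the cloud while light workloads favor the edge, can be illustrated with a toy response-time model. All latency and speedup figures below are hypothetical, not measurements from the paper:

```python
# Toy model: response time = round-trip network latency + compute time,
# where compute time shrinks with the paradigm's processing power.
# All figures are illustrative assumptions, not the paper's data.
LATENCY_MS = {"edge": 0, "fog": 20, "cloud": 120}   # network round-trip delay
SPEEDUP = {"edge": 1, "fog": 4, "cloud": 20}        # relative compute power

def response_time_ms(paradigm: str, compute_ms_on_edge: float) -> float:
    """Estimated response time for a task that takes compute_ms_on_edge
    milliseconds of processing on the edge device."""
    return LATENCY_MS[paradigm] + compute_ms_on_edge / SPEEDUP[paradigm]

def best_paradigm(compute_ms_on_edge: float) -> str:
    """Paradigm with the lowest estimated response time."""
    return min(LATENCY_MS, key=lambda p: response_time_ms(p, compute_ms_on_edge))

# A light task (e.g. phishing detection) favors the edge;
# a heavy task (e.g. video quality assessment) favors the cloud.
print(best_paradigm(10))    # edge
print(best_paradigm(5000))  # cloud
```

The crossover point depends on the ratio of network latency to compute savings, which is exactly why the paper finds that an application's computational complexity determines its best deployment paradigm.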
Dimitrios Kafetzopoulos, Spiridoula Margariti, Chrysostomos Stylios, Eleni Arvaniti and Panagiotis Kafetzopoulos
Abstract
Purpose
The objective of this study is to improve food supply chain performance by taking into consideration the fundamental concepts of traceability and combining current frameworks, their principles, their implications and emerging technologies.
Design/methodology/approach
A narrative literature review of existing empirical research on traceability systems was conducted, yielding 862 relevant papers. Following a step-by-step sampling process, the authors arrived at a final sample of 46 papers for the review.
Findings
The main findings include descriptions of traceability system architectures, the different data sources that enable the practice, common desirable attributes and the enabling technologies for deploying and implementing traceability systems. Several currently available technological solutions for traceability systems are also presented, and opportunities for future research are provided.
Practical implications
The study provides insights that could inform the implementation of traceability in the food supply chain and, consequently, the effective management of a food traceability system (FTS). Managers will be able to create a traceability system that meets users' requirements, thus enhancing the value of products and food companies.
Originality/value
This study contributes to the food supply chain and traceability systems literature by creating a holistic picture of where the field has been and where it should go. It is a starting point for each food company to design and manage its traceability system more effectively.
Amer A. Hijazi, Srinath Perera, Rodrigo N. Calheiros and Ali Alashwal
Abstract
Purpose
Despite the large amount of BIM data available at the handover stage, it is still difficult to identify and isolate the valuable construction supply chain (CSC) data that need to be reliably handed over for operation. Moreover, the role of reconciling disparate data is usually played by a single party. The integration of blockchain and BIM is a plausible framework for building a reliable digital asset lifecycle. This paper proposes a BIM single source of truth (BIMSSoT) data model using blockchain to ensure reliable CSC data delivery.
Design/methodology/approach
This paper uses a blended methodology grounded in business and management research, with elements of information and communication technology (ICT) research where required. Knowledge elicitation case studies employing interventions such as a data flow diagram (DFD), a taxonomy and an entity-relationship diagram (ERD) were used to develop the BIMSSoT data model. The model was validated by an expert forum, and its technological feasibility was established by developing a proof of concept.
Findings
The practical contribution of this research advances BIM towards digital engineering, going beyond object-based 3D modelling by building structured, reliable datasets and transitioning from project-centric records to a digital ecosystem of linked databases that exploits blockchain's potential for trusted data.
Originality/value
To the best of the authors' knowledge, no prior research had investigated a detailed data model leveraging blockchain and BIM to integrate an immutable and complete record of CSC data as another dimension of BIM for operations.
Abstract
Purpose
This study aims to identify trending topics, emerging themes and future research directions in supply chain management (SCM) through multiple sources of data. The insights will be of use to academics, practitioners and policymakers in leveraging the latest developments to address current and future challenges.
Design/methodology/approach
This study uses multiple sources of data, including the published literature and social media data such as supply chain blog and forum content on business-to-business (B2B) firms, to identify trending topics, emerging themes and future research directions in SCM. Topic modeling, a machine learning technique, is used to derive the topics and themes. Examining supply chain blogs and forums offers a valuable perspective on the current issues and challenges faced by B2B firms; by analyzing the content of these online discussions, the study identifies emerging themes and topics of interest to practitioners and researchers.
Findings
The study synthesizes 1,648 published articles and more than 130,000 tweets, discussions and expert views from social media, including various blogs and supply chain forums, and identifies six themes, of which three are trending and three are emerging in the supply chain. Rather than aggregating implications, the study integrates the findings from the two databases, proposes a framework encompassing the drivers, processes and impacts of each theme and derives promising avenues for future research.
Originality/value
Prior literature has mainly used published research articles and reports as its primary sources of information for identifying trending themes and emerging topics. To the best of the authors' knowledge, this is the first study of its kind to examine the potential value of information from social media, such as blogs, websites and forums, alongside the published literature to discover new supply chain trends and themes related to B2B firms and derive promising directions for future research.
Margie Foster, Hossein Arvand, Hugh T. Graham and Denise Bedford
Abstract
Chapter Summary
In this chapter, the authors define the new business term future-proofing and apply it to knowledge preservation and curation. The fundamental principles, challenges and mechanics of future-proofing are discussed in the context of developing a future-proofed knowledge preservation and curation strategy. The authors identify four challenges to future-proofing such a strategy: the availability, visibility, accessibility and consumability of knowledge assets. Ultimately, the greatest challenge lies in the channels we use to create, transmit, share and store knowledge assets. At a minimum, the chapter speaks to the critical importance of future-proofing the preservation of knowledge assets so that curation remains possible at some point in a known or unknown future.
Ayodeji E. Oke and Seyi S. Stephen
Abstract
The interaction of systems through a designated control channel has improved communication, efficiency, management, storage and processing across several industries. The construction industry thrives on a well-planned workflow rhythm; a change in environmental dynamism will have either a positive or a negative impact on the output of the project planned for execution. By raising the need for effective collaboration through workflow and project planning, grid application in construction facilitates the relationship between the project reality and the end users, with the aim of improving resource and value management. However, decentralisation of close-domain control can cause uncertainty and incompleteness of data, which can be a significant factor, especially when a complex project is being executed.