Search results

1 – 10 of 22
Open Access
Article
Publication date: 9 October 2023

Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng

Abstract

Purpose

Data protection requirements have increased considerably owing to rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current survey works on databases and data security consider authorization and access control only in a very general way and do not address most of today’s sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.

Design/methodology/approach

This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with a review of survey works on authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements, which are generally applicable to any database model. This paper then discusses and compares current database models based on these requirements.

Findings

As no survey work to date considers requirements for authorization and access control across different database models, the authors define their own requirements. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.
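As an illustration of the kind of fine-grained requirement compared across models, the following minimal Python sketch shows a record-level authorization check; the roles, policy shape and data are hypothetical and are not drawn from the paper or from any particular database product.

```python
# Hypothetical sketch of record-level (fine-grained) authorization,
# independent of any concrete relational or NoSQL product.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    role: str                                  # role the rule applies to
    action: str                                # e.g. "read", "update"
    collection: str                            # table / collection / node label
    condition: Callable[[dict, dict], bool]    # record-level predicate

POLICIES = [
    # A nurse may read only patient records from her own ward;
    # a doctor may read any patient record.
    Policy("nurse", "read", "patients", lambda user, rec: rec["ward"] == user["ward"]),
    Policy("doctor", "read", "patients", lambda user, rec: True),
]

def is_allowed(user: dict, action: str, collection: str, record: dict) -> bool:
    """Grant access only if some policy for one of the user's roles matches."""
    return any(
        p.action == action
        and p.collection == collection
        and p.role in user["roles"]
        and p.condition(user, record)
        for p in POLICIES
    )

nurse = {"roles": ["nurse"], "ward": "W3"}
print(is_allowed(nurse, "read", "patients", {"id": 1, "ward": "W3"}))  # True
print(is_allowed(nurse, "read", "patients", {"id": 2, "ward": "W7"}))  # False
```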

Originality/value

This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 9 April 2024

Ishrat Ayub Sofi, Ajra Bhat and Rahat Gulzar

Abstract

Purpose

The study aims to shed light on the current state of “Dataset repositories” indexed in the Directory of Open Access Repositories (OpenDOAR).

Design/methodology/approach

For each repository record, information on the open-access policies, Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) compliance, year of creation and the number of data sets archived in the repositories was manually searched, documented and analyzed.
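As an illustration of how OAI-PMH compliance could also be probed programmatically (the paper describes a manual procedure), the sketch below sends the protocol’s standard Identify request to a repository’s base URL; the URL shown is a placeholder.

```python
# Illustrative only: probe a repository's OAI-PMH endpoint with the standard
# "Identify" verb. The base URL below is a placeholder, not a repository
# examined in the study.
import requests
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def identify(base_url: str):
    """Return basic repository info if the endpoint answers an Identify request."""
    resp = requests.get(base_url, params={"verb": "Identify"}, timeout=30)
    if resp.status_code != 200:
        return None
    root = ET.fromstring(resp.content)
    ident = root.find(f"{OAI_NS}Identify")
    if ident is None:            # no Identify element -> not OAI-PMH compliant
        return None
    return {
        "repositoryName": ident.findtext(f"{OAI_NS}repositoryName"),
        "protocolVersion": ident.findtext(f"{OAI_NS}protocolVersion"),
        "earliestDatestamp": ident.findtext(f"{OAI_NS}earliestDatestamp"),
    }

print(identify("https://example.org/oai"))  # placeholder base URL
```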

Findings

Developed countries like the United Kingdom and the USA are primarily involved in the development of institutional open-access repositories, which constitute a significant share of OpenDOAR. The most extensively used software is DSpace. Most data set archives are OAI-PMH compliant but do not follow open-access rules. The study also highlights the sites’ adoption of Web 2.0 capabilities, finding Really Simple Syndication (RSS) feed and Atom integration as well as a visible social media presence. Furthermore, the study concludes that the number of data sets kept in repositories is insufficient, although the growth of such repositories has been consistent over the years.

Practical implications

The work has the potential to benefit both researchers in general and policymakers in particular. Scholars interested in research data, data sharing and data reuse can learn about the present state of repositories that preserve data sets in OpenDOAR. At the same time, policymakers can develop recommendations and policies to assist in the construction and maintenance of repositories for data sets.

Originality/value

According to the literature, there have been numerous studies on open-access repositories and OpenDOAR internationally, but no research has focused on repositories preserving data sets as a content type. The study therefore attempts to uncover various characteristics of OpenDOAR data set repositories.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Article
Publication date: 28 February 2023

Tulsi Pawan Fowdur, M.A.N. Shaikh Abdoolla and Lokeshwar Doobur

Abstract

Purpose

The purpose of this paper is to perform a comparative analysis of the delay associated in running two real-time machine learning-based applications, namely, a video quality assessment (VQA) and a phishing detection application by using the edge, fog and cloud computing paradigms.

Design/methodology/approach

The VQA algorithm was developed using Android Studio and run on a mobile phone for the edge paradigm. For the fog paradigm, it was hosted on a Java server and for the cloud paradigm on the IBM and Firebase clouds. The phishing detection algorithm was embedded into a browser extension for the edge paradigm. For the fog paradigm, it was hosted on a Node.js server and for the cloud paradigm on Firebase.

Findings

For the VQA algorithm, the edge paradigm had the highest response time and the cloud paradigm the lowest, as the algorithm was computationally intensive. For the phishing detection algorithm, the edge paradigm had the lowest response time and the cloud paradigm the highest, as the algorithm had low computational complexity. Since latency was then the determining factor for the response time, the edge paradigm provided the smallest delay because all processing was local.
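The reported trade-off can be summarised with a simple, hypothetical response-time model: total delay is roughly network latency plus computation time, where moving outward from edge to cloud raises latency but lowers computation time. The sketch below uses invented figures, not measurements from the paper.

```python
# Toy model of the trade-off reported above; all numbers are invented.
# response_time = network_latency + cpu_work / compute_speed
PARADIGMS = {
    #        network latency (ms)  relative compute speed
    "edge":  {"latency": 1,   "speed": 1.0},
    "fog":   {"latency": 20,  "speed": 5.0},
    "cloud": {"latency": 120, "speed": 50.0},
}

def response_time(cpu_work_ms: float, paradigm: dict) -> float:
    return paradigm["latency"] + cpu_work_ms / paradigm["speed"]

for name, work in [("VQA (compute-heavy)", 5000.0), ("phishing check (light)", 10.0)]:
    times = {p: round(response_time(work, PARADIGMS[p]), 1) for p in PARADIGMS}
    best = min(times, key=times.get)
    print(f"{name}: {times} -> best: {best}")
# Heavy workloads favour the cloud; light, latency-bound workloads favour the edge.
```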

Research limitations/implications

The main limitation of this work is that the experiments were performed on a small scale due to time and budget constraints.

Originality/value

A detailed analysis with real applications has been provided to show how the complexity of an application can determine the best computing paradigm on which it can be deployed.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 11 January 2023

Dimitrios Kafetzopoulos, Spiridoula Margariti, Chrysostomos Stylios, Eleni Arvaniti and Panagiotis Kafetzopoulos

Abstract

Purpose

The objective of this study is to improve food supply chain performance by taking into consideration the fundamental concepts of traceability and combining its current frameworks, principles, implications and emerging technologies.

Design/methodology/approach

A narrative literature review of existing empirical research on traceability systems was conducted, yielding 862 relevant papers. Following a step-by-step sampling process, the authors arrived at a final sample of 46 papers for the literature review.

Findings

The main findings of this study include the various descriptions of the architecture of traceability systems, the different sources enabling this practice, the common desirable attributes, and the enabling technologies for the deployment and implementation of traceability systems. Moreover, several technological solutions are presented, which are currently available for traceability systems, and finally, opportunities for future research are provided.

Practical implications

It provides insight that could inform the implementation process of traceability in the food supply chain and, consequently, the effective management of a food traceability system (FTS). Managers will be able to create a traceability system that meets users' requirements, thus enhancing the value of products and food companies.

Originality/value

This study contributes to the food supply chain and traceability systems literature by creating a holistic picture of where the field has been and where it should go. It is a starting point for each food company to design and manage its traceability system more effectively.

Details

International Journal of Productivity and Performance Management, vol. 73 no. 2
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 20 November 2023

Nkeiru A. Emezie, Scholastica A.J. Chukwu, Ngozi M. Nwaohiri, Nancy Emerole and Ijeoma I. Bernard

Abstract

Purpose

University intellectual outputs such as theses and dissertations are valuable resources containing rigorous research results. Library staff, who are key players in promoting intellectual output through institutional repositories, require skills to promote content visibility, create wider outreach and facilitate easy access to and use of these resources. This study aims to determine the skills of library staff to enhance the visibility of intellectual output in federal university libraries in southeast Nigeria.

Design/methodology/approach

A survey research design was adopted for the study. The questionnaire was used to obtain responses from library staff on the extent of computer skills and their abilities for digital conversion, metadata creation and preservation of digital content.

Findings

Library staff at the university libraries had high skills in basic computer operations. They had moderate skills in digital conversion, preservation and storage. However, they had low skills in metadata creation.

Practical implications

The study has implications for addressing the digital skills and professional expertise of library staff, especially as they concern metadata creation, digital conversion, preservation and storage. It also has implications for university management to prioritize the training of their library staff in order to increase the visibility of indigenous resources and university Web ranking.

Originality/value

This study serves as a lens to identify library staff skill gaps in many critical areas that require expertise and stimulate conscious effort toward developing adequate skills for effective digital information provision. It sheds light on the challenges that many Nigerian university libraries face in their pursuit of global visibility and university Web ranking.

Details

Digital Library Perspectives, vol. 40 no. 1
Type: Research Article
ISSN: 2059-5816

Article
Publication date: 3 October 2023

Renan Ribeiro Do Prado, Pedro Antonio Boareto, Joceir Chaves and Eduardo Alves Portela Santos

Abstract

Purpose

The aim of this paper is to explore the possibility of using the Define-Measure-Analyze-Improve-Control (DMAIC) cycle, process mining (PM) and multi-criteria decision methods in an integrated way, so that these three elements combined result in a methodology called the Agile DMAIC cycle, which brings more agility and reliability to the execution of the Six Sigma process.

Design/methodology/approach

The approach taken by the authors in this study was to analyze the studies arising from this union of concepts, to use PM tools where appropriate to accelerate the DMAIC cycle by improving its first two steps, and to test the analytic hierarchy process (AHP) as a decision-making method, to bring greater reliability to the definition of indicators.
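As a concrete illustration of the AHP step, the sketch below derives priority weights for three hypothetical indicators from a pairwise comparison matrix using the common geometric-mean approximation; the indicators and judgements are invented and are not those used in the study.

```python
# AHP priority weights from a pairwise comparison matrix (geometric-mean method).
# The indicators and judgements below are hypothetical, not the study's data.
import numpy as np

indicators = ["lead time", "rework rate", "throughput"]
# A[i][j] = how much more important indicator i is than indicator j (Saaty scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # geometric mean of each row
weights = gm / gm.sum()                     # normalise so the weights sum to 1

for name, w in zip(indicators, weights):
    print(f"{name}: {w:.3f}")
# A consistency check (consistency ratio < 0.1) would normally follow.
```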

Findings

The results indicated a gain from acquiring indicators and process maps generated by PM, and the AHP provided greater accuracy in determining the importance of the indicators.

Practical implications

Through the results and findings of this study, more organizations can understand the potential of integrating Six Sigma and PM. The method was developed only for the first two steps of the DMAIC cycle, and it is replicable for any Six Sigma project where data acquisition through mining is possible.

Originality/value

The authors develop a fully applicable and understandable methodology which can be replicated in other settings and expanded in future research.

Details

International Journal of Lean Six Sigma, vol. 15 no. 3
Type: Research Article
ISSN: 2040-4166

Article
Publication date: 29 March 2024

Anil Kumar Goswami, Anamika Sinha, Meghna Goswami and Prashant Kumar

Abstract

Purpose

This study aims to extend and explore patterns and trends of research in the linkage of big data and knowledge management (KM) by identifying growth in terms of numbers of papers and current and emerging themes and to propose areas of future research.

Design/methodology/approach

The study was conducted by systematically extracting, analysing and synthesizing the literature related to linkage between big data and KM published in top-tier journals in Web of Science (WOS) and Scopus databases by exploiting bibliometric techniques along with theory, context, characteristics, methodology (TCCM) analysis.

Findings

The study unfolds four major themes of linkage between big data and KM research, namely (1) conceptual understanding of big data as an enabler for KM, (2) big data–based models and frameworks for KM, (3) big data as a predictor variable in KM context and (4) big data applications and capabilities. It also highlights TCCM of big data and KM research through which it integrates a few previously reported themes and suggests some new themes.

Research limitations/implications

This study extends advances in previous reviews by adding a new timeline, identifying new themes and helping in the understanding of the complex and emerging field of linkage between big data and KM. The study outlines a holistic view of the research area and suggests future directions for it.

Practical implications

This study highlights the role of big data in the KM context, resulting in enhanced organizational performance and efficiency. A summary of existing literature and future avenues in this direction will help, guide and motivate managers to think beyond traditional data and incorporate big data into the organizational knowledge infrastructure in order to gain competitive advantage.

Originality/value

To the best of the authors’ knowledge, the present study is the first to go deeper into the understanding of big data and KM research using bibliometric and TCCM analysis and thus adds a new theoretical perspective to the existing literature.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Open Access
Article
Publication date: 5 April 2024

Miquel Centelles and Núria Ferran-Ferrer

Abstract

Purpose

This study develops a comprehensive framework for assessing knowledge organization systems (KOSs), including the taxonomy of Wikipedia and the ontologies of Wikidata, with a specific focus on enhancing management and retrieval from a gender nonbinary perspective.

Design/methodology/approach

This study employs heuristic and inspection methods to assess Wikipedia’s KOS, ensuring compliance with international standards. It evaluates the efficiency of retrieving non-masculine gender-related articles using the Catalan Wikipedian category scheme, identifying limitations. Additionally, a novel assessment of Wikidata ontologies examines their structure and coverage of gender-related properties, comparing them to Wikipedia’s taxonomy for advantages and enhancements.
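One way to inspect Wikidata’s coverage of gender-related properties, in the spirit of the assessment described here (though not necessarily the authors’ procedure), is to query the public SPARQL endpoint for the values used with the “sex or gender” property (P21). The query below is illustrative and may be slow or time out over the full set of humans.

```python
# Illustrative query of Wikidata's public SPARQL endpoint: distribution of
# "sex or gender" (P21) values among items classified as human (Q5).
# Not necessarily the authors' procedure; may time out on the full data set.
import requests

QUERY = """
SELECT ?gender ?genderLabel (COUNT(?person) AS ?people) WHERE {
  ?person wdt:P31 wd:Q5 ;
          wdt:P21 ?gender .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?gender ?genderLabel
ORDER BY DESC(?people)
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "kos-assessment-sketch/0.1"},  # endpoint asks for a UA
    timeout=120,
)
for row in resp.json()["results"]["bindings"]:
    print(row["genderLabel"]["value"], row["people"]["value"])
```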

Findings

This study evaluates Wikipedia’s taxonomy and Wikidata’s ontologies, establishing evaluation criteria for gender-based categorization and exploring their structural effectiveness. The evaluation process suggests that Wikidata ontologies may offer a viable solution to address Wikipedia’s categorization challenges.

Originality/value

The assessment of Wikipedia categories (taxonomy) based on KOS standards leads to the conclusion that there is ample room for improvement, not only in matters concerning gender identity but also in the overall KOS to enhance search and retrieval for users. These findings bear relevance for the design of tools to support information retrieval on knowledge-rich websites, as they assist users in exploring topics and concepts.

Article
Publication date: 18 October 2022

Stefania Stellacci, Leonor Domingos and Ricardo Resende

Abstract

Purpose

The purpose of this research is to test the effectiveness of integrating Grasshopper 3D and Measuring Attractiveness by a Categorical Based Evaluation Technique (M-MACBETH) for building energy simulation analysis within a virtual environment. A set of energy retrofitting solutions is evaluated against performance-based criteria (energy consumption, weight and carbon footprint), while considering the preservation of the cultural value of the building and its architectural and spatial configuration.

Design/methodology/approach

This research addresses the building energy performance analysis before and after the design of retrofitting solutions in extreme climate environments (2030–2100). The proposed model integrates data obtained from an advanced parametric tool (Grasshopper) and a multi-criteria decision analysis (M-MACBETH) to score different energy retrofitting solutions against energy consumption, weight, carbon footprint and impact on architectural configuration. The proposed model is tested for predicting the performance of a traditional timber-framed dwelling in a historic parish in Lisbon. The performance of distinct solutions is compared in digitally simulated climate conditions (design scenarios) considering different criteria weights.

Findings

This study shows the importance of conducting building energy simulation by linking physical and digital environments and then identifying a set of evaluation criteria for the analysed context. Architects, environmental engineers and urban planners should use a computational environment in the design development phase to identify design solutions and compare their expected impact on the building configuration and performance-based behaviour.

Research limitations/implications

The unavailability of local weather data (an EnergyPlus Weather File (EPW)), the high time and resource effort, and the number and type of energy retrofit measures tested limit the scope of this study. In energy simulation procedures, the baseline generally covers a period of thirty, ten or five years. In this research, because weather data is unavailable in the format required by the simulation process (.EPW file), the input data for the baseline is the average climatic data from EnergyPlus (2022). Additionally, the workflow is time-consuming due to the low interoperability of the software, and Grasshopper requires a highly skilled analyst to obtain accurate results. To calculate the energy consumption values, i.e. the energy per day of simulation, all the values given per hour are summed manually. The weight values are obtained by calculating the amount of material required (whose dimensions are provided by Grasshopper), while the carbon footprint is calculated per kg of material. This set of data is then introduced into M-MACBETH. Another relevant limitation relates to the techniques proposed for retrofitting this case study, all of which are based on wood-fibre boards.
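The aggregation steps described above can be summarised in a short sketch: hourly simulation output is summed to daily energy, material weight and carbon footprint are derived from the quantities Grasshopper reports, and the criteria are then combined into a single score. The weighted sum is a simplification, since M-MACBETH works on qualitative pairwise judgements, and all numbers are invented.

```python
# Simplified aggregation of the quantities fed into the multi-criteria step.
# Values are invented; M-MACBETH itself uses qualitative pairwise judgements,
# so the weighted sum below is only a rough stand-in.
hourly_energy_kwh = [0.8] * 24             # one simulated day of hourly output
daily_energy_kwh = sum(hourly_energy_kwh)  # manual hourly-to-daily summation

board_area_m2, thickness_m = 42.0, 0.06    # wood-fibre boards (hypothetical sizes)
density_kg_m3 = 160.0
weight_kg = board_area_m2 * thickness_m * density_kg_m3

co2_per_kg = 0.4                           # hypothetical embodied-carbon factor
carbon_kg = weight_kg * co2_per_kg

criteria = {"energy": daily_energy_kwh, "weight": weight_kg, "carbon": carbon_kg}
weights = {"energy": 0.5, "weight": 0.2, "carbon": 0.3}   # scenario-dependent

# Lower is better for every criterion here, so use a weighted inverse as the score.
score = sum(weights[c] / criteria[c] for c in criteria)
print(f"energy={daily_energy_kwh:.1f} kWh, weight={weight_kg:.0f} kg, "
      f"carbon={carbon_kg:.0f} kg CO2e, score={score:.4f}")
```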

Practical implications

The proposed method for energy simulation and climate change adaptation can be applied to other historic buildings considering different evaluation criteria and context-based priorities.

Social implications

Context-based adaptation measures of the built environment are necessary for the coming years due to the projected extreme temperature changes following the 2015 Paris Agreement and the 2030 Agenda. Built environments include historical sites that represent irreplaceable cultural legacies and factors of the community's identity to be preserved over time.

Originality/value

This study shows the importance of conducting building energy simulation using physical and digital environments. A computational environment should be used during the design development phase by architects, engineers and urban planners to rank design solutions against a set of performance criteria and compare their expected impact on the building configuration and performance-based behaviour. This study integrates Grasshopper 3D and M-MACBETH.

Details

International Journal of Building Pathology and Adaptation, vol. 42 no. 1
Type: Research Article
ISSN: 2398-4708

Article
Publication date: 1 April 2024

Zoubeir Lafhaj, Slim Rebai, Olfa Hamdi, Rateb Jabbar, Hamdi Ayech and Pascal Yim

Abstract

Purpose

This study aims to introduce and evaluate the COPULA framework, a construction project monitoring solution based on blockchain designed to address the inherent challenges of construction project monitoring and management. This research aims to enhance efficiency, transparency and trust within the dynamic and collaborative environment of the construction industry by leveraging the decentralized, secure and immutable nature of blockchain technology.

Design/methodology/approach

This paper employs a comprehensive approach encompassing the formulation of the COPULA model, the development of a digital solution using the Ethereum blockchain and extensive testing to assess performance in terms of execution cost, time, integrity, immutability and security. A case analysis is conducted to demonstrate the practical application and benefits of blockchain technology in real-world construction project monitoring scenarios.

Findings

The findings reveal that the COPULA framework effectively addresses critical issues such as centralization, privacy and security vulnerabilities in construction project management. It facilitates seamless data exchange among stakeholders, ensuring real-time transparency and the creation of a tamper-proof communication channel. The framework demonstrates the potential to significantly enhance project efficiency and foster trust among all parties involved.
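To convey the integrity and immutability properties claimed here, the following sketch shows a hash-chained project log in plain Python; it is a conceptual illustration only, not the COPULA framework or its Ethereum smart contracts.

```python
# Conceptual illustration of a tamper-evident project log: each entry commits
# to the hash of the previous one, so editing any record breaks the chain.
# This is not the COPULA framework or its Ethereum implementation.
import hashlib
import json

def entry_hash(payload: dict) -> str:
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    payload = {"record": record, "prev_hash": prev}
    log.append({**payload, "hash": entry_hash(payload)})

def verify(log: list) -> bool:
    prev = "0" * 64
    for e in log:
        payload = {"record": e["record"], "prev_hash": e["prev_hash"]}
        if e["prev_hash"] != prev or e["hash"] != entry_hash(payload):
            return False
        prev = e["hash"]
    return True

log: list = []
append(log, {"task": "foundations", "progress": 0.4, "by": "contractor A"})
append(log, {"task": "foundations", "progress": 0.7, "by": "contractor A"})
print(verify(log))                     # True
log[0]["record"]["progress"] = 0.9     # tampering with an earlier record...
print(verify(log))                     # ...is detected: False
```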

Research limitations/implications

While the study provides promising insights into the application of blockchain technology in construction project monitoring, future research could explore the integration of COPULA with existing project management methodologies to broaden its applicability and impact. Further investigations into the solution’s scalability and adaptation to various construction project types and sizes are also suggested.

Originality/value

This research offers a comprehensive blockchain solution specifically tailored for the construction industry. Unlike prior studies focusing on theoretical aspects, this paper presents a practical, end-to-end solution encompassing model formulation, digital implementation, proof-of-concept testing and validation analysis. The COPULA framework marks a significant advancement in the digital transformation of construction project monitoring, providing a novel approach to overcoming longstanding industry challenges.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099
