Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 6 September 2021

Gerd Hübscher, Verena Geist, Dagmar Auer, Nicole Hübscher and Josef Küng

Knowledge- and communication-intensive domains still long for better support of creativity that considers legal requirements, compliance rules and administrative tasks as well…

Abstract

Purpose

Knowledge- and communication-intensive domains still long for better support of creativity that considers legal requirements, compliance rules and administrative tasks as well, because current systems focus either on knowledge representation or business process management. The purpose of this paper is to discuss the authors' model of integrated knowledge and business process representation and its presentation to users.

Design/methodology/approach

The authors follow a design science approach in the environment of patent prosecution, which is characterized by a highly standardized, legally prescribed process and individual knowledge work. Thus, the research is based on knowledge work, business process management (BPM), graph-based knowledge representation and user interface design. The authors iteratively designed and built a model and a prototype. To evaluate the approach, the authors used analytical proof of concept, real-world test scenarios and case studies in real-world settings, where they conducted observations and open interviews.

Findings

The authors designed a model and implemented a prototype for evolving and storing static and dynamic aspects of knowledge. The proposed solution leverages the flexibility of a graph-based model to enable not only pre-defined processes but also open, continuously evolving, user-centered ones. The authors further propose a user interface concept that supports users in benefiting from the richness of the model while providing sufficient guidance.
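
One concrete way to picture such an integration of the data and task perspectives is a property graph that holds both knowledge nodes and process nodes. The Python sketch below (using networkx) is a hypothetical toy, not the authors' model; all node labels, edge types and patent-prosecution entities are invented for illustration.

```python
# Illustrative toy only: a property graph holding both knowledge ("data")
# nodes and process ("task") nodes, loosely inspired by the integration
# idea above. All labels and edge types are hypothetical.
import networkx as nx

g = nx.MultiDiGraph()

# Static knowledge: entities from patent prosecution (invented labels).
g.add_node("claim:17", kind="data", label="Claim")
g.add_node("prior_art:A", kind="data", label="PriorArtDocument")

# Dynamic aspects: tasks that operate on those entities.
g.add_node("task:assess_novelty", kind="task", state="open")
g.add_edge("task:assess_novelty", "claim:17", type="reads")
g.add_edge("task:assess_novelty", "prior_art:A", type="reads")

# A pre-defined process step can sit next to an ad hoc one, because the
# graph imposes no fixed schema on either perspective.
g.add_node("task:file_response", kind="task", state="pending")
g.add_edge("task:assess_novelty", "task:file_response", type="precedes")

# List every task together with the knowledge items it touches.
for n, attrs in g.nodes(data=True):
    if attrs["kind"] == "task":
        touched = [v for _, v, d in g.out_edges(n, data=True)
                   if d["type"] == "reads"]
        print(n, attrs["state"], touched)
```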

Originality/value

The balanced integration of the data and task perspectives distinguishes the model significantly from other approaches such as BPM or knowledge graphs. The authors further provide a sophisticated user interface design, which allows users to use the graph-based knowledge representation effectively and efficiently in their daily work.

Details

International Journal of Web Information Systems, vol. 17 no. 6
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 9 October 2023

Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng

Data protection requirements have increased heavily due to rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are…

Abstract

Purpose

Data protection requirements have increased heavily due to rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current survey works on databases and data security consider authorization and access control only in a very general way and do not address most of today's sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.

Design/methodology/approach

This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with research on survey works covering authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements that are generally applicable to any database model. The paper then discusses and compares current database models based on these requirements.

Findings

As no survey work so far considers requirements for authorization and access control across different database models, the authors define their own set of requirements. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.

Originality/value

This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 20 August 2021

Daniel Hofer, Markus Jäger, Aya Khaled Youssef Sayed Mohamed and Josef Küng

For aiding computer security experts in their work, log files are a crucial piece of information. The time domain is especially important for us because, in most cases…

Abstract

Purpose

For aiding computer security experts in their work, log files are a crucial piece of information. The time domain is especially important for us because, in most cases, timestamps are the only linking points between events caused by attackers, faulty systems or simple errors and their corresponding entries in log files. With the idea of storing and analyzing this log information in graph databases, we need a suitable model to store and connect timestamps and their events. This paper aims to find and evaluate different approaches to storing timestamps in graph databases, together with their individual benefits and drawbacks.

Design/methodology/approach

We analyse three different approaches to representing and storing timestamp information in graph databases. To check the models, we set up four typical questions that are important for log file analysis and tested them against each of the models. During the evaluation, we used performance and other properties as metrics for how suitable each of the models is for representing the log files' timestamp information. In the last part, we try to improve one promising-looking model.
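
The abstract does not spell out the three models the authors compare. As a hedged illustration of the design space, the sketch below contrasts two commonly discussed options: storing the timestamp as a plain property on the event node versus linking events into a tree of shared calendar nodes. All names and structures are assumptions, not the paper's models.

```python
# Hypothetical sketch of two candidate timestamp models for log events in
# a graph; the abstract does not reveal the paper's actual three models.
from datetime import datetime

import networkx as nx

g = nx.DiGraph()

# Model A: the timestamp is a plain property on the event node.
ts = datetime(2021, 8, 20, 14, 3, 7)
g.add_node("event:1", kind="LogEvent", epoch=ts.timestamp())

# Model B: the event links into a tree of shared calendar nodes.
g.add_node("year:2021")
g.add_node("day:2021-08-20")
g.add_edge("day:2021-08-20", "year:2021", type="part_of")
g.add_node("event:2", kind="LogEvent")
g.add_edge("event:2", "day:2021-08-20", type="occurred_on")

# Under model A a time-range query is a simple property filter ...
lo = datetime(2021, 8, 20).timestamp()
hi = datetime(2021, 8, 21).timestamp()
hits = [n for n, d in g.nodes(data=True)
        if d.get("kind") == "LogEvent" and lo <= d.get("epoch", -1.0) < hi]

# ... while under model B it becomes a traversal from the calendar node.
hits += [u for u, _, d in g.in_edges("day:2021-08-20", data=True)
         if d["type"] == "occurred_on"]
print(hits)  # ['event:1', 'event:2']
```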

Findings

We conclude that the simplest model, the one using the fewest graph database-specific concepts, is also the one yielding the simplest and fastest queries.

Research limitations/implications

This research is limited in that only one graph database was studied; moreover, improvements to the query engine might change future results.

Originality/value

In this work, we addressed the issue of storing timestamps in graph databases in a meaningful, practical and efficient way. The results can be used as a pattern for similar scenarios and applications.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 31 December 2019

Maneerat Kanrak, Hong Oanh Nguyen and Yuquan Du

This paper presents a critical review of economic network analysis methods and their applications to maritime transport. A network can be presented in terms of its structure…

Abstract

This paper presents a critical review of economic network analysis methods and their applications to maritime transport. A network can be presented in terms of its structure, topology, characteristics and connectivity, with measures such as density, degree distribution, centrality (degree, betweenness, closeness, eigenvector and strength), clustering coefficient, average shortest path length and assortativity. Various models, such as the random graph model, the block model and the exponential random graph model (ERGM), can be used to analyse and explore the formation of a network and the interactions between nodes. The review of the existing theories and models finds that, while these models are rather computationally intensive, they rest on some rather restrictive assumptions about network formation and the relationships between ports at the local and global levels, which require further investigation. Based on the review, a conceptual framework for maritime transport network research is developed, and applications for future research are discussed.
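
As a quick, concrete illustration of the measures listed above, the snippet below computes them for a small, made-up port network with the networkx library; the ports and links are invented.

```python
# Toy example: computing the connectivity measures named above on a small,
# made-up port network using networkx.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("Shanghai", "Busan"), ("Busan", "Tokyo"),
                  ("Shanghai", "Singapore"), ("Singapore", "Rotterdam"),
                  ("Rotterdam", "Hamburg"), ("Shanghai", "Rotterdam")])

print("density:", nx.density(g))
print("degree centrality:", nx.degree_centrality(g))
print("betweenness:", nx.betweenness_centrality(g))
print("closeness:", nx.closeness_centrality(g))
print("eigenvector:", nx.eigenvector_centrality(g))
print("clustering coefficient:", nx.average_clustering(g))
print("avg shortest path:", nx.average_shortest_path_length(g))
print("assortativity:", nx.degree_assortativity_coefficient(g))
```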

Details

Journal of International Logistics and Trade, vol. 17 no. 4
Type: Research Article
ISSN: 1738-2122

Keywords

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate…

Abstract

Purpose

Several online services offer functionalities to access information from "big research graphs" (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to consuming statistics for monitoring and the provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although deduplication of graphs is a known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture, describes its implementation as a service used within the OpenAIRE infrastructure system and reports figures from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box ground truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, so as to obtain a disambiguated output graph.
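
GDup's actual algorithms are not detailed in this abstract. The toy Python below is a hedged sketch of the generic shape of such a workflow: blocking records into candidate groups, pairwise matching and merging matched records. The records, field names and similarity threshold are all invented.

```python
# Generic entity-deduplication sketch (not GDup's actual algorithm):
# block candidate records, match pairs by similarity, merge matches.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "title": "GDup: de-duplication of scholarly graphs"},
    {"id": 2, "title": "GDup: deduplication of scholarly graphs"},
    {"id": 3, "title": "Knowledge graph embeddings for uncertainty"},
]

# Blocking: only compare records sharing a cheap key (first title token).
blocks = {}
for r in records:
    blocks.setdefault(r["title"].split()[0].lower(), []).append(r)

# Matching: pairwise similarity inside each block (threshold is made up).
merged_into = {}
for block in blocks.values():
    for a, b in combinations(block, 2):
        sim = SequenceMatcher(None, a["title"], b["title"]).ratio()
        if sim > 0.9:
            merged_into[b["id"]] = a["id"]  # merge b into a

print(merged_into)  # {2: 1}
```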

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated, general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Open Access
Article
Publication date: 18 October 2022

Ramy Shaheen, Suhail Mahfud and Ali Kassem

This paper aims to study irreversible conversion processes, which examine the spread of a one-way change of state (from state 0 to state 1) through a specified society (the spread…

Abstract

Purpose

This paper aims to study irreversible conversion processes, which examine the spread of a one-way change of state (from state 0 to state 1) through a specified society (the spread of disease through populations, the spread of opinion through social networks, etc.), where the conversion rule is determined at the beginning of the study. These processes can be modeled as graph-theoretical models in which the vertex set V(G) represents the set of individuals over which the conversion spreads.

Design/methodology/approach

The irreversible k-threshold conversion process on a graph G = (V, E) is an iterative process which starts by choosing a set S_0 ⊆ V, and for each step t (t = 1, 2, …), S_t is obtained from S_{t−1} by adjoining all vertices that have at least k neighbors in S_{t−1}. S_0 is called the seed set of the k-threshold conversion process and is called an irreversible k-threshold conversion set (IkCS) of G if S_t = V(G) for some t ≥ 0. The minimum cardinality of all the IkCSs of G is referred to as the irreversible k-threshold conversion number of G and is denoted by C_k(G).
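
The definition above translates directly into a short simulation. The sketch below is illustrative, not the paper's method: it runs the k-threshold conversion process from a hand-picked seed on a plain 4 × 4 grid graph (a stand-in example; the paper's strong grids are a different product construction).

```python
# Direct simulation of the process defined above: S_t grows from the seed
# S_0 by adjoining every vertex with >= k neighbors in S_{t-1}. The graph
# and seed are illustrative (a plain 4x4 grid, not a strong grid).
import networkx as nx

def k_threshold_conversion(g, seed, k):
    s = set(seed)  # S_0
    while True:
        gained = {v for v in set(g) - s
                  if sum(1 for u in g.neighbors(v) if u in s) >= k}
        if not gained:
            return s  # process has stabilized
        s |= gained  # S_t = S_{t-1} plus the newly converted vertices

g = nx.grid_2d_graph(4, 4)
seed = {(i, i) for i in range(4)}  # hand-picked diagonal seed
final = k_threshold_conversion(g, seed, k=2)
print(final == set(g))  # True: this seed is an I2CS of the 4x4 grid
```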

Findings

In this paper, the authors determine C_k(G) for the generalized Jahangir graph J_{s,m} for 1 < k ≤ m and arbitrary s and m. The authors also determine C_k(G) for the strong grids P_2 ⊠ P_n when k = 4, 5. Finally, the authors determine C_2(G) for P_n ⊠ P_n for arbitrary n.

Originality/value

This work is original and has important uses in real-life problems such as anti-bioterrorism.

Details

Arab Journal of Mathematical Sciences, vol. 30 no. 1
Type: Research Article
ISSN: 1319-5166

Keywords

Open Access
Article
Publication date: 7 December 2021

Benshuo Yang and Haojun Xu

Japan's decision to release nuclear wastewater into the Pacific Ocean in 2023 has sparked strong opposition at home and abroad. In this study, the Graph Model for Conflict Resolution…

Abstract

Purpose

Japan's decision to release nuclear wastewater into the Pacific Ocean in 2023 has sparked strong opposition at home and abroad. In this study, the Graph Model for Conflict Resolution (GMCR) method is adopted to analyze the conflict, and reasonable equilibrium solutions are given to resolve it.

Design/methodology/approach

In this study, GMCR is adopted to solve the conflict problem. First, the key decision-makers (DMs) on the issue of nuclear effluent are identified, together with the relevant options they might adopt. Second, the options of each DM are arranged and combined to form a set of feasible states. Third, the graph model is constructed according to the changes in the DMs' options, and the relative preference of each DM is determined. Finally, the conflict is solved according to the definitions of GMCR equilibria.
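
As a hedged sketch of the second step, combinations of binary options can be enumerated and pruned to a feasible state set. The decision-makers, options and infeasibility rule below are invented for illustration and are not the study's actual model.

```python
# Illustrative GMCR bookkeeping (not the study's actual model): enumerate
# all combinations of binary options per decision-maker, then prune to a
# feasible state set. DMs, options and the pruning rule are made up.
from itertools import product

options = {
    "Japan": ["discharge", "build_storage"],
    "Neighbors": ["protest"],
}
flat = [(dm, opt) for dm, opts in options.items() for opt in opts]

states = []
for bits in product([0, 1], repeat=len(flat)):
    state = {f"{dm}:{opt}": b for (dm, opt), b in zip(flat, bits)}
    # Hypothetical feasibility rule: Japan cannot both discharge and
    # commit to building more storage at the same time.
    if state["Japan:discharge"] and state["Japan:build_storage"]:
        continue
    states.append(state)

print(len(states))  # 6 feasible states out of 8 raw combinations
```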

Findings

Discharging nuclear wastewater into the ocean is not the right choice for solving the problem. Developing more space to store nuclear wastewater is more conducive to protecting the ocean environment.

Practical implications

It is undesirable for the Japanese government to unilaterally discharge nuclear wastewater into the ocean. Objectively assessing the radioactivity of the nuclear wastewater, combined with cooperation among the relevant stakeholders, would better resolve this conflict.

Originality/value

The problem arising from Japan's release plan is complicated by a lack of information and the existence of multiple stakeholders, and GMCR can provide a better view of the current circumstances of the conflict.

Details

Marine Economics and Management, vol. 5 no. 1
Type: Research Article
ISSN: 2516-158X

Keywords

Open Access
Article
Publication date: 17 August 2021

Abeer A. Zaki, Nesma A. Saleh and Mahmoud A. Mahmoud

This study aims to assess the effect of updating the Phase I data – to enhance the parameters' estimates – on the detection power of control charts designed to monitor social…

Abstract

Purpose

This study aims to assess the effect of updating the Phase I data – to enhance the parameters' estimates – on the detection power of control charts designed to monitor social networks.

Design/methodology/approach

A dynamic version of the degree-corrected stochastic block model (DCSBM) is used to model the network. Both the Shewhart and exponentially weighted moving average (EWMA) control charts are used to monitor the model parameters. The performance of each chart is compared when it is designed using fixed and moving windows of networks.
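
For readers unfamiliar with the EWMA chart used here, the sketch below computes the standard EWMA statistic and its time-varying control limits for a generic monitored parameter stream; the smoothing constant and limit width are conventional textbook values, not the study's settings.

```python
# Standard EWMA control chart on a generic monitored parameter stream.
# lam (smoothing) and L (limit width) are conventional textbook values,
# not the study's settings.
import math

def ewma_first_signal(xs, mu0, sigma, lam=0.2, L=3.0):
    z = mu0  # z_0
    for t, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z  # z_t = lam*x_t + (1-lam)*z_{t-1}
        half_width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z - mu0) > half_width:
            return t  # first out-of-control signal
    return None  # no signal raised

# Simulated stream: in-control values followed by a small upward shift,
# the kind of persistent change the EWMA chart detects well.
stream = [0.1, -0.2, 0.05, 0.0, 0.8, 0.9, 1.1, 1.0]
print(ewma_first_signal(stream, mu0=0.0, sigma=0.5))  # 8
```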

Findings

Our results show that continuously updating the parameters' estimates during the monitoring phase delays the Shewhart chart's detection of network anomalies compared to the fixed window approach, while the EWMA chart's performance, depending on the updating technique, is either unaffected or worse compared to the fixed window approach. Generally, the EWMA chart performs uniformly better than the Shewhart chart for all shift sizes. We recommend the use of the EWMA chart when monitoring networks modeled with the DCSBM, with a sufficiently small to moderate fixed window size to estimate the unknown model parameters.

Originality/value

This study shows that the frequent recommendation in the literature to continuously update the Phase I data during the monitoring phase to enhance control chart performance cannot generally be extended to social network monitoring, especially when using the DCSBM. That is to say, the effect of continuously updating the parameters' estimates depends highly on the nature of the process being monitored.

Details

Review of Economics and Political Science, vol. 6 no. 4
Type: Research Article
ISSN: 2356-9980

Keywords

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model…

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model performance. Uncertainty inherently affects experiment measurements and is often missing from the available data sets because of its estimation cost. For similar reasons, experiments are scarce compared to other data sources. Discarding experiments based on missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental for assessing data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments' missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested under multiple conditions using synthetic data and then applied, as a case study, to a large data set of experiments in the chemical kinetics domain.
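
Link prediction over a knowledge graph embedding can be illustrated with a TransE-style score, under which a triple (head, relation, tail) is plausible when head + relation ≈ tail in the embedding space. The numpy sketch below is a stand-in: the entities, relation and (untrained) random embeddings are invented, and the paper's actual embedding model may differ.

```python
# TransE-style link prediction as a stand-in for the approach described
# above; the paper's actual embedding model may differ. A triple (h, r, t)
# is plausible when h + r is close to t in the embedding space.
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Invented entities/relation; real embeddings would be trained on the
# observed triples of the experiments knowledge graph, not random.
entities = ["experiment:42", "uncertainty:low", "uncertainty:high"]
E = {e: rng.normal(size=dim) for e in entities}
r_has_uncertainty = rng.normal(size=dim)

def score(h, t):
    # Negative L2 distance: closer to 0 means a more plausible triple.
    return -np.linalg.norm(E[h] + r_has_uncertainty - E[t])

# Predict the missing link (experiment:42, has_uncertainty, ?) by ranking
# the candidate tails.
candidates = ["uncertainty:low", "uncertainty:high"]
best = max(candidates, key=lambda t: score("experiment:42", t))
print(best)  # the embedding's best guess for the missing uncertainty
```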

Findings

The analysis of different test case scenarios suggests that knowledge graph embeddings can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embeddings outperform the baseline results when the uncertainty depends upon multiple metadata attributes.

Originality/value

The use of knowledge graph embeddings to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. This contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.
