Search results

1 – 10 of 156
Open Access
Article
Publication date: 20 July 2020

Abdelghani Bakhtouchi

Abstract

With the progress of new information and communication technologies, more and more producers of data exist, and the web serves as a huge repository for all these kinds of data. Unfortunately, existing data is often unreliable because the same information appears in different sources and because of erroneous and incomplete data. The aim of data integration systems is to offer users a single interface for querying a number of sources. A key challenge of such systems is dealing with conflicting information from the same source or from different sources. In this paper, we present conflict resolution at the instance level in two stages: reference reconciliation and data fusion. Reference reconciliation methods seek to decide whether two data descriptions refer to the same real-world entity. We define the principles of reconciliation methods, then distinguish reference reconciliation methods first by how they use the descriptions of references and then by the way they acquire knowledge. We close this part by discussing some open data reconciliation issues that are the subject of current research. Data fusion, in turn, has the objective of merging duplicates into a single representation while resolving conflicts between the data. We first define the classification of conflicts, the strategies for dealing with conflicts and how conflict management strategies are implemented. We then present the relational operators and data fusion techniques. Likewise, we close this part by discussing some open data fusion issues that are the subject of current research.
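
To make the two-stage process above concrete, the following minimal sketch pairs a naive reference reconciliation step (string similarity on names) with a simple data fusion step (a "most recent value wins" rule). The record layout, the similarity threshold and the fusion rule are illustrative assumptions, not the method proposed in the paper.

```python
from difflib import SequenceMatcher

# Illustrative source records that may describe the same real-world entity.
records = [
    {"id": "s1-17", "name": "J. Smith",   "city": "Paris", "year": 2019},
    {"id": "s2-04", "name": "John Smith", "city": "Paris", "year": 2021},
    {"id": "s2-09", "name": "A. Durand",  "city": "Lyon",  "year": 2020},
]

def same_entity(a, b, threshold=0.7):
    """Reference reconciliation: decide whether two descriptions refer to
    the same entity, here via name similarity plus an exact city match."""
    name_sim = SequenceMatcher(None, a["name"], b["name"]).ratio()
    return name_sim >= threshold and a["city"] == b["city"]

def fuse(group):
    """Data fusion: merge duplicates into a single representation and
    resolve conflicts with a 'most recent value wins' strategy."""
    latest = max(group, key=lambda r: r["year"])
    return {"name": latest["name"], "city": latest["city"],
            "sources": [r["id"] for r in group]}

# Naive pairwise grouping (quadratic; real systems use blocking or indexing).
fused, used = [], set()
for i, rec in enumerate(records):
    if i in used:
        continue
    group = [rec]
    for j in range(i + 1, len(records)):
        if j not in used and same_entity(rec, records[j]):
            group.append(records[j])
            used.add(j)
    fused.append(fuse(group))

print(fused)  # two entities: "John Smith"/Paris (from s1-17, s2-04) and "A. Durand"/Lyon
```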

Details

Applied Computing and Informatics, vol. 18 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 8 June 2015

Elisabeth Ilie-Zudor, Anikó Ekárt, Zsolt Kemeny, Christopher Buckingham, Philip Welch and Laszlo Monostori

Abstract

Purpose

The purpose of this paper is to examine challenges and potential of big data in heterogeneous business networks and relate these to an implemented logistics solution.

Design/methodology/approach

The paper establishes an overview of challenges and opportunities of current significance in the area of big data, specifically in the context of transparency and processes in heterogeneous enterprise networks. Within this context, the paper presents how existing components and purpose-driven research were combined for a solution implemented in a nationwide network for less-than-truckload consignments.

Findings

Aside from providing an extended overview of today’s big data situation, the findings have shown that technical means and methods available today can comprise a feasible process transparency solution in a large heterogeneous network where legacy practices, reporting lags and incomplete data exist, yet processes are sensitive to inadequate policy changes.

Practical implications

The means introduced in the paper were found to be of utility value in improving process efficiency, transparency and planning in logistics networks. The particular system design choices in the presented solution allow an incremental introduction or evolution of resource handling practices, incorporating existing fragmentary, unstructured or tacit knowledge of experienced personnel into the theoretically founded overall concept.

Originality/value

The paper extends previous high-level views on the potential of big data and presents new applied research and development results in a logistics application.

Details

Supply Chain Management: An International Journal, vol. 20 no. 4
Type: Research Article
ISSN: 1359-8546

Open Access
Article
Publication date: 21 June 2019

Pengze Li, Ran Zhang, Lei Liu, Lizhen Cui, Qingzhong Li and Guangpeng Zhou

Abstract

Purpose

Science of the Crowd is a new paradigm. Research on the relationship between provision and requirement arising from the behavior of the crowd in an interconnected environment is a promising topic. This paper aims to study a new type of interconnected architecture.

Design/methodology/approach

This study is pioneering work on the establishment of a new type of interconnected architecture, the rim chain, which aims to support prompt matching between provision and requirements.

Findings

The analytical results suggest that requirements can be fulfilled in accordance with six degrees of separation; in other words, matching between requirements and provision takes place within six hops in the rim chain framework.

Originality/value

A knowledge graph is used to implement the rim chain.
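
The abstract does not spell out the matching procedure; the sketch below only illustrates the "six degrees of separation" idea as a breadth-first search over a toy knowledge graph, stopping once a node providing the requested capability is reached within six hops. The graph, the node names and the capability labels are invented for illustration.

```python
from collections import deque

# Toy knowledge graph: who is connected to whom, and what each node provides.
edges = {
    "requester": ["a", "b"],
    "a": ["c"], "b": ["d"],
    "c": ["e"], "d": ["f"],
    "e": [], "f": ["provider"],
    "provider": [],
}
provides = {"provider": {"3d-printing"}}

def match(start, requirement, max_hops=6):
    """Return the hop count at which the requirement is first fulfilled,
    or None if no provider is reachable within max_hops."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if requirement in provides.get(node, set()):
            return hops
        if hops == max_hops:
            continue
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

print(match("requester", "3d-printing"))  # 4 hops in this toy graph
```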

Details

International Journal of Crowd Science, vol. 3 no. 2
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to the consumption of statistics for monitoring and the provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although deduplication of graphs is a known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be reused in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture, its implementation as a service used within the OpenAIRE infrastructure system, and reports results from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.
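
GDup's actual API is not given in the abstract; the sketch below only illustrates the generic workflow described there, candidate identification via a blocking key, duplicate detection within each block, and merging into a disambiguated graph, using invented record fields and a deliberately naive matching rule.

```python
from itertools import combinations

# Illustrative publication nodes from an aggregated research graph.
nodes = [
    {"id": "n1", "title": "Deep Learning for Graphs", "year": 2020},
    {"id": "n2", "title": "Deep learning for graphs", "year": 2020},
    {"id": "n3", "title": "Ground Truth Management",  "year": 2019},
]

def blocking_key(node):
    """Cheap key so only plausibly-duplicate nodes are compared pairwise."""
    return (node["title"].lower()[:10], node["year"])

# 1. Candidate identification: group nodes by blocking key.
blocks = {}
for n in nodes:
    blocks.setdefault(blocking_key(n), []).append(n)

# 2. Duplicate detection: exact match on normalized title within a block.
duplicates = [
    (a["id"], b["id"])
    for block in blocks.values()
    for a, b in combinations(block, 2)
    if a["title"].lower() == b["title"].lower()
]

# 3. Merging: keep one representative per duplicate pair (union-find would
#    be used for larger clusters); the output is the disambiguated graph.
merged_away = {b for _, b in duplicates}
disambiguated = [n for n in nodes if n["id"] not in merged_away]

print(duplicates)      # [('n1', 'n2')]
print(disambiguated)   # n1 and n3 survive
```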

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 8 July 2021

Johann Eder and Vladimir A. Shekhovtsov

Abstract

Purpose

Medical research requires biological material and data collected through biobanks in reliable processes with quality assurance. Medical studies based on data with unknown or questionable quality are useless or even dangerous, as evidenced by recent examples of withdrawn studies. Medical data sets consist of highly sensitive personal data, which has to be protected carefully and is available for research only after the approval of ethics committees. The purpose of this research is to propose an architecture that supports researchers in efficiently and effectively identifying relevant collections of material and data with documented quality for their research projects while observing strict privacy rules.

Design/methodology/approach

Following a design science approach, this paper develops a conceptual model for capturing and relating metadata of medical data in biobanks to support medical research.

Findings

This study describes the landscape of biobanks as federated medical data lakes, such as the collections of samples and their annotations in the European federation of biobanks (Biobanking and Biomolecular Resources Research Infrastructure – European Research Infrastructure Consortium, BBMRI-ERIC), and develops a conceptual model capturing schema information with quality annotation. This paper discusses in depth the quality dimensions of data sets for medical research and proposes representations of both the metadata and the data quality documentation, with the aim of supporting researchers in effectively and efficiently identifying suitable data sets for medical studies.

Originality/value

This novel conceptual model for metadata for medical data lakes has a unique focus on the high privacy requirements of the data sets contained in medical data lakes and also stands out in the detailed representation of data quality and metadata quality of medical data sets.
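
The paper proposes a conceptual model rather than code; the dataclasses below are one plausible, simplified rendering of its core idea, collections in a federated data lake described only by schema-level metadata carrying quality annotations, with all attribute names invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityAnnotation:
    """Quality documentation attached to an attribute or a whole collection."""
    dimension: str        # e.g. "completeness", "accuracy"
    value: float          # measured score, 0..1
    method: str           # how the score was obtained

@dataclass
class AttributeMetadata:
    name: str             # schema-level name only; no patient-level data
    datatype: str
    quality: List[QualityAnnotation] = field(default_factory=list)

@dataclass
class CollectionMetadata:
    biobank: str          # e.g. a BBMRI-ERIC member biobank
    collection_id: str
    attributes: List[AttributeMetadata] = field(default_factory=list)

# Researchers query such metadata to find suitable collections without
# touching the sensitive data itself.
lung = CollectionMetadata(
    biobank="Example Biobank", collection_id="lung-tissue-2021",
    attributes=[AttributeMetadata(
        name="smoking_status", datatype="categorical",
        quality=[QualityAnnotation("completeness", 0.92, "missing-value count")])])
print(lung.biobank, len(lung.attributes))
```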

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 15 October 2021

Bangxi Li, Chong Liu, Feng Zhao and Yanghua Huang

Abstract

Purpose

In the current literature, there is little systematic research on the relationship among adjustment of the income distribution, change in economic structure and improvement of macroeconomic efficiency.

Design/methodology/approach

This paper expands Marx's reproduction schema into the “Marx–Sraffa” three-department structure table comprising fixed capital, general means of production and means of consumption and employs China's input–output table from 1987 to 2015 to portray the relationship between income distribution and macroeconomic efficiency under investment-driven growth.

Findings

This paper calculates the wage–profit curve of China's economy and evaluates the space of macroeconomic efficiency improvement in China based on the deviation between actual and potential income distribution structure.

Originality/value

The results show a downward trend of the profit rate, which accords with Marx's theoretical prediction, and the decline in the profit rate is mainly attributed to an increase in the organic composition of capital arising from the rapid growth of fixed capital investment under extended growth. The analysis of macroeconomic efficiency shows that the space for improving macroeconomic efficiency is extremely limited under the traditional growth pattern and that China must transform its economic development pattern and foster new economic growth drivers.
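
The abstract refers to the wage-profit curve without reproducing it. In a standard Sraffa-type price system, which the paper's three-department schema builds on, the curve follows from the price equations sketched below; the symbols are the conventional ones and not necessarily the paper's notation.

```latex
% Price system: p = row vector of prices, A = input coefficient matrix,
% l = row vector of labour coefficients, r = uniform profit rate, w = wage rate.
\[
  p = (1 + r)\,pA + w\,l
\]
% Solving for prices in terms of w and r, and fixing a numeraire d with p d = 1,
% gives the wage-profit curve w(r):
\[
  p = w\,l\,\bigl[\,I - (1+r)A\,\bigr]^{-1}, \qquad
  w(r) = \frac{1}{\,l\,\bigl[\,I - (1+r)A\,\bigr]^{-1} d\,}
\]
```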

Details

China Political Economy, vol. 4 no. 1
Type: Research Article
ISSN: 2516-1652

Open Access
Article
Publication date: 25 September 2020

Julian Hocker, Christoph Schindler and Marc Rittberger

Abstract

Purpose

The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations.

Design/methodology/approach

This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science by making qualitative research more transparent and by enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews and an evaluation phase based on user tests as well as interviews.

Findings

The research showed several positive outcomes due to participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles to this approach: first, contradictory answers from the interviewees, which need to be balanced; second, the approach takes more time because of interview planning and analysis.

Practical implications

The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels.

Originality/value

In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design, relying mainly on interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology on the experts' knowledge.
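
The resulting ontology itself is not reproduced in the abstract; the sketch below, using the rdflib library, only suggests what a minimal machine-readable description of a qualitative coding schema might look like. The namespace, class and property names are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace; the project's real ontology IRI is not given here.
CS = Namespace("http://example.org/coding-schema#")

g = Graph()
g.bind("cs", CS)

# Minimal classes and properties: a coding schema groups codes; codes may nest.
g.add((CS.CodingSchema, RDF.type, RDFS.Class))
g.add((CS.Code, RDF.type, RDFS.Class))
g.add((CS.hasCode, RDF.type, RDF.Property))
g.add((CS.subCodeOf, RDF.type, RDF.Property))

# One example instance, as a researcher might share it for reuse.
g.add((CS.InterviewStudy2020, RDF.type, CS.CodingSchema))
g.add((CS.Motivation, RDF.type, CS.Code))
g.add((CS.IntrinsicMotivation, CS.subCodeOf, CS.Motivation))
g.add((CS.InterviewStudy2020, CS.hasCode, CS.Motivation))
g.add((CS.Motivation, RDFS.label, Literal("Motivation")))

print(g.serialize(format="turtle"))
```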

Details

Aslib Journal of Information Management, vol. 72 no. 4
Type: Research Article
ISSN: 2050-3806

Open Access
Article
Publication date: 9 October 2023

Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng

Abstract

Purpose

Data protection requirements have increased heavily due to the rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current survey works on databases and data security only consider authorization and access control in a very general way and do not address most of today’s sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.

Design/methodology/approach

This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with a review of survey works on authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements, which are generally applicable to any database model. This paper then discusses and compares current database models based on these requirements.

Findings

As no survey works so far consider requirements for authorization and access control across different database models, the authors define their own set of requirements. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.

Originality/value

This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.
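
The survey compares requirements rather than prescribing an implementation; the sketch below only illustrates one of the requirements it discusses, a role-based rule with an optional field-level restriction evaluated against a document-store record. All role names, collections and fields are invented.

```python
# Illustrative role-based access rules for a document-store collection.
# A rule names a role, an allowed action and an optional field-level restriction.
RULES = [
    {"role": "doctor", "action": "read",  "collection": "patients", "fields": None},
    {"role": "clerk",  "action": "read",  "collection": "patients",
     "fields": {"name", "insurance_id"}},              # field-level restriction
    {"role": "doctor", "action": "write", "collection": "patients", "fields": None},
]

def authorize(role, action, collection, document):
    """Return the document filtered to what the role may see, or None (deny)."""
    for rule in RULES:
        if (rule["role"], rule["action"], rule["collection"]) == (role, action, collection):
            if rule["fields"] is None:
                return document
            return {k: v for k, v in document.items() if k in rule["fields"]}
    return None  # deny by default

doc = {"name": "Jane Doe", "insurance_id": "X-42", "diagnosis": "..."}
print(authorize("doctor", "read", "patients", doc))  # full document
print(authorize("clerk",  "read", "patients", doc))  # name and insurance_id only
print(authorize("guest",  "read", "patients", doc))  # None (denied)
```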

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 1 September 2012

Chung Sheng-Hung

Abstract

This paper focuses on the integration of learning objects and a knowledge map as the learning sequence suggestion in the mobile learning environment and explains the technologies involved, the applications and the issues of usability, accessibility, evaluation and effectiveness. Mobile learning has opened up new paths for learning support and opportunities to reach a wider audience of learners. This research focuses on using the knowledge map to store the characteristics of each learning object via concept schemas and to represent the corresponding learning accessibility in the mobile learning environment. The proposed architecture provides a medium for the learning accessibility of learners through mobile applications and wireless portable devices such as smart phones, PDAs and tablet PCs. The approach, using the combination of "touch" and "observe" spatial learning objects, provides an intelligent solution for creating, sharing and improving the efficiency of mobile learning. The proposed mobile learning environment architecture consists mainly of knowledge map components: navigation, concept schemas and the learning object path. By using these knowledge structures, this study may enhance and enrich the concept and activity of adaptive learning for different individuals and communities. The spatial knowledge map constructed was useful in identifying the characteristics of the learning objects (e.g., learning object 1: a lesson with navigating sentences; learning object 2: a lesson with navigating sentences and code explanation; etc.) and in automatically matching the most appropriate learning content and path for learners. The architecture of the platform discussed in this study, using the learning objects approach and the knowledge map, would facilitate a more widespread use of mobile learning, including the delivery of courses or modules along individualised learning paths and learning style analysis.
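
The platform is described only at the architecture level; the sketch below merely illustrates the idea of a knowledge map whose nodes are learning objects annotated with characteristics and prerequisites, from which a simple learning path is suggested. The object names, attributes and prerequisite structure are invented.

```python
# Knowledge map: each learning object records its characteristics and the
# objects that should be mastered before it (prerequisites).
knowledge_map = {
    "LO1": {"style": "navigating sentences",        "requires": []},
    "LO2": {"style": "navigating sentences + code", "requires": ["LO1"]},
    "LO3": {"style": "hands-on exercise",           "requires": ["LO2"]},
}

def learning_path(target, km):
    """Suggest an ordered path to a target learning object by resolving
    prerequisites depth-first (assumes the map is acyclic)."""
    path, seen = [], set()
    def visit(lo):
        if lo in seen:
            return
        seen.add(lo)
        for pre in km[lo]["requires"]:
            visit(pre)
        path.append(lo)
    visit(target)
    return path

print(learning_path("LO3", knowledge_map))  # ['LO1', 'LO2', 'LO3']
```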

Details

Asian Association of Open Universities Journal, vol. 7 no. 1
Type: Research Article
ISSN: 1858-3431

Open Access
Article
Publication date: 6 September 2021

Gerd Hübscher, Verena Geist, Dagmar Auer, Nicole Hübscher and Josef Küng

Abstract

Purpose

Knowledge- and communication-intensive domains still long for better support of creativity that also considers legal requirements, compliance rules and administrative tasks, because current systems focus on either knowledge representation or business process management. The purpose of this paper is to discuss our model of integrated knowledge and business process representation and its presentation to users.

Design/methodology/approach

The authors follow a design science approach in the environment of patent prosecution, which is characterized by a highly standardized, legally prescribed process and individual knowledge work. Thus, the research is based on knowledge work, BPM, graph-based knowledge representation and user interface design. The authors iteratively designed and built a model and a prototype. To evaluate the approach, the authors used analytical proof of concept, real-world test scenarios and case studies in real-world settings, where the authors conducted observations and open interviews.

Findings

The authors designed a model and implemented a prototype for evolving and storing static and dynamic aspects of knowledge. The proposed solution leverages the flexibility of a graph-based model to enable not only open, continuously evolving user-centered processes but also pre-defined ones. The authors further propose a user interface concept which supports users in benefiting from the richness of the model while providing sufficient guidance.

Originality/value

The balanced integration of the data and task perspectives distinguishes the model significantly from other approaches such as BPM or knowledge graphs. The authors further provide a sophisticated user interface design, which allows the users to effectively and efficiently use the graph-based knowledge representation in their daily work.
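
The model itself is not reproduced in the abstract; the sketch below only illustrates the general idea of holding knowledge items and process tasks in one graph so that a task can reference the knowledge it uses or produces. The node and edge labels are invented and do not reflect the authors' schema.

```python
# One graph for both perspectives: knowledge nodes and task nodes,
# connected by typed edges such as "uses" and "produces".
nodes = {
    "claim-1":     {"kind": "knowledge", "label": "Independent claim draft"},
    "prior-art":   {"kind": "knowledge", "label": "Prior-art reference"},
    "task-review": {"kind": "task",      "label": "Review claim against prior art"},
}
edges = [
    ("task-review", "uses",     "prior-art"),
    ("task-review", "produces", "claim-1"),
]

def knowledge_for_task(task, edge_list, relation="uses"):
    """All knowledge nodes a given task consumes, per the typed edges."""
    return [dst for src, rel, dst in edge_list if src == task and rel == relation]

print(knowledge_for_task("task-review", edges))  # ['prior-art']
```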

Details

International Journal of Web Information Systems, vol. 17 no. 6
Type: Research Article
ISSN: 1744-0084
