Search results

1 – 10 of over 5000
Open Access
Article
Publication date: 3 October 2022

Igor Perko

Abstract

Purpose

Artificial intelligence (AI) reasoning is fuelled by high-quality, detailed behavioural data. These can usually be obtained by the biometric sensors embedded in smart devices. In the currently used data-collecting approach, data ownership and property rights are assumed by the data scientists who design a device or a related application, which raises multiple ethical, sociological and governance concerns. In this paper, the author opens a systemic examination of a data-sharing concept in which data producers exercise their data property rights.

Design/methodology/approach

Since the data-sharing concept delivers a substantially different alternative, it needs to be thoroughly examined from multiple perspectives, among them the ethical, the social and the feasibility-related. At this stage, theoretical examination in the form of literature analysis and mental model development is performed.

Findings

Data sharing concepts, framework, mechanisms and swift viability are examined. The author determined that data sharing could lead to virtuous data science by augmenting data producers' capacity to govern their data and regulators' capacity to interact in the process. Truly interdisciplinary research is proposed to follow up on this research.

Research limitations/implications

Since the research proposal is theoretical, it may not provide direct applicative value but is largely focussed on fuelling new research directions.

Practical implications

For the researchers, data sharing concepts will provide an alternative approach and help resolve multiple ethical considerations related to the internet of things (IoT) data collecting approach. For the practitioners in data science, it will provide numerous new challenges, such as distributed data storing, distributed data analysis and intelligent data sharing protocols.

Social implications

Data sharing may pose significant implications for research and development. Since ethical, legislative, moral and trust-related issues are managed in the negotiation process, data can be shared freely, which in a practical sense expands the data pool for virtuous research in the social sciences.

Originality/value

The paper opens new research directions of data sharing concepts and space for a new field of research.

Details

Kybernetes, vol. 52 no. 9
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from search and browse content to the consumption of statistics for monitoring and provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although deduplication of graphs is a known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and cannot therefore be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture and its implementation as a service used within the OpenAIRE infrastructure system, and reports figures from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.
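GDup's internal algorithms are not reproduced in this abstract. As a sketch of the general technique it describes (candidate blocking, pairwise matching, merging duplicates into groups), where the records, the blocking key and the similarity function are all illustrative assumptions rather than GDup's actual API:

```python
from collections import defaultdict

def dedup(entities, key, similar, threshold):
    """Generic entity-deduplication sketch: block records by a cheap key,
    compare pairs within each block and merge matches with union-find."""
    # 1. Blocking: only records sharing a key are compared (avoids O(n^2)).
    blocks = defaultdict(list)
    for e in entities:
        blocks[key(e)].append(e)

    # 2. Union-find structure accumulating merge decisions.
    parent = {e["id"]: e["id"] for e in entities}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # 3. Pairwise matching inside each block.
    for block in blocks.values():
        for i, a in enumerate(block):
            for b in block[i + 1:]:
                if similar(a, b) >= threshold:
                    parent[find(a["id"])] = find(b["id"])

    # 4. Emit groups of duplicate ids (the disambiguated entities).
    groups = defaultdict(list)
    for e in entities:
        groups[find(e["id"])].append(e["id"])
    return list(groups.values())

records = [
    {"id": 1, "name": "J. Smith"},
    {"id": 2, "name": "John Smith"},
    {"id": 3, "name": "A. Jones"},
]
last = lambda e: e["name"].split()[-1]
sim = lambda a, b: 1.0 if last(a) == last(b) else 0.0
print(dedup(records, key=last, similar=sim, threshold=0.9))  # → [[1, 2], [3]]
```

A production system such as GDup layers ground-truth management and curator feedback on top of a core loop of this kind.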

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 28 September 2021

Maria Vincenza Ciasullo, Mariarosaria Carli, Weng Marc Lim and Rocco Palumbo

Abstract

Purpose

The article applies the citizen science phenomenon – i.e. lay people involvement in research endeavours aimed at pushing forward scientific knowledge – to healthcare. Attention is paid to initiatives intended to tackle the COVID-19 pandemic as an illustrative case to exemplify the contribution of citizen science to system-wide innovation in healthcare.

Design/methodology/approach

A mixed methodology consisting of three sequential steps was developed. Firstly, a realist literature review was carried out to contextualize citizen science to healthcare. Secondly, an account of successfully completed large-scale, online citizen science projects dealing with healthcare and medicine was compiled to obtain preliminary information about the distinguishing features of citizen science in healthcare. Thirdly, a broad search of citizen science initiatives targeted at tackling the COVID-19 pandemic was performed, and a comparative case study approach was undertaken to examine the attributes of such projects and to unravel their peculiarities.

Findings

Citizen science enacts the development of a lively healthcare ecosystem, which takes its nourishment from the voluntary contribution of lay people. Citizen scientists play different roles in accomplishing citizen science initiatives, ranging from data collectors to data analysts. Alongside enabling big data management, citizen science contributes to lay people's education and empowerment, soliciting their active involvement in service co-production and value co-creation.

Practical implications

Citizen science is still underexplored in healthcare. Even though further evidence is needed to emphasize the value of lay people's involvement in scientific research applied to healthcare, citizen science is expected to revolutionize the way innovation is pursued and achieved in the healthcare ecosystem. Engaging lay people in a co-creating partnership with expert scientists can help address unprecedented health-related challenges and shape the future of healthcare. Tailored health policy and management interventions are required to empower lay people and to stimulate their active engagement in value co-creation.

Originality/value

Citizen science relies on the wisdom of the crowd to address major issues faced by healthcare organizations. The article offers a state-of-the-art investigation of citizen science in healthcare, shedding light on its attributes and envisioning avenues for further development.

Details

European Journal of Innovation Management, vol. 25 no. 6
Type: Research Article
ISSN: 1460-1060

Open Access
Article
Publication date: 29 August 2024

Marjut Hirvonen, Katri Kauppi and Juuso Liesiö

Abstract

Purpose

Although it is commonly agreed that prescriptive analytics can benefit organizations by enabling better decision-making, the deployment of prescriptive analytics tools can be challenging. Previous studies have primarily focused on methodological issues rather than the organizational deployment of analytics. However, successful deployment is key to achieving the intended benefits of prescriptive analytics tools. Therefore, this study aims to identify the enablers of successful deployment of prescriptive analytics.

Design/methodology/approach

The authors examine the enablers for the successful deployment of prescriptive analytics through five organizational case studies. To provide a comprehensive view of the deployment process, each case includes interviews with users, managers and top management.

Findings

The findings suggest the key enablers for successful analytics deployment are strong leadership and management support, sufficient resources, user participation in development and a common dialogue between users, managers and top management. However, contrary to the existing literature, the authors found little evidence of external pressures to develop and deploy analytics. Importantly, the success of deployment in each case was related to the similarity with which different actors within the organization viewed the deployment process. Furthermore, end users tended to highlight user participation, skills and training, whereas managers and top management placed greater emphasis on the importance of organizational changes.

Originality/value

The results will help practitioners ensure that key enablers are in place to increase the likelihood of the successful deployment of prescriptive analytics.

Details

European Business Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0955-534X

Open Access
Article
Publication date: 11 June 2024

Julian Rott, Markus Böhm and Helmut Krcmar

Abstract

Purpose

Process mining (PM) has emerged as a leading technology for gaining data-based insights into organizations’ business processes. As processes increasingly cross-organizational boundaries, firms need to conduct PM jointly with multiple organizations to optimize their operations. However, current knowledge on cross-organizational process mining (coPM) is widely dispersed. Therefore, we synthesize current knowledge on coPM, identify challenges and enablers of coPM, and build a socio-technical framework and agenda for future research.

Design/methodology/approach

We conducted a literature review of 66 articles and summarized the findings according to the framework for Information Technology (IT)-enabled inter-organizational coordination (IOC) and the refined PM framework. The former states that within inter-organizational relationships, uncertainty sources determine information processing needs and coordination mechanisms determine information processing capabilities, while the fit between needs and capabilities determines the relationships’ performance. The latter distinguishes three categories of PM activities: cartography, auditing and navigation.

Findings

Past literature focused on coPM techniques, for example, algorithms for ensuring privacy and PM for cartography. Future research should focus on socio-technical aspects and follow four steps: First, determine uncertainty sources within coPM. Second, design, develop and evaluate coordination mechanisms. Third, investigate how the mechanisms assist with handling uncertainty. Fourth, analyze the impact on coPM performance. In addition, we present 18 challenges (e.g. integrating distributed data) and 9 enablers (e.g. aligning different strategies) for coPM application.

Originality/value

This is the first article to systematically investigate the status quo of coPM research and lay out a socio-technical research agenda building upon the well-established framework for IT-enabled IOC.

Details

Business Process Management Journal, vol. 30 no. 8
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 4 November 2022

Bianca Caiazzo, Teresa Murino, Alberto Petrillo, Gianluca Piccirillo and Stefania Santini

Abstract

Purpose

This work aims at proposing a novel Internet of Things (IoT)-based and cloud-assisted monitoring architecture for smart manufacturing systems, able to evaluate their overall status and detect any anomalies occurring in production. A novel artificial intelligence (AI)-based technique, able to identify the specific anomalous event and the related risk classification for possible intervention, is hence proposed.

Design/methodology/approach

The proposed solution is a five-layer scalable and modular platform in an Industry 5.0 perspective, where the crucial layer is the Cloud Cyber one. This embeds a novel anomaly detection solution, designed by leveraging control charts, long short-term memory (LSTM) autoencoders (AE) and a Fuzzy Inference System (FIS). The proper combination of these methods allows not only detecting product defects but also recognizing their causes.
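The control-chart stage of such an anomaly detection pipeline can be illustrated in isolation with a plain 3-sigma (Shewhart) rule; the AE-LSTM and FIS stages are omitted here, and all readings and names below are illustrative, not taken from the paper:

```python
import statistics

def control_chart_limits(history):
    """Compute Shewhart-style 3-sigma control limits from in-control data."""
    mean = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return mean - 3 * sigma, mean + 3 * sigma

def is_anomalous(value, limits):
    """Flag a reading that falls outside the control limits."""
    lo, hi = limits
    return not (lo <= value <= hi)

# Illustrative sensor readings from a process running in control.
baseline = [20.1, 19.8, 20.0, 20.3, 19.9, 20.2, 20.0, 19.7, 20.1, 20.0]
limits = control_chart_limits(baseline)
print(is_anomalous(20.1, limits))  # → False (in control)
print(is_anomalous(25.0, limits))  # → True (out of control)
```

In the architecture described, a flag of this kind would be passed on to the AE-LSTM and fuzzy stages to classify the specific anomalous event and its risk level.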

Findings

The proposed architecture, experimentally validated on a manufacturing system involved in the production of a solar thermal high-vacuum flat panel, provides human operators with information about anomalous events, where they occur, and crucial information about their risk levels.

Practical implications

Thanks to the abnormal risk panel, human operators and business managers are able not only to remotely visualize the real-time status of each production parameter but also to respond properly to anomalous events, and only when necessary. This is especially relevant in an emergency situation, such as the COVID-19 pandemic.

Originality/value

The monitoring platform is one of the first attempts at leading modern manufacturing systems toward the Industry 5.0 concept. Indeed, it combines human strengths, IoT technology on machines and cloud-based solutions with AI and zero-defect manufacturing strategies in a unified framework, so as to detect causalities in complex dynamic systems and enable the avoidance of product waste.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 4
Type: Research Article
ISSN: 1741-038X

Open Access
Article
Publication date: 5 April 2023

Xinghua Shan, Zhiqiang Zhang, Fei Ning, Shida Li and Linlin Dai

Abstract

Purpose

With the yearly increase of mileage and passenger volume in China's high-speed railway, the problems of traditional paper railway tickets have become increasingly prominent, including the complexity of the business handling process, the low efficiency of ticket inspection and the high cost of usage and management. This paper makes extensive reference to successful experiences of electronic ticket applications both domestically and internationally, and carries out research on the key technologies and system implementation of a railway electronic ticket with Chinese characteristics.

Design/methodology/approach

Research is conducted on key technologies including synchronization techniques in distributed heterogeneous database systems, the grid-oriented passenger service record (PSR) data storage model, efficient access to massive PSR data under high-concurrency conditions, the linkage between face recognition service platforms and various terminals in large scenarios, and two-factor authentication of the e-ticket identification code based on the key and the user identity information. Focusing on the key technologies and architecture of the existing ticketing system, multiple service resources are expanded and developed, such as electronic ticket clusters, PSR clusters, face recognition clusters and electronic ticket identification code clusters.
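The paper does not spell out the actual code construction, so the following is only one plausible reading of an identification code derived from a key and the user identity, sketched with a standard HMAC; every name and value below is hypothetical:

```python
import hashlib
import hmac

def eticket_code(secret_key: bytes, user_identity: str, ticket_id: str) -> str:
    """Illustrative keyed identification code: binds a ticket to its holder,
    so the code cannot be forged or transferred without the server-side key."""
    msg = f"{ticket_id}|{user_identity}".encode()
    return hmac.new(secret_key, msg, hashlib.sha256).hexdigest()[:16]

def verify(secret_key: bytes, user_identity: str, ticket_id: str, code: str) -> bool:
    """Recompute the code and compare in constant time."""
    expected = eticket_code(secret_key, user_identity, ticket_id)
    return hmac.compare_digest(expected, code)

key = b"server-side-secret"  # held by the ticketing backend (hypothetical)
code = eticket_code(key, "passenger:alice", "G123-12A")
print(verify(key, "passenger:alice", "G123-12A", code))  # → True
print(verify(key, "passenger:bob", "G123-12A", code))    # → False
```

The "two-factor" aspect in the paper pairs such a keyed code with the user identity information itself, e.g. as checked by the face recognition platform at the gate.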

Findings

The proportion of paper tickets printed has dropped to 20%, saving more than 2 billion tickets annually since the nationwide launch of e-ticketing. The average time for passengers to pass through the automatic ticket gates has decreased from 3 seconds to 1.3 seconds, significantly improving the efficiency of passenger transport organization. Meanwhile, the problems of paper ticket counterfeiting, reselling and loss have been largely eliminated.

Originality/value

E-ticketing has laid a technical foundation for the further development of railway passenger transport services in the direction of digitalization and intelligence.

Details

Railway Sciences, vol. 2 no. 1
Type: Research Article
ISSN: 2755-0907

Open Access
Article
Publication date: 25 January 2022

Ijaz Ul Haq, James Andrew Colwill, Chris Backhouse and Fiorenzo Franceschini

Abstract

Purpose

Lean distributed manufacturing (LDM) is being considered as an enabler of achieving sustainability and resilience in manufacturing and supply chain operations. The purpose of this paper is to enhance the understanding of how LDM characteristics affect the resilience of manufacturing companies by drawing upon the experience of food manufacturing companies operating in the UK.

Design/methodology/approach

The paper develops a conceptual model to analyse the impact of LDM on the operational resilience of food manufacturing companies. A triangulation research methodology (secondary data analysis, field observations and structured interviews) is used in this study. In a first step, LDM enablers and resilience elements are identified from literature. In a second step, empirical evidence is collected from six food sub-sectors aimed at identifying LDM enablers being practised in companies.

Findings

The analysis reveals that LDM enablers can improve the resilience capabilities of manufacturing companies at different stages of the resilience action cycle, whereas the application status of different LDM enablers varies across food manufacturing companies. The findings include the development of a conceptual model (based on the literature) and a relationship matrix between LDM enablers and resilience elements.

Practical implications

The developed relationship matrix is helpful for food manufacturing companies to assess their resilience capability in terms of LDM characteristics and then formulate action plans to incorporate relevant LDM enablers to enhance operational resilience.

Originality/value

Based on the literature review, no studies exist that investigate the effects of LDM on a factory's resilience, despite many research studies suggesting distributed manufacturing as an enabler of sustainability and resilience.

Details

International Journal of Lean Six Sigma, vol. 13 no. 5
Type: Research Article
ISSN: 2040-4166

Open Access
Article
Publication date: 11 May 2020

Zhizhao Zhang, Tianzhi Yang and Yuan Liu

Abstract

Purpose

The purpose of this work is to bridge federated learning (FL) and blockchain technology by designing a blockchain-based smart agent system architecture and applying it in FL. FL is an emerging collaborative machine learning technique that trains a model across multiple devices or servers holding private data samples without exchanging their data. The locally trained results are aggregated by a centralized server in a privacy-preserving way. However, there is an assumption that the centralized server is trustworthy, which is impractical. Fortunately, blockchain technology has opened a new era of data exchange among trustless strangers because of its decentralized architecture and cryptography-supported techniques.

Design/methodology/approach

In this study, the author proposes a novel design of a smart agent inspired by the smart contract concept. Specifically, based on the proposed smart agent, a fully decentralized, privacy-preserving and fair deep learning blockchain-FL framework is designed, where the agent network is consistent with the blockchain network and each smart agent is a participant in the FL task. During the whole training process, neither the data nor the model is at risk of leakage.
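The FL aggregation step at the heart of this design can be sketched independently of the blockchain layer, here as plain federated averaging on a toy one-parameter least-squares model; this illustrates the general technique, not the author's implementation:

```python
def local_train(w, data, lr=0.1):
    """One local gradient-descent step on a 1-D least-squares model,
    standing in for each agent's private training round."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(local_weights):
    """Aggregation: plain FedAvg. In the proposed framework this step is
    carried out by the smart agents rather than a central server."""
    return sum(local_weights) / len(local_weights)

# Two agents holding private datasets drawn from y = 2x (illustrative).
datasets = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (0.5, 1.0)]]
w = 0.0
for _ in range(50):  # each round: local training, then aggregation
    w = federated_average([local_train(w, d) for d in datasets])
print(round(w, 3))  # → 2.0
```

Only the locally updated weights cross agent boundaries; the raw (x, y) samples never leave their owner, which is the privacy property the framework builds on.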

Findings

A demonstration of the proposed architecture is designed to train a neural network. Finally, the proposed architecture is implemented in an Ethereum development environment, showing the effectiveness and applicability of the design.

Originality/value

The author aims to investigate the feasibility and practicality of linking three areas together, namely multi-agent systems, FL and blockchain. A blockchain-FL framework based on a smart agent system has been proposed, making several contributions to the state of the art. First of all, a concrete design of a smart agent model is proposed, inspired by the smart contract concept in blockchain. The smart agent is autonomous and is able to disseminate and verify information and execute the supported protocols. Based on the proposed smart agent model, a new architecture composed of these agents is formed, which constitutes a blockchain network. Then, a fully decentralized, privacy-preserving and smart agent blockchain-FL framework is proposed, where a smart agent acts as both a peer in the blockchain network and a participant in the FL task at the same time. Finally, a demonstration that trains an artificial neural network is implemented to prove the effectiveness of the proposed framework.

Details

International Journal of Crowd Science, vol. 4 no. 2
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 4 August 2020

Kanak Meena, Devendra K. Tayal, Oscar Castillo and Amita Jain

Abstract

The scalability of similarity joins is threatened by the unexpected data characteristic of data skewness. This is a pervasive problem in scientific data. Due to skewness, an uneven distribution of attributes occurs, which can cause a severe load-imbalance problem. When database join operations are applied to these datasets, skewness occurs exponentially. All the algorithms developed to date for the implementation of database joins are highly skew sensitive. This paper presents a new approach for handling data skewness in a character-based string similarity join using the MapReduce framework. In the literature, no work exists to handle data skewness in character-based string similarity joins, although work on set-based string similarity joins exists. The proposed work is divided into three stages, and every stage is further divided into mapper and reducer phases, each dedicated to a specific task. The first stage is dedicated to finding the lengths of the strings in a dataset. For valid candidate-pair generation, the MR-Pass Join framework is suggested in the second stage. In the third stage, which is further divided into four MapReduce phases, MRFA concepts are incorporated for the string similarity join, named "MRFA-SSJ" (MapReduce Frequency Adaptive – String Similarity Join). Hence, MRFA-SSJ is proposed to handle skewness in the string similarity join.

The experiments were implemented on three different datasets, namely DBLP, Query log and a real dataset of IP addresses and cookies, by deploying the Hadoop framework. The proposed algorithm was compared with three known algorithms, and it was noticed that all these algorithms fail when data is highly skewed, whereas the proposed method handles highly skewed data without any problem. A 15-node cluster set-up was used in this experiment, following the Zipf distribution law for the analysis of the skewness factor. A comparison among existing and proposed techniques is also shown: existing techniques survived up to Zipf factor 0.5, whereas the proposed algorithm survives up to Zipf factor 1. Hence, the proposed algorithm is skew insensitive and ensures scalability with a reasonable query-processing time for string similarity database joins. It also ensures an even distribution of attributes.
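The first-stage length computation and the length-based candidate filtering it enables can be simulated outside Hadoop; the mapper/reducer functions below are an illustrative sketch, not the actual MRFA-SSJ phases:

```python
from collections import defaultdict

def map_phase(strings):
    """Map: emit (length, string) pairs — the first stage described above."""
    for s in strings:
        yield len(s), s

def reduce_phase(pairs):
    """Reduce: group strings by length, so later stages compare only
    strings whose lengths are compatible with the similarity threshold."""
    groups = defaultdict(list)
    for length, s in pairs:
        groups[length].append(s)
    return dict(groups)

def candidate_pairs(groups, max_dist=1):
    """Candidate generation: pairs whose length difference is <= max_dist,
    a necessary condition for edit distance <= max_dist."""
    for l in sorted(groups):
        bucket = groups[l]
        for i, a in enumerate(bucket):             # same-length pairs
            for b in bucket[i + 1:]:
                yield a, b
        for l2 in range(l + 1, l + max_dist + 1):  # cross-length pairs
            for a in bucket:
                for b in groups.get(l2, []):
                    yield a, b

strings = ["data", "date", "datum", "skew"]
groups = reduce_phase(map_phase(strings))
print(sorted(candidate_pairs(groups)))
```

Skew handling in the paper goes further: under a Zipf distribution some length buckets grow very large, and the frequency-adaptive stages redistribute those oversized buckets across reducers instead of sending each bucket to a single one.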

Details

Applied Computing and Informatics, vol. 18 no. 1/2
Type: Research Article
ISSN: 2634-1964
