Search results

1 – 10 of over 16,000
Open Access
Article
Publication date: 10 August 2022

Jie Ma, Zhiyuan Hao and Mo Hu

The density peak clustering algorithm (DP) identifies cluster centers using two parameters, i.e. the ρ value (local density) and the δ value (the distance between a point and…

Abstract

Purpose

The density peak clustering algorithm (DP) identifies cluster centers using two parameters, i.e. the ρ value (local density) and the δ value (the distance between a point and another point with a higher ρ value). According to the DP's center-identifying principle, potential cluster centers should have both a higher ρ value and a higher δ value than other points. However, this principle may prevent the DP from identifying categories with multiple centers or centers located in lower-density regions. In addition, the DP's improper assignment strategy can misassign non-center points. This paper aims to address these issues and improve the clustering performance of the DP.
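For orientation, the Python sketch below illustrates only the standard DP center-identification step described above, not the authors' TMsDP extensions: ρ is computed with a cutoff kernel, δ as the distance to the nearest higher-density point, and candidate centers are ranked by γ = ρ·δ. The function name, the cutoff distance d_c and the γ ranking are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    def dp_centers(X, d_c, n_centers):
        """Minimal sketch of standard density peak (DP) center selection.

        rho[i]: number of points within the cutoff distance d_c (cutoff kernel).
        delta[i]: distance from point i to the nearest point with a higher rho;
                  the global density maximum gets the largest pairwise distance.
        Candidate centers are the points with the largest gamma = rho * delta.
        """
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        rho = (dist < d_c).sum(axis=1) - 1            # exclude the point itself
        delta = np.empty(len(X))
        for i in range(len(X)):
            higher = np.where(rho > rho[i])[0]
            delta[i] = dist[i, higher].min() if len(higher) else dist[i].max()
        gamma = rho * delta
        return np.argsort(gamma)[::-1][:n_centers]    # indices of candidate centers

TMsDP replaces this single-pass center search with the point-domain construction and domain-merging steps summarized under Design/methodology/approach below.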

Design/methodology/approach

First, to identify as many potential cluster centers as possible, the authors construct a point-domain by introducing a pinhole imaging strategy to extend the search range for potential cluster centers. Second, they design novel methods for calculating the domain distance, point-domain density and domain similarity. Third, they use domain similarity to drive the domain merging process and optimize the final clustering results.

Findings

Experimental results on 12 synthetic data sets and 12 real-world data sets show that two-stage density peak clustering based on multi-strategy optimization (TMsDP) outperforms the DP and other state-of-the-art algorithms.

Originality/value

The authors propose a novel DP-based clustering method, TMsDP, which transforms the relationship between points into a relationship between domains to further optimize the clustering performance of the DP.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 10 February 2022

Fei Xie, Jun Yan and Jun Shen

Although proactive fault handling plans are widely adopted, many unexpected data center outages still occur. To rescue jobs from faulty data centers, the authors propose a…

Abstract

Purpose

Although proactive fault handling plans are widely adopted, many unexpected data center outages still occur. To rescue jobs from faulty data centers, the authors propose a novel independent job rescheduling strategy for cloud resilience that reschedules tasks from the faulty data center to other properly functioning cloud data centers, jointly considering job nature, timeline scenario and overall cloud performance.

Design/methodology/approach

A job parsing system and a priority assignment system are developed to identify eligible time slots for the jobs and to prioritize the jobs, respectively. A dynamic job rescheduling algorithm is then proposed.
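The abstract does not disclose the exact priority rules or the rescheduling algorithm, so the Python sketch below is only a hypothetical illustration of a priority-driven rescheduling loop in the same spirit: jobs evacuated from the faulty data center are ordered by deadline slack, and each is placed on the working data center that lets it finish earliest. The class and function names (Job, DataCenter, reschedule) are illustrative, not taken from the paper.

    import heapq
    from dataclasses import dataclass

    @dataclass
    class Job:
        job_id: str
        deadline: float        # latest acceptable completion time
        duration: float        # estimated run time

    @dataclass
    class DataCenter:
        dc_id: str
        free_at: float = 0.0   # time at which the next slot opens

    def reschedule(jobs, working_dcs, now=0.0):
        """Hypothetical priority-based rescheduling of jobs from a faulty data center.

        Jobs with the least deadline slack are handled first; each job is assigned
        to the working data center whose next free slot lets it finish earliest.
        Returns a list of (job_id, dc_id, start_time) assignments.
        """
        queue = [(j.deadline - j.duration, j.job_id, j) for j in jobs]  # slack-ordered
        heapq.heapify(queue)
        plan = []
        while queue:
            _, _, job = heapq.heappop(queue)
            dc = min(working_dcs, key=lambda d: max(d.free_at, now) + job.duration)
            start = max(dc.free_at, now)
            dc.free_at = start + job.duration
            plan.append((job.job_id, dc.dc_id, start))
        return plan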

Findings

The simulation results show that the proposed approach achieves better cloud resiliency and load-balancing performance than the HEFT series of approaches.

Originality/value

This paper contributes to cloud resilience by developing a novel job prioritizing, task rescheduling and timeline allocation method for handling faults.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 19 May 2022

Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long…

Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Content available
Article
Publication date: 7 November 2018

Nathan Parker, Jonathan Alt, Samuel Buttrey and Jeffrey House

This research develops a data-driven statistical model capable of predicting US Army Reserve (USAR) unit staffing levels based on unit location demographics. This model provides…

Abstract

Purpose

This research develops a data-driven statistical model capable of predicting US Army Reserve (USAR) unit staffing levels based on unit location demographics. The model provides decision makers with an assessment of a proposed station location's ability to support a unit's personnel requirements from the local population.

Design/methodology/approach

This research first develops an allocation method to overcome challenges caused by overlapping unit boundaries and to prevent over-counting the population. Once populations are accurately allocated to each location, we then develop and compare the performance of statistical models that estimate a location's likelihood of meeting staffing requirements.

Findings

This research finds that local demographic factors prove essential to a location’s ability to meet staffing requirements. We recommend that the USAR and US Army Recruiting Command (USAREC) use the logistic regression model developed here to support USAR unit stationing decisions; this should improve the ability of units to achieve required staffing levels.
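The abstract does not list the demographic predictors or report model coefficients, so the Python sketch below is only a hypothetical illustration of the kind of logistic regression referred to here, fit on synthetic placeholder features (allocated eligible population, share with prior service, unemployment rate) and a placeholder label indicating whether a location met its staffing requirement.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    # Synthetic, illustrative demographic features for n candidate locations
    X = np.column_stack([
        rng.normal(30000, 8000, n),   # eligible population allocated to the location
        rng.uniform(0.05, 0.25, n),   # share of the population with prior service
        rng.uniform(0.02, 0.12, n),   # local unemployment rate
    ])
    # Placeholder label: 1 if the unit at this location met its staffing requirement
    y = (X[:, 0] * X[:, 1] > 4000).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_tr)
    print("estimated probability of meeting staffing requirement:",
          round(model.predict_proba(X_te[:1])[0, 1], 3))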

Originality/value

This research meets a direct request from the USAREC, in conjunction with the USAR, for assistance in developing models to aid decision makers during the unit stationing process.

Details

Journal of Defense Analytics and Logistics, vol. 2 no. 2
Type: Research Article
ISSN: 2399-6439

Open Access
Article
Publication date: 18 April 2023

Patience Mpofu, Solomon Hopewell Kembo, Marlvern Chimbwanda, Saulo Jacques, Nevil Chitiyo and Kudakwashe Zvarevashe

In response to food supply constraints resulting from coronavirus disease 2019 (COVID-19) restrictions, in 2020 the project developed automated household aquaponics…

Abstract

Purpose

In response to food supply constraints resulting from coronavirus disease 2019 (COVID-19) restrictions, in 2020 the project developed automated household aquaponics units to guarantee food self-sufficiency. However, the automated aquaponics solution did not fully comply with data privacy and portability best practices to protect the data of household owners. The purpose of this study is to develop a data privacy and portability layer on top of the previously developed automated aquaponics units.

Design/methodology/approach

Design Science Research (DSR) is the research method implemented in this study.

Findings

General Data Protection Regulation (GDPR)-inspired principles that empower data subjects, including data minimisation, purpose limitation, storage limitation, and integrity and confidentiality, can be implemented in a federated learning (FL) architecture using Pinecone Matrix home servers and edge devices.
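The abstract names the FL architecture but not a specific training or aggregation protocol, so the Python sketch below is a minimal federated-averaging-style illustration (not the study's implementation) of the data-minimisation idea: raw household readings stay on each edge device, and only locally updated model weights are shared and averaged. The linear model, function names and synthetic data are assumptions.

    import numpy as np

    def local_update(weights, X, y, lr=0.01, epochs=5):
        """One household device fits a linear model to its own sensor readings.
        Raw data never leaves the device; only the updated weights are returned."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
            w -= lr * grad
        return w

    def federated_average(global_w, device_data):
        """FedAvg-style round: every device updates locally, then the aggregator
        averages the weight vectors in proportion to each device's sample count."""
        updates, sizes = [], []
        for X, y in device_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        sizes = np.array(sizes, dtype=float)
        return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

    # Usage sketch with synthetic per-device aquaponics readings
    rng = np.random.default_rng(1)
    devices = [(rng.normal(size=(40, 3)), rng.normal(size=40)) for _ in range(4)]
    w = np.zeros(3)
    for _ in range(10):                          # ten federated rounds
        w = federated_average(w, devices)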

Research limitations/implications

The literature reviewed for this study demonstrates that the GDPR right to data portability can have a positive impact on data protection by giving individuals more control over their own data. This is achieved by allowing data subjects to obtain their personal information from a data controller in a format that makes it simple to reuse it in another context and to transmit this information freely to any other data controller of their choice. Data portability is not strictly governed or enforced by data protection laws in the developing world, such as Zimbabwe's Data Protection Act of 2021.

Practical implications

Privacy requirements can be implemented in end-point technology such as smartphones, microcontrollers and single-board computer clusters, enabling data subjects to be incentivised whilst unlocking the value of their own data, in the process fostering competition among data controllers and processors.

Originality/value

The use of end-to-end encryption with Matrix Pinecone on edge endpoints and fog servers, as well as the practical implementation of data portability, are currently not adequately covered in the literature. The study acts as a springboard for a future conversation on the topic.

Details

International Journal of Industrial Engineering and Operations Management, vol. 5 no. 2
Type: Research Article
ISSN: 2690-6090

Open Access
Article
Publication date: 5 April 2023

Xinghua Shan, Zhiqiang Zhang, Fei Ning, Shida Li and Linlin Dai

With the yearly increase in mileage and passenger volume on China's high-speed railway, the problems of traditional paper railway tickets have become increasingly prominent…

Abstract

Purpose

With the yearly increase in mileage and passenger volume on China's high-speed railway, the problems of traditional paper railway tickets have become increasingly prominent, including the complexity of the business handling process, the low efficiency of ticket inspection and the high cost of usage and management. Drawing extensively on successful experiences with electronic ticket applications both domestically and internationally, this paper carries out research on the key technologies and system implementation of railway electronic ticketing with Chinese characteristics.

Design/methodology/approach

Research is conducted on key technologies, including synchronization in distributed heterogeneous database systems, a grid-oriented passenger service record (PSR) data storage model, efficient access to massive PSR data under high-concurrency conditions, linkage between face recognition service platforms and various terminals in large-scale scenarios, and two-factor authentication of the e-ticket identification code based on a key and the user's identity information. Focusing on the key technologies and architecture of the existing ticketing system, multiple service resources are expanded and developed, such as electronic ticket clusters, PSR clusters, face recognition clusters and electronic ticket identification code clusters.
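The exact two-factor scheme for the e-ticket identification code is not disclosed in the abstract, so the Python sketch below is only a hypothetical illustration of the general idea: an HMAC computed over the ticket number and the passenger's identity document number, so that verification at the gate requires both the server-held key and the identity actually presented. All names, lengths and formats are illustrative.

    import hashlib
    import hmac

    def make_ticket_code(secret_key: bytes, ticket_no: str, passenger_id: str) -> str:
        """Hypothetical e-ticket identification code: an HMAC binds the ticket
        number to the passenger's identity, so the code is useless if either
        factor (the key or the presented ID document) does not match."""
        payload = f"{ticket_no}|{passenger_id}".encode()
        return hmac.new(secret_key, payload, hashlib.sha256).hexdigest()[:16]

    def verify_ticket_code(secret_key: bytes, ticket_no: str,
                           presented_id: str, code: str) -> bool:
        """Recompute the code from the presented identity and compare in constant time."""
        expected = make_ticket_code(secret_key, ticket_no, presented_id)
        return hmac.compare_digest(expected, code)

    # Usage sketch with made-up ticket and ID numbers
    key = b"gate-server-secret"
    code = make_ticket_code(key, "E123456789", "110101199001011234")
    assert verify_ticket_code(key, "E123456789", "110101199001011234", code)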

Findings

The proportion of paper tickets printed has dropped to 20%, saving more than 2 billion tickets annually since the nationwide launch of e-ticketing. The average time for passengers to pass through the automatic ticket gates has decreased from 3 seconds to 1.3 seconds, significantly improving the efficiency of passenger transport organization. Meanwhile, the problems of paper ticket counterfeiting, reselling and loss have been largely eliminated.

Originality/value

E-ticketing has laid a technical foundation for the further development of railway passenger transport services toward digitalization and intelligence.

Details

Railway Sciences, vol. 2 no. 1
Type: Research Article
ISSN: 2755-0907

Content available
Article
Publication date: 5 June 2017

Professor Samuel Fosso Wamba

Details

Business Process Management Journal, vol. 23 no. 3
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 25 September 2020

Julian Hocker, Christoph Schindler and Marc Rittberger

The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, the…

Abstract

Purpose

The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations.

Design/methodology/approach

This paper presents a novel approach to creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science by making qualitative research more transparent, enhancing the sharing of coding schemas and supporting the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews.

Findings

The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles to this approach: first, contradictory answers from interviewees, which need to be balanced; second, the approach takes more time owing to interview planning and analysis.

Practical implications

In the long run, the implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties at several levels.

Originality/value

Several methods for ontology design exist that use user-centered design or participatory design based on workshops. In this paper, the authors outline the potential of participatory design, relying mainly on interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.

Details

Aslib Journal of Information Management, vol. 72 no. 4
Type: Research Article
ISSN: 2050-3806

Open Access
Article
Publication date: 15 May 2020

Horst Treiblmaier, Kristijan Mirkovski, Paul Benjamin Lowry and Zach G. Zacharia

The physical internet (PI) is an emerging logistics and supply chain management (SCM) concept that draws on different technologies and areas of research, such as the Internet of…

Abstract

Purpose

The physical internet (PI) is an emerging logistics and supply chain management (SCM) concept that draws on different technologies and areas of research, such as the Internet of Things (IoT) and key performance indicators, with the purpose of revolutionizing existing logistics and SCM practices. The growing literature on the PI and its noteworthy potential to be a disruptive innovation in the logistics industry call for a systematic literature review (SLR); we therefore conducted one that defines the current state of the literature and outlines future research directions and approaches.

Design/methodology/approach

The SLR that was undertaken included journal publications, conference papers and proceedings, book excerpts, industry reports and white papers. We conducted descriptive, citation, thematic and methodological analyses to understand the evolution of PI literature.

Findings

Based on the literature review and analyses, we proposed a comprehensive framework that structures the PI domain and outlines future directions for logistics and SCM researchers.

Research limitations/implications

Our research findings are limited by the relatively low number of journal publications, as the PI is a new field of inquiry that is composed primarily of conference papers and proceedings.

Originality/value

The proposed PI-based framework identifies seven PI themes, including the respective facilitators and barriers, which can inform researchers and practitioners about future, potentially disruptive supply chain (SC) strategies.

Details

The International Journal of Logistics Management, vol. 31 no. 2
Type: Research Article
ISSN: 0957-4093

Open Access
Article
Publication date: 28 March 2023

Khadijeh Momeni, Chris Raddats and Miia Martinsuo

Digital servitization concerns how manufacturers utilize digital technologies to enhance their provision of services. Although digital servitization requires that manufacturers…

Abstract

Purpose

Digital servitization concerns how manufacturers utilize digital technologies to enhance their provision of services. Although digital servitization requires manufacturers to possess new capabilities, little is known about how they develop the required operational capabilities, in contrast to strategic (or dynamic) capabilities. The paper investigates the mechanisms for developing operational capabilities in digital servitization.

Design/methodology/approach

This paper presents an exploratory study based on 15 large manufacturers operating in Europe engaged in digital servitization.

Findings

Three operational capability development mechanisms are set out that manufacturers use to facilitate digital servitization: learning (developing capabilities in-house), building (bringing the requisite capabilities into the manufacturer), and acquiring (utilizing the capabilities of other actors). These mechanisms emphasize exploitation and exploration efforts within manufacturers and in collaborations with upstream and downstream partners. The findings demonstrate the need to combine these mechanisms for digital servitization according to combinations that match each manufacturer’s traditional servitization phase: (1) initial phase - building and acquiring, (2) middle phase - learning, building and acquiring, and (3) advanced phase - learning and building.

Originality/value

This study reveals three operational capability development mechanisms, highlighting the parallel use of these mechanisms for digital servitization. It provides a holistic understanding of operational capability development mechanisms used by manufacturers by combining three theoretical perspectives (organizational learning, absorptive capacity, and network perspectives). The paper demonstrates that digital servitization requires the significant application of building and acquiring mechanisms to develop the requisite operational capabilities.

Details

International Journal of Operations & Production Management, vol. 43 no. 13
Type: Research Article
ISSN: 0144-3577
