Search results

1 – 10 of over 5000
Open Access
Article
Publication date: 10 August 2022

Jie Ma, Zhiyuan Hao and Mo Hu


Abstract

Purpose

The density peak clustering algorithm (DP) identifies cluster centers using two parameters, i.e. the ρ value (local density) and the δ value (the distance from a point to the nearest point with a higher ρ value). According to the center-identifying principle of the DP, potential cluster centers should have both a higher ρ value and a higher δ value than other points. However, this principle may prevent the DP from identifying categories with multiple centers or centers located in lower-density regions. In addition, the DP's improper assignment strategy can lead to wrong assignments for non-center points. This paper aims to address these issues and improve the clustering performance of the DP.
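
For readers unfamiliar with the DP procedure, the following minimal sketch illustrates how ρ and δ can be computed and how points scoring high on both are taken as candidate centers. It follows the standard DP formulation on a toy data set with an assumed Euclidean distance and a hand-picked cutoff distance; it does not reproduce the authors' TMsDP modifications.

```python
# Minimal sketch of standard density peak (DP) center identification.
# Assumptions: Euclidean distances, a hand-picked cutoff distance d_c and a
# toy data set; TMsDP's point-domain extensions are not shown.
import numpy as np

def dp_centers(X, d_c=0.5, n_centers=2):
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # rho: local density = number of neighbours closer than the cutoff d_c
    rho = (dist < d_c).sum(axis=1) - 1  # exclude the point itself

    # delta: distance to the nearest point with a strictly higher rho
    delta = np.empty(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i, higher].min() if len(higher) else dist[i].max()

    # candidate centers score high on both rho and delta
    gamma = rho * delta
    return np.argsort(gamma)[::-1][:n_centers], rho, delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    centers, rho, delta = dp_centers(X)
    print("candidate center indices:", centers)
```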

Design/methodology/approach

First, to identify as many potential cluster centers as possible, the authors construct a point-domain by introducing a pinhole imaging strategy to extend the search range for potential cluster centers. Second, they design novel methods for calculating the domain distance, point-domain density and domain similarity. Third, they use domain similarity to drive the domain-merging process and optimize the final clustering results.

Findings

The experimental results on 12 synthetic and 12 real-world data sets show that the proposed two-stage density peak clustering based on multi-strategy optimization (TMsDP) outperforms the DP and other state-of-the-art algorithms.

Originality/value

The authors propose a novel DP-based clustering method, i.e. TMsDP, which transforms the relationship between points into a relationship between domains, thereby further optimizing the clustering performance of the DP.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288


Open Access
Article
Publication date: 10 February 2022

Fei Xie, Jun Yan and Jun Shen


Abstract

Purpose

Although proactive fault-handling plans are widespread, many unexpected data center outages still occur. To rescue jobs from faulty data centers, the authors propose a novel independent job rescheduling strategy for cloud resilience that reschedules tasks from a faulty data center to other properly functioning cloud data centers, jointly considering job nature, the timeline scenario and overall cloud performance.

Design/methodology/approach

A job parsing system and a priority assignment system are developed to identify eligible time slots for the jobs and to prioritize the jobs, respectively. A dynamic job rescheduling algorithm is then proposed.
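
The abstract does not give the algorithm's details, so the sketch below is only an illustrative stand-in: a priority-driven rescheduling heuristic that moves jobs from a faulty data center to the healthy center with the most spare capacity. The Job fields, the capacity figures and the tie-breaking rule are all hypothetical, not the authors' design.

```python
# Illustrative sketch (not the authors' algorithm): rescheduling jobs from a
# faulty data center to healthy ones by priority, assuming each job carries a
# hypothetical priority and load, and each data center a spare-capacity figure.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: float               # smaller value = more urgent
    name: str = field(compare=False)
    load: float = field(compare=False)

def reschedule(jobs, capacities):
    """Assign the most urgent jobs first to the data center with the most
    spare capacity that can still host them (simple load-balancing heuristic)."""
    heap = list(jobs)
    heapq.heapify(heap)
    plan = {}
    while heap:
        job = heapq.heappop(heap)
        target = max(capacities, key=capacities.get)
        if capacities[target] < job.load:
            plan[job.name] = None     # no data center can host this job
            continue
        capacities[target] -= job.load
        plan[job.name] = target
    return plan

if __name__ == "__main__":
    jobs = [Job(priority=1.0, name="etl-nightly", load=4.0),
            Job(priority=0.2, name="billing", load=2.0),
            Job(priority=0.5, name="report", load=3.0)]
    print(reschedule(jobs, {"dc-east": 5.0, "dc-west": 6.0}))
```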

Findings

The simulation results show that the proposed approach achieves better cloud resiliency and load-balancing performance than the HEFT family of approaches.

Originality/value

This paper contributes to cloud resilience by developing a novel method for job prioritization, task rescheduling and timeline allocation in the presence of faults.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 19 May 2022

Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein



Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Article
Publication date: 22 February 2024

Ranjeet Kumar Singh



Abstract

Purpose

Although the challenges associated with big data are increasing, the question of which big data analytics (BDA) platform is most suitable for libraries remains significant. The purpose of this study is to propose a solution to this problem.

Design/methodology/approach

The current study identifies relevant literature and provides a review of big data adoption in libraries. It also presents a step-by-step guide to developing a BDA platform using the Apache Hadoop Ecosystem. To test the system, an analysis of library big data was performed with Apache Pig, a tool from the Apache Hadoop Ecosystem. This establishes the effectiveness of the Apache Hadoop Ecosystem as a powerful BDA solution in libraries.
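
As a rough illustration of the kind of grouping query such a test would run in Pig Latin, the sketch below expresses a comparable aggregation in plain Python using the map, shuffle and reduce steps that Hadoop and Pig execute at scale. The circulation records and the "subject" field are invented for illustration and are not the study's data or script.

```python
# Plain-Python illustration of the map -> shuffle -> reduce pattern that a
# Pig Latin GROUP BY ... SUM query compiles to on Hadoop. The circulation
# records and the "subject" field are hypothetical, not the study's data.
from collections import defaultdict

records = [
    {"title": "Clean Code", "subject": "Computing", "loans": 3},
    {"title": "Dune", "subject": "Fiction", "loans": 5},
    {"title": "SICP", "subject": "Computing", "loans": 2},
]

# map: emit (key, value) pairs
mapped = [(r["subject"], r["loans"]) for r in records]

# shuffle: group values by key
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# reduce: aggregate each group
totals = {key: sum(values) for key, values in groups.items()}
print(totals)   # {'Computing': 5, 'Fiction': 5}
```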

Findings

It can be inferred from the literature that libraries and librarians have not taken the possibility of big data services in libraries very seriously. The literature also suggests that no significant effort has been made to establish any BDA architecture in libraries. This study establishes the Apache Hadoop Ecosystem as a possible solution for delivering BDA services in libraries.

Research limitations/implications

The present work suggests adopting the idea of providing various big data services in a library by developing a BDA platform: for instance, assisting researchers in understanding big data, having skilled and experienced data managers clean and curate big data, and providing the infrastructural support to store, process, manage, analyze and visualize big data.

Practical implications

The study concludes that Apache Hadoop's Hadoop Distributed File System and MapReduce components significantly reduce the complexities of big data storage and processing, respectively, and that Apache Pig, using the Pig Latin scripting language, is very efficient at processing big data and responding to queries with quick response times.

Originality/value

According to the study, significantly fewer efforts have been made to analyze big data from libraries. Furthermore, acceptance of the Apache Hadoop Ecosystem as a solution to big data problems in libraries is not widely discussed in the literature, although Apache Hadoop is regarded as one of the best frameworks for big data handling.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816


Article
Publication date: 18 January 2024

Jin Xu, Pei Hua Shi and Xi Chen


Abstract

Purpose

This study aims to unveil the pivotal components and implementation pathways in the digital innovation of smart tourism destinations, while constructing a theoretical framework from a holistic perspective.

Design/methodology/approach

The research focuses on 31 significant urban smart tourism destinations in China. Secondary data were collected through manual search supplemented by big data scraping, whereas primary data were obtained from interviews with municipal tourism authorities. Grounded theory was used to build a theoretical account of digital innovation in smart tourism destinations.

Findings

This research has formulated a data-driven knowledge framework for digital innovation in smart tourism destinations. Core components include digital organizational innovation, smart data platforms, a multi-stakeholder digital collaborative ecosystem and smart tourism scenario systems. Destinations can achieve smart tourism scene innovation either through closed innovation driven by smart data platforms or through open innovation propelled by a multi-stakeholder digital collaborative ecosystem.

Practical implications

Based on insights from digital innovation practices, this study proposes a series of concrete recommendations aimed at assisting Destination Management Organizations in formulating and implementing more effective digital innovation strategies to enhance the sustainable digital competitiveness of destinations.

Originality/value

This study advances smart tourism destination innovation research from localized thinking to systemic thinking; extends digital innovation theory into the realm of smart tourism destination innovation; repositions the significance of knowledge in smart tourism destination innovation; and constructs a comprehensive framework for digital innovation in smart tourism destinations.


Article
Publication date: 18 September 2023

Mohammadreza Akbari


Abstract

Purpose

The purpose of this study is to examine how the implementation of edge computing can enhance the progress of the circular economy within supply chains and to address the challenges and best practices associated with this emerging technology.

Design/methodology/approach

This study utilized a streamlined evaluation technique employing Latent Dirichlet Allocation (LDA) modeling for thorough content analysis. Extensive searches were conducted among prominent publishers, including IEEE, Elsevier, Springer, Wiley, MDPI and Hindawi, using pertinent keywords associated with edge computing, the circular economy, sustainability and the supply chain. The search, with keywords matched against article titles and abstracts, yielded a total of 103 articles.
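
To make the LDA step concrete, the sketch below fits a small topic model with scikit-learn. The three sample texts, the choice of three topics and the preprocessing are placeholders rather than the study's 103-article corpus or settings.

```python
# Minimal LDA topic-modelling sketch with scikit-learn; the documents and the
# choice of 3 topics are placeholders, not the study's actual corpus.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "edge computing reduces latency in supply chain sensors",
    "circular economy reuse recycling sustainability goals",
    "optimization of edge nodes for real time supply chain data",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# print the top terms per topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```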

Findings

There has been a notable rise in the volume of scholarly articles dedicated to edge computing in the circular economy and supply chain management. A thorough examination of the published papers identified three main research themes: technology, optimization, and circular economy and sustainability. Adopting edge computing in supply chains results in a more responsive, efficient and agile supply chain, leading to enhanced decision-making capabilities and improved customer satisfaction. However, adoption also poses challenges, such as data integration, security concerns, device management, connectivity and cost.

Originality/value

This paper offers valuable insights into the research trends of edge computing in the circular economy and supply chains, highlighting its significant role in optimizing supply chain operations and advancing the circular economy by processing and analyzing real-time data generated by Internet of Things devices, sensors and other state-of-the-art tools.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747


Article
Publication date: 8 June 2023

Jean C. Essila and Jaideep Motwani


Abstract

Purpose

This study aims to focus on the supply chain (SC) cost drivers of healthcare industries in the USA, as SC costs have increased by 40% over the last decade. The SC is the second-most significant expense, accounting for 38% of total expenses in a typical hospital, whereas most other industries can keep it within 10% of their operating cost. This makes healthcare centers supply-chain-sensitive organizations with limited facilities for high-quality healthcare services. Because the cost drivers of the healthcare SC are almost unknown to managers, their jobs become more complex.

Design/methodology/approach

Guided by the pragmatism and positivism paradigms, a cross-sectional study was designed using quantitative and deductive approaches. Both primary and secondary data were used: primary data were collected from health centers across the country, and secondary data came from healthcare-related databases. This study examined the attributes that explain the most significant variation in each contributing factor. Using multiple regression analysis to predict cost and Student's t-tests to assess the significance of contributing factors, the authors examined several theories, including the market-based view as well as five-forces, network and transaction cost analysis.
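
To illustrate the statistical machinery described here, the sketch below regresses a synthetic supply chain cost on three candidate drivers with ordinary least squares and reports the Student's t-statistics for each coefficient. The variable names only echo the factors mentioned in the abstract; the data and coefficients are invented.

```python
# Sketch of cost-driver analysis: OLS regression of supply chain cost on
# candidate factors, with t-statistics for each coefficient. Data are
# synthetic; the factor names only mirror those in the abstract.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
facility = rng.normal(50, 10, n)
inventory = rng.normal(30, 5, n)
transport = rng.normal(20, 4, n)
cost = 5 + 1.2 * facility + 0.8 * inventory + 0.3 * transport + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([facility, inventory, transport]))
model = sm.OLS(cost, X).fit()

# model.tvalues / model.pvalues hold the Student's t-tests for each factor
print(model.summary(xname=["const", "facility", "inventory", "transport"]))
```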

Findings

This study revealed that supply, materials and services represent the most significant expenses in primary care. Supply-chain cost breakdown results in four critical factors: facility, inventory, information and transportation.

Research limitations/implications

This study examined the data from primary and secondary care institutions. Tertiary and quaternary care systems were not included. Although tertiary and quaternary care systems represent a small portion of the healthcare system, future research should address the supply chain costs of highly specialized organizations.

Practical implications

This study suggests methods that can help to improve supply chain operations in healthcare organizations worldwide.

Originality/value

This study presents an empirically proven methodology for testing the statistical significance of the primary factors contributing to healthcare supply chain costs. The results of this study may lead to positive policy changes to improve healthcare organizations' efficiency and increase access to high-quality healthcare.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771


Article
Publication date: 30 January 2024

Li Si and Xianrui Liu


Abstract

Purpose

This research aims to explore the research data ethics governance framework and collaborative network in order to optimize research data ethics governance practices, balance the relationship among data development and utilization, open sharing and data security, and reduce the ethical risks that may arise from data sharing and utilization.

Design/methodology/approach

This study explores the framework and collaborative network of research data ethics policies, using the UK as an example. In total, 78 policies were obtained from the UK government, universities, research institutions, funding agencies, publishers, databases, libraries and third-party organizations. Adopting grounded theory (GT) and social network analysis (SNA), the authors used Nvivo12 to analyze these samples and summarize the research data ethics governance framework, and Ucinet and Netdraw to reveal the collaborative networks in the policies.
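
To illustrate the SNA step, the sketch below builds a small collaboration network of policy-issuing subjects and computes degree centrality with networkx. The edge list is invented for illustration and does not reproduce the relations extracted from the 78 policies or the Ucinet/Netdraw output.

```python
# Sketch of the social-network-analysis step: centrality of policy subjects
# in a collaboration network. The edge list is invented for illustration.
import networkx as nx

edges = [
    ("research institution", "university"),
    ("research institution", "funding agency"),
    ("research institution", "publisher"),
    ("university", "library"),
    ("funding agency", "government"),
    ("publisher", "database"),
]

G = nx.Graph(edges)
centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:22s} {score:.2f}")
```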

Findings

Results indicate that the framework covers governance context, subjects and measures. The governance context contains a context description and an analysis of data ethics issues. Governance subjects consist of defining the subjects and facilitating their collaboration. Governance measures include governance guidance and ethics governance initiatives across the data lifecycle. The collaborative network indicates that research institutions play a central role in ethics governance. The core of the governance content comprises ethics governance initiatives, governance guidance and the governance context description.

Research limitations/implications

This research provides new insights for policy analysis by combining the GT and SNA methods. Research data ethics and its governance are conceptualized, complementing data governance and research ethics theory.

Practical implications

A research data ethics governance framework and collaborative network are revealed, and actionable guidance is provided for addressing essential aspects of research data ethics and for enabling multiple subjects to perform their functions in collaborative governance.

Originality/value

This study analyzes policy texts using qualitative and quantitative methods, ensuring fine-grained content profiling and improving policy research. A typical research data ethics governance framework is revealed, and the roles and priorities of various stakeholders in collaborative governance are explored. These findings contribute to improving governance policies and governance levels in both theory and practice.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 12 February 2024

Lei Ma, Ben Zhang, Kaitong Liang, Yang Cheng and Chaonan Yi


Abstract

Purpose

The embedding of digital technology and increasingly fuzzy organizational boundaries have changed how platform innovation ecosystems (PIEs) operate. Specifically, knowledge flow, an important source of energy for a PIE, has an internal logic that needs to be reconsidered in the context of the digital age; doing so will help in selecting cultivation and governance strategies for the PIE.

Design/methodology/approach

A dual-case analysis, taking intellectual property (IP) operation platforms as the cases, is applied to open the “black box” of knowledge flow in the PIE from the perspective of digital technology enablement.

Findings

The research findings are as follows: (1) The knowledge flow mechanism of the PIE is mainly demonstrated through the processes of knowledge acquisition, knowledge integration and knowledge spillover. During this process, connectivity empowerment and scenario empowerment realize the digital empowerment of the platform. (2) Connectivity empowerment provides a channel of knowledge acquisition for the digital connection between participants in the PIE. In the process of knowledge integration, scenario empowerment improves the opportunities for accurate matching and collaborative innovation between knowledge suppliers and demanders and enhances the value of knowledge. The dual effect of connectivity empowerment and scenario empowerment accelerates knowledge spillover in the PIE. In particular, connectivity empowerment expands the range of knowledge spillover, and scenario empowerment affects the generativity of the platform, enhancing the platform's capability to embed and expand its value network. (3) Participants benefit from the PIE enabled by digital technology through three key modules (knowledge acquisition, knowledge integration and knowledge spillover), as the result of knowledge flow.

Originality/value

This study focuses on the knowledge flow mechanism of the PIE enabled by digital technology, which enriches PIE theory and offers insights for cultivating digital platform ecosystems.

Details

European Journal of Innovation Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1460-1060


Article
Publication date: 29 September 2023

Alberto Cavazza, Francesca Dal Mas, Maura Campra and Valerio Brescia


Abstract

Purpose

This study aims to investigate the use of Artificial Intelligence (AI) applied to vertical farms to evaluate whether this disruptive technology supports sustainability and increases strategic business model choices in the agricultural sector. Through empirical analysis, the study responds to the gap concerning AI-driven business models in the growing literature on the sector.

Design/methodology/approach

The paper analyzes the case of “ZERO”, a company linked to the strategy innovation ecosystem of the Ca’ Foscari University of Venice, Italy. The empirical data were collected through a semi-structured questionnaire, interviews and the analysis of public news available on the business model of the analyzed case study. The research is empirical and uses exploratory, descriptive analysis to interpret the findings. The article focuses on evaluating the impact of AI on the agricultural sector and its potential to create new business models.

Findings

The study identified how AI can support the decision-making process, leading to increases in productivity, efficiency and product quality and to cost reduction. AI helps improve these parameters through a continuous learning process and local production, with a possible decrease in prices in line with the goal of zero-km fresh food. AI is a winning technology for supporting the key elements of the vertical farm business model. However, it must be coupled with other devices, such as robots, sensors and drones, to collect enough data to enable continuous learning and improvement.
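
As a toy illustration of the continuous-learning loop described in these findings, the sketch below incrementally updates a regression model as new, synthetic sensor readings arrive, using scikit-learn's partial_fit. The sensor variables, value ranges and yield relationship are hypothetical and do not describe ZERO's actual system.

```python
# Toy sketch of continuous learning from streaming sensor data in a vertical
# farm: an incrementally updated regression predicting yield from light and
# nutrient readings. All data are synthetic; this is not ZERO's system.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(random_state=0)
rng = np.random.default_rng(0)

for batch in range(20):                        # each batch = one day of readings
    light = rng.uniform(0.2, 0.4, (32, 1))     # hypothetical light level (normalized)
    nutrient = rng.uniform(0.1, 0.2, (32, 1))  # hypothetical nutrient level (normalized)
    X = np.hstack([light, nutrient])
    yield_kg = 3.0 * light[:, 0] + 2.0 * nutrient[:, 0] + rng.normal(0, 0.05, 32)
    model.partial_fit(X, yield_kg)             # update the model as data arrive

print("learned coefficients:", model.coef_)
```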

Research limitations/implications

The research supports new research trends in AI applied to agriculture. The major implication is the construction of ecosystems between farms, technology providers, policymakers, universities, research centers and local consumer communities.

Practical implications

The ZERO case study underlines the potential of AI as a disruptive technology that, especially in vertical farms, reduces dependence on external conditions while increasing productivity, reducing costs and responding to production needs with adequate consumption of raw materials, boosting both environmental and social sustainability.

Originality/value

The study is original, as the current literature presents few empirical case studies on AI-supported business models in agriculture. The study also offers valuable strategic implications for the policies to be adopted in favor of new business models in agriculture.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

