Search results
1 – 4 of 4
Jie Ma, Zhiyuan Hao and Mo Hu
Abstract
Purpose
The density peak clustering algorithm (DP) identifies cluster centers using two parameters: the ρ value (local density) and the δ value (the distance between a point and the nearest point with a higher ρ value). According to the center-identifying principle of the DP, potential cluster centers should have both a higher ρ value and a higher δ value than other points. However, this principle may prevent the DP from identifying categories with multiple centers or centers located in lower-density regions. In addition, the DP's improper assignment strategy can misassign non-center points. This paper aims to address these issues and improve the clustering performance of the DP.
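As background for the center-identifying principle described above, a minimal sketch of the standard DP scoring step (not the TMsDP extensions, which are this paper's contribution) might look like the following; the cutoff distance d_c and the toy data are illustrative assumptions:

```python
import numpy as np

def dp_scores(X, d_c):
    """Standard DP scores: rho = local density (neighbors within d_c),
    delta = distance to the nearest point of higher density."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (d < d_c).sum(axis=1) - 1          # exclude the point itself
    order = np.argsort(-rho, kind="stable")  # densest first; stable order breaks ties
    delta = np.empty(len(X))
    delta[order[0]] = d[order[0]].max()      # global density peak: use max distance
    for k in range(1, len(X)):
        i = order[k]
        delta[i] = d[i, order[:k]].min()     # nearest denser (or earlier-tied) point
    return rho, delta

# Two toy clusters; candidate centers combine a high rho with a high delta
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1],
              [5, 5], [5.1, 5]], dtype=float)
rho, delta = dp_scores(X, d_c=0.5)
centers = np.argsort(-(rho * delta))[:2]     # one peak per cluster
```

Note that the second cluster's center has a low ρ but a high δ: exactly the lower-density-region case that the paper argues the plain center-identifying principle handles poorly.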
Design/methodology/approach
First, to identify as many potential cluster centers as possible, the authors construct a point-domain by introducing the pinhole imaging strategy to extend the search range for potential cluster centers. Second, they design novel methods for calculating the domain distance, point-domain density and domain similarity. Third, they use domain similarity to drive the domain merging process and optimize the final clustering results.
Findings
The experimental results on analyzing 12 synthetic data sets and 12 real-world data sets show that two-stage density peak clustering based on multi-strategy optimization (TMsDP) outperforms the DP and other state-of-the-art algorithms.
Originality/value
The authors propose a novel DP-based clustering method, i.e. TMsDP, and transform the relationship between points into that between domains to ultimately further optimize the clustering performance of the DP.
Luan Thanh Le and Trang Xuan-Thi-Thu
Abstract
Purpose
To achieve the Sustainable Development Goals (SDGs) in the era of Logistics 4.0, machine learning (ML) techniques and simulations have emerged as highly optimized tools. This study examines the operational dynamics of a supply chain (SC) in Vietnam as a case study, using an ML-based simulation approach.
Design/methodology/approach
A robust fuel consumption estimation model is constructed by leveraging multiple linear regression (MLR) and an artificial neural network (ANN). The proposed model is then integrated into an SC simulation framework.
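As an illustration of the MLR half of this setup, a minimal sketch with hypothetical trip features (distance and load; the actual predictors and data are not given in the abstract) could be:

```python
import numpy as np

# Hypothetical trip records: columns = [distance_km, load_tonnes].
# Synthetic targets follow fuel = 2 + 0.25*distance + 1.5*load exactly,
# so OLS recovers the coefficients; real data would carry noise.
X = np.array([[100., 5.], [200., 8.], [150., 6.], [300., 10.], [250., 7.]])
y = 2.0 + 0.25 * X[:, 0] + 1.5 * X[:, 1]

# Multiple linear regression via ordinary least squares
A = np.column_stack([np.ones(len(X)), X])   # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_fuel(distance_km, load_tonnes):
    """Estimated fuel (litres) for a trip, to be fed into the SC simulation."""
    return coef[0] + coef[1] * distance_km + coef[2] * load_tonnes
```

In the pipeline the abstract describes, such an estimator would supply the per-trip fuel figures consumed by the simulation model.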
Findings
This paper provides valuable insights and actionable recommendations, empowering SC practitioners to optimize operational efficiency and opening an avenue for further scholarly investigation and advancement in this field.
Originality/value
This study introduces a novel approach to assessing sustainable SC performance by utilizing both traditional regression and ML models to estimate transportation costs, which are then fed into the discrete event simulation (DES) model.
Abstract
Purpose
This paper summarizes and synthesizes existing research while critically assessing findings for future studies to advance the scholarship of maritime logistics and digital transformation with big data.
Design/methodology/approach
A bibliometric analysis was conducted on 159 journal articles from the Scopus database with the search keywords “maritime*” and “big data.” This analysis identifies research gaps by uncovering themes via keyword co-occurrence, co-citation and bibliographic coupling analyses. The Theory-Context-Characteristics-Methodology (TCCM) framework was applied to interpret the findings of the bibliometric analysis and provide a research agenda.
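The keyword co-occurrence step mentioned above can be sketched as follows; the article keyword lists here are invented for illustration (the study's actual corpus is the 159 Scopus articles):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one list per article
articles = [
    ["maritime logistics", "big data", "digital transformation"],
    ["big data", "port operations"],
    ["maritime logistics", "big data"],
]

# Count how often each keyword pair appears in the same article;
# clustering this co-occurrence matrix is what surfaces research themes
cooc = Counter()
for kws in articles:
    for pair in combinations(sorted(set(kws)), 2):
        cooc[pair] += 1
```

Tools such as VOSviewer compute essentially this matrix (plus normalization) before mapping keyword clusters.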
Findings
The analyses identified emerging themes in the scholarship of maritime logistics and digital transformation with big data, and mapped their relationships to delineate research clusters. Future research directions were derived by examining the theory, context, characteristics and methodology of existing research.
Originality/value
This research is grounded in bibliometric analysis and the TCCM framework to understand the scholarly evolution, giving managers and academics retrospective and prospective insights.
Francesca Bartolacci, Roberto Del Gobbo and Michela Soverchia
Abstract
Purpose
This paper contributes to the field of public services’ performance measurement systems by proposing a benchmarking-based methodology that improves the effective use of big and open data in analyzing and evaluating efficiency, thereby supporting the internal decision-making processes of public entities.
Design/methodology/approach
The proposed methodology uses data envelopment analysis in combination with a multivariate outlier detection algorithm, the local outlier factor (LOF), to ensure the proper exploitation of the data available for efficiency evaluation in the presence of multidimensional datasets with anomalous values, which often characterize big and open data. An empirical implementation of the proposed methodology was conducted on waste management services provided in Italy.
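The outlier-screening half of this pipeline can be sketched as below, using scikit-learn's LocalOutlierFactor; the DEA step itself (a linear-programming model) is omitted, and the input/output figures are invented for illustration:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical decision-making units: columns = [cost, tonnes_collected]
X = np.array([
    [10.0, 100.0], [10.2, 100.5], [10.4, 101.0],
    [10.6, 101.5], [10.8, 102.0], [11.0, 102.5],
    [50.0, 30.0],                  # anomalous unit
])

# Flag multivariate outliers before DEA so anomalous units do not
# distort the efficiency frontier (and the resulting targets)
labels = LocalOutlierFactor(n_neighbors=3).fit_predict(X)   # -1 = outlier
clean = X[labels == 1]             # units passed on to the DEA model
```

Screening first, as the methodology proposes, keeps erroneously extreme units from defining the frontier against which all other units are benchmarked.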
Findings
The paper addresses the problem of misleading targets for entities that are erroneously deemed inefficient when applying data envelopment analysis to real-life datasets containing outliers. The proposed approach makes big and open data useful in evaluating relative efficiency, and it supports the development of performance-based strategies and policies by public entities from a data-driven public sector perspective.
Originality/value
Few empirical studies have explored how to make the use of big and open data more feasible for performance measurement systems in the public sector, addressing the challenges related to data quality and the need for analytical tools readily usable from a managerial perspective, given the poor diffusion of technical skills in public organizations. The paper fills this research gap by proposing a methodology that allows for exploiting the opportunities offered by big and open data for supporting internal decision-making processes within the public services context.