Search results
Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Dandan Wen, Jiake Li and Dandan Guo
Abstract
Purpose
Traditional case-adaptation methods have poor accuracy, low efficiency and limited applicability, which cannot meet the needs of knowledge users. To address the shortcomings of the existing research in the industry, this paper proposes a case-adaptation optimization algorithm to support the effective application of tacit knowledge resources.
Design/methodology/approach
The attribute simplification algorithm based on the forward search strategy in the neighborhood decision information system is implemented to realize the vertical dimensionality reduction of the case base, and the fuzzy C-mean (FCM) clustering algorithm based on the simulated annealing genetic algorithm (SAGA) is implemented to compress the case base horizontally with multiple decision classes. Then, the subspace K-nearest neighbors (KNN) algorithm is used to induce the decision rules for the set of adapted cases to complete the optimization of the adaptation model.
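To make the horizontal compression stage concrete, the fuzzy C-means step can be sketched in plain numpy. This is a generic FCM sketch, not the authors' SAGA-tuned implementation; the data, cluster count and parameters are hypothetical.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # memberships: rows sum to 1
    for _ in range(n_iter):
        W = U ** m                                  # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))                   # closer centers get more weight
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),       # two synthetic case groups
               rng.normal(5.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
```

SAGA would replace the random initialization above with a simulated-annealing genetic search for good starting centers; the update equations themselves are unchanged.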
Findings
The findings suggest that the rapid growth of data, information and tacit knowledge in practice has led to inefficient dissemination and low utilization of knowledge, and that this algorithm can effectively alleviate users’ “knowledge disorientation” in the era of the knowledge economy.
Practical implications
This study provides a model with case knowledge that meets users’ needs, thereby effectively improving the application of the tacit knowledge in the explicit case base and the problem-solving efficiency of knowledge users.
Social implications
The adaptation model can serve as a stable and efficient prediction model for forecasting the effects of logistics and e-commerce enterprises’ plans.
Originality/value
This study designs a multi-decision-class case-adaptation optimization approach based on forward attribute selection strategy-neighborhood rough sets (FASS-NRS) and simulated annealing genetic algorithm-fuzzy C-means (SAGA-FCM) for exogenous cases containing tacit knowledge. By effectively organizing and adjusting tacit knowledge resources, knowledge service organizations can maintain their competitive advantages. The algorithm models established in this study open theoretical directions for multi-decision-class case-adaptation optimization of tacit knowledge.
Yulong Li, Ziwen Yao, Jing Wu, Saixing Zeng and Guobin Wu
Abstract
Purpose
Mega transportation infrastructure projects produce numerous spoil grounds, which can affect the ecological environment. To achieve better management of spoil grounds, this paper aims to assess their comprehensive risk levels and categorize them into different categories based on ecological environmental risks.
Design/methodology/approach
Based on analysis of the environmental characteristics of spoil grounds, this paper first comprehensively identified the ecological environmental risk factors and developed a risk assessment index system to quantitatively describe the comprehensive risk levels. Second, this paper proposed a comprehensive model for the risk assessment and categorization of a spoil ground group in mega projects, integrating an improved projection pursuit clustering (PPC) method and the K-means clustering algorithm. Finally, a case study of a spoil ground group (comprising 50 spoil grounds) in a mega infrastructure project in western China is presented to demonstrate and validate the proposed method.
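The K-means categorization step can be illustrated on hypothetical one-dimensional risk scores such as those a projection pursuit step might produce. This sketch seeds centers from quantiles rather than the paper's PPC projection, and all values are invented for illustration.

```python
import numpy as np

def kmeans_1d(scores, k, n_iter=50):
    """Plain K-means on 1-D composite risk scores, quantile-seeded."""
    centers = np.quantile(scores, np.linspace(0, 1, k))   # deterministic seeds
    for _ in range(n_iter):
        labels = np.argmin(np.abs(scores[:, None] - centers[None, :]), axis=1)
        centers = np.array([scores[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# hypothetical projected risk scores for 9 spoil grounds: low / medium / high
scores = np.array([0.11, 0.14, 0.12, 0.48, 0.52, 0.50, 0.90, 0.93, 0.88])
labels, centers = kmeans_1d(scores, k=3)
```

In the paper's setting, the resulting clusters would correspond to risk categories, with the highest-center cluster flagged for priority management.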
Findings
The results show that our proposed comprehensive model can efficiently assess and categorize the spoil grounds in the group based on their comprehensive ecological environmental risk. In addition, during the process of risk assessment and categorization of spoil grounds, it is necessary to distinguish between sensitive factors and nonsensitive factors. The differences between different categories of spoil grounds can be recognized based on nonsensitive factors, and high-risk spoil grounds which need to be focused more on can be identified according to sensitive factors.
Originality/value
This paper develops a comprehensive model of risk assessment and categorization of a group of spoil grounds based on their ecological environmental risks, which can provide a reference for the management of spoil grounds in mega projects.
Chandresh Kumbhani and Ravi Kant
Abstract
Purpose
Strategic integration of enablers and the realization of drone delivery benefits emerge as essential strategies for business organizations to enhance operational efficiency and stay competitive in last-mile logistics. This paper aims to explore the benefits of drone-based last-mile delivery in the Indian logistics sector by providing a framework for ranking drone delivery benefits (DDBs) due to the adoption of its enablers.
Design/methodology/approach
This study proposes a novel hybrid framework applied in the Indian logistics sector by integrating a sentence boundary extraction algorithm for extracting benefits from the literature, a spherical fuzzy analytical hierarchy process (SF-AHP) for evaluating primary enablers, unsupervised fuzzy C-means clustering (FCM) for clustering benefits and a spherical fuzzy combined compromise solution (SF-CoCoSo) for ranking benefits with respect to the primary enablers.
Findings
The results reveal that technological and infrastructure enablers (TIE), government and legislation enablers (GLE) and operational and service quality enablers (OSE) are the most significant enablers for drone implementation in logistics. The top-ranked benefits are increased last-mile delivery efficiency (DDB10), improved supply chain management and logistics sustainability (DDB16) and increased delivery access to rural areas and vulnerable people (DDB17).
Practical implications
This research assists scholars, entrepreneurs and policymakers in the sustainable deployment of drone delivery in the logistics sector. This study facilitates the use of drones in delivery services and provides a foundation for all stakeholders in logistics.
Originality/value
The assessments involve considering judgment from a highly knowledgeable and experienced group in India, characterized by a large volume of inputs and a high level of expertise.
Volkan Yasin Pehlivanoglu and Perihan Pehlivanoğlu
Abstract
Purpose
The purpose of this paper is to present an efficient path planning method for the multi-UAV system in target coverage problems.
Design/methodology/approach
An enhanced particle swarm optimizer (PSO) is used to solve the path planning problem, which concerns the two-dimensional motion of multirotor unmanned aerial vehicles (UAVs) in a three-dimensional environment. Enhancements include an improved initial swarm generation and prediction strategy for succeeding generations. Initial swarm improvements include the clustering process managed by fuzzy c-means clustering method, ordering procedure handled by ant colony optimizer and design vector change. Local solutions form the foundation of a prediction strategy.
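The PSO core that the enhancements build on can be sketched as a minimal global-best optimizer. This is textbook PSO on a toy cost function, not the authors' enhanced variant with FCM clustering, ant colony ordering or prediction; the inertia and acceleration coefficients, search bounds and target are all hypothetical.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal global-best PSO with inertia and cognitive/social pulls."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))            # particle positions
    v = np.zeros((n, dim))                          # particle velocities
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy 2-D "waypoint cost": squared distance to a target at (3, -2)
best, cost = pso(lambda p: np.sum((p - np.array([3.0, -2.0])) ** 2), dim=2)
```

In the path-planning setting, each particle would instead encode a full sequence of waypoints and f would score path length, obstacle clearance and coverage.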
Findings
Numerical simulations show that the proposed method could find near-optimal paths for multi-UAVs effectively.
Practical implications
Simulations indicate the proposed method could be deployed for autonomous multi-UAV systems with target coverage problems.
Originality/value
The proposed method combines intelligent methods in the early phase of PSO, handles obstacle avoidance problems with a unique approach and accelerates the process by adding a prediction strategy.
Long Li, Binyang Chen and Jiangli Yu
Abstract
Purpose
The selection of sensitive temperature measurement points is the premise of thermal error modeling and compensation. However, most of the sensitive temperature measurement point selection methods do not consider the influence of the variability of thermal sensitive points on thermal error modeling and compensation. This paper considers the variability of thermal sensitive points, and aims to propose a sensitive temperature measurement point selection method and thermal error modeling method that can reduce the influence of thermal sensitive point variability.
Design/methodology/approach
Taking the truss robot as the experimental object, the finite element method is used to construct the simulation model of the truss robot, and the temperature measurement point layout scheme is designed based on the simulation model to collect the temperature and thermal error data. After the clustering of the temperature measurement point data is completed, the improved attention mechanism is used to extract the temperature data of the key time steps of the temperature measurement points in each category for thermal error modeling.
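The attention step that extracts key time steps can be illustrated with plain scaled dot-product attention in numpy. This is the textbook mechanism, not the paper's improved variant, and the sequence length, feature size and data are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weight time steps by query/key similarity."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = softmax(scores)                             # one weight row per query
    return w @ V, w

rng = np.random.default_rng(0)
T, d = 8, 4                                         # 8 time steps of 4-D features
K = V = rng.normal(size=(T, d))                     # e.g. temperature readings
Q = rng.normal(size=(1, d))                         # a single query over the run
out, w = attention(Q, K, V)
```

The weight row w indicates which time steps dominate the summary vector out; the paper's model feeds such summaries into an LSTM for thermal error prediction.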
Findings
By comparing with the thermal error modeling method of the conventional fixed sensitive temperature measurement points, it is proved that the method proposed in this paper is more flexible in the processing of sensitive temperature measurement points and more stable in prediction accuracy.
Originality/value
The Grey Attention-Long Short Term Memory (GA-LSTM) thermal error prediction model proposed in this paper can reduce the influence of the variability of thermal sensitive points on the accuracy of thermal error modeling in long-term processing and improve the accuracy of the thermal error prediction model, which gives it practical value as a guide for thermal error compensation.
Esmat Taghipour Anari, Seyed Hessameddin Zegordi and Amir Albadvi
Abstract
Purpose
This paper aims to determine the type of supplier involvement in terms of time and extent of supplier involvement in automobile product development based on the characteristics of parts in the Iranian automotive industry.
Design/methodology/approach
The paper proposes the clustering and analytic hierarchy process (AHP) methods. Combining the K-means clustering method and metaheuristic algorithms, the genetic algorithm (GA) and particle swarm optimization (PSO) algorithm are applied to achieve better clustering results.
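The AHP side of the method can be illustrated with the classical eigenvector weighting and consistency check. The 3x3 pairwise comparison matrix below is hypothetical, not taken from the study's expert judgments.

```python
import numpy as np

# hypothetical pairwise comparisons of three part-selection criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
vals, vecs = np.linalg.eig(A)
k = vals.real.argmax()                  # principal eigenvalue index
w = np.abs(vecs[:, k].real)
w /= w.sum()                            # AHP priority weights
CI = (vals.real.max() - 3) / (3 - 1)    # consistency index
CR = CI / 0.58                          # Saaty's random index RI = 0.58 for n = 3
```

A consistency ratio CR below 0.1 is conventionally taken to mean the expert judgments are acceptably consistent; otherwise the comparisons are revisited.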
Findings
The results show that lack of internal knowledge, high technology change and complexity of parts increase the need to outsource the design process. In addition to these reasons, high development costs and high interface complexity justify suppliers’ early involvement.
Originality/value
Most research only presents a conceptual framework for understanding the various levels of supplier involvement in new product development (NPD). However, in the automotive industry, numerous parts have differing degrees of importance and priority, and experts may have varying opinions based on different criteria. Therefore, the existing conceptual model for analyzing the types of involvement of each supplier is not practical. We have formulated a problem-solving approach that utilizes the clustering and AHP methods to analyze data obtained from qualitative research and determine the type of supplier involvement.
Abstract
Purpose
This study aims to construct a sentiment series generation method for danmu comments based on deep learning, and explore the features of sentiment series after clustering.
Design/methodology/approach
This study consisted of two main parts: danmu comment sentiment series generation and clustering. In the first part, the authors proposed a sentiment classification model based on BERT fine-tuning to quantify danmu comment sentiment polarity. To smooth the sentiment series, they used methods such as comprehensive weighting. In the second part, the shape-based distance (SBD) K-shape method was used to cluster the actual collected data.
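The shape-based distance at the heart of k-shape clustering can be sketched directly: it is one minus the maximum normalized cross-correlation over all shifts, which makes it tolerant of temporal phase shifts between series. The series below are synthetic examples, not the paper's danmu data.

```python
import numpy as np

def sbd(x, y):
    """Shape-based distance used by k-shape: 1 - max normalized cross-correlation."""
    cc = np.correlate(x, y, mode="full")            # cross-correlation at all shifts
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return 1.0 - cc.max() / denom

t = np.linspace(0, 2 * np.pi, 100)
a = np.sin(t)                                       # a sentiment-like curve
b = np.sin(t + 0.5)                                 # the same shape, phase-shifted
c = np.random.default_rng(0).normal(size=100)       # unrelated noise series
```

Because the maximum is taken over all shifts, the phase-shifted copy b stays close to a under SBD while the noise series c does not, which is exactly the property that lets k-shape group sentiment curves of similar shape.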
Findings
The filtered sentiment series or curves of the microfilms on the Bilibili website could be divided into four major categories. There is an apparently stable time interval for the first three types of sentiment curves, while the fourth type of sentiment curve shows a clear trend of fluctuation in general. In addition, it was found that “disputed points” or “highlights” are likely to appear at the beginning and the climax of films, resulting in significant changes in the sentiment curves. The clustering results show a significant difference in user participation, with the second type prevailing over others.
Originality/value
Their sentiment classification model based on BERT fine-tuning outperformed the traditional sentiment lexicon method, which provides a reference for using deep learning as well as transfer learning for danmu comment sentiment analysis. The BERT fine-tuning–SBD-K-shape algorithm can weaken the effect of non-regular noise and temporal phase shift of danmu text.
Abstract
Purpose
Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many oriented applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc. However, accurate segmentation is a critical task, since finding a single model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model intended as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkley Segmentation Benchmark Data set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmarking data sets.
Design/methodology/approach
The proposed PPSM combines the three benchmark distribution thresholding techniques to estimate an optimum threshold value that leads to optimum extraction of the segmented region: Gaussian, lognormal and gamma distributions.
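The minimum cross-entropy thresholding (MCET) criterion underlying the model can be sketched as an exhaustive search in the style of Li's method. This generic single-distribution sketch omits the paper's Gaussian/lognormal/gamma combination and parallel boosting; the synthetic bimodal image is invented for illustration.

```python
import numpy as np

def mcet_threshold(img):
    """Exhaustive minimum cross-entropy (Li) threshold over 8-bit gray levels."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    g = np.arange(256, dtype=float) + 1.0           # shift levels so log is defined
    moments = g * hist                              # first moments per gray level
    best_t, best_val = 1, np.inf
    for t in range(1, 256):
        w1, w2 = hist[:t].sum(), hist[t:].sum()
        if w1 == 0 or w2 == 0:                      # skip degenerate splits
            continue
        mu1, mu2 = moments[:t].sum() / w1, moments[t:].sum() / w2
        val = -moments[:t].sum() * np.log(mu1) - moments[t:].sum() * np.log(mu2)
        if val < best_val:
            best_t, best_val = t, val
    return best_t                                   # pixels < best_t form the dark class

# synthetic bimodal image: dark region near 50, bright region near 200
img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
t = mcet_threshold(img)
```

The PPSM replaces the empirical class means here with means estimated under the fitted Gaussian, lognormal and gamma models, and parallelizes the search over t.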
Findings
On the basis of the achieved results, it can be observed that the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high performance.
Originality/value
A novel hybrid segmentation model is constructed exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, and to provide an accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort in MCET computing. The proposed model might be used as a valuable tool in many oriented applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc.
Walaa Metwally Kandil, Fawzi H. Zarzoura, Mahmoud Salah Goma and Mahmoud El-Mewafi El-Mewafi Shetiwi
Abstract
Purpose
This study aims to present a new rapid enhancement digital elevation model (DEM) framework using Google Earth Engine (GEE), machine learning, weighted interpolation and spatial interpolation techniques with ground control points (GCPs). High-resolution DEMs are crucial spatial data that find extensive use in many analyses and applications.
Design/methodology/approach
First, rapid-DEM imports Shuttle Radar Topography Mission (SRTM) data and Sentinel-2 multispectral imagery from a user-defined time and area of interest into GEE. Second, SRTM with the feature attributes from Sentinel-2 multispectral imagery is generated and used as input data to a support vector machine classification algorithm. Third, the inverse probability weighted interpolation (IPWI) approach uses 12 fixed GCPs as additional input data to assign a probability to each pixel of the image and generate corrected SRTM elevations. Fourth, the enhanced DEM is gridded into regular points (E, N and H) with a contour interval of 5 m. Finally, the enhanced DEM data are densified with GCPs obtained by the global positioning system (GPS) through spatial interpolation techniques such as Kriging, inverse distance weighted, modified Shepard’s method and triangulation with linear interpolation.
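Of the spatial interpolation techniques named above, inverse distance weighted (IDW) is the simplest to sketch: each query elevation is a weighted average of known elevations, with weights falling off as an inverse power of distance. The two GCPs and elevations below are hypothetical, not from the study.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered elevations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                    # nearer points weigh more
    return (w @ z_known) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [10.0, 0.0]])           # two hypothetical GCPs (E, N)
z = np.array([100.0, 110.0])                        # elevations in metres
q = np.array([[0.0, 0.0], [5.0, 0.0]])              # at a GCP and midway between
zq = idw(pts, z, q)
```

Querying exactly at a GCP reproduces its elevation (the eps guard prevents division by zero), while the midpoint gets the equal-weight average; Kriging differs by deriving weights from a fitted spatial covariance model rather than raw distance.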
Findings
The results were compared to a 1-m vertically accurate reference DEM (RD) obtained by image matching with Worldview-1 stereo satellite images. The results of this study demonstrated that the root mean square error (RMSE) of the original SRTM DEM was 5.95 m. The RMSE of the elevations estimated by the IPWI approach improved to 2.01 m, and that of the DEM generated by the Kriging technique was 1.85 m, a reduction of 68.91%.
Originality/value
A comparison with the RD demonstrates significant SRTM improvements. The suggested method clearly reduces the elevation error of the original SRTM DEM.
Abstract
Purpose
Government organizations often store large amounts of data and need to choose effective data governance service to achieve digital government. This paper aims to propose a novel multi-attribute group decision-making (MAGDM) method with multigranular uncertain linguistic variables for the selection of data governance service provider.
Design/methodology/approach
This paper presents a MAGDM method based on multigranular uncertain linguistic variables and minimum adjustment consensus. First, a novel transformation function is proposed to unify the multigranular uncertain linguistic variables. Then, the weights of the criteria are determined by building a linear programming model with positive and negative ideal solutions. To obtain the consensus opinion, a minimum adjustment consensus model with multigranular uncertain linguistic variables is established. Furthermore, the consensus opinion is aggregated to obtain the best data governance service provider. Finally, the proposed method is demonstrated by the application of the selection of data governance service provider.
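The "minimum adjustment" idea can be illustrated with a tiny linear program: find a consensus value that minimizes the total adjustment the decision-makers must make to their opinions. This sketch uses crisp scores rather than the paper's multigranular uncertain linguistic variables, assumes scipy is available, and the opinion values are invented.

```python
import numpy as np
from scipy.optimize import linprog

opinions = np.array([2.0, 3.0, 10.0])     # hypothetical DM scores for one provider
n = len(opinions)
# variables: [x, d_1..d_n]; minimize total adjustment sum(d_i)
c = np.r_[0.0, np.ones(n)]
A = np.zeros((2 * n, n + 1))
b = np.zeros(2 * n)
for i, o in enumerate(opinions):
    A[2 * i, 0], A[2 * i, 1 + i], b[2 * i] = 1.0, -1.0, o          #  x - d_i <= o_i
    A[2 * i + 1, 0], A[2 * i + 1, 1 + i], b[2 * i + 1] = -1.0, -1.0, -o  # -x - d_i <= -o_i
res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] + [(0, None)] * n)
consensus = res.x[0]                      # consensus opinion; res.fun is total adjustment
```

Under an absolute-deviation objective the optimum is the median opinion, which is why a minimum-adjustment consensus resists being dragged by a single outlying decision-maker; the paper's model extends this idea to uncertain linguistic assessments.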
Findings
The proposed consensus model with minimum adjustments could facilitate consensus building and achieve a higher group consensus, whereas traditional consensus methods often require multiple rounds of modification. Owing to their different backgrounds and professional fields, decision-makers (DMs) often provide multigranular uncertain linguistic variables. The proposed transformation function based on the positive ideal solution could help DMs understand each other and facilitate interactions among them.
Originality/value
The minimum adjustment consensus-based MAGDM method with multigranular uncertain linguistic variables is proposed to achieve the group consensus. The application of the proposed method in the selection of data governance service provider is also investigated.