Search results
1 – 10 of 19
Charitha Sasika Hettiarachchi, Nanfei Sun, Trang Minh Quynh Le and Naveed Saleem
Abstract
Purpose
The COVID-19 pandemic has posed challenges in almost all sectors around the globe. Because of the pandemic, government entities responsible for managing health-care resources face challenges in managing and distributing their limited and valuable health resources. In addition, severe outbreaks may occur in small or large geographical areas, so county-level preparation is crucial for the officials and organizations who manage such disease outbreaks. However, most COVID-19-related research projects have focused on either the state or the country level; only a few studies have considered county-level preparations, such as identifying the high-risk counties of a particular state. Therefore, the purpose of this research is to prioritize the counties of a state based on their COVID-19-related risks so that the outbreak can be managed effectively.
Design/methodology/approach
In this research, the authors use a systematic hybrid approach: a clustering technique groups counties that share similar COVID-19 conditions, and a multi-criteria decision-making method, the analytic hierarchy process (AHP), ranks the clusters by the severity of the pandemic. The clustering was performed with two methods, k-means and fuzzy c-means, applied one at a time during the experiments.
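The two-step pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the county indicator matrix, the number of clusters and the pairwise comparison matrix are all hypothetical, and a plain k-means stands in for the k-means/fuzzy c-means pair.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: assign points to the nearest center, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def ahp_weights(P):
    """AHP criterion weights: the normalized principal eigenvector of the
    pairwise comparison matrix P."""
    vals, vecs = np.linalg.eig(P)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Hypothetical per-county indicators: [cases per 10k, deaths per 10k]
X = np.array([[1.0, 0.1], [1.2, 0.1], [9.0, 2.0], [9.5, 2.1]])
labels = kmeans(X, k=2)

# Hypothetical pairwise comparison: cases judged twice as important as deaths
P = np.array([[1.0, 2.0], [0.5, 1.0]])
w = ahp_weights(P)
```

Clusters would then be ranked by applying the AHP weights to each cluster's average indicator values, so the highest-scoring cluster contains the counties most in need of resources.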
Findings
The results of this study indicate that the proposed approach can effectively identify and rank the most vulnerable counties in a particular state. Hence, entities managing state health resources can identify the counties in most urgent need of attention before allocating their resources, and better prepare those counties before another surge.
Originality/value
To the best of the authors’ knowledge, this study is the first to use both an unsupervised learning approach and the analytic hierarchy process to identify and rank state counties in accordance with the severity of COVID-19.
Xiumei Cai, Xi Yang and Chengmao Wu
Abstract
Purpose
Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of these algorithms are lacking in robustness. The purpose of this paper is to investigate a new algorithm that can segment the image better and retain as much detailed information about the image as possible when segmenting noisy images.
Design/methodology/approach
The authors present a novel multi-view fuzzy c-means (FCM) clustering algorithm that includes an automatic view-weight learning mechanism. Firstly, this algorithm introduces a view-weight factor that can automatically adjust the weight of different views, thereby allowing each view to obtain the best possible weight. Secondly, the algorithm incorporates a weighted fuzzy factor, which serves to obtain local spatial information and local grayscale information to preserve image details as much as possible. Finally, in order to weaken the effects of noise and outliers in image segmentation, this algorithm employs the kernel distance measure instead of the Euclidean distance.
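The kernel-distance idea in the last step can be sketched as follows. This is a single-view simplification, not the authors' algorithm: the view-weight factor and the weighted fuzzy factor are omitted, the Gaussian kernel bandwidth `sigma` is a hypothetical choice, and only the standard FCM membership update is shown with the kernel-induced distance replacing the Euclidean one.

```python
import numpy as np

def gaussian_kernel(x, v, sigma=1.0):
    return np.exp(-np.sum((x - v) ** 2) / (2 * sigma ** 2))

def kernel_distance_sq(x, v, sigma=1.0):
    # Kernel-induced distance: ||phi(x) - phi(v)||^2 = 2 * (1 - K(x, v))
    # for a Gaussian kernel, since K(x, x) = 1
    return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))

def fcm_memberships(X, centers, m=2.0, sigma=1.0):
    """Standard FCM membership update with the kernel distance."""
    d = np.array([[kernel_distance_sq(x, v, sigma) for v in centers] for x in X])
    d = np.fmax(d, 1e-12)                      # avoid division by zero at a center
    inv = d ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
U = fcm_memberships(X, centers)
```

Because the Gaussian kernel saturates for distant points, outliers contribute a bounded distance, which is the source of the robustness to noise claimed above.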
Findings
The authors added different kinds of noise to images and conducted a large number of experimental tests. The results show that the proposed algorithm performs better and is more accurate than previous multi-view fuzzy clustering algorithms in solving the problem of noisy image segmentation.
Originality/value
Most of the existing multi-view clustering algorithms are for multi-view datasets, and the multi-view fuzzy clustering algorithms are unable to eliminate noise points and outliers when dealing with noisy images. The algorithm proposed in this paper has stronger noise immunity and can better preserve the details of the original image.
Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Dandan Wen, Jiake Li and Dandan Guo
Abstract
Purpose
Traditional case-adaptation methods have poor accuracy, low efficiency and limited applicability, which cannot meet the needs of knowledge users. To address the shortcomings of the existing research in the industry, this paper proposes a case-adaptation optimization algorithm to support the effective application of tacit knowledge resources.
Design/methodology/approach
The attribute reduction algorithm based on a forward search strategy in the neighborhood decision information system is implemented to reduce the case base vertically (dimensionality reduction), and the fuzzy c-means (FCM) clustering algorithm based on the simulated annealing genetic algorithm (SAGA) is implemented to compress the case base horizontally across multiple decision classes. Then, the subspace K-nearest neighbors (KNN) algorithm is used to induce decision rules for the set of adapted cases, completing the optimization of the adaptation model.
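The final rule-induction step can be sketched as follows. This is a hedged simplification: a plain majority-vote KNN over cluster prototypes stands in for the paper's subspace KNN, the SAGA-FCM compression is not shown, and the case attributes and decision labels are hypothetical.

```python
import numpy as np

def knn_predict(prototypes, decisions, query, k=3):
    """Majority-vote KNN over a compressed case base (cluster prototypes)."""
    d = np.linalg.norm(prototypes - query, axis=1)   # Euclidean distances
    nearest = decisions[np.argsort(d)[:k]]           # labels of the k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Hypothetical prototypes produced by the compression step, with decision classes
prototypes = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
decisions = np.array([0, 0, 1, 1])
pred = knn_predict(prototypes, decisions, np.array([0.2, 0.5]), k=3)
```

A new problem case is thus adapted by retrieving the decision rule of its nearest compressed cases rather than searching the full case base.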
Findings
The findings suggest that the rapid growth of data, information and tacit knowledge in the field of practice has led to inefficient knowledge dissemination and low knowledge utilization, and that the proposed algorithm can effectively alleviate the "knowledge disorientation" users face in the era of the knowledge economy.
Practical implications
This study provides a model with case knowledge that meets users’ needs, thereby effectively improving the application of the tacit knowledge in the explicit case base and the problem-solving efficiency of knowledge users.
Social implications
The adaptation model can serve as a stable and efficient prediction model to predict the effects of logistics and e-commerce enterprises' plans.
Originality/value
This study designs a multi-decision-class case-adaptation optimization approach based on a forward attribute selection strategy with neighborhood rough sets (FASS-NRS) and simulated annealing genetic algorithm-fuzzy c-means (SAGA-FCM) for exogenous cases containing tacit knowledge. By effectively organizing and adjusting tacit knowledge resources, knowledge service organizations can maintain their competitive advantages. The algorithmic models established in this study open theoretical directions for multi-decision-class case-adaptation optimization of tacit knowledge.
Nehal Elshaboury, Tarek Zayed and Eslam Mohammed Abdelkader
Abstract
Purpose
Water pipes degrade over time owing to a variety of pipe-related, soil-related, operational and environmental factors. Hence, municipalities need to implement effective maintenance and rehabilitation strategies for water pipes based on reliable deterioration models and cost-effective inspection programs. In light of the foregoing, the paramount objective of this research study is to develop condition assessment and deterioration prediction models for saltwater pipes in Hong Kong.
Design/methodology/approach
As a prerequisite to the development of the condition assessment models, the spherical fuzzy analytic hierarchy process (SFAHP) is harnessed to analyze the relative importance weights of deterioration factors. Afterward, the relative importance weights of deterioration factors, coupled with their effective values, are leveraged using the measurement of alternatives and ranking according to the compromise solution (MARCOS) algorithm to analyze the performance condition of water pipes. A condition rating system is then designed based on the generalized entropy-based probabilistic fuzzy C means (GEPFCM) algorithm. A set of fourth-order multiple regression functions is constructed to capture the degradation trends in the condition of pipelines over time, covering their disparate characteristics.
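The MARCOS scoring step can be sketched as follows. This is a simplified sketch, not the authors' implementation: the SFAHP weighting and MARCOS's final utility-function aggregation are omitted, the decision matrix and weights are hypothetical, and strictly positive criterion values are assumed.

```python
import numpy as np

def marcos_utility_degrees(X, w, benefit):
    """K+ and K- utility degrees from the MARCOS method (simplified).

    X: alternatives x criteria matrix (strictly positive values assumed)
    w: criterion weights summing to 1 (e.g. from SFAHP)
    benefit: True where larger values are better, False for cost criteria
    """
    benefit = np.asarray(benefit)
    ai = np.where(benefit, X.max(axis=0), X.min(axis=0))    # ideal solution
    aai = np.where(benefit, X.min(axis=0), X.max(axis=0))   # anti-ideal solution
    N = np.where(benefit, X / ai, ai / X)                   # linear normalization
    s = N @ w                                               # weighted sums
    s_aai = np.where(benefit, aai / ai, ai / aai) @ w
    # S(AI) = 1 by construction, so K+ = s directly
    return s, s / s_aai

# Two hypothetical pipes scored on two benefit criteria, equal weights
X = np.array([[10.0, 4.0], [5.0, 2.0]])
k_plus, k_minus = marcos_utility_degrees(X, np.array([0.5, 0.5]), [True, True])
```

Alternatives with a higher utility degree relative to the ideal solution rank better, which is how the pipe conditions would be ordered before the GEPFCM rating step.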
Findings
Analytical results demonstrated that the most influential deterioration factors comprise age, material, traffic and soil corrosivity. In addition, the developed deterioration models accomplished a correlation coefficient, mean absolute error and root mean squared error of 0.8, 1.33 and 1.39, respectively.
Originality/value
It can be argued that the generated deterioration models can assist municipalities in formulating accurate and cost-effective maintenance, repair and rehabilitation programs.
Jie Yang, Manman Zhang, Linjian Shangguan and Jinfa Shi
Abstract
Purpose
The possibility function-based grey clustering model has evolved into a complete approach for dealing with uncertainty evaluation problems. Existing models still face the choice dilemma of the maximum criterion, as well as instances in which the possibility function may not accurately capture the data's randomness. This study aims to propose a multi-stage skewed grey cloud clustering model that blends greyness and randomness to overcome these problems.
Design/methodology/approach
First, the skewed grey cloud possibility (SGCP) function is defined, and its digital characteristics demonstrate that a normal cloud is a particular instance of a skewed cloud. Second, the border of the decision paradox of the maximum criterion is established. Third, using the skewed grey cloud kernel weight (SGCKW) transformation as a tool, the multi-stage skewed grey cloud clustering coefficient (SGCCC) vector is calculated and research items are clustered according to this multi-stage SGCCC vector with overall features. Finally, the solution steps of the multi-stage skewed grey cloud clustering model are provided.
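For background, the classical forward normal cloud generator, which the skewed cloud generalizes, can be sketched as follows. This is not the paper's SGCP function: the numerical characteristics (Ex, En, He) are hypothetical, and He is assumed small relative to En so that En′ stays away from zero.

```python
import numpy as np

def normal_cloud_drops(ex, en, he, n, seed=0):
    """Forward normal cloud generator: each drop x ~ N(Ex, En') with
    En' ~ N(En, He). With He = 0 this degenerates to an ordinary normal
    distribution -- the sense in which a normal cloud is a special case."""
    rng = np.random.default_rng(seed)
    en_prime = rng.normal(en, he, n)
    x = rng.normal(ex, np.abs(en_prime))
    # Certainty degree (membership) of each drop
    membership = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))
    return x, membership

x, mu = normal_cloud_drops(ex=0.0, en=1.0, he=0.0, n=10000)
```

The hyper-entropy He controls how much the drops scatter around the expectation curve; a skewed variant would additionally break the symmetry of the membership around Ex.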
Findings
The results of applying the model to the assessment of college students' capacity for innovation and entrepreneurship revealed that, in comparison to the traditional grey clustering model and the two-stage grey cloud clustering evaluation model, the proposed model's clustering results have higher identification and stability, which partially resolves the decision paradox of the maximum criterion.
Originality/value
Compared with current models, the proposed model in this study can dynamically depict the clustering process through multi-stage clustering, ensuring the stability and integrity of the clustering results and advancing grey system theory.
Abstract
Purpose
This study aims to construct a sentiment series generation method for danmu comments based on deep learning, and explore the features of sentiment series after clustering.
Design/methodology/approach
This study consisted of two main parts: danmu comment sentiment series generation and clustering. In the first part, the authors proposed a sentiment classification model based on BERT fine-tuning to quantify danmu comment sentiment polarity, and smoothed the resulting sentiment series using methods such as comprehensive weighting. In the second part, the shape-based distance (SBD)-K-shape method was used to cluster the actual collected data.
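The shape-based distance at the core of K-shape clustering can be sketched as follows. This is a minimal illustration with hypothetical series, not the authors' pipeline: SBD is one minus the maximum normalized cross-correlation over all shifts, so two sentiment curves with the same shape but a temporal offset are considered close.

```python
import numpy as np

def sbd(x, y):
    """Shape-based distance: 1 - max normalized cross-correlation over all lags."""
    cc = np.correlate(x, y, mode="full")               # cross-correlation at every lag
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return 1.0 - cc.max() / denom

# Hypothetical sentiment curves: b is a shifted copy of a, c has the opposite shape
a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([1.0, 2.0, 1.0, 0.0, 0.0])
c = np.array([2.0, 1.0, 0.0, 1.0, 2.0])
```

This shift invariance is what lets the method absorb the temporal phase shift of danmu sentiment series noted in the findings; K-shape then alternates between SBD-based assignment and shape-extraction of centroids.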
Findings
The filtered sentiment series or curves of the microfilms on the Bilibili website could be divided into four major categories. There is an apparently stable time interval for the first three types of sentiment curves, while the fourth type of sentiment curve shows a clear trend of fluctuation in general. In addition, it was found that “disputed points” or “highlights” are likely to appear at the beginning and the climax of films, resulting in significant changes in the sentiment curves. The clustering results show a significant difference in user participation, with the second type prevailing over others.
Originality/value
The proposed sentiment classification model based on BERT fine-tuning outperformed the traditional sentiment lexicon method, which provides a reference for using deep learning and transfer learning for danmu comment sentiment analysis. The BERT fine-tuning–SBD-K-shape algorithm can weaken the effect of non-regular noise and the temporal phase shift of danmu text.
Abstract
Purpose
A trust-based on-demand multipath distance vector routing protocol is developed to address the internal attacks and frequent link interruptions to which flying ad hoc networks are subjected.
Design/methodology/approach
First, a node trust assessment model is constructed and trust evaluation criteria are defined: the data packet forwarding rate, the trusted interaction degree and the detection packet receipt rate. Next, an adaptive fuzzy trust aggregation network computes each node's direct trust degree, which is combined with the indirect trust degrees reported by neighbouring nodes to obtain the node's overall trust degree. A trust fluctuation penalty mechanism is then designed to defend the trust model against switch (on-off) attacks. In the final step, the trust model is applied to the on-demand multipath distance vector routing protocol (AOMDV): path trust serves as the basis for route selection in the route discovery phase, constructing trusted paths, and a path warning mechanism detects malicious nodes in the route maintenance phase.
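The trust fusion and fluctuation penalty can be sketched as follows. This is a hedged scalar simplification, not the paper's adaptive fuzzy aggregation: the fusion weight `alpha`, the swing threshold and the penalty factor are hypothetical parameters.

```python
def node_trust(direct, indirect, history, alpha=0.7,
               swing_threshold=0.3, penalty=0.5):
    """Fuse direct trust with neighbour-reported (indirect) trust, then
    penalize sharp swings versus recent history -- a sketch of a defence
    against switch (on-off) attacks, where a node alternates between
    good and malicious behaviour to rebuild trust quickly."""
    t = alpha * direct + (1 - alpha) * (sum(indirect) / len(indirect))
    if history and abs(t - history[-1]) > swing_threshold:
        t *= penalty   # sudden trust jumps are discounted, not rewarded
    return t

# A node with no history is scored by fusion alone; a node whose trust
# suddenly jumps from a low historical value is penalized.
t_fresh = node_trust(0.9, [0.8, 0.9], history=[])
t_swing = node_trust(0.9, [0.8, 0.9], history=[0.2])
```

Routes would then be selected by aggregating these node trust values into a path trust during route discovery.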
Findings
When compared to the lightweight trust-enhanced routing protocol (TEAOMDV), the proposed protocol significantly improves the data packet delivery rate and the throughput of the network.
Originality/value
It also reduces the routing overhead and the average end-to-end delay.
Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto
Abstract
Purpose
The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.
Design/methodology/approach
The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors and assess a within-the-group level of productivity of the retailers.
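The within-group productivity step can be sketched as follows. This is a drastically simplified stand-in for DEA, not the authors' method: a single-input/single-output ratio efficiency, normalized against the best performer in each cluster, with hypothetical cluster labels and figures.

```python
from collections import defaultdict

def within_group_efficiency(units):
    """units: list of (cluster_id, input_value, output_value) triples.
    Returns (cluster_id, efficiency) pairs with efficiency in (0, 1],
    scored against the best output/input ratio within the same cluster."""
    ratios = defaultdict(list)
    for g, x, y in units:
        ratios[g].append(y / x)
    best = {g: max(r) for g, r in ratios.items()}
    return [(g, (y / x) / best[g]) for g, x, y in units]

# Hypothetical retailers: (cluster, input e.g. staff cost, output e.g. sales)
retailers = [("urban", 10.0, 20.0), ("urban", 10.0, 10.0), ("rural", 5.0, 5.0)]
scores = within_group_efficiency(retailers)
```

The point of clustering first is visible even in this toy version: the rural retailer is scored only against its own group, not against structurally different urban stores. A full DEA would replace the ratio with a linear-programming efficiency frontier over multiple inputs and outputs.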
Findings
The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique allows for maximum within-group homogeneity and between-group heterogeneity by reducing the subjective bias and the dimensionality issues that come with the use of Big Data.
Practical implications
The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.
Originality/value
This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.
Samrat Gupta and Swanand Deodhar
Abstract
Purpose
Communities representing groups of agents with similar interests or functions are one of the essential features of complex networks. Finding communities in real-world networks is critical for analyzing complex systems in various areas ranging from collaborative information to political systems. Given the different characteristics of networks and the capability of community detection in handling a plethora of societal problems, community detection methods represent an emerging area of research. Contributing to this field, the authors propose a new community detection algorithm based on the hybridization of node and link granulation.
Design/methodology/approach
The proposed algorithm utilizes a rough set-theoretic concept called closure on networks. Initial sets are constructed by using neighborhood topology around the nodes as well as links and represented as two different categories of granules. Subsequently, the authors iteratively obtain the constrained closure of these sets. The authors use node mutuality and link mutuality as merging criteria for node and link granules, respectively, during the iterations. Finally, the constrained closure subsets of nodes and links are combined and refined using the Jaccard similarity coefficient and a local density function to obtain communities in a binary network.
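The granule-construction and Jaccard-based refinement steps can be sketched as follows. This is a hedged simplification, not the authors' algorithm: only node granules (closed neighborhoods) are built, link granules and the constrained-closure iteration are omitted, a greedy single merging pass replaces the iterative procedure, and the similarity threshold is a hypothetical parameter.

```python
def neighborhood_granules(edges):
    """Initial node granules: closed neighborhood N[v] = {v} ∪ neighbors(v)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, {u}).add(v)
        adj.setdefault(v, {v}).add(u)
    return adj

def jaccard(a, b):
    return len(a & b) / len(a | b)

def merge_granules(granules, threshold=0.5):
    """Greedy single pass: fold each granule into the first accumulated
    community it overlaps with strongly enough (by Jaccard similarity)."""
    out = []
    for g in granules:
        for h in out:
            if jaccard(g, h) >= threshold:
                h |= g
                break
        else:
            out.append(set(g))
    return out

# A triangle plus a separate edge: two communities
edges = [(1, 2), (2, 3), (1, 3), (4, 5)]
communities = merge_granules(list(neighborhood_granules(edges).values()))
```

Overlapping communities fall out naturally in this granule view: a node whose neighborhood straddles two dense regions can end up in more than one merged set before refinement.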
Findings
Extensive experiments conducted on twelve real-world networks followed by a comparison with state-of-the-art methods demonstrate the viability and effectiveness of the proposed algorithm.
Research limitations/implications
The study also contributes to the ongoing effort related to the application of soft computing techniques to model complex systems. The extant literature has integrated a rough set-theoretic approach with a fuzzy granular model (Kundu and Pal, 2015) and spectral clustering (Huang and Xiao, 2012) for node-centric community detection in complex networks. In contributing to this stream of work, the proposed algorithm leverages the unexplored synergy between rough set theory, node granulation and link granulation in the context of complex networks. Combined with experiments of network datasets from various domains, the results indicate that the proposed algorithm can effectively reveal co-occurring disjoint, overlapping and nested communities without necessarily assigning each node to a community.
Practical implications
This study carries important practical implications for complex adaptive systems in business and management sciences, in which entities are increasingly getting organized into communities (Jacucci et al., 2006). The proposed community detection method can be used for network-based fraud detection by enabling experts to understand the formation and development of fraudulent setups with an active exchange of information and resources between the firms (Van Vlasselaer et al., 2017). Products and services are getting connected and mapped in every walk of life due to the emergence of a variety of interconnected devices, social networks and software applications.
Social implications
The proposed algorithm could be extended for community detection on customer trajectory patterns and design recommendation systems for online products and services (Ghose et al., 2019; Liu and Wang, 2017). In line with prior research, the proposed algorithm can aid companies in investigating the characteristics of implicit communities of bloggers or social media users for their services and products so as to identify peer influencers and conduct targeted marketing (Chau and Xu, 2012; De Matos et al., 2014; Zhang et al., 2016). The proposed algorithm can be used to understand the behavior of each group and the appropriate communication strategy for that group. For instance, a group using a specific language or following a specific account might benefit more from a particular piece of content than another group. The proposed algorithm can thus help in exploring the factors defining communities and confronting many real-life challenges.
Originality/value
This work is based on a theoretical argument that communities in networks are not only based on compatibility among nodes but also on the compatibility among links. Building up on the aforementioned argument, the authors propose a community detection method that considers the relationship among both the entities in a network (nodes and links) as opposed to traditional methods, which are predominantly based on relationships among nodes only.
Seyed Mojtaba Taghavi, Vahidreza Ghezavati, Hadi Mohammadi Bidhandi and Seyed Mohammad Javad Mirzapour Al-e-Hashem
Abstract
Purpose
This paper aims to minimize the mean-risk cost of sustainable and resilient supplier selection, order allocation and production scheduling (SS,OA&PS) problem under uncertainty of disruptions. The authors use conditional value at risk (CVaR) as a risk measure in optimizing the combined objective function of the total expected value and CVaR cost. A sustainable supply chain can create significant competitive advantages for companies through social justice, human rights and environmental progress. To control disruptions, the authors applied (proactive and reactive) resilient strategies. In this study, the authors combine resilience and social responsibility issues that lead to synergy in supply chain activities.
Design/methodology/approach
The present paper proposes a risk-averse two-stage mixed-integer stochastic programming model for sustainable and resilient SS,OA&PS problem under supply disruptions. In this decision-making process, determining the primary supplier portfolio according to the minimum sustainable-resilient score establishes the first-stage decisions. The recourse or second-stage decisions are: determining the amount of order allocation and scheduling of parts by each supplier, determining the reactive risk management strategies, determining the amount of order allocation and scheduling by each of reaction strategies and determining the number of products and scheduling of products on the planning time horizon. Uncertain parameters of this study are the start time of disruption, remaining capacity rate of suppliers and lead times associated with each reactive strategy.
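The risk measure behind the model's objective can be sketched as follows. This is a minimal illustration with hypothetical scenario costs and probabilities, not the paper's two-stage program: CVaR at level alpha is the expected cost in the worst (1 − alpha) probability tail, and the mean-risk objective then combines it with the expected cost, e.g. λ·E[cost] + (1 − λ)·CVaR.

```python
import numpy as np

def cvar(costs, probs, alpha=0.95):
    """CVaR of a discrete scenario cost distribution (probs sum to 1):
    the probability-weighted average of the worst (1 - alpha) tail."""
    order = np.argsort(costs)[::-1]                  # worst scenarios first
    c, p = np.asarray(costs)[order], np.asarray(probs)[order]
    tail = 1.0 - alpha
    total, acc = 0.0, 0.0
    for ci, pi in zip(c, p):
        w = min(pi, tail - acc)                      # take only what fits in the tail
        if w <= 0:
            break
        total += ci * w
        acc += w
    return total / tail

# Hypothetical disruption scenarios: one rare, expensive disruption
costs = [100.0, 10.0, 10.0, 10.0]
probs = [0.05, 0.35, 0.30, 0.30]
```

Raising alpha makes the decision more risk-averse: at alpha = 0.95 the measure looks only at the disruption scenario, while lower alpha blends in ordinary scenarios.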
Findings
In this paper, several numerical examples along with different sensitivity analyses (on risk parameters, minimum sustainable-resilience score of suppliers and shortage costs) were presented to evaluate the applicability of the proposed model. The results showed that the two-stage risk-averse stochastic mixed-integer programming model for designing the SS,OA&PS problem by considering economic and social aspects and resilience strategies is an effective and flexible tool and leads to optimal decisions with the least cost. In addition, the managerial insights obtained from this study are extracted and stated in Section 4.6.
Originality/value
This work proposes a risk-averse stochastic programming approach for a new multi-product sustainable and resilient SS,OA&PS problem. The planning horizon includes three periods before the disruption, during the disruption period and the recovery period. Other contributions of this work are: selecting the main supply portfolio based on the minimum score of sustainable-resilient criteria of suppliers, allocating and scheduling suppliers orders before and after disruptions, considering the balance constraint in receiving parts and using proactive and reactive risk management strategies simultaneously. Also, the scheduling of reactive strategies in different investment modes is applied to this problem.