Search results
1 – 10 of 156
Santonab Chakraborty, Rakesh D. Raut, T.M. Rofin and Shankar Chakraborty
Abstract
Purpose
Supplier selection, along with continuous evaluation of supplier performance, is a crucial activity in healthcare supply chain management: it supports effective utilization of scarce resources while providing quality service at an affordable price, and it minimizes the chance of stock-outs, which can have serious consequences for patient illness and even mortality. The presence of both qualitative and quantitative evaluation criteria, a set of potential suppliers and the participation of different stakeholders with varying interests make healthcare supplier selection a challenging task that can be effectively solved using a multi-criteria decision-making (MCDM) method.
Design/methodology/approach
To deal with various qualitative criteria, such as cost, quality, delivery performance, reliability, responsiveness and flexibility, this paper proposes integrating grey system theory with a newly developed MCDM tool, the mixed aggregation by comprehensive normalization technique (MACONT), to identify the best-performing supplier of pharmaceutical items for a healthcare unit from a pool of six competing alternatives, based on the opinions of three healthcare professionals.
Findings
Assessing the importance of the six evaluation criteria and the performance of the alternative healthcare suppliers against those criteria using grey numbers, and exploiting the three normalization procedures and two aggregation operations of the MACONT method, this integrated approach singles out S5 as the best compromise healthcare supplier for the considered problem. A sensitivity analysis of its ranking performance against varying values of both the balance parameters and the preference parameters validates its solution accuracy and robustness.
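The grey evaluation step can be illustrated with a minimal sketch. MACONT's specific normalization procedures and aggregation operators are not reproduced here; the function below only shows how interval grey ratings from the decision makers might be weighted and whitened into a crisp score, with all numbers hypothetical:

```python
# Sketch of interval grey-number aggregation as used in grey MCDM methods.
# The exact MACONT normalizations and aggregation operators are not
# reproduced; this only illustrates combining grey ratings per criterion
# into a weighted score for one supplier.

def grey_weighted_score(ratings, weights):
    """ratings: list of (lower, upper) grey numbers, one per criterion.
    weights: criterion weights summing to 1.
    Returns the (lower, upper) weighted aggregate and its whitened value."""
    lo = sum(w * r[0] for r, w in zip(ratings, weights))
    hi = sum(w * r[1] for r, w in zip(ratings, weights))
    return (lo, hi), (lo + hi) / 2  # whitening: midpoint of the interval

# Hypothetical grey ratings of one supplier on three criteria (scale 0-10)
ratings = [(6, 8), (5, 7), (7, 9)]
weights = [0.5, 0.3, 0.2]
interval, score = grey_weighted_score(ratings, weights)
```

Running this for each supplier and ranking by the whitened score mimics, in miniature, how a grey MCDM method turns uncertain expert opinions into a supplier ranking.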
Originality/value
This integrated approach can thus efficiently solve healthcare supplier selection problems based on qualitative evaluation criteria in uncertain group decision-making environments. It can also be deployed for other decision-making problems in the healthcare sector, such as supplier selection for healthcare devices, performance evaluation of healthcare units and ranking of physicians.
Details
Keywords
Mehmet Chakkol, Mark Johnson, Antonios Karatzas, Georgios Papadopoulos and Nikolaos Korfiatis
Abstract
Purpose
President Trump's tenure was accompanied by a series of protectionist measures that intended to reinvigorate US-based production and make manufacturing supply chains more “local”. Amidst these increasing institutional pressures to localise, and the business uncertainty that ensued, this study investigates the extent to which manufacturers reconfigured their supply bases.
Design/methodology/approach
Bloomberg's Supply Chain Function (SPLC) is used to manually extract data about the direct suppliers of 30 of the largest American manufacturers in terms of market capitalisation. Overall, the raw data comprise 20,100 quantified buyer–supplier relationships that span seven years (2014–2020). The supply base dimensions of spatial complexity, spend concentration and buyer dependence are operationalised by applying appropriate aggregation functions on the raw data. The final dataset is a firm-year panel that is analysed using a random effect (RE) modelling approach and the conditional means of the three dimensions are plotted over time.
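The aggregation functions themselves are not specified in this summary; a minimal sketch of two common operationalisations (a Herfindahl-style index for spend concentration and a country count for spatial complexity, both assumptions rather than the paper's exact measures) could look like:

```python
# Sketch: operationalising supply base dimensions from relationship-level
# data. The paper's exact aggregation functions are not given here; a
# Herfindahl-style index is one common way to measure spend concentration.

def spend_concentration(spends):
    """Herfindahl index of a buyer's spend across its suppliers.
    spends: spend per supplier; returns a value in (0, 1],
    where 1 means all spend goes to a single supplier."""
    total = sum(spends)
    return sum((s / total) ** 2 for s in spends)

def spatial_complexity(supplier_countries):
    """Simple proxy: number of distinct supplier countries."""
    return len(set(supplier_countries))

# Hypothetical buyer with four suppliers
hhi = spend_concentration([50, 30, 15, 5])
countries = spatial_complexity(["US", "CN", "DE", "CN"])  # 3 distinct
```

Computing such measures per buyer per year yields exactly the kind of firm-year panel the study analyses with random effects models.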
Findings
Over the studied timeframe, American manufacturers progressively reduced the spatial complexity of their supply bases and concentrated their purchase spend to fewer suppliers. Contrary to the aims of governmental policies, American manufacturers increased their dependence on foreign suppliers and reduced their dependence on local ones.
Originality/value
The research provides insights into the dynamics of manufacturing supply chains as they adapt to shifting institutional demands.
Details
Keywords
Hossein Shakibaei, Mohammad Reza Farhadi-Ramin, Mohammad Alipour-Vaezi, Amir Aghsami and Masoud Rabbani
Abstract
Purpose
Every day, small and big incidents happen all over the world, and given the human, financial and spiritual damage they cause, proper planning should be sought to deal with them so they can be appropriately managed in times of crisis. This study aims to examine humanitarian supply chain models.
Design/methodology/approach
A new model is developed to capture the necessary relations in an optimal way that minimizes human, financial and moral losses. To optimize the problem and minimize human and financial losses, the developed model incorporates: the magnitude of the areas in which an accident may occur, obtained by multiple attribute decision-making methods; the distances between relief centers; the number of available rescuers; the number of rescuers required; and the risk level of each patient, which is determined from historical data using machine learning (ML) algorithms.
Findings
For this purpose, a case study in the east of Tehran was conducted. According to the results obtained from the algorithms, the problem modeling and the case study, the proposed model demonstrates very good accuracy.
Originality/value
Obtaining each injured person's priority using ML techniques and each area's importance or risk level, alongside developing a bi-objective mathematical model and using multiple attribute decision-making methods, make this study unique among the very few studies that apply ML in the humanitarian supply chain. Moreover, the findings validate the model's functionality well.
Details
Keywords
Manisha Malik, Devyani Tomar, Narpinder Singh and B.S. Khatkar
Abstract
Purpose
This study aims to provide a salt ready-mix to instant fried noodles manufacturers.
Design/methodology/approach
Response surface methodology was used to obtain an optimized salt ready-mix based on carbonate salt, disodium phosphate, tripotassium phosphate, sodium hexametaphosphate and sodium chloride. The peak viscosity of flour and the yellowness, cooking loss and hardness of noodles were taken as response factors in finding the optimized salt formulation.
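At its core, response surface methodology fits a second-order polynomial to each response and locates its stationary point. A one-factor sketch with hypothetical data (the study optimizes five salt levels simultaneously, and these numbers are not the paper's measurements):

```python
import numpy as np

# Sketch: fit a quadratic response surface for a single factor and find
# its stationary point. Data values are hypothetical, for illustration.

def fit_quadratic(x, y):
    """Least-squares fit of y = b0 + b1*x + b2*x^2; returns (b0, b1, b2)."""
    X = np.column_stack([np.ones_like(x), x, x * x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def stationary_point(beta):
    """x* where dy/dx = b1 + 2*b2*x = 0."""
    return -beta[1] / (2 * beta[2])

# Hypothetical hardness response peaking near a 0.6% salt level
x = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
y = np.array([3.1, 3.8, 4.2, 3.9, 3.2])
beta = fit_quadratic(x, y)
x_opt = stationary_point(beta)  # close to 0.6
```

With five factors the same idea applies, with cross-terms in the polynomial and the stationary point found by solving a small linear system.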
Findings
The results showed that salts have an important role in governing quality of noodles. Optimum levels of five independent variables of salts, namely, carbonate salt (1:1 mixture of sodium to potassium carbonate), disodium phosphate, sodium hexametaphosphate, tripotassium phosphate and sodium chloride were 0.64%, 0.29%, 0.25%, 0.46% and 0.78% on flour weight basis, respectively.
Originality/value
To the best of the authors’ knowledge, this is the first study to assess the effect of different combinations of different salts on the quality of noodles. These findings will also benefit noodle manufacturers, assisting in production of superior quality noodles.
Details
Keywords
Prajakta Thakare and Ravi Sankar V.
Abstract
Purpose
Agriculture is the backbone of many economies, contributing a major share of economic output in countries throughout the world. Precision agriculture is essential for evaluating crop conditions with the aim of determining the proper selection of pesticides. Conventional pest detection methods are unstable and provide limited prediction accuracy. This paper aims to propose an automatic pest detection module for the accurate detection of pests using a hybrid optimization-controlled deep learning model.
Design/methodology/approach
The paper proposes an advanced pest detection strategy based on deep learning over a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink, is partitioned into clusters. Each cluster comprises a cluster head (CH) and a number of member nodes; the CH, selected using the fractional artificial bee colony (FABC) optimization algorithm, transfers data to the sink node of the WSN. Routing is executed using the protruder optimization algorithm, which transfers image data to the sink node through the optimal CHs. The sink node acts as the data aggregator, and the collected image data form the input database that is processed to identify the type of pest in the field. The image data are pre-processed to remove artifacts, and the pre-processed images are then subjected to feature extraction, through which local directional pattern (LDP), local binary pattern (LBP), local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are fed to a deep convolutional neural network (CNN) to detect the type of pests in the agricultural field. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, developed by combining the characteristics of navigating search agents and swarming search agents.
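Of the texture descriptors named in the pipeline, the local binary pattern is the simplest to illustrate; LTP, LOOP and LDP are variants of the same thresholding idea. A minimal pure-Python sketch with hypothetical pixel values:

```python
# Sketch: the basic 3x3 local binary pattern (LBP) descriptor. Each of
# the 8 neighbours is thresholded against the centre pixel and the
# resulting bits are packed into one byte, giving a texture code per pixel.

def lbp_code(patch):
    """patch: 3x3 list of grey values; returns the LBP code in [0, 255]."""
    c = patch[1][1]
    # clockwise neighbour order starting at the top-left pixel
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, p in enumerate(nbrs) if p >= c)

# Hypothetical 3x3 grey-level patch
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)
```

A histogram of such codes over an image region forms the feature vector fed to a classifier.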
Findings
Analysis on the insect identification from habitus images database, based on performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis shows that the proposed classifier, using the FABC + protruder optimization-based data aggregation strategy, obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, which is higher than those of the existing methods.
Originality/value
The proposed MFGHO optimization-based deep CNN detects pests in crop fields to enable better selection of proper, cost-effective pesticides and thereby increase production. The MFGHO algorithm integrates the characteristic features of navigating search agents and swarming search agents to facilitate the optimal tuning of the hyperparameters of the deep CNN classifier for pest detection.
Details
Keywords
Abstract
Purpose
For ranking aggregation in crowdsourcing tasks, the key issue is how to select the optimal working group, with a given number of workers, so as to optimize the performance of their aggregation. Performance prediction for ranking aggregation can solve this issue effectively. However, the prediction quality varies greatly depending on the influencing factors selected. Although the questions of why and how data fusion methods perform well have been thoroughly discussed in the past, there is a lack of insight about how to select influencing factors for performance prediction and about how much performance can be improved.
Design/methodology/approach
In this paper, performance prediction of multivariable linear regression based on the optimal influencing factors for ranking aggregation in crowdsourcing task is studied. An influencing factor optimization selection method based on stepwise regression (IFOS-SR) is proposed to screen the optimal influencing factors. A working group selection model based on the optimal influencing factors is built to select the optimal working group with a given number of workers.
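The spirit of a stepwise-regression-based factor screen like IFOS-SR can be sketched with a plain forward selection (the exact entry and exit criteria of IFOS-SR are not reproduced; factors are added greedily while adjusted R-squared improves):

```python
import numpy as np

# Sketch: forward stepwise selection of influencing factors for a linear
# regression. This is a generic forward-selection procedure, not the
# paper's IFOS-SR method itself.

def adjusted_r2(X, y):
    """Adjusted R-squared of an OLS fit of y on X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, p = X1.shape
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def forward_stepwise(X, y):
    """Greedily add the factor that most improves adjusted R-squared."""
    selected, remaining = [], list(range(X.shape[1]))
    best = -np.inf
    while remaining:
        score, j = max((adjusted_r2(X[:, selected + [j]], y), j)
                       for j in remaining)
        if score <= best:
            break
        best = score
        selected.append(j)
        remaining.remove(j)
    return selected

# Synthetic data: only factors 0 and 3 actually influence y
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(scale=0.1, size=200)
factors = forward_stepwise(X, y)
```

The selected columns then serve as regressors in the multivariable linear prediction model.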
Findings
The proposed approach can identify the optimal influencing factors of ranking aggregation, predict the aggregation performance more accurately than the state-of-the-art methods and select the optimal working group with a given number of workers.
Originality/value
To determine under which conditions a data fusion method may lead to performance improvement for ranking aggregation in crowdsourcing tasks, the optimal influencing factors are identified by the IFOS-SR method. This paper analyses the behavior of the linear combination method and the CombSUM method based on the optimal influencing factors, and optimizes task assignment for a given number of workers through the optimal working group selection method.
Details
Keywords
Zhichao Wang and Valentin Zelenyuk
Abstract
Estimation of (in)efficiency has become a popular practice, with applications in virtually every sector of the economy over the last few decades. Many different models have been deployed for such endeavors, with Stochastic Frontier Analysis (SFA) models dominating the econometric literature. Among the most popular variants of SFA are Aigner, Lovell and Schmidt (1977), which launched the literature, and Kumbhakar, Ghosh and McGuckin (1991), which pioneered the branch that models the (in)efficiency term via so-called environmental variables, or determinants of inefficiency. Focusing on these two prominent approaches in SFA, the goal of this chapter is to understand the production inefficiency of public hospitals in Queensland. In doing so, a recognized yet often overlooked phenomenon emerges: dramatically different estimates (and consequently very different policy implications) can be derived from different models, even within one paradigm of SFA models. This emphasizes the importance of exploring many alternative models, and scrutinizing their assumptions, before drawing policy implications, especially when such implications may substantially affect people's lives, as is the case in the hospital sector.
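The two SFA specifications referenced above take the following standard forms:

```latex
% Aigner, Lovell and Schmidt (1977): production frontier with
% symmetric noise v_i and half-normal inefficiency u_i >= 0
y_i = x_i'\beta + v_i - u_i, \qquad
v_i \sim N(0,\sigma_v^2), \qquad
u_i \sim N^{+}(0,\sigma_u^2)

% Kumbhakar, Ghosh and McGuckin (1991): the inefficiency term depends
% on environmental variables z_i (determinants of inefficiency)
u_i \sim N^{+}(z_i'\delta,\ \sigma_u^2)
```

The second specification lets hospital characteristics shift the inefficiency distribution, which is precisely why the two models can yield different efficiency rankings and policy conclusions for the same data.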
Details
Keywords
Abstract
Purpose
This study develops a trust-based on-demand multipath distance vector routing protocol to address the vulnerability of flying ad hoc networks to internal attacks and their frequent connection interruptions.
Design/methodology/approach
A node trust assessment model is first constructed and trust evaluation criteria are presented, covering the data packet forwarding rate, the trusted interaction degree and the detection packet receipt rate. The direct trust degree of each node is then computed through adaptive fuzzy trust aggregation, and the indirect trust degree reported by neighbouring nodes is combined with it to obtain the overall trust degree of a node in the network. A trust fluctuation penalty mechanism is designed to defend the trust model against switch (on-off) attacks. In the final step, the trust model is applied to the ad hoc on-demand multipath distance vector routing protocol (AOMDV): path trust serves as the basis for route selection in the route discovery phase to construct trusted paths, and a path warning mechanism detects malicious nodes in the route maintenance phase.
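The trust computation described above, combining direct and indirect trust and penalizing fluctuation, can be sketched as follows. The weights and the penalty factor are illustrative assumptions, not the paper's actual formulas:

```python
# Sketch: overall node trust from direct observation and neighbour
# recommendations, plus a fluctuation penalty against on-off (switch)
# attacks. All parameters here are hypothetical.

def overall_trust(direct, indirect_list, w_direct=0.6):
    """direct: this node's own observation in [0, 1].
    indirect_list: trust recommendations from neighbours in [0, 1]."""
    indirect = sum(indirect_list) / len(indirect_list) if indirect_list else direct
    return w_direct * direct + (1 - w_direct) * indirect

def penalized_trust(history, penalty=0.5):
    """Large swings between successive trust values reduce the final
    score, so a node cannot alternate good and bad behaviour freely."""
    fluctuation = sum(abs(a - b) for a, b in zip(history, history[1:]))
    fluctuation /= max(len(history) - 1, 1)
    return max(history[-1] - penalty * fluctuation, 0.0)

t = overall_trust(0.9, [0.8, 0.6])          # 0.6*0.9 + 0.4*0.7 = 0.82
stable = penalized_trust([0.8, 0.82, 0.81])
onoff = penalized_trust([0.9, 0.1, 0.9])    # heavy swings are penalized
```

In route discovery, a path trust value built from such node trusts would then steer AOMDV toward trusted paths.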
Findings
When compared to the lightweight trust-enhanced AOMDV protocol (TEAOMDV), the proposed protocol significantly improves the data packet delivery rate and throughput of the network.
Originality/value
It also reduces the routing overhead and the average end-to-end delay.
Details
Keywords
R.S. Vignesh and M. Monica Subashini
Abstract
Purpose
An abundance of techniques has been presented thus far for waste classification, but they deliver inefficient results with low accuracy. Their performance varies across repositories, and there is a shortage of large-scale databases for training. The purpose of the study is to provide high security.
Design/methodology/approach
In this research, optimization-assisted federated learning (FL) is introduced for thermoplastic waste segregation and classification. A deep learning (DL) network trained by Archimedes Henry gas solubility optimization (AHGSO) is used for classifying plastic and resin types. A deep quantum neural network (DQNN) is used for first-level classification and a deep max-out network (DMN) for second-level classification. AHGSO is obtained by blending the features of the Archimedes optimization algorithm (AOA) and Henry gas solubility optimization (HGSO). The entities in this approach are nodes and a server. Local training is carried out on local data, and updates are sent to the server, where the model is aggregated. Each node then downloads the global model, and update training is executed based on the downloaded global model and the local model until the stopping condition is satisfied. Finally, the local update and the aggregation at the server are performed based on the averaging method. The Data tag suite (DATS_2022) dataset is used for multilevel thermoplastic waste segregation and classification.
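The server-side aggregation based on the averaging method can be sketched as federated averaging (FedAvg). The two-parameter "models" below are placeholders, not the paper's DQNN/DMN networks:

```python
# Sketch of the server-side FL aggregation step: local model parameters
# are averaged at the server, weighted by each node's sample count.
# Model weights are plain lists of floats for illustration.

def federated_average(local_models, sample_counts):
    """Weighted average of local model parameters, weighted by the
    number of local training samples on each node."""
    total = sum(sample_counts)
    n_params = len(local_models[0])
    return [
        sum(m[i] * c for m, c in zip(local_models, sample_counts)) / total
        for i in range(n_params)
    ]

# Three nodes with hypothetical 2-parameter local models
models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
counts = [100, 100, 200]
global_model = federated_average(models, counts)  # -> [3.5, 4.5]
```

Each node would then download `global_model` and continue local training, repeating the round until the stopping condition is met.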
Findings
By using the DQNN in first-level classification the designed optimization-assisted FL has gained an accuracy of 0.930, mean average precision (MAP) of 0.933, false positive rate (FPR) of 0.213, loss function of 0.211, mean square error (MSE) of 0.328 and root mean square error (RMSE) of 0.572. In the second level classification, by using DMN the accuracy, MAP, FPR, loss function, MSE and RMSE are 0.932, 0.935, 0.093, 0.068, 0.303 and 0.551.
Originality/value
The proposed model performs multilevel thermoplastic waste segregation and classification accurately and improves the effectiveness of classification.
Details
Keywords
Rahul Arora, Nitin Arora and Sidhartha Bhattacharjee
Abstract
Purpose
COVID-19 has affected the economies adversely from all sides. The sudden halt in production has impacted both the supply and demand sides. It calls for analysis to quantify the impact of the reduction in economic activity on the economy-wide variables so that appropriate steps can be taken. This study aims to evaluate the sensitivity of various sectors of the Indian economy to this dual shock.
Design/methodology/approach
The eight-sector open economy general equilibrium Global Trade Analysis Project (GTAP) model has been simulated to evaluate the sector-specific effects of a fall in economic activity due to COVID-19. This model uses an economy-wide accounting framework to quantify the impact of a shock on the given equilibrium economy and report the post-simulation new equilibrium values.
Findings
The empirical results indicate that welfare in the Indian economy falls by 7.70% due to the output shock. Because of demand–supply linkages, the shock also affects inter- and intra-industry flows, the demand for factors of production and imports. There is a momentous fall in the demand for factor endowments across all sectors, with the trade-hotel-transport and manufacturing sectors experiencing the two largest declines. The study recommends an immediate revival of the manufacturing and trade-hotel-transport sectors to get the Indian economy back on track.
Originality/value
The present study has modified the existing GTAP model accounting framework through unemployment and output closures to account for the impact of change in sectoral output due to COVID-19 on the level of employment and other macroeconomic variables.
Details