Search results
1 – 10 of 10

Monireh Jahani Sayyad Noveiri, Sohrab Kordrostami and Mojtaba Ghiyasi
Abstract
Purpose
The purpose of this study is to estimate inputs (outputs) and flexible measures when outputs (inputs) are changed, provided that the relative efficiency values remain unchanged.
Design/methodology/approach
A novel inverse data envelopment analysis (DEA) approach with flexible measures is proposed in this research to assess inputs (outputs) and flexible measures when outputs (inputs) are perturbed, on condition that the relative efficiency scores remain unchanged. Furthermore, the flexible inverse DEA approaches proposed in this study are applied to a numerical example from the literature and to an application in the Iranian banking industry to clarify and validate them.
Findings
The findings show that incorporating flexible measures into the analysis affects the estimated changes of the performance measures and leads to more reasonable results.
Originality/value
Traditional inverse DEA models usually investigate the changes of some determinate input-output factors in response to changes of other given input-output indicators, assuming that the efficiency values are preserved. However, there are situations in which the changes of performance measures must be tackled while some measures, called flexible measures, can play either input or output roles. Accordingly, inverse DEA optimization models with flexible measures are developed in this paper to address these issues.
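As background for these inverse-DEA extensions, the standard input-oriented CCR envelopment model they build on can be sketched as a small linear program. The sketch below is illustrative only; the data and function name are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o under constant returns
    to scale. X is (n, m) inputs, Y is (n, s) outputs, rows are DMUs."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n].
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy data: 3 DMUs, one input, one output.
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [3.0], [4.0]])
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
# effs ≈ [1.0, 0.75, 0.5]
```

An inverse DEA model reverses this question: it holds the optimal theta fixed and solves for the perturbed inputs or outputs instead.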
Yasaman Zibaei Vishghaei, Sohrab Kordrostami, Alireza Amirteimoori and Soheil Shokri
Abstract
Purpose
Assessing inputs and outputs is a significant aspect of decision making when there are complex and multistage processes in many examinations. Due to the presence of interval performance measures in various real-world studies, the purpose of this study is to address the changes of interval inputs of two-stage processes in response to perturbations of the interval outputs, given that the overall efficiency scores are maintained.
Design/methodology/approach
Specifically, an interval inverse two-stage data envelopment analysis (DEA) model is proposed to plan resources. An interval two-stage network DEA model with external interval inputs and outputs, along with its inverse problem, is suggested to estimate the upper and lower bounds of the overall efficiency and the efficiencies of the stages, together with the variations of the interval inputs.
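For intuition about the underlying two-stage structure: in the special case of a single measure per link, the CRS efficiency of each stage reduces to its ratio against the best observed ratio, and the overall efficiency decomposes as the product of the stage efficiencies (a Kao-Hwang-style decomposition, which the interval model generalizes). The data below are hypothetical.

```python
import numpy as np

def two_stage_eff(x, z, y):
    """CRS efficiencies for a two-stage chain x -> z -> y with a single
    measure per link: each stage's CCR efficiency is its ratio divided
    by the best observed ratio; overall = stage1 * stage2."""
    s1 = (z / x) / (z / x).max()
    s2 = (y / z) / (y / z).max()
    return s1, s2, s1 * s2

x = np.array([2.0, 4.0])  # external inputs
z = np.array([4.0, 4.0])  # intermediate measures
y = np.array([8.0, 4.0])  # final outputs
s1, s2, overall = two_stage_eff(x, z, y)
# s1 = [1.0, 0.5], s2 = [1.0, 0.5], overall = [1.0, 0.25]
```

With interval data, the same computation is run twice, once with the bounds chosen most favourably and once least favourably, to obtain the upper and lower efficiency bounds.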
Findings
An example from the literature and a real case study of the banking industry are used to demonstrate the introduced approach. The results show that the proposed approach is suitable for estimating the resources of two-stage systems when interval measures are present.
Originality/value
To the best of the authors’ knowledge, no study has estimated the fluctuations of imprecise inputs of network structures in response to changes of imprecise outputs while the interval efficiency of the network processes is maintained. Accordingly, this paper considers the resource planning problem when there are imprecise and interval measures in two-stage networks.
Somayye Karimi Omshi, Sohrab Kordrostami, Alireza Amirteimoori and Armin Ghane Kanafi
Abstract
Purpose
Data envelopment analysis (DEA) is a significant method for measuring the relative efficiency of decision making units (DMUs) that use the least inputs, produce the most desirable outputs and emit the least undesirable outputs in order to maximize their profits. In DEA, detecting an optimal scale size (OSS) is also vital and could be more applicable in economic activities when there are integer and undesirable measures. The purpose of this research is to measure average-profit efficiency (APE) and OSSs with integer data and undesirable outputs.
Design/methodology/approach
This study presents an alternative concept of APE using the concepts of most productive scale size (MPSS), profit efficiency and scales, containing desirable and undesirable outputs along with integer and non-integer measures. In fact, the OSS is the scale that minimizes APE, defined as the ratio of profit efficiency to the radial average output. Considering the prices of the inputs and desirable outputs, as well as the lack of any specific weight for the undesirable outputs, a two-step model for the numerical calculation of OSS is presented. In addition, the proposed approach is applied to a real data set of Iranian gas companies in which there are integer measures and undesirable outputs.
Findings
The results show that the introduced approach is useful for estimating OSSs with respect to maximizing the profits of firms with undesirable outputs and integer values.
Originality/value
Estimating OSSs is a significant issue for managers, but it has received little attention in the presence of integer measures and undesirable outputs.
Maryeh Nematizadeh, Alireza Amirteimoori, Sohrab Kordrostami and Leila Khoshandam
Abstract
Purpose
This study aims to address the lack of discrimination between fully efficient decision-making units in nonparametric efficiency analysis models by introducing a new ranking technique that incorporates contextual variables.
Design/methodology/approach
The proposed method combines Data Envelopment Analysis (DEA) and Ordinary Least Squares (OLS). First, DEA evaluates the partial efficiency of each unit, considering all inputs and only one output. Next, OLS removes the influence of contextual variables on the partial efficiencies. Finally, a ranking criterion based on modified partial efficiencies is formulated. The method is applied to data from 100 Chinese banks, including state-owned, commercial and industrial institutions, for the year 2020.
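The three-step procedure above (partial efficiencies, OLS adjustment for contextual variables, ranking) could be sketched as follows. The simple linear adjustment and the example data are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def rank_with_context(partial_eff, z):
    """Rank DMUs from partial efficiencies after removing the linear
    influence of a contextual variable z via OLS.
    partial_eff is (n, r): one column per single-output DEA run."""
    Z = np.column_stack([np.ones(len(z)), z])
    adjusted = np.empty_like(partial_eff)
    for r in range(partial_eff.shape[1]):
        beta, *_ = np.linalg.lstsq(Z, partial_eff[:, r], rcond=None)
        # Residual plus intercept keeps scores on a comparable scale.
        adjusted[:, r] = partial_eff[:, r] - Z @ beta + beta[0]
    score = adjusted.mean(axis=1)
    return np.argsort(-score)  # DMU indices, best first

pe = np.array([[0.9, 0.8], [0.5, 0.6], [0.7, 0.7]])
z = np.array([1.0, 1.0, 1.0])  # constant context: raw-mean order is kept
rank = rank_with_context(pe, z)
# → array([0, 2, 1])
```

Because every unit receives a finite adjusted score, the ranking is complete and avoids the infeasibility issues of super-efficiency models.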
Findings
The ranking results show that the top six positions are assigned to highly esteemed banks in China, demonstrating strong alignment with real-world performance. The method provides a comprehensive ranking of all units, including nonextreme efficient ones, without excluding any. It resolves infeasibility issues that arise during the ranking of efficient units and ensures uniqueness in efficiency scores, leading to a more reliable and robust ranking process. Contextual variables exerted a greater influence on the first partial efficiency compared to the second. Notably, Total Capital Adequacy (TCA) significantly impacts bank efficiency.
Originality/value
This study introduces a novel ranking method that effectively integrates contextual variables into DEA-based efficiency analysis, addressing limitations of existing methods. The practical application to Chinese banks demonstrates its utility and relevance.
Shokoofa Mostofi, Sohrab Kordrostami, Amir Hossein Refahi Sheikhani, Marzieh Faridi Masouleh and Soheil Shokri
Abstract
Purpose
This study aims to improve the detection and quantification of cardiac issues, which are a leading cause of mortality globally. By leveraging past data and using knowledge mining strategies, this study seeks to develop a technique that could assess and predict the onset of cardiac sickness in real time. The use of a triple algorithm, combining particle swarm optimization (PSO), artificial bee colony (ABC) and support vector machine (SVM), is proposed to enhance the accuracy of predictions. The purpose is to contribute to the existing body of knowledge on cardiac disease prognosis and improve overall performance in health care.
Design/methodology/approach
This research uses a knowledge-mining strategy to enhance the detection and quantification of cardiac issues. Decision trees are used to form predictions of cardiovascular disorders, and these predictions are evaluated using training data and test results. The study also introduces a novel triple algorithm that combines three techniques (PSO, ABC and SVM) to process and merge the data. A neural network is then used to classify the data based on these three approaches. Real data on various aspects of cardiac disease are incorporated into the simulation.
Findings
The results of this study suggest that the proposed triple algorithm, using the combination of PSO, ABC and SVM, significantly improves the accuracy of predictions for cardiac disease. By processing and merging data using the triple algorithm, the neural network was able to effectively classify the data. The incorporation of real data on various aspects of cardiac disease in the simulation further enhanced the findings. This research contributes to the existing knowledge on cardiac disease prognosis and highlights the potential of leveraging past data for strategic forecasting in the health-care sector.
Originality/value
The originality of this research lies in the development of the triple algorithm, which combines multiple data mining strategies to improve prognosis accuracy for cardiac diseases. This approach differs from existing methods by using a combination of PSO, ABC, SVM, information gain, genetic algorithms and bacterial foraging optimization with the Gray Wolf Optimizer. The proposed technique offers a novel and valuable contribution to the field, enhancing the competitive position and overall performance of businesses in the health-care sector.
Rita Shakouri, Maziar Salahi and Sohrab Kordrostami
Abstract
Purpose
The purpose of this paper is to present a stochastic p-robust data envelopment analysis (DEA) model for decision-making units (DMUs) efficiency estimation under uncertainty. The main contribution of this paper consists of the development of a more robust system for the estimation of efficiency in situations of inputs uncertainty. The proposed model is used for the efficiency measurement of a commercial Iranian bank.
Design/methodology/approach
This paper proceeds in the following steps: first, the classical Charnes, Cooper and Rhodes (CCR) DEA model is briefly reviewed. The p-robust DEA model is then introduced, and the priority weight of each scenario is calculated for the output-oriented CCR DEA method. To compute the priority weights of criteria across discrete scenarios, the analytic hierarchy process (AHP) is used. To tackle the uncertainty of experts’ opinions, a synthetic technique based on both robust and stochastic optimization is applied. Finally, stochastic p-robust models are proposed for the estimation of efficiency, with particular attention paid to DEA models.
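The AHP step, deriving scenario priority weights from a pairwise-comparison matrix, is standard and can be sketched via the principal eigenvector. The comparison matrix below is hypothetical, not drawn from the paper.

```python
import numpy as np

def ahp_weights(P):
    """Priority weights from a positive reciprocal pairwise-comparison
    matrix P via the principal eigenvector (standard AHP)."""
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical 2-scenario comparison: scenario 1 judged 3x as important.
P = np.array([[1.0, 3.0],
              [1.0 / 3.0, 1.0]])
w = ahp_weights(P)
# → [0.75, 0.25]
```

These weights then enter the stochastic p-robust model as scenario probabilities.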
Findings
The proposed method provides a more encompassing measure of efficiency under the synthetic uncertainty approach. According to the results, the expected score, relative regret score and stochastic p-robust score are obtained for the DMUs. The applicability of the extended model is illustrated through the analysis of an Iranian commercial bank’s performance. It is also shown that the stochastic p-robust DEA model is a proper generalization of traditional DEA and attains the desired robustness level. In fact, the model yields the maximum possible efficiency score of a DMU under the overall permissible uncertainties, together with the minimal uncertainty level under the stochastic p-robustness measure required to achieve this score. Finally, an example shows that the objective values of the input and output models are not inverses of each other, as they are in classical DEA models.
Originality/value
This research showed that an enormous decrease in the maximum possible regret adds only a little to the expected efficiency; in other words, improvements in regret affect the expected efficiency only slightly. The advantage of this kind of modeling is that it permits a small sacrifice in the objective value to hedge better against uncertain cases that are commonly ignored.
Fateme Seihani Parashkouh, Sohrab Kordrostami, Alireza Amirteimoori and Armin Ghane-Kanafi
Abstract
Purpose
The purpose of this paper is to introduce an alternative model for measuring the relative efficiency of observations with undesirable products, to describe the reference set and to provide benchmarking.
Design/methodology/approach
In this paper, an alternative definition of weak disposability assumption is introduced to handle undesirable outputs. Actually, two types of undesirable outputs are addressed and a substitute definition of weak disposability is presented.
Findings
Using this assumption, a linear production technology set along with a performance analysis model is constructed to assess the relative efficiency of the decision-making units. To illustrate the application of the proposed radial approach, a real case on the transportation system of the USA during 1992-2009 is given.
Originality/value
To date, data envelopment analysis studies have handled undesirable outputs through the weak disposability assumption, defined as the proportional contraction of good and bad products, which implies the null-jointness of good and bad outputs. Therefore, the only way to produce no undesirable outputs is to produce zero desirable outputs, so the production process must be stopped even though this is not economically cost-effective. However, in some processes there are undesirable outputs that decrease by different percentages, so these undesirable outputs can reach zero while the good outputs remain strictly positive. In this situation, the good outputs are not null-joint with this type of bad outputs. In the current paper, a new definition of the weak disposability of outputs is presented in which two groups of undesirable outputs are considered: desirable outputs and the first kind of undesirable outputs decrease proportionally, whereas the reduction rate differs for the second kind of undesirable outputs. Hence, the null-joint assumption is removed from the production technology, and a new technology is proposed based on five postulates: inclusion of observations, free disposability of desirable outputs and inputs, new weak disposability, convexity and minimum extrapolation.
Mehrdad Fadaei PellehShahi, Sohrab Kordrostami, Amir Hossein Refahi Sheikhani and Marzieh Faridi Masouleh
Abstract
Purpose
Predicting the final status of an ongoing process, or a subsequent activity in a process, is an important aspect of process management. Semi-structured business processes cannot be predicted by precise mathematical methods; therefore, artificial intelligence is one of the successful alternatives. This study aims to propose a method that combines deep learning methods, in particular the recurrent neural network, with the Markov chain.
Design/methodology/approach
The proposed method applies the BestFirst algorithm for the search section and the CfsSubsetEval algorithm for the feature comparison section. This study focuses on the prediction systems of social insurance and tries to present a method that is less costly in producing real-world results based on the past history of an event.
Findings
The proposed method is simulated with real data obtained from Iranian Social Security Organization, and the results demonstrate that using the proposed method increases the memory utilization slightly more than the Markov method; however, the CPU usage time has dramatically decreased in comparison with the Markov method and the recurrent neural network and has, therefore, significantly increased the accuracy and efficiency.
Originality/value
This research tries to provide an approach capable of producing findings closer to the real world with less time and processing overhead, given the previous records of an event and the prediction systems of social insurance.
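The Markov-chain component, predicting the next activity of an ongoing process from historical event logs, can be sketched as follows; the activity names are hypothetical, and the recurrent-neural-network part of the hybrid is omitted.

```python
from collections import defaultdict, Counter

def fit_transitions(sequences):
    """Estimate first-order Markov transition probabilities from
    observed activity sequences (historical process logs)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def predict_next(trans, activity):
    """Most probable next activity, or None for an unseen state."""
    dist = trans.get(activity)
    return max(dist, key=dist.get) if dist else None

# Hypothetical insurance-claim logs (activity names are made up).
logs = [["register", "review", "pay"],
        ["register", "review", "reject"],
        ["register", "review", "pay"]]
trans = fit_transitions(logs)
predict_next(trans, "review")  # → "pay"
```

In the hybrid method, such transition estimates complement the recurrent network's predictions, trading a little memory for much lower CPU time.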
Shahrooz Fathi Ajirlo, Alireza Amirteimoori and Sohrab Kordrostami
Abstract
Purpose
The purpose of this paper is to propose a modified model for multi-stage processes in which there are intermediate measures between the stages, so that the new efficiency scores are more accurate. Conventional data envelopment analysis (DEA) models disregard the internal structures of peer decision-making units (DMUs) in evaluating their relative efficiency. Such an approach would cause managers to lose important DMU information. Therefore, in multistage processes, traditional DEA models encounter problems when intermediate measures are used for efficiency evaluation.
Design/methodology/approach
In this study, two-stage additive integer-valued DEA models were proposed for measuring inefficiency slacks in each stage and in the system as a whole.
Findings
Three models were proposed for measuring inefficiency slacks in each stage and in the system as a whole.
Originality/value
The advantage of the proposed models for multi-stage systems is that they can accurately determine the stages with the greatest weaknesses/strengths. By introducing an applied case from the Iranian power industry, the paper demonstrates the applications and advantages of the proposed models.
Seyed Mohamad Fakhr Mousavi, Alireza Amirteimoori, Sohrab Kordrostami and Mohsen Vaez-Ghasemi
Abstract
Purpose
As returns to scale (RTS) describes the long-run relationship between changes in outputs and increases in inputs, the purpose of this study is to answer the following questions: if proportionate changes are made to the inputs, what is the rate of change in the outputs with respect to the inputs’ variations in two-stage networks over the long term? How can quantitative RTS be investigated in two-stage networks? In other words, the purpose of this research is to introduce a different approach to estimate performance, RTS and scale economies (SE) in network structures.
Design/methodology/approach
This paper proposes a novel non-radial approach based on data envelopment analysis to analyze the performance and to investigate RTS and SE in two-stage processes.
Findings
The findings show that the range adjusted measure (RAM)/RTS approach can identify reference sets for overall systems and each stage. In addition, the models presented in this paper can classify decision-making units and determine the increasing/decreasing trends of RTS.
Originality/value
The majority of previous RTS studies have been examined in black-box structures and have been discussed in a radial framework. Therefore, in this study, RTS and SE in the two-stage networks are dealt with using an extended RAM approach. Actually, the efficiency and RTS for each stage and the overall model are calculated using the proposed technique.