Search results
1 – 10 of over 4,000 results
Joonho Na, Qia Wang and Chaehwan Lim
Abstract
Purpose
The purpose of this study is to analyze the environmental efficiency level and trend of the transportation sector in the upper–mid–downstream of the Yangtze River Economic Belt and the JingJinJi region in China and assess the effectiveness of policies for protecting the low-carbon environment.
Design/methodology/approach
This study uses the meta-frontier slack-based measure (SBM) approach to evaluate environmental efficiency, classifying the targeted regions into regional groups. First, this study employs the SBM with undesirable outputs to construct environmental efficiency measurement models for the four regions under the meta-frontier and the group frontiers, respectively. Then, this study uses the technology gap ratio to evaluate the gap between the group frontiers and the meta-frontier.
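The technology gap ratio mentioned above is simply the ratio of a unit's efficiency under the meta-frontier to its efficiency under its own group frontier, so a ratio of 1 means the group frontier locally coincides with the meta-frontier. A minimal sketch, with invented scores (none of the numbers come from the study):

```python
# Hedged illustration of the technology gap ratio (TGR) used in
# meta-frontier analysis: TGR = meta-frontier efficiency divided by
# group-frontier efficiency. All scores below are invented.

def technology_gap_ratio(meta_eff, group_eff):
    """Return TGR in (0, 1]; requires a positive group efficiency."""
    if group_eff <= 0:
        raise ValueError("group efficiency must be positive")
    return meta_eff / group_eff

# Invented efficiency scores for two regional groups (not from the study)
tgr_upstream = technology_gap_ratio(0.55, 0.80)
tgr_downstream = technology_gap_ratio(0.72, 0.75)
print(tgr_upstream, tgr_downstream)
```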
Findings
The analysis reveals several key findings: (1) the JingJinJi region and the downstream of the YEB achieved better overall production technology in transportation than the other two regions; (2) significant technology gaps in environmental efficiency were observed among these four regions in China; and (3) the downstream region of the YEB exhibited the lowest levels of energy consumption and excessive CO2 emissions.
Originality/value
To evaluate the differences in environmental efficiency resulting from regional and technological gaps in transportation, this study employs the meta-frontier model, which overcomes the limitations of traditional environmental efficiency methods. Furthermore, in practical terms, the study offers the advantage of observing the disparities in transportation efficiency between the Yangtze River Economic Belt and the Beijing–Tianjin–Hebei regions.
Taylor Boyd, Grace Docken and John Ruggiero
Abstract
Purpose
The purpose of this paper is to improve the estimation of the production frontier in cases where outliers exist. We focus on the case when outliers appear above the true frontier due to measurement error.
Design/methodology/approach
The authors use stochastic data envelopment analysis (SDEA) to allow observed points above the frontier. They supplement SDEA with distributional assumptions on efficiency and show that the true frontier in the presence of outliers can be derived.
Findings
This paper finds that the authors’ maximum likelihood approach outperforms super-efficiency measures. Using simulations, this paper shows that SDEA is a useful model for outlier detection.
Originality/value
The model developed in this paper is original; the authors add distributional assumptions to derive the optimal quantile with SDEA to remove outliers. The authors believe the paper will be of wide value because real-world data are often subject to outliers.
Abstract
Purpose
The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.
Design/methodology/approach
DEA is a methodology based on mathematical programming for the assessment of relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function and presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may be easily used with multiple inputs and outputs; a single efficiency score is obtained for each assessed organization; the technique ranks organizations based on relative efficiency; and, finally, it yields benchmarking information. When applied to a dataset of observations and variables (inputs and outputs), DEA models provide both benchmarking information and efficiency scores for each of the evaluated units. Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).

Technical inefficiency is typically measured in DEA as the distance between the observed unit and a "benchmarking" target on the estimated piecewise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets determined by the "furthest" efficient projection from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the evaluated unit, and consequently, some authors in the literature have suggested determining closest targets instead.
The general argument behind this idea is that closer targets suggest directions of improvement for the inputs and outputs of the inefficient units that may lead them to efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application on airlines, that it is possible to find substantial differences between the targets provided by the criterion used in traditional DEA models and those obtained when the criterion of closeness is used to determine projection points on the efficient frontier. The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models that minimize some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the "weakly" efficient frontier is formally defined through mathematical distances.
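The mathematical-programming core of DEA described above can be made concrete with a small example. The sketch below solves the standard input-oriented CCR envelopment model as a linear program with SciPy; the toy data are invented and this is the traditional radial model, not one of the least-distance variants surveyed here:

```python
# A minimal input-oriented CCR DEA model solved as a linear program.
# Data are invented for illustration; this sketches the standard
# envelopment form, not a closest-target model.
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Solves: min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                             sum_j lam_j * y_j >= y_o,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lam_1, ..., lam_n]
    c = np.zeros(n + 1)
    c[0] = 1.0
    # input constraints: lam @ X[:, i] - theta * X[o, i] <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # output constraints: -lam @ Y[:, r] <= -Y[o, r]
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Invented toy data: 3 DMUs, 1 input, 1 output
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [2.0], [3.0]])
scores = [round(ccr_input_efficiency(X, Y, o), 3) for o in range(3)]
print(scores)
```

The second DMU uses twice the input per unit of output of the best performers, so its radial score is 0.5; the target it is projected onto is the "furthest" radial projection that the closest-target literature seeks to replace.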
Findings
The interesting features of the determination of closest targets from a benchmarking point of view have, in recent times, generated increasing interest among researchers in the calculation of the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). In this paper, we therefore present a general classification of published contributions, mainly from a methodological perspective, and we additionally indicate avenues for further research on this topic. The approaches that we cite in this paper differ in the way the idea of similarity is made operational. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures; in turn, the aim is to globally minimize the DEA model slacks to determine the closest efficient targets. However, as we show later in the text, minimizing a mathematical distance in DEA is not an easy task, as it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.
Originality/value
To the best of our knowledge, this is the first survey on this topic.
Bonita L. Betters-Reed and Lynda L. Moore
Abstract
When we take the lens of race, ethnicity, gender, and class to the collected academic work on women business owners, what does it reveal? What do we really know? Are there differing definitions of success across segments of the women business owner demographic? Do the challenges faced by African American women entrepreneurs differ from those confronting white female entrepreneurs? Do immigrant women business owners face more significant institutional barriers than their counterparts who have been U.S. citizens for at least two generations? Are there similar reasons for starting their businesses?
Abstract
Purpose
Identifying the frontiers of a specific research field is one of the most basic tasks in bibliometrics, and research published in leading conferences is crucial to the data mining research community, yet few studies have focused on it. The purpose of this study is to detect the intellectual structure of data mining based on conference papers.
Design/methodology/approach
This study takes as a sample the papers from the nine top-ranked conferences in the data mining field listed by Google Scholar Metrics. Based on the number of papers, it first examines the annual volume of published documents and their distribution across conferences. Furthermore, from the perspective of keywords, CiteSpace was used to mine the conference papers and identify the frontiers of data mining, focusing on keyword term frequency, keyword betweenness centrality, keyword clustering and burst keywords.
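Keyword-based frontier detection of this kind starts from term-frequency and co-occurrence counts, which then feed centrality, clustering and burst analysis. A toy sketch with invented keyword lists (not the study's data):

```python
# Toy keyword term-frequency and co-occurrence counts, the raw inputs
# behind CiteSpace-style centrality and burst analysis. The keyword
# lists are invented for illustration.
from collections import Counter
from itertools import combinations

papers = [
    ["clustering", "classification"],
    ["clustering", "social network analysis"],
    ["classification", "recommendation"],
    ["clustering", "community detection"],
]

# term frequency: how often each keyword appears across papers
term_freq = Counter(kw for kws in papers for kw in kws)

# co-occurrence: unordered keyword pairs appearing in the same paper
cooccur = Counter(pair for kws in papers
                  for pair in combinations(sorted(kws), 2))

print(term_freq.most_common(2))
print(cooccur.most_common(1))
```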
Findings
The analysis showed that research interest in data mining followed a linear upward trend between 2007 and 2016. The frontier identification based on the conference papers revealed five research hotspots in data mining: clustering, classification, recommendation, social network analysis and community detection. The research contents embodied in the conference papers were also very rich.
Originality/value
This study detected the research frontier from leading data mining conference papers. Based on the keyword co-occurrence network, and along the four dimensions of keyword term frequency, betweenness centrality, clustering analysis and burst analysis, this paper identified and analyzed the research frontiers of the data mining discipline from 2007 to 2016.
Shih-Liang Chao and Yi-Hung Yeh
Abstract
Purpose
This study aims to measure the productivity of 21 major shipyards in China, South Korea and Japan.
Design/methodology/approach
Data envelopment analysis was applied to measure the productivity of shipyards. The contemporaneous and intertemporal productivity scores of each shipyard were measured. Additionally, the technical gaps among shipyards in China, South Korea and Japan were measured and compared.
Findings
The results indicate that Japan led the global shipbuilding industry in 2014 and South Korea dominated in 2015. Additionally, from 2014 to 2015, shipyards in South Korea and Japan maintained their levels of productivity. Comparatively, major shipyards in China made substantial progress from 2014 to 2015, revealing their strong ambition to improve productivity.
Originality/value
This study is the first to use a metafrontier framework to measure the technical gaps among shipyards in the major shipbuilding countries. The model and approach objectively analyze the productivity of major shipyards while taking their nationalities into account. Additionally, this study is the first to measure changes in the productivity of shipyards. By decomposing the metafrontier Malmquist productivity index, major shipyards were categorized into eight sets. The results of this study can provide a clear direction for shipyards to improve their productivity.
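The Malmquist productivity index underlying the decomposition above is the geometric mean of two productivity ratios, one measured against each period's technology. A hedged sketch with invented distance-function values (not computed from shipyard data):

```python
# Output-oriented Malmquist productivity index as a geometric mean of
# period-t and period-t+1 ratios. The distance-function values below
# are invented, not derived from the study's shipyard data.
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist index; d_a_b = distance of the period-b observation
    evaluated against the period-a technology frontier."""
    return math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))

# Hypothetical distances for one shipyard; an index above 1 indicates
# productivity growth between the two periods.
mi = malmquist(d_t_t=0.80, d_t_t1=0.96, d_t1_t=0.75, d_t1_t1=0.85)
print(round(mi, 3))
```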
Abstract
Purpose
The purpose of this paper is to empirically examine the relationship between intensity of competition and technical efficiency of large European container ports, accounting for regional diversities and spatial aspects of inter-port competition.
Design/methodology/approach
The analysis consists of applying a stochastic production frontier approach to a dataset of 77 large European container ports over the period 2002-2012, with inefficiency terms simultaneously modeled as a function of (among other factors) a constructed index of competitive intensity at different spatial levels.
Findings
The results indicate that there is no significant negative effect of competitive intensity on efficiency. In fact, for competing European ports within a proximity of 300 km, a higher level of competition is found to be associated with a higher level of technical efficiency.
Originality/value
The originality of the paper stems from its particular focus on European port regions and its novel findings in this context, which have implications for the discussions regarding pro-competitive port policy and regulation in the European Union.
Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto
Abstract
Purpose
The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.
Design/methodology/approach
The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors and assess a within-the-group level of productivity of the retailers.
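The two-step idea can be illustrated with a deliberately simplified sketch: group units on a structural variable, then score productivity relative to the best unit within each group. The data, threshold and names are invented, and a single-ratio score stands in for the paper's factorial clustering and DEA:

```python
# Simplified two-step sketch: (1) cluster retailers on a structural
# variable, (2) score productivity within each cluster relative to the
# best unit. All data and thresholds are invented for illustration.
retailers = {
    "r1": {"catchment_pop": 20_000, "sales_per_sqm": 310},
    "r2": {"catchment_pop": 22_000, "sales_per_sqm": 250},
    "r3": {"catchment_pop": 90_000, "sales_per_sqm": 540},
    "r4": {"catchment_pop": 95_000, "sales_per_sqm": 600},
}

# Step 1: crude clustering on catchment population (a stand-in for the
# data-driven factorial/cluster analysis described in the paper)
clusters = {}
for name, d in retailers.items():
    key = "large" if d["catchment_pop"] > 50_000 else "small"
    clusters.setdefault(key, []).append(name)

# Step 2: within-group relative productivity (group's best unit = 1.0)
scores = {}
for group in clusters.values():
    best = max(retailers[n]["sales_per_sqm"] for n in group)
    for n in group:
        scores[n] = retailers[n]["sales_per_sqm"] / best

print(scores)
```

Comparing each retailer only against structurally similar peers is what keeps the heterogeneity concern raised in the Findings under control.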
Findings
The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique achieves maximum within-group homogeneity and between-group heterogeneity while reducing the subjective bias and high dimensionality embedded in the use of Big Data.
Practical implications
The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.
Originality/value
This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.
Phong Hoang Nguyen and Duyen Thi Bich Pham
Abstract
Purpose
The paper aims to enrich previous findings for an emerging banking industry such as Vietnam, reporting the difference between the parametric and nonparametric methods when measuring cost efficiency. The purpose of the study is to assess the consistency in issuing policies to improve the cost efficiency of Vietnamese commercial banks.
Design/methodology/approach
The cost efficiency of banks is assessed through the data envelopment analysis (DEA) and the stochastic frontier analysis (SFA). Next, five tests are conducted in succession to analyze the differences in cost efficiency measured by these two methods, including the distribution, the rankings, the identification of the best and worst banks, the time consistency and the determinants of efficiency frontier. The data are collected from the annual financial statements of Vietnamese banks during 2005–2017.
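One of the consistency tests listed above, the comparison of rankings, is typically run as a rank correlation between the two methods' scores. A sketch with invented scores (not the Vietnamese bank data):

```python
# Rank-consistency check between DEA- and SFA-based efficiency scores
# using Spearman's rank correlation. The scores are invented for
# illustration, not taken from the study's bank data.
from scipy.stats import spearmanr

dea_scores = [0.91, 0.75, 0.60, 0.88, 0.52]  # hypothetical DEA scores
sfa_scores = [0.85, 0.72, 0.55, 0.90, 0.50]  # hypothetical SFA scores

rho, p_value = spearmanr(dea_scores, sfa_scores)
print(round(rho, 3))
```

A coefficient close to 1 indicates the two methods rank the banks similarly even when their score levels differ.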
Findings
The results show that the cost efficiency obtained under the SFA models is more consistent than under the DEA models. However, the DEA-based efficiency scores are more similar in ranking order and stability over time. The inconsistency in efficiency characteristics under two different methods reminds policy makers and bank administrators to compare and select the appropriate efficiency frontier measure for each stage and specific economic conditions.
Originality/value
This paper shows the need to control for heterogeneity over banking groups and time as well as for random noise and outliers when measuring the cost efficiency.
Jose F. Baños, Ana Rodriguez-Alvarez and Patricia Suarez-Cano
Abstract
Purpose
This paper aims to model the efficiency of labour offices belonging to the public employment services (PESs) in Spain using a stochastic matching frontier approach.
Design/methodology/approach
With this aim in mind, the authors apply a random parameter model approach to control for observed and unobserved heterogeneity.
Findings
Results indicate that the information criteria of the estimates improve when controlling for both observed and unobserved heterogeneity in the inefficiency term. Results also suggest that counsellors improve the productivity of labour offices, and that the share of unemployed skilled persons, of unemployed persons aged 44 or younger and of unemployed persons in the construction sector all affect the technical efficiency of PES offices.
Originality/value
The model extends previous specifications in the matching literature, which capture only observed heterogeneity. Moreover, as far as the authors know, this is the first paper to estimate a matching frontier for the Spanish case. Finally, the database is at the office level and includes the work carried out by counsellors, which is a novelty in this type of analysis for Spain.