Search results
Cristian Barra and Pasquale Marcello Falcone
Abstract
Purpose
The paper addresses the following research questions: does institutional quality improve countries' environmental efficiency, and which pillars of institutional quality drive this improvement?
Design/methodology/approach
By specifying a directional distance function within a stochastic frontier framework, in which GHG emissions are treated as the undesirable (bad) output and GDP as the desirable one, the work computes environmental efficiency as part of estimating a production function for European countries over three decades.
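As an illustration of the directional distance idea in the paragraph above, the following minimal sketch computes the largest simultaneous expansion of the good output (GDP) and contraction of the bad output (GHG) along a direction vector, given a hypothetical frontier point. This is a geometry-only illustration, not the paper's stochastic frontier estimator; all numbers are invented.

```python
# Directional distance for one observation (y = good output, b = bad output),
# measured toward an assumed frontier point (y_star, b_star) along the
# direction g = (g_y, g_b). Illustrative only; the paper estimates the
# frontier econometrically rather than taking it as given.
def directional_distance(y, b, y_star, b_star, g_y=1.0, g_b=1.0):
    """Largest beta with y + beta*g_y <= y_star and b - beta*g_b >= b_star."""
    return min((y_star - y) / g_y, (b - b_star) / g_b)

# A country producing GDP 100 with GHG 50, against a frontier point (120, 30):
beta = directional_distance(y=100.0, b=50.0, y_star=120.0, b_star=30.0)
# beta = 20: output could expand and emissions contract by 20 units each
```

A beta of zero would indicate an efficient observation; larger values indicate more slack in both dimensions.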
Findings
The findings confirm that high- and upper-middle-income countries have higher environmental efficiency than lower-middle-income countries. In this context, institutional quality proves particularly important in improving environmental efficiency for high-income countries.
Originality/value
This article analyzes the role of different dimensions of institutional quality in European countries' performance in mitigating GHGs (undesirable output) while raising economic performance through GDP (desirable output).
Highlights
The paper addresses the following research question: does institutional quality improve countries' environmental efficiency?
We adopt a directional distance function within a stochastic frontier framework, considering 40 European economies over a 30-year interval.
The findings confirm that high- and upper-middle-income countries have higher environmental efficiency than lower-middle-income countries.
Institutional quality proves important in improving environmental efficiency for high-income countries, while performance decreases for lower-middle-income countries.
Emerson Norabuena-Figueroa, Roger Rurush-Asencio, K. P. Jaheer Mukthar, Jose Sifuentes-Stratti and Elia Ramírez-Asís
Abstract
The development of information technologies has transformed human resource management from conventional personnel management into a modern, data-driven discipline. Data mining technology, widely used in many applications including those operating on the web, relies on clustering algorithms as a key component. Web intelligence is a recent academic field that calls for sophisticated analytics and machine learning techniques to facilitate information discovery, particularly on the web. Human resource data gathered from the web are typically enormous, highly complex, dynamic, and unstructured, so traditional clustering methods are ineffective and need to be enhanced. Swarm intelligence, a subset of nature-inspired computing, addresses this difficulty by extending standard clustering algorithms with optimization capabilities. We collect the initial raw human resource data and preprocess them through data cleaning, data normalization, and data integration. The proposed K-C-means data-driven cuckoo bat optimization algorithm (KCM-DCBOA) is used to cluster the human resource data. Feature extraction is performed with principal component analysis (PCA), and classification of the human resource data is performed with a support vector machine (SVM). The proposed approach was compared with other approaches from the literature. According to the experimental findings, it offers highly promising performance in terms of clustering quality and execution time.
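The pipeline described above (normalize, then cluster) can be sketched with plain k-means standing in for the proposed KCM-DCBOA; the cuckoo-bat optimizer, PCA, and SVM stages are not reproduced here, and the two-blob data set is synthetic.

```python
import numpy as np

# Minimal k-means, used here only as a stand-in for the paper's KCM-DCBOA.
def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        new_centers = []
        for j in range(k):
            pts = X[labels == j]
            new_centers.append(pts.mean(0) if len(pts) else centers[j])
        centers = np.array(new_centers)
    return labels, centers

# Synthetic "HR records": two well-separated groups of 2-d feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
X = (X - X.mean(0)) / X.std(0)        # the normalization/preprocessing step
labels, centers = kmeans(X, k=2)
```

On separated data like this, the two groups are recovered; the swarm-intelligence variants in the paper aim to make such clustering robust on messier web-scale HR data.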
Bahman Arasteh and Ali Ghaffari
Abstract
Purpose
The main goals of this study are to reduce the number of generated mutants by clustering redundant mutants, thereby reducing execution time and the overall cost of mutation testing.
Design/methodology/approach
In this study, a method is suggested to identify and prune redundant mutants. First, the program source code is analyzed by the developed parser to filter out effectless instructions; then the remaining instructions are mutated using the standard mutation operators. The single-line mutants are partially executed by the developed instruction evaluator. Next, a clustering method groups the single-line mutants that produce the same results, and only one mutant per cluster is executed completely.
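The clustering step can be sketched as follows, with mutants modeled as Python functions and a single probe input standing in for the partial execution of a mutated instruction; the actual method works on Java source through its own parser and stack-based evaluator.

```python
# Group single-line mutants by the value their mutated instruction produces on
# a probe input; mutants in the same group behave identically at that point,
# so only one representative per group needs a full test run.
def cluster_mutants(mutants, probe_input):
    groups = {}
    for name, fn in mutants.items():
        groups.setdefault(fn(probe_input), []).append(name)
    return list(groups.values())

# Hypothetical mutants of the instruction "x + 2" under arithmetic operator
# replacement (AOR), in the spirit of MuJava's standard operators:
mutants = {
    "AOR_plus":  lambda x: x + 2,   # original behavior
    "AOR_minus": lambda x: x - 2,
    "AOR_mul":   lambda x: x * 2,   # coincides with x + 2 when x == 2
}
clusters = cluster_mutants(mutants, probe_input=2)
# Two clusters: {AOR_plus, AOR_mul} (both yield 4) and {AOR_minus} (yields 0)
```

In practice the partial evaluation runs over the real test data rather than a single probe value, but the grouping principle is the same.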
Findings
The results of experiments on the Java benchmarks indicate that the proposed method causes a 53.51 per cent reduction in the number of mutants and a 57.64 per cent time reduction compared to similar experiments in the MuJava and MuClipse tools.
Originality/value
Developing a classifier that takes the program source code and, using a dependency graph, classifies the program's instructions into effective and effectless classes; filtering out the effectless instructions reduces the total number of generated mutants.
Developing and implementing an instruction parser and instruction-level mutant generator for Java programs; the mutant generator takes an instruction of the original program as a string and generates its single-line mutants based on the standard mutation operators in MuJava.
Developing a stack-based evaluator that takes an instruction (original or mutant) and the test data and evaluates its result without executing the whole program.
Taining Wang and Daniel J. Henderson
Abstract
A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation, to avoid the non-negligible estimation bias such a transformation induces. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both stochastic noise and inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, modeled as a positive parametric function of a wide range of candidate determinants. As a result, technical efficiency is constructed without relying on an assumed distribution for the composite error. The model provides flexible structures for both the production frontier and inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves series-based nonlinear least squares estimation for the unknown parameters and kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries from 1970 to 2019.
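A minimal sketch of the first ingredient, estimating a CES production function in levels rather than logs by nonlinear least squares, on synthetic two-input data; the semiparametric technology function and the inefficiency term are omitted, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# CES production function in levels: y = A * (d*K^rho + (1-d)*L^rho)^(1/rho).
def ces(X, A, delta, rho):
    K, L = X
    return A * (delta * K**rho + (1 - delta) * L**rho) ** (1 / rho)

# Synthetic panel-like data with known parameters and small additive noise.
rng = np.random.default_rng(0)
K, L = rng.uniform(1, 10, 200), rng.uniform(1, 10, 200)
y = ces((K, L), A=2.0, delta=0.4, rho=0.5) + rng.normal(0, 0.05, 200)

# Nonlinear least squares in levels (no log-transformation), with bounds
# keeping delta and rho in an economically sensible region.
(A_hat, d_hat, r_hat), _ = curve_fit(
    ces, (K, L), y, p0=[1.0, 0.5, 0.3],
    bounds=([0.1, 0.05, 0.05], [10.0, 0.95, 0.95]),
)
```

With modest noise the recovered (A, delta, rho) sit close to the generating values; the paper's full estimator adds the series/kernel machinery on top of this kind of levels fit.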
Abstract
Because of its highly leveraged nature, a bank is acutely exposed to the credit risk it inherently bears; as a result, managing credit risk is among a bank's ultimate responsibilities. In this chapter, we examine how efficiently banks manage their credit risk via data envelopment analysis (DEA), a powerful tool widely used in the decision/management science area. Among various existing versions, our DEA is a two-stage, dynamic model that captures how each bank performs relative to its peers in terms of value creation and credit risk control. Using data from the largest 22 banks in the United States over the period 1996 to 2013, we identify leading banks such as First Bank Systems and Bank of New York Mellon before and after mergers and acquisitions, respectively. With the goal of preventing financial crises such as the one that occurred in 2008, a conceptual model of credit risk reduction and management (CRR&M) is proposed in the final section of this study. Discussions of strategy formulation at both the individual-bank level and the national level are provided. With the help of our two-stage DEA-based decision support systems and CRR&M-driven strategies, policy- and decision-makers in a banking sector can identify improvement opportunities in value creation and risk mitigation. The tool and procedures presented in this work will help banks worldwide manage the unknown and become more resilient to potential credit crises in the 21st century.
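As background on the tool, a basic input-oriented CCR DEA score (a simplified, single-stage stand-in for the chapter's two-stage dynamic model) can be computed as a small linear program; the three toy "banks" below, with one input and one output each, are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA for unit o: minimize theta subject to
#   sum_j lambda_j * x_j <= theta * x_o   (inputs)
#   sum_j lambda_j * y_j >= y_o           (outputs),  lambda >= 0.
def dea_score(X, Y, o):
    n = len(X)
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A, b = [], []
    for i in range(X.shape[1]):                       # input constraints
        A.append(np.r_[-X[o, i], X[:, i]]); b.append(0.0)
    for r in range(Y.shape[1]):                       # output constraints
        A.append(np.r_[0.0, -Y[:, r]]); b.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: input could be operating cost, output could be value created.
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[1.0], [2.0], [1.0]])
scores = [dea_score(X, Y, o) for o in range(3)]
# Banks 0 and 1 lie on the frontier (score 1.0); bank 2 scores 0.5,
# i.e. it could produce its output with half its input.
```

The chapter's model layers a second stage (credit risk control) and a dynamic link on top of this basic efficiency measurement.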
Abstract
Purpose
This study examines how health-conscious consumers utilize nutrition facts panel labels when purchasing food products, focusing specifically on the dimension of ethical evaluation. It aims to understand how ethical considerations influence the decision-making process of consumers who prioritize health. By analyzing the impact of ethical evaluation on label usage, the study sheds light on the significance of ethics in consumer behavior in the context of purchasing packaged edible oil.
Design/methodology/approach
Empirical data were collected using an online survey and a non-ordered questionnaire. In total, 469 valid responses were obtained. The study used SPSS version 27.0 and SmartPLS version 3 for demographic analysis and structural equation modeling.
Findings
The findings suggest that three factors (perceived benefits, perceived threats, and nutrition self-efficacy) positively impact the use of nutrition facts panel (NFP) labels, whereas perceived barriers negatively influence their use. In addition, ethical evaluation mediates the usage of NFP labels.
Practical implications
In the health belief model, ethical evaluation functions as a mediator and strengthens the influence of health beliefs on NFP label use. This study provides a framework for marketers to promote consumer health consciousness by encouraging consumers to use NFP labels.
Originality/value
This study is one of the first attempts to demonstrate that ethical evaluation mediates the relationship between health beliefs and the use of nutrition labels.
Zhichao Wang and Valentin Zelenyuk
Abstract
Estimation of (in)efficiency has become a popular practice, with applications in virtually every sector of the economy over the last few decades. Many different models have been deployed for such endeavors, with Stochastic Frontier Analysis (SFA) models dominating the econometric literature. Among the most popular variants of SFA are Aigner, Lovell, and Schmidt (1977), which launched the literature, and Kumbhakar, Ghosh, and McGuckin (1991), which pioneered the branch that models the (in)efficiency term via so-called environmental variables, or determinants of inefficiency. Focusing on these two prominent approaches in SFA, the goal of this chapter is to understand the production inefficiency of public hospitals in Queensland. In doing so, a recognized yet often overlooked phenomenon emerges: dramatically different results, and consequently very different policy implications, can be derived from different models, even within one paradigm of SFA models. This emphasizes the importance of exploring many alternative models, and scrutinizing their assumptions, before drawing policy implications, especially when such implications may substantially affect people's lives, as is the case in the hospital sector.
Huaxiang Song, Chai Wei and Zhou Yong
Abstract
Purpose
The paper tackles the classification of Remote Sensing Images (RSIs), which presents a significant challenge for computer algorithms due to the inherent characteristics of clustered ground objects and noisy backgrounds. Recent research typically leverages larger-volume models to achieve advanced performance; however, the operating environments of remote sensing commonly cannot provide unconstrained computational and storage resources, which calls for lightweight algorithms with exceptional generalization capabilities.
Design/methodology/approach
This study introduces an efficient knowledge distillation (KD) method to build a lightweight yet precise convolutional neural network (CNN) classifier. This method also aims to substantially decrease the training time expenses commonly linked with traditional KD techniques. This approach entails extensive alterations to both the model training framework and the distillation process, each tailored to the unique characteristics of RSIs. In particular, this study establishes a robust ensemble teacher by independently training two CNN models using a customized, efficient training algorithm. Following this, this study modifies a KD loss function to mitigate the suppression of non-target category predictions, which are essential for capturing the inter- and intra-similarity of RSIs.
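For context, the standard temperature-scaled logit-distillation loss that such modifications build on can be sketched as follows; this is the generic Hinton-style KD loss, not the paper's modified variant, and the logits are invented.

```python
import numpy as np

# Numerically stable softmax over the last axis.
def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# Temperature-scaled KL divergence between softened teacher and student
# distributions; T*T rescales gradients as in standard logit distillation.
# High T flattens the distributions, so non-target categories (which the
# paper argues carry the inter-/intra-similarity of RSIs) contribute more.
def kd_loss(student_logits, teacher_logits, T=4.0):
    p = softmax(teacher_logits / T)          # softened teacher targets
    q = softmax(student_logits / T)          # softened student predictions
    return T * T * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

t = np.array([[4.0, 1.0, 0.5]])              # hypothetical teacher logits
s = np.array([[3.0, 1.5, 0.2]])              # hypothetical student logits
loss = kd_loss(s, t)                         # positive while s differs from t
```

The paper's contribution is a reformulation of this objective (plus the ensemble-teacher training) so that non-target predictions are not suppressed during transfer.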
Findings
This study validated the student model, termed KD-enhanced network (KDE-Net), obtained through the KD process on three benchmark RSI data sets. The KDE-Net surpasses 42 other state-of-the-art methods in the literature published from 2020 to 2023. Compared to the top-ranked method’s performance on the challenging NWPU45 data set, KDE-Net demonstrated a noticeable 0.4% increase in overall accuracy with a significant 88% reduction in parameters. Meanwhile, this study’s reformed KD framework significantly enhances the knowledge transfer speed by at least three times.
Originality/value
This study illustrates that the logit-based KD technique can effectively develop lightweight CNN classifiers for RSI classification without substantial sacrifices in computation and storage costs. Compared to neural architecture search or other methods aiming to provide lightweight solutions, this study’s KDE-Net, based on the inherent characteristics of RSIs, is currently more efficient in constructing accurate yet lightweight classifiers for RSI classification.
Mohd. Nishat Faisal, Lamay Bin Sabir and Khurram Jahangir Sharif
Abstract
Purpose
This study has two major objectives: first, to comprehensively review the literature on transparency in supply chain management; second, based on a critical analysis of the literature, to identify the attributes and sub-attributes of supply chain transparency and develop a numerical measure to quantify transparency in supply chains.
Design/methodology/approach
A systematic literature review (SLR) was conducted using the PRISMA approach. A search of the SCOPUS database covering the past eighteen years yielded 249 papers that trace the major developments in the domain of supply chain transparency. Subsequently, a graph-theoretic approach is applied to quantify transparency in supply chains, and the proposed index is evaluated for case supply chains from the pharma and dairy sectors.
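Graph-theoretic index methods of this kind typically evaluate the permanent of a matrix whose diagonal holds attribute scores and whose off-diagonal entries hold pairwise interdependencies; the sketch below uses four hypothetical attributes and invented numbers, not the paper's identified attributes or calibrated values.

```python
import numpy as np
from itertools import permutations

# Permanent of a square matrix, computed by brute force over permutations
# (fine for the small attribute matrices used in graph-theoretic indices).
def permanent(M):
    n = len(M)
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# Hypothetical transparency attribute matrix: diagonal = attribute scores,
# off-diagonal = strength of interdependence between attributes.
M = np.array([
    [7, 3, 2, 4],   # e.g. traceability
    [2, 6, 3, 3],   # e.g. information disclosure
    [3, 2, 8, 2],   # e.g. visibility
    [2, 4, 3, 5],   # e.g. sustainability reporting
], dtype=float)
index = permanent(M)   # higher permanent => higher transparency index
```

Unlike a determinant, the permanent has no sign cancellation, so every attribute and interaction contributes positively, which is why graph-theoretic indices of this family use it for benchmarking.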
Findings
The SLR shows that supply chain transparency research has evolved from mere tracking and tracing of products towards sustainable development of the whole value chain. The research identifies four major attributes, and their sub-attributes, that influence transparency in supply chains; these are used to develop the transparency index. Applying the proposed index to the two sectors highlights areas that need immediate attention to improve transparency in the case supply chains.
Originality/value
This paper traces the development of transparency research in supply chains using the PRISMA approach for the SLR. In addition, the mathematical model developed to quantify supply chain transparency is a novel contribution that can help benchmark best practices in the industry. Further, the transparency index helps identify specific areas that need attention to improve transparency in supply chains.
Abstract
Purpose
This study aims to solve the problem of job scheduling and multi automated guided vehicle (AGV) cooperation in intelligent manufacturing workshops.
Design/methodology/approach
In this study, an algorithm for job scheduling and cooperative operation of multiple AGVs is designed. In the first part, with the goal of minimizing total processing time and total power consumption, a niche multi-objective evolutionary algorithm determines the arrangement of processing tasks on the different machines. In the second part, AGVs are dispatched to transport workpieces, and an improved ant colony algorithm generates each AGV's initial path. In the third part, to avoid path conflicts between running AGVs, the authors propose a simple priority-based waiting strategy to avoid collisions.
Findings
Experiments show that the solution effectively handles job scheduling and multi-AGV operation problems in the workshop.
Originality/value
In this paper, a collaborative work algorithm is proposed that combines the job scheduling and AGV operation problems so that the research results adapt to the real job environment of the workshop.