Search results

1 – 7 of 7
Article
Publication date: 11 August 2023

Emmanouil G. Chalampalakis, Ioannis Dokas and Eleftherios Spyromitros

Abstract

Purpose

This study evaluates the banking systems of Portugal, Italy, Ireland, Greece and Spain (the PIIGS) during the financial crisis and the post-crisis period, from 2009 to 2018.

Design/methodology/approach

A conditional robust nonparametric frontier analysis (order-m estimators) is used to measure banking efficiency, combined with variables highlighting the effects of non-performing loans (NPLs). A truncated regression is then used to examine whether institutional, macroeconomic and financial variables affect bank performance differently. Unlike earlier studies, we use the Corruption Perception Index (CPI) as an institutional variable that affects banking-sector efficiency.
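As a rough illustration of the order-m idea behind the first stage, here is a minimal single-input, single-output Monte Carlo sketch. The study's conditional, robust estimator is richer than this; the peer count `m`, draw count `B`, and any data fed to it are illustrative assumptions only.

```python
import numpy as np

def order_m_efficiency(x, y, m=25, B=400, seed=0):
    """Monte Carlo order-m output efficiency (single input, single output).

    For each unit i, repeatedly draw m peers (with replacement) among units
    whose input does not exceed x_i, record the best output in each draw,
    and average over the B draws.  The score y_i / E[max] can exceed 1
    because an order-m frontier need not envelop every observation.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    scores = np.empty(len(x))
    for i in range(len(x)):
        peers = y[x <= x[i]]                    # units using no more input
        draws = rng.choice(peers, size=(B, m))  # B samples of m peer outputs
        scores[i] = y[i] / draws.max(axis=1).mean()
    return scores
```

Because the frontier is a partial (expected-maximum) frontier, it is robust to outliers: a single extreme observation shifts the benchmark far less than it would under full-envelopment estimators such as DEA.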

Findings

This research shows that the crisis affected each bank and country differently because of their varying efficiency levels. Most of the study variables (CPI, government debt-to-GDP ratio, inflation and bank size) significantly affect the banking efficiency measures.

Originality/value

The contribution of this article to the banking literature is twofold. First, it analyses the efficiency of the PIIGS banking systems from 2009 to 2018, focusing on NPLs. Second, it is the first empirical study to use probabilistic frontier analysis (order-m estimators) to evaluate the PIIGS banking systems.

Details

Journal of Economic Studies, vol. 51, no. 3
Type: Research Article
ISSN: 0144-3585

Book part
Publication date: 6 May 2024

Ezzeddine Delhoumi and Faten Moussa

Abstract

The purpose of this chapter is to examine banking efficiency using the meta-frontier function and to study group and subgroup differences in production technology. The study estimates technical efficiency (TE) and technology gap ratios (TGRs) for banks in Islamic countries. Starting from the convexity assumption on the meta-frontier production set, using the virtual meta-frontier within the nonparametric approach of Battese and Rao (2002), Battese et al. (2004), and O'Donnell et al. (2007, 2008), and then relaxing this assumption, the study investigates whether the two methods differ significantly. To address the deterministic nature of the nonparametric approach, the bootstrapping technique is applied. The first part of the chapter presents the analytical framework needed to define a meta-frontier function and estimate it with nonparametric data envelopment analysis (DEA), first under the convexity assumption on the production set and then with that assumption relaxed. TE and TGRs are then estimated in the concave and nonconcave meta-frontier cases by applying the bootstrap-DEA approach. The empirical part applies these methods to bank data to assess technical and technological performance and to test whether the two methods differ. Three groups of banks (commercial, investment and Islamic) in 17 Islamic countries are studied over a 16-year period, 1996 to 2011.
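The TE/TGR machinery can be illustrated with a minimal DEA sketch. This is not the chapter's bootstrap procedure, just a plain input-oriented CRS DEA solved as a linear program with SciPy, plus the TGR defined as the ratio of meta-frontier to group-frontier scores; the use of SciPy and any data passed in are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y):
    """Input-oriented CRS (CCR) DEA score for each unit.

    For unit i:  min theta  s.t.  sum_j lam_j x_j <= theta * x_i,
                                  sum_j lam_j y_j >= y_i,  lam >= 0.
    X is (n, n_inputs), Y is (n, n_outputs).
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        c = np.r_[1.0, np.zeros(n)]               # variables: [theta, lam]
        A_in = np.c_[-X[i][:, None], X.T]         # X.T lam - theta*x_i <= 0
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # -Y.T lam <= -y_i
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[i]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[i] = res.fun
    return scores

def technology_gap_ratio(groups_X, groups_Y):
    """TGR_i = meta-frontier TE / group-frontier TE (<= 1 under convexity)."""
    meta_te = dea_efficiency(np.vstack(groups_X), np.vstack(groups_Y))
    group_te = np.concatenate([dea_efficiency(Xg, Yg)
                               for Xg, Yg in zip(groups_X, groups_Y)])
    return meta_te / group_te
```

A bootstrap layer, as in the chapter, would resample (or smooth-resample) the units, recompute these scores on each pseudo-sample, and use the resulting distribution to bias-correct the point estimates.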

Details

The Emerald Handbook of Ethical Finance and Corporate Social Responsibility
Type: Book
ISBN: 978-1-80455-406-7

Book part
Publication date: 5 April 2024

Zhichao Wang and Valentin Zelenyuk

Abstract

Estimation of (in)efficiency has become a popular practice, with applications in virtually every sector of the economy over the last few decades. Many different models have been deployed for such endeavors, with Stochastic Frontier Analysis (SFA) models dominating the econometric literature. Among the most popular variants of SFA are Aigner, Lovell, and Schmidt (1977), which launched the literature, and Kumbhakar, Ghosh, and McGuckin (1991), which pioneered the branch modelling the (in)efficiency term via so-called environmental variables, or determinants of inefficiency. Focusing on these two prominent approaches in SFA, the goal of this chapter is to understand the production inefficiency of public hospitals in Queensland. In doing so, a recognized yet often overlooked phenomenon emerges: dramatically different estimates (and consequently very different policy implications) can be obtained from different models, even within one paradigm of SFA models. This underscores the importance of exploring many alternative models, and scrutinizing their assumptions, before drawing policy implications, especially when such implications may substantially affect people's lives, as is the case in the hospital sector.
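For orientation, the normal/half-normal log-likelihood of Aigner, Lovell, and Schmidt (1977) can be written very compactly. This is a generic textbook sketch, not the authors' Queensland specification, and the parameterization (sigma, lambda) is one common convention among several.

```python
import numpy as np
from scipy.stats import norm

def sfa_loglik(beta, sigma, lam, y, X):
    """Log-likelihood of the ALS (1977) normal/half-normal production SFA.

    y = X @ beta + v - u,  v ~ N(0, sigma_v^2),  u ~ |N(0, sigma_u^2)|,
    with sigma^2 = sigma_v^2 + sigma_u^2 and lam = sigma_u / sigma_v.
    Setting lam = 0 switches inefficiency off and recovers the plain
    normal (OLS) log-likelihood.
    """
    eps = y - X @ beta                      # composed error v - u
    return np.sum(np.log(2.0 / sigma)
                  + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))
```

Maximizing this over (beta, sigma, lam) gives the ALS estimator; the Kumbhakar, Ghosh, and McGuckin (1991) branch would instead let the inefficiency distribution's location depend on environmental variables.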

Book part
Publication date: 5 April 2024

Taining Wang and Daniel J. Henderson

Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation, to avoid the non-negligible estimation bias the transformation can induce. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants; it also incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both the stochastic noise and the inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, depending on a flexible set of determinants via a positive parametric function. As a result, technical efficiency is constructed without relying on an assumed distribution for the composite error. The model provides flexible structures for both the production frontier and the inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator combines series-based nonlinear least squares estimation for the unknown parameters with kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries from 1970 to 2019.
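The chapter's first ingredient, fitting a CES frontier in levels rather than in logs, can be sketched with ordinary nonlinear least squares. The data and starting values below are made up for illustration, and the full semiparametric estimator is far richer than this.

```python
import numpy as np
from scipy.optimize import curve_fit

def ces(X, A, delta, rho):
    """CES frontier kept in levels (no log-transformation), so an additive
    error stays additive:
        f(K, L) = A * (delta * K**rho + (1 - delta) * L**rho) ** (1 / rho)
    """
    K, L = X
    return A * (delta * K**rho + (1.0 - delta) * L**rho) ** (1.0 / rho)

# Fit by nonlinear least squares on hypothetical input-output data.
rng = np.random.default_rng(42)
K, L = rng.uniform(1.0, 3.0, 200), rng.uniform(1.0, 3.0, 200)
y = ces((K, L), 2.0, 0.4, 0.5)          # noise-free for illustration
popt, _ = curve_fit(ces, (K, L), y, p0=(1.0, 0.5, 0.3))
```

Estimating in levels means the objective is the sum of squared additive residuals; a log-transformed fit would instead minimize squared log-residuals, which is what introduces the bias the chapter seeks to avoid.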

Book part
Publication date: 5 April 2024

Christine Amsler, Robert James, Artem Prokhorov and Peter Schmidt

Abstract

The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by adding auxiliary information to the conditioning set. It considers two types of stochastic frontier models. The first is a panel data model in which composed errors from past and future time periods contain information about contemporaneous technical inefficiency. The second arises when the stochastic frontier model is augmented by input-ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Numerous simulations and an illustrative empirical example are provided.
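The baseline JLMS predictor being improved upon has a closed form in the normal/half-normal case; the following is a generic sketch of that baseline, not of the chapter's augmented conditioning sets.

```python
import numpy as np
from scipy.stats import norm

def jlms_inefficiency(eps, sigma_u, sigma_v):
    """JLMS (1982) predictor E[u | eps] for the normal/half-normal SFA.

    eps = v - u is the composed residual of a production frontier, with
        mu* = -eps * sigma_u^2 / sigma^2,
        sigma*^2 = sigma_u^2 * sigma_v^2 / sigma^2,
        E[u | eps] = mu* + sigma* * phi(mu*/sigma*) / Phi(mu*/sigma*).
    Larger eps (better apparent performance) gives smaller predicted u.
    """
    s2 = sigma_u**2 + sigma_v**2
    mu_star = -eps * sigma_u**2 / s2
    sig_star = np.sqrt(sigma_u**2 * sigma_v**2 / s2)
    z = mu_star / sig_star
    return mu_star + sig_star * norm.pdf(z) / norm.cdf(z)
```

The chapter's question is whether conditioning on more than the contemporaneous eps (past and future composed errors, or input-ratio equations) sharpens this prediction.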

Book part
Publication date: 5 April 2024

Hung-pin Lai

Abstract

The standard method for estimating a stochastic frontier (SF) model is the maximum likelihood (ML) approach, with distributional assumptions of a symmetric two-sided stochastic error v and a one-sided inefficiency random component u. When v or u has a nonstandard distribution, for example when v follows a generalized t distribution or u has a χ2 distribution, the likelihood function can be complicated or intractable. This chapter introduces indirect inference for estimating SF models, in which only least squares estimation is used. There is no need to derive the density or likelihood function, so models with complicated distributions are easier to handle in practice. The author examines the finite-sample performance of the proposed estimator and compares it with the standard ML estimator as well as the maximum simulated likelihood (MSL) estimator using Monte Carlo simulations, finding that the indirect inference estimator performs quite well in finite samples.
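The indirect-inference recipe (simulate from the structural model, then match least-squares-style auxiliary statistics, with no likelihood evaluated) can be sketched for the simplest normal/half-normal SF model with only an intercept. Everything here (choice of auxiliary statistics, starting values, sample sizes) is an illustrative assumption, not the chapter's specification.

```python
import numpy as np
from scipy.optimize import minimize

def aux_stats(y):
    """Auxiliary statistics: least-squares-style quantities to be matched."""
    e = y - y.mean()
    return np.array([y.mean(), e.std(), np.mean(e**3)])

def simulate(params, z, w):
    """y = b0 + v - u with v = sv*z and u = su*|w|; z, w are fixed N(0,1) draws."""
    b0, su, sv = params
    return b0 + abs(sv) * z - abs(su) * np.abs(w)

def indirect_inference(y_obs, n_sim=20000, seed=1):
    """Match simulated auxiliary stats to the observed ones.

    Common random numbers (z, w drawn once) make the objective a smooth
    function of the parameters, so a derivative-free optimizer suffices.
    """
    rng = np.random.default_rng(seed)
    z, w = rng.standard_normal(n_sim), rng.standard_normal(n_sim)
    target = aux_stats(y_obs)
    obj = lambda p: np.sum((aux_stats(simulate(p, z, w)) - target) ** 2)
    res = minimize(obj, x0=np.array([0.5, 0.8, 0.8]), method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-10, "maxiter": 5000})
    b0, su, sv = res.x
    return b0, abs(su), abs(sv)
```

The three auxiliary statistics (mean, spread, skewness) just identify the three parameters; replacing the half-normal simulator with a χ2 or generalized-t one requires no new derivations, which is the method's selling point.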

Book part
Publication date: 5 April 2024

Ziwen Gao, Steven F. Lehrer, Tian Xie and Xinyu Zhang

Abstract

Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.
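As a stand-in for the chapter's H-MAHAR estimator, here is a sketch of the related jackknife model-averaging idea of Hansen and Racine (2012) applied to HAR-style designs. That criterion is valid under heteroskedasticity of unknown form, but it is not the authors' estimator, and the lag windows and data below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def har_design(rv, lags=(1, 5, 22)):
    """HAR regressors: averages of realized variance over each lag window."""
    t0 = max(lags)
    X = np.column_stack([np.ones(len(rv) - t0)] +
                        [np.array([rv[t - l:t].mean()
                                   for t in range(t0, len(rv))])
                         for l in lags])
    return X, rv[t0:]

def jackknife_weights(y, designs):
    """Model-averaging weights on the simplex minimizing the jackknife
    (leave-one-out) cross-validation criterion across candidate designs."""
    n, k = len(y), len(designs)
    E = np.empty((n, k))                     # leave-one-out residuals
    for m, X in enumerate(designs):
        H = X @ np.linalg.pinv(X.T @ X) @ X.T
        E[:, m] = (y - H @ y) / (1.0 - np.diag(H))
    obj = lambda w: np.mean((E @ w) ** 2)    # jackknife CV criterion
    res = minimize(obj, np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0, 1)] * k,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x
```

The averaged forecast is then the weight-combined fitted value of the candidate models; the chapter's contribution lies in establishing asymptotic optimality and weight-convergence results for its specific heteroskedastic HAR setting.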
