Search results

1 – 10 of 11
Book part
Publication date: 5 April 2024

Hung-pin Lai


Abstract

The standard method to estimate a stochastic frontier (SF) model is the maximum likelihood (ML) approach, with distributional assumptions of a symmetric two-sided stochastic error v and a one-sided inefficiency random component u. When v or u has a nonstandard distribution, such as when v follows a generalized t distribution or u has a χ2 distribution, the likelihood function can be complicated or intractable. This chapter introduces the use of indirect inference to estimate SF models, in which only least squares estimation is required. There is no need to derive the density or likelihood function, so it is easier to handle a model with complicated distributions in practice. The author examines the finite-sample performance of the proposed estimator and compares it with the standard ML estimator as well as the maximum simulated likelihood (MSL) estimator using Monte Carlo simulations. The indirect inference estimator is found to perform quite well in finite samples.
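
The mechanics of indirect inference can be sketched in a few lines: simulate data from the structural model at candidate parameter values, fit a simple auxiliary model (here OLS coefficients plus residual moments) to both the observed and simulated data, and choose the structural parameters that make the two sets of auxiliary estimates agree. The sketch below uses a normal/half-normal specification purely for illustration (the chapter's point is that the same machinery works when the distributions render the likelihood intractable); all function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(beta, sigma_v, sigma_u, x, rng):
    # composed error: symmetric noise v minus one-sided inefficiency u
    v = rng.normal(0.0, sigma_v, size=len(x))
    u = np.abs(rng.normal(0.0, sigma_u, size=len(x)))  # half-normal
    return beta[0] + beta[1] * x + v - u

def aux_stats(y, x):
    # auxiliary model: OLS intercept/slope plus residual variance and skewness
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return np.array([b[0], b[1], np.mean(e**2), np.mean(e**3)])

def ii_objective(theta, s_obs, x, n_sim=20):
    beta, sv, su = theta[:2], abs(theta[2]), abs(theta[3])
    rng_s = np.random.default_rng(1)  # common random numbers across evaluations
    sims = [aux_stats(simulate(beta, sv, su, x, rng_s), x) for _ in range(n_sim)]
    d = s_obs - np.mean(sims, axis=0)
    return d @ d

# generate one data set and recover the parameters by matching auxiliary statistics
x = rng.uniform(0, 2, 500)
y = simulate([1.0, 0.8], 0.2, 0.5, x, rng)
s_obs = aux_stats(y, x)
res = minimize(ii_objective, x0=[0.5, 0.5, 0.3, 0.3], args=(s_obs, x),
               method="Nelder-Mead")
beta0_hat, beta1_hat = res.x[:2]
```

Because only least squares fits of the auxiliary model are needed, replacing the half-normal draw with draws from any simulable distribution leaves the estimation code unchanged.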

Book part
Publication date: 5 April 2024

Feng Yao, Qinling Lu, Yiguo Sun and Junsen Zhang


Abstract

The authors propose to estimate a varying coefficient panel data model with different smoothing variables and fixed effects using a two-step approach. The pilot step estimates the varying coefficients by a series method. The pilot estimates are then used to perform a one-step backfitting through local linear kernel smoothing, which is shown to be oracle efficient in the sense of being asymptotically equivalent to the estimate obtained when the other components of the varying coefficients are known. In both steps, the authors remove the fixed effects through properly constructed weights. The authors obtain the asymptotic properties of both the pilot and efficient estimators. Monte Carlo simulations show that the proposed estimator performs well. The authors illustrate its applicability by estimating a varying coefficient production frontier using panel data, without assuming distributions of the efficiency and error terms.
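
The local linear kernel smoothing used in the backfitting step can be illustrated in a stripped-down setting: one varying coefficient, cross-sectional data, and no fixed effects (so none of the authors' weighting is needed). This is only a sketch of the generic technique, not the chapter's two-step estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_linear_vc(y, x, z, z0, h):
    """Local linear estimate of the varying coefficient beta(z0) in
    y = beta(z) * x + e, using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)        # kernel weights around z0
    # regressors x and x*(z - z0) capture the level and slope of beta at z0
    X = np.column_stack([x, x * (z - z0)])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ y)    # weighted least squares
    return coef[0]                                # beta(z0); coef[1] ~ beta'(z0)

n = 2000
z = rng.uniform(0, 1, n)
x = rng.normal(1.0, 1.0, n)
beta = np.sin(2 * np.pi * z)                      # true coefficient function
y = beta * x + rng.normal(0, 0.2, n)

beta_hat = local_linear_vc(y, x, z, z0=0.25, h=0.05)   # true value is sin(pi/2) = 1
```

The local linear form (rather than a local constant) is what removes the first-order smoothing bias, which is why it is the natural vehicle for the oracle-efficient backfitting step.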

Details

Essays in Honor of Subal Kumbhakar
Type: Book
ISBN: 978-1-83797-874-8


Book part
Publication date: 5 April 2024

Emir Malikov, Shunan Zhao and Jingfang Zhang


Abstract

There is growing empirical evidence that firm heterogeneity is technologically non-neutral. This chapter extends the Gandhi, Navarro, and Rivers (2020) proxy variable framework for structurally identifying production functions to a more general case when latent firm productivity is multi-dimensional, with both factor-neutral and (biased) factor-augmenting components. Unlike alternative methodologies, the proposed model can be identified under weaker data requirements, notably, without relying on the typically unavailable cross-sectional variation in input prices for instrumentation. When markets are perfectly competitive, point identification is achieved by leveraging the information contained in static optimality conditions, effectively adopting a system-of-equations approach. It is also shown how one can partially identify the non-neutral production technology in the traditional proxy variable framework when firms have market power.

Book part
Publication date: 5 April 2024

Alecos Papadopoulos


Abstract

The author develops a bilateral Nash bargaining model under value uncertainty and private/asymmetric information, combining ideas from axiomatic and strategic bargaining theory. The solution to the model leads organically to a two-tier stochastic frontier (2TSF) setup with intra-error dependence. The author presents two different statistical specifications to estimate the model: one that accounts for regressor endogeneity using copulas, and one able to separately identify the bargaining power and private information effects at the individual level. Empirical applications using matched employer–employee data sets (MEEDS) from Zambia and Ghana showcase the applied potential of the approach.

Book part
Publication date: 5 April 2024

Taining Wang and Daniel J. Henderson


Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation, to avoid the non-negligible estimation bias the transformation induces. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both stochastic noise and inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, which depends on a wide range of admissible determinants via a positive parametric function. As a result, technical efficiency is constructed without relying on an assumed distribution of the composite error. The model provides flexible structures for both the production frontier and inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves a series-based nonlinear least squares estimation for the unknown parameters and a kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries from 1970 to 2019.
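
The idea of estimating a CES frontier in levels, rather than after a log-transformation, can be sketched with ordinary nonlinear least squares. The snippet below is a toy version with fixed coefficients and no inefficiency term, environment variables, or panel structure; parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def ces(inputs, A, delta, rho):
    """CES production function, kept in levels (no log-transformation)."""
    K, L = inputs
    return A * (delta * K**(-rho) + (1 - delta) * L**(-rho)) ** (-1.0 / rho)

# simulate inputs and output with modest multiplicative noise
n = 1000
K = rng.uniform(1, 10, n)
L = rng.uniform(1, 10, n)
y = ces((K, L), 2.0, 0.4, 0.5) * np.exp(rng.normal(0, 0.05, n))

# nonlinear least squares fitted directly in levels
params, _ = curve_fit(ces, (K, L), y, p0=[1.0, 0.5, 0.3],
                      bounds=([0.1, 0.01, 0.01], [10.0, 0.99, 5.0]))
A_hat, delta_hat, rho_hat = params
```

Fitting in levels avoids the retransformation step that a logged specification would require; in the chapter the constant technology term A is replaced by an unknown single-index function of environment variables.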

Book part
Publication date: 5 April 2024

Ziwen Gao, Steven F. Lehrer, Tian Xie and Xinyu Zhang


Abstract

Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.
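
A simplified version of least squares model averaging over nested HAR specifications can be sketched as follows. This uses a homoskedastic Mallows-type penalty rather than the authors' heteroskedasticity-robust H-MAHAR weights, and a simulated persistent series in place of cryptocurrency volatility data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# simulate a persistent, positive "realized volatility" series
n = 600
rv = np.empty(n)
rv[0] = 1.0
for t in range(1, n):
    rv[t] = 0.2 + 0.8 * rv[t - 1] + abs(rng.normal(0, 0.3))

def har_design(rv, lags=(1, 5, 22)):
    # HAR regressors: past daily, weekly, and monthly averages of rv
    p = max(lags)
    X = np.array([[1.0] + [rv[t - l:t].mean() for l in lags]
                  for t in range(p, len(rv))])
    return X, rv[p:]

X, y = har_design(rv)

# three nested HAR specifications: daily; daily+weekly; daily+weekly+monthly
models = [X[:, :2], X[:, :3], X[:, :4]]
fits = np.column_stack([M @ np.linalg.lstsq(M, y, rcond=None)[0] for M in models])
ks = np.array([M.shape[1] for M in models])      # parameter counts
sigma2 = np.mean((y - fits[:, -1]) ** 2)         # error variance from largest model

def mallows_weights(y, fits, ks, sigma2):
    # weights on the simplex minimizing in-sample squared error
    # plus a Mallows-type complexity penalty
    m = fits.shape[1]
    crit = lambda w: np.sum((y - fits @ w) ** 2) + 2 * sigma2 * (ks @ w)
    res = minimize(crit, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
    return res.x

w = mallows_weights(y, fits, ks, sigma2)
y_hat = fits @ w   # model-averaged in-sample fit
```

The H-MAHAR estimator replaces the single `sigma2` with a heteroskedasticity-robust penalty, which is what delivers asymptotic optimality under error variances of unknown form.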

Book part
Publication date: 5 April 2024

Bruce E. Hansen and Jeffrey S. Racine


Abstract

Classical unit root tests are known to suffer from potentially crippling size distortions, and a range of procedures have been proposed to attenuate this problem, including the use of bootstrap procedures. It is also known that the estimating equation’s functional form can affect the outcome of the test, and various model selection procedures have been proposed to overcome this limitation. In this chapter, the authors adopt a model averaging procedure to deal with model uncertainty at the testing stage. In addition, the authors leverage an automatic model-free dependent bootstrap procedure where the null is imposed by simple differencing (the block length is automatically determined using recent developments for bootstrapping dependent processes). Monte Carlo simulations indicate that this approach exhibits the lowest size distortions among its peers in settings that confound existing approaches, while it has superior power relative to those peers whose size distortions do not preclude their general use. The proposed approach is fully automatic, and there are no nuisance parameters that have to be set by the user, which ought to appeal to practitioners.
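
The core of the bootstrap scheme, imposing the unit root null by simple differencing and resampling the differences in blocks, can be sketched as follows. The block length is fixed here for simplicity, whereas the chapter's procedure selects it automatically, and the model averaging over candidate specifications is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def df_t_stat(y):
    """Dickey-Fuller t-statistic for rho in  dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    e = dy - rho * ylag
    se = np.sqrt(e @ e / (len(e) - 1) / (ylag @ ylag))
    return rho / se

def block_bootstrap(x, block_len, rng):
    """Moving-block bootstrap resample of a weakly dependent series."""
    n = len(x)
    starts = rng.integers(0, n - block_len + 1, size=n // block_len + 1)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

def bootstrap_ur_pvalue(y, n_boot=499, block_len=10, rng=rng):
    t_obs = df_t_stat(y)
    dy = np.diff(y)            # the unit root null is imposed by differencing
    t_boot = []
    for _ in range(n_boot):
        d_star = block_bootstrap(dy, block_len, rng)
        y_star = np.concatenate([[y[0]], y[0] + np.cumsum(d_star)])
        t_boot.append(df_t_stat(y_star))
    return np.mean(np.array(t_boot) <= t_obs)    # left-tailed test

# a pure random walk (null true) versus a strongly mean-reverting AR(1)
y_rw = np.cumsum(rng.normal(size=300))
p_rw = bootstrap_ur_pvalue(y_rw)

y_ar = np.empty(300)
y_ar[0] = 0.0
for t in range(1, 300):
    y_ar[t] = 0.3 * y_ar[t - 1] + rng.normal()
p_ar = bootstrap_ur_pvalue(y_ar)
```

Because the resampled series are rebuilt by cumulating the bootstrapped differences, every bootstrap draw satisfies the null exactly, which is what keeps the size of the test under control.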

Details

Essays in Honor of Subal Kumbhakar
Type: Book
ISBN: 978-1-83797-874-8


Book part
Publication date: 23 April 2024

Preeti Mehra and Aayushi Singh


Abstract

One of the most marginalized communities in India is the Lesbian, Gay, Bisexual and Transgender (LGBT) community, which commonly experiences discrimination. Many studies have contended that the LGBT community faces high discrimination in the banking and financing industry. As a result, this study concentrates on this marginalized community and its acceptance and continuation habits regarding mobile wallets. The study treats continuance intention as a response that confirms the progress of the mobile-wallet industry, and it examines the relationship between behavioral intention (BI) and continuance intention (CI), which is seriously underexplored in the literature. The research operationalizes the stimulus–organism–response (SOR) framework for the conceptual model and surveys 100 self-proclaimed members of the LGBT community in India. The analysis was conducted using partial least squares (PLS). The findings demonstrate that perceived trust (PT) directly influences BI, whereas perceived ease of use (PEoU), social influence (SI), and satisfaction (S) do not influence the BI of the LGBT community. The main outcome is a favorable association between BI and CI. These results will help stakeholders understand how important this new market avenue is and how it can be explored. To ensure safe and secure transactions, a think tank composed of the important parties (financial institutions, mobile-wallet providers, the government, security specialists, etc.) should make recommendations. Mobile-wallet providers will benefit from this study's understanding of user categories and its ability to tailor service offers to the community.

Details

Digital Influence on Consumer Habits: Marketing Challenges and Opportunities
Type: Book
ISBN: 978-1-80455-343-5



Book part
Publication date: 5 April 2024

Christine Amsler, Robert James, Artem Prokhorov and Peter Schmidt


Abstract

The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by using auxiliary information in the conditioning set. It considers two types of stochastic frontier models. The first type is a panel data model where composed errors from past and future time periods contain information about contemporaneous technical inefficiency. The second type is when the stochastic frontier model is augmented by input ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Besides numerous simulations, there is an illustrative empirical example.
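
For the textbook normal/half-normal case, the Jondrow, Lovell, Materov, and Schmidt (1982) predictor has a closed form: conditional on the composed error, inefficiency is normal truncated at zero, and its conditional mean serves as the predictor. A minimal sketch, before any auxiliary conditioning information is added:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def jlms_predictor(eps, sigma_u, sigma_v):
    """JLMS (1982) predictor E[u | eps] for the normal/half-normal
    production frontier with composed error eps = v - u."""
    s2 = sigma_u**2 + sigma_v**2
    mu_star = -eps * sigma_u**2 / s2       # u | eps ~ N(mu*, s*^2) truncated at 0
    s_star = sigma_u * sigma_v / np.sqrt(s2)
    z = mu_star / s_star
    return mu_star + s_star * norm.pdf(z) / norm.cdf(z)   # truncated-normal mean

# check the predictor against simulated inefficiencies
n = 200_000
u = np.abs(rng.normal(0, 0.5, n))          # half-normal inefficiency
v = rng.normal(0, 0.3, n)                  # symmetric noise
eps = v - u
u_hat = jlms_predictor(eps, 0.5, 0.3)
corr = np.corrcoef(u, u_hat)[0, 1]
```

The chapter's question is how much this baseline improves when the conditioning set is enlarged, e.g., with composed errors from other time periods or with input ratio equations, at which point closed forms give way to kernel or random-forest estimates of the conditional expectation.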
