Search results

1 – 10 of 78
Book part
Publication date: 10 April 2019

James G. MacKinnon and Matthew D. Webb

Abstract

When there are few treated clusters in a pure treatment or difference-in-differences setting, t tests based on a cluster-robust variance estimator can severely over-reject. Although procedures based on the wild cluster bootstrap often work well when the number of treated clusters is not too small, they can either over-reject or under-reject seriously when it is. In a previous paper, we showed that procedures based on randomization inference (RI) can work well in such cases. However, RI can be impractical when the number of possible randomizations is small. We propose a bootstrap-based alternative to RI, which mitigates the discrete nature of RI p values in the few-clusters case. We also compare it to two other procedures. None of them works perfectly when the number of clusters is very small, but they can work surprisingly well.
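
Since the chapter's starting point is randomization inference over cluster-level treatment assignments, a minimal sketch may help fix ideas. It assumes a simple difference-in-means statistic and exhaustive reassignment; the authors' regression-based statistics and bootstrap refinement are not reproduced here.

```python
# A minimal sketch of cluster-level randomization inference (RI), assuming a
# difference-in-means statistic; NOT the authors' bootstrap-based procedure.
import numpy as np
from itertools import combinations

def ri_pvalue(y, cluster, treated_clusters):
    """RI p-value: re-assign treatment across all cluster subsets of size g."""
    clusters = np.unique(cluster)
    g = len(treated_clusters)

    def effect(treated):
        t = np.isin(cluster, list(treated))
        return y[t].mean() - y[~t].mean()

    observed = effect(treated_clusters)
    # With few clusters the set of possible randomizations is small, which is
    # exactly why RI p-values become discrete in this setting.
    stats = np.array([effect(c) for c in combinations(clusters, g)])
    return np.mean(np.abs(stats) >= abs(observed))
```

With, say, 10 clusters of which 2 are treated, only 45 reassignments exist, so the smallest attainable p-value is 1/45; this discreteness is what the proposed bootstrap-based alternative mitigates.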

Details

The Econometrics of Complex Survey Data
Type: Book
ISBN: 978-1-78756-726-9

Book part
Publication date: 1 January 2008

Gary Koop

Abstract

Equilibrium job search models allow labor markets with homogeneous workers and firms to yield nondegenerate wage densities. However, the resulting wage densities do not accord well with empirical regularities. Accordingly, many extensions to the basic equilibrium search model have been considered (e.g., heterogeneity in productivity or in the value of leisure). It is increasingly common to use nonparametric forms for these extensions, and hence researchers can obtain a perfect fit (in a kernel-smoothed sense) between theoretical and empirical wage densities. This makes it difficult to carry out model comparison of different model extensions. In this paper, we first develop Bayesian parametric and nonparametric methods that are comparable to the existing non-Bayesian literature. We then show how Bayesian methods can be used to compare various nonparametric equilibrium search models in a statistically rigorous sense.
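
As a toy illustration of the model-comparison step only (not Koop's estimators; the densities, priors, and data below are placeholder assumptions), two parametric wage-density models can be compared through Monte Carlo estimates of their marginal likelihoods:

```python
# Toy Bayesian model comparison via Monte Carlo marginal likelihoods.
# Both models and their priors are placeholder assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wages = rng.lognormal(mean=3.0, sigma=0.5, size=500)  # stand-in wage data

def log_marginal(loglik, prior_draws):
    """log p(y) approximated by a log-mean-exp over draws from the prior."""
    lls = np.array([loglik(th) for th in prior_draws])
    m = lls.max()
    return m + np.log(np.mean(np.exp(lls - m)))

# Model A: log-normal wages; Model B: gamma wages.
draws_a = [(rng.normal(3, 1), abs(rng.normal(0.5, 0.3)) + 1e-3) for _ in range(2000)]
draws_b = [(abs(rng.normal(5, 2)) + 1e-3, abs(rng.normal(5, 2)) + 1e-3) for _ in range(2000)]

lm_a = log_marginal(lambda th: stats.lognorm.logpdf(wages, th[1], scale=np.exp(th[0])).sum(), draws_a)
lm_b = log_marginal(lambda th: stats.gamma.logpdf(wages, th[0], scale=th[1]).sum(), draws_b)
print("log Bayes factor (log-normal vs gamma):", lm_a - lm_b)
```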

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 24 April 2023

Lealand Morin

Abstract

The time series of the federal funds rate has recently been extended back to 1928, now including several episodes during which interest rates remained near the lower bound of zero. This series is analyzed, using the method of indirect inference, by applying recent research on bounded time series to estimate a set of bounded parametric diffusion models. This combination uncouples the specification of the bounds from the law of motion. Although Louis Bachelier was the first to use arithmetic Brownian motion to model financial time series, he has often been criticized for this proposal, since the process can take on negative values. Most researchers favor processes such as geometric Brownian motion (GBM), which remains positive. Under this framework, Bachelier's proposal remains valid when specified with bounds and is shown to compare favorably when modeling the federal funds rate.
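
A minimal sketch of the kind of bounded process at issue, assuming reflecting bounds (one possible specification; the chapter's indirect-inference estimation is not shown): arithmetic Brownian motion constrained to [lo, hi].

```python
# Arithmetic Brownian motion with reflecting bounds: a sketch of a bounded
# diffusion in the spirit of the chapter, not its estimation procedure.
import numpy as np

def reflected_abm(x0, mu, sigma, lo, hi, n, dt, rng):
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        step = x[t - 1] + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # Fold the increment back inside the bounds (reflection).
        while step < lo or step > hi:
            step = 2 * lo - step if step < lo else 2 * hi - step
        x[t] = step
    return x

rng = np.random.default_rng(1)
rate_path = reflected_abm(x0=2.0, mu=0.0, sigma=1.5, lo=0.0, hi=20.0,
                          n=1000, dt=1 / 252, rng=rng)
```

Keeping the bounds (lo, hi) as arguments separate from the drift and volatility mirrors the abstract's point that the specification of the bounds is uncoupled from the law of motion.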

Details

Essays in Honor of Joon Y. Park: Econometric Methodology in Empirical Applications
Type: Book
ISBN: 978-1-83753-212-4

Article
Publication date: 27 May 2022

John Galakis, Ioannis Vrontos and Panos Xidonas

Abstract

Purpose

This study aims to introduce a tree-structured linear and quantile regression framework to the analysis and modeling of equity returns, within the context of asset pricing.

Design/methodology/approach

The approach is based on the idea of a binary tree, where every terminal node parameterizes a local regression model for a specific partition of the data. A Bayesian stochastic method is developed that includes model selection and estimation of the tree-structure parameters. The framework is applied to numerous U.S. asset pricing models, using alternative mimicking factor portfolios, data frequencies, market indices, and equity portfolios.
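
A minimal sketch of the tree idea under simplifying assumptions (one split, local OLS, exhaustive threshold search); the authors' Bayesian stochastic search over tree structures and its quantile variant are not reproduced:

```python
# One binary split with a local linear regression in each partition: a toy
# version of tree-structured regression, not the authors' Bayesian sampler.
import numpy as np

def one_split_tree(X, y, split_var=0):
    """Pick the threshold on X[:, split_var] minimizing the combined SSE of
    two local OLS fits, one on each side of the split."""
    def fit(mask):
        Z = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta, *_ = np.linalg.lstsq(Z, y[mask], rcond=None)
        resid = y[mask] - Z @ beta
        return resid @ resid, beta

    best = None
    for c in np.quantile(X[:, split_var], np.linspace(0.1, 0.9, 17)):
        left = X[:, split_var] <= c
        if min(left.sum(), (~left).sum()) < X.shape[1] + 2:
            continue  # too few observations to fit a local model
        (s_l, b_l), (s_r, b_r) = fit(left), fit(~left)
        if best is None or s_l + s_r < best[0]:
            best = (s_l + s_r, c, b_l, b_r)
    return best  # (total SSE, threshold, left betas, right betas)
```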

Findings

The findings reveal strong evidence that asset returns exhibit asymmetric effects and nonlinear patterns in response to different common factors and, more importantly, that there are multiple thresholds creating several partitions in the common factor space.

Originality/value

To the best of the authors' knowledge, this paper is the first to explore and apply a tree-structured and quantile regression framework in an asset pricing context.

Details

Review of Accounting and Finance, vol. 21 no. 3
Type: Research Article
ISSN: 1475-7702

Open Access
Article
Publication date: 27 June 2022

Saida Mancer, Abdelhakim Necir and Souad Benchaira

Abstract

Purpose

The purpose of this paper is to propose a semiparametric estimator for the tail index of Pareto-type randomly truncated data that improves on existing estimators in terms of mean square error. Moreover, we establish its consistency and asymptotic normality.

Design/methodology/approach

To construct a root mean squared error (RMSE)-reduced estimator of the tail index, the authors use the semiparametric estimator of the underlying distribution function given by Wang (1989). This allows them to define the corresponding tail process and to derive a weak approximation to it. By means of a functional representation of the tail index estimator, combined with this weak approximation, the authors establish the asymptotic normality of the RMSE-reduced estimator.
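
For orientation, the classical Hill estimator of a Pareto-type tail index is sketched below; the paper's contribution adjusts such estimators for random right-truncation via Wang's (1989) semiparametric distribution estimator, which is not reproduced here.

```python
# Classical Hill estimator of the tail index (no truncation adjustment);
# shown only to fix notation for the Pareto-type tail-index problem.
import numpy as np

def hill_estimator(x, k):
    """Average log-spacing of the k largest order statistics."""
    xs = np.sort(np.asarray(x))[::-1]   # descending order statistics
    return np.mean(np.log(xs[:k]) - np.log(xs[k]))
```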

Findings

On the basis of a semiparametric estimator of the underlying distribution function, the authors propose a new estimation method for the tail index of Pareto-type distributions under random right-truncation. Compared with existing estimators, the proposed one behaves well in terms of both bias and RMSE. A useful weak approximation of the corresponding tail empirical process allows the authors to establish both the consistency and the asymptotic normality of the proposed estimator.

Originality/value

A new semiparametric tail (empirical) process for truncated data is introduced, a new estimator for the tail index of Pareto-type truncated data is proposed, and the asymptotic normality of the proposed estimator is established.

Details

Arab Journal of Mathematical Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1319-5166

Book part
Publication date: 1 December 2008

Zhen Wei

Abstract

Survival (default) data are frequently encountered in financial (especially credit risk), medical, educational, and other fields, where the “default” can be interpreted as a specific company's failure to fulfill its debt payments, the death of a patient in a medical study, or the inability to pass an educational test.

This paper introduces the basic ideas of Cox's original proportional hazards model and extends the model within a general framework of statistical data-mining procedures. By employing regularization, basis expansion, boosting, bagging, Markov chain Monte Carlo (MCMC), and many other tools, we effectively calibrate a large and flexible class of proportional hazards models.

The proposed methods have important applications in the setting of credit risk. For example, the model for the default correlation through regularization can be used to price credit basket products, and the frailty factor models can explain the contagion effects in the defaults of multiple firms in the credit market.
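
A minimal sketch of the regularization idea, assuming the lifelines package and placeholder default data (the paper's framework also covers basis expansion, boosting, bagging, and MCMC, none of which is shown):

```python
# Ridge-penalized Cox proportional hazards fit on synthetic "default" data,
# using the lifelines package; a sketch of the regularization idea only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({"leverage": rng.normal(size=n),
                   "size": rng.normal(size=n)})
# Placeholder default times and an ~80% event-observation indicator.
df["T"] = rng.exponential(scale=np.exp(-0.5 * df["leverage"]))
df["E"] = rng.random(n) < 0.8

cph = CoxPHFitter(penalizer=0.1)  # L2 penalty on the coefficients
cph.fit(df, duration_col="T", event_col="E")
print(cph.params_)
```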

Details

Econometrics and Risk Management
Type: Book
ISBN: 978-1-84855-196-1

Book part
Publication date: 31 December 2010

Rania Hentati and Jean-Luc Prigent

Abstract

Purpose – In this chapter, copula theory is used to model the dependence structure between hedge fund return series.

Methodology/approach – Goodness-of-fit tests based on Kendall's functions are applied as selection criteria for the “best” copula. After estimating the parametric copula that best fits the data, we apply previous results to construct the cumulative distribution functions of the equally weighted portfolios.

Findings – The empirical validation shows that copulas clearly allow better estimation of portfolio returns when hedge funds are included. All three portfolios studied reject the assumption of multivariate normality of returns. The chosen structure is often of the Student type when only indices are considered; for portfolios composed solely of hedge funds, the dependence structure is of the Frank type.

Originality/value of the chapter – Introducing a bootstrap goodness-of-fit method to validate the choice of the best dependence structure is relevant for hedge fund portfolios. Copulas can also be introduced to provide better estimates of performance measures.
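
A minimal first step in the copula-calibration spirit of the chapter (the Kendall-function goodness-of-fit tests themselves are not reproduced): estimating an elliptical-copula parameter from Kendall's tau on placeholder return series.

```python
# Moment-style copula calibration via Kendall's tau; a sketch of the first
# step before the chapter's Kendall-function goodness-of-fit tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
r1, r2 = z[:, 0], np.tanh(z[:, 1])   # placeholder return series

tau, _ = stats.kendalltau(r1, r2)
rho = np.sin(np.pi * tau / 2)        # tau-to-rho map for elliptical copulas
print(f"Kendall tau = {tau:.3f}, implied elliptical-copula rho = {rho:.3f}")
```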

Details

Nonlinear Modeling of Economic and Financial Time-Series
Type: Book
ISBN: 978-0-85724-489-5

Article
Publication date: 21 July 2020

Shuang Zhang, Song Xi Chen and Lei Lu

Abstract

Purpose

In the presence of pricing errors, the authors consider statistical inference on the variance risk premium (VRP) and the associated implied variance, constructed from option prices and historic returns.

Design/methodology/approach

The authors propose a nonparametric kernel smoothing approach that removes the adverse effects of pricing errors and leads to consistent estimation of both the implied variance and the VRP. The asymptotic distributions of the proposed VRP estimator are developed under three asymptotic regimes regarding the relative sample sizes of the option data and the historic return data.
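
A minimal sketch of the smoothing idea (the paper's estimator and its asymptotics are more involved): Gaussian-kernel regression of noisy option prices on strike, so that pricing errors average out before an implied variance is extracted.

```python
# Nadaraya-Watson kernel smoothing of option prices across strikes; a sketch
# of the error-removal idea, not the paper's VRP estimator.
import numpy as np

def kernel_smooth(strikes, prices, grid, h):
    """Gaussian-kernel regression of prices on strike, evaluated on grid."""
    w = np.exp(-0.5 * ((grid[:, None] - strikes[None, :]) / h) ** 2)
    return (w @ prices) / w.sum(axis=1)
```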

Findings

This study reveals that existing methods for estimating the implied variance are adversely affected by pricing errors in the option prices, which renders the resulting VRP estimators statistically inconsistent. An analysis of S&P 500 option and return data demonstrates that, compared with other implied variance and VRP estimators, the proposed estimators are more significant variables in explaining variations in excess S&P 500 returns, and the proposed VRP estimates have the smallest out-of-sample forecasting root mean squared error.

Research limitations/implications

This study contributes to the estimation of the implied variance and the VRP and helps in predicting future realized variance and the equity premium.

Originality/value

This study is the first to propose consistent estimators for the implied variance and the VRP in the presence of option pricing errors.

Details

China Finance Review International, vol. 11 no. 1
Type: Research Article
ISSN: 2044-1398

Article
Publication date: 22 December 2022

Oxana Krutova

Abstract

Purpose

This research considers whether unemployment insurance benefits and labour-market activation measures increase the likelihood of re-employment and whether this effect differs between natives and immigrants.

Design/methodology/approach

Statistical processing was carried out on European Union Statistics on Income and Living Conditions cross-sectional data for Finland for the period 2004 to 2016. Propensity score matching analysis was undertaken to investigate whether the treatment (receipt of unemployment insurance benefit) predicted success in increasing re-employment rates, controlling for participation in labour-market policy measures, subsidized employment, and personal background characteristics.
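
A minimal sketch of the matching step, assuming a logistic propensity model and 1-nearest-neighbour matching (the paper's EU-SILC analysis and covariate set are not reproduced):

```python
# Propensity score matching: logistic score, 1-nearest-neighbour matching of
# each treated unit to a control, and the matched ATT; a sketch only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_att(X, treated, outcome):
    """ATT from 1-NN matching on the estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
    # For each treated unit, pick the control with the closest score.
    nearest = c_idx[np.abs(ps[t_idx][:, None] - ps[c_idx][None, :]).argmin(axis=1)]
    return np.mean(outcome[t_idx] - outcome[nearest])
```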

Findings

We find that the probability of re-employment for recipients of unemployment benefits is half that of non-recipients of benefits. Due to the influence of subsidized employment, subsequent employment income decreases for recipients of unemployment benefits and especially for immigrants. Finally, we find that due to the influence of subsidized employment, time spent as a full-time employee decreases for recipients of unemployment benefits and especially for immigrants.

Originality/value

Although our results indicate that benefit determination has a marked impact on re-employment probabilities, unobserved variables turn out to play a significant role in the selection of labour-market activation measures. In this respect, we find that assignment to activation policy measures depends on the influence of unobserved variables, and that this effect matters more for the re-employment rates of natives than for those of immigrants.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/IJSE-11-2019-0668.

Details

International Journal of Social Economics, vol. 50 no. 5
Type: Research Article
ISSN: 0306-8293
