Search results

1 – 10 of over 2000
Article
Publication date: 1 February 2001

LEO M. TILMAN and PAVEL BRUSILOVSKIY

Abstract

Value‐at‐Risk (VaR) has become a mainstream risk management technique employed by a large proportion of financial institutions. There exists a substantial amount of research dealing with the task of verifying the ex post accuracy of VaR models, most commonly referred to as VaR backtesting. A new generation of “self‐learning” VaR models (Conditional Autoregressive Value‐at‐Risk, or CAViaR) combines backtesting results with ex ante VaR estimates in an ARIMA framework in order to forecast P/L distributions more accurately. In this commentary, the authors present a systematic overview of several classes of applied statistical techniques that can make VaR backtesting more comprehensive and provide valuable insights into the analytical properties of VaR models in various market environments. In addition, they discuss the challenges associated with extending traditional backtesting approaches to VaR horizons longer than one day and propose solutions to this important problem.
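The core backtesting idea sketched in this abstract — comparing ex ante VaR estimates with realized P/L — can be illustrated with a minimal exception-counting test. The snippet below is a generic sketch on simulated data, not the authors' method; the 99% confidence level and normal P/L are assumptions chosen for illustration.

```python
# Minimal VaR backtest sketch: count days on which losses exceed the
# ex ante 99% VaR and compare the exception rate with its expected value.
# Data are simulated; this is not the commentary's actual methodology.
import numpy as np
from math import sqrt

rng = np.random.default_rng(6)
T = 1000
pnl = rng.normal(0, 1.0, T)              # daily P/L (synthetic, N(0,1))
var_99 = 2.326                           # ex ante 99% VaR under N(0,1)

exceptions = int(np.sum(pnl < -var_99))  # days the loss exceeds the VaR
expected = 0.01 * T                      # expected exceptions at the 99% level
# Simple z-statistic for the exception count (binomial approximation).
z = (exceptions - expected) / sqrt(T * 0.01 * 0.99)
print(exceptions, round(z, 2))
```

A z-statistic far from zero would flag a miscalibrated VaR model; more formal variants (e.g. Kupiec's likelihood ratio test) refine this count-based logic.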

Details

The Journal of Risk Finance, vol. 2 no. 3
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 24 July 2018

Marcelo Cajias

Abstract

Purpose

This paper aims to explore the in-sample explanatory and out-of-sample forecasting accuracy of the generalized additive model for location, scale and shape (GAMLSS) model in contrast to the GAM method in Munich’s residential market.

Design/methodology/approach

The paper explores the in-sample explanatory results via a comparison of coefficients and a graphical analysis of non-linear effects. The out-of-sample forecasting exercise consists of 50 loops in which each of the three models is estimated with 10 per cent of the observations excluded at random; the fitted functional forms are then used to predict the excluded 10 per cent. Forecasting performance is measured via the error variance, root mean squared error, mean absolute error and mean percentage error.
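The validation scheme described above can be sketched generically. In the snippet below, an OLS fit stands in for the GAM/GAMLSS estimators (which this sketch does not implement), and the rent data are simulated; only the 50-loop, 10-per-cent-holdout structure and the error metrics follow the abstract.

```python
# Sketch of a 50-loop random-holdout validation with RMSE/MAE/MPE metrics.
# OLS is a placeholder for the GAM/GAMLSS fits; data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.uniform(20, 200, size=(n, 1))            # e.g. living area in sqm
y = 5.0 + 0.08 * X[:, 0] + rng.normal(0, 1, n)   # asking rent (synthetic)

def fit_predict(X_tr, y_tr, X_te):
    """Ordinary least squares as a stand-in for the hedonic estimators."""
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ beta

rmse, mae, mpe = [], [], []
for _ in range(50):                      # 50 loops, as in the abstract
    idx = rng.permutation(n)
    hold = idx[: n // 10]                # randomly exclude 10 per cent
    train = idx[n // 10:]
    pred = fit_predict(X[train], y[train], X[hold])
    err = y[hold] - pred
    rmse.append(np.sqrt(np.mean(err ** 2)))
    mae.append(np.mean(np.abs(err)))
    mpe.append(np.mean(err / y[hold]) * 100)

print(f"RMSE {np.mean(rmse):.3f}  MAE {np.mean(mae):.3f}  MPE {np.mean(mpe):.2f}%")
```

Averaging the metrics over the 50 loops gives the comparison statistic used to rank the competing models.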

Findings

The results show that the complexity of asking rents in Munich is more accurately captured by the GAMLSS approach than the GAM as shown by an outperformance in the in-sample explanatory accuracy. The results further show that the theoretical and empirical complexities do pay off in view of the increased out-of-sample forecasting power of the GAMLSS approach.

Research limitations/implications

The computational requirements necessary to estimate GAMLSS models, in terms of the number of cores and RAM, are high and might constitute a limiting factor for (institutional) researchers. Moreover, deep knowledge of statistical inference and programming is necessary.

Practical implications

The usage of the GAMLSS approach would lead policymakers to better understand the local factors affecting rents. Institutional researchers, instead, would clearly aim at calibrating the forecasting accuracy of the model to better forecast rents in investment strategies. Finally, future researchers are encouraged to exploit the large potential of the GAMLSS framework and its modelling flexibility.

Originality/value

The GAMLSS approach is widely recognised and used by international institutions such as the World Health Organisation, the International Monetary Fund and the European Commission. To the best of the author’s knowledge, this is the first study to assess the properties of the GAMLSS approach in applied real estate research from a statistical asymptotic perspective, using a unique dataset of more than 38,000 observations.

Details

Journal of European Real Estate Research, vol. 11 no. 2
Type: Research Article
ISSN: 1753-9269

Article
Publication date: 5 December 2023

Gatot Soepriyanto, Shinta Amalina Hazrati Havidz and Rangga Handika

Abstract

Purpose

This study provides a comprehensive analysis of the potential contagion of Bitcoin on financial markets and sheds light on the complex interplay between technological advancements, accounting regulatory and financial market stability.

Design/methodology/approach

The study employs a multi-faceted approach to analyze the impact of BTC systemic risk, technological factors and regulatory variables on Asia–Pacific financial markets. Initially, a single-index model is used to estimate the systematic risk of BTC to financial markets. The study then uses ordinary least squares (OLS) to assess the potential impact of systemic risk, technological factors and regulatory variables on financial markets. To further control for time-varying factors common to all countries, a fixed effect (FE) panel data analysis is implemented. Additionally, a multinomial logistic regression model is utilized to evaluate the presence of contagion.

Findings

Results indicate that Bitcoin's systemic risk to the Asia–Pacific financial markets is relatively weak. Furthermore, technological advancements and international accounting standard adoption appear to indirectly stabilize these markets. The degree of contagion is also found to be stronger in foreign currencies (FX) than in stock index (INDEX) markets.

Research limitations/implications

This study has several limitations that should be considered when interpreting its findings. First, the definition of financial contagion is not universally accepted, and the results depend on the specific definition and methodology adopted here. Second, the matching of daily financial market and BTC data with annual technological and regulatory variable data may have limited the strength of the findings. However, the authors’ use of both parametric and nonparametric methods provides insights that may inspire further research into cryptocurrency markets and financial contagion.

Practical implications

Based on their analysis, the authors suggest that financial market regulators prioritize the development and adoption of new technologies and international accounting standard practices, rather than focusing solely on the potential risks associated with cryptocurrencies. While a cryptocurrency crash could harm individual investors, it is unlikely to pose a significant threat to the overall financial system.

Originality/value

To the best of the authors’ knowledge, no prior study has used an asset pricing approach to assess a possible contagion. The authors have developed a new method to evaluate whether there is contagion from BTC to financial markets. A simple but intuitive asset pricing method for evaluating the systematic risk stemming from a factor is the single-index model. The single-index model has been used extensively in stock markets but not to evaluate the systemic risk potential of cryptocurrencies. The authors followed Morck et al. (2000) and Durnev et al. (2004) in assessing whether there is systemic risk from BTC to financial markets: if BTC poses a systematic risk, the explanatory power of the BTC index model should be high. Therefore, the first implied contribution is to re-evaluate the findings of Aslanidis et al. (2019), Dahir et al. (2019) and Handika et al. (2019) using a different method.
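The single-index logic described above — a high R² from regressing market returns on BTC returns signals systematic exposure — can be sketched as follows. All returns and exposure coefficients here are simulated for illustration; this is not the authors' estimation code.

```python
# Minimal single-index model sketch: market return regressed on BTC return,
# with R-squared as the measure of BTC's systematic explanatory power.
# All series are simulated; a real test would use daily market data.
import numpy as np

rng = np.random.default_rng(1)
T = 2000
btc = rng.normal(0, 0.04, T)                    # daily BTC returns (synthetic)

# Two hypothetical markets: one weakly, one strongly exposed to BTC.
weak = 0.05 * btc + rng.normal(0, 0.01, T)
strong = 0.60 * btc + rng.normal(0, 0.01, T)

def single_index_r2(market, factor):
    """Fit market_t = a + b * factor_t + e_t by OLS and return R-squared."""
    A = np.column_stack([np.ones_like(factor), factor])
    beta, *_ = np.linalg.lstsq(A, market, rcond=None)
    resid = market - A @ beta
    return 1 - resid.var() / market.var()

r2_weak = single_index_r2(weak, btc)
r2_strong = single_index_r2(strong, btc)
print(f"R2 weak {r2_weak:.2f}  strong {r2_strong:.2f}")
```

Under this logic, a low R² across markets — as the study reports for the Asia–Pacific sample — indicates weak systematic exposure to BTC.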

Details

International Journal of Emerging Markets, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-8809

Article
Publication date: 4 June 2019

Nicola Castellano, Roberto Del Gobbo and Katia Corsi

Abstract

Purpose

In the literature on the determinants of disclosure, scholars generally tend to investigate the existence of relations in “global” terms, by considering the whole range of observed values pertaining to both the dependent and independent variables involved in the descriptive model. Despite the different methodologies adopted in line with this approach, a hypothesis can only be accepted or rejected in its entirety. This paper aims to contribute to the literature by proposing a data-driven method based on smooth curves, which allows scholars to detect the existence of local relations that are significant only over a limited interval of the dependent variable.

Design/methodology/approach

The employment of smooth curves is exemplified by a study on goodwill disclosure. The model derived from the adoption of locally weighted scatterplot smoothing (LOWESS) curves may provide an accurate description of complex relations between the extent of disclosure and its expected determinants, whose shape is not completely captured by traditional statistical techniques.
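For readers unfamiliar with LOWESS, the idea — a locally weighted linear fit around each evaluation point, with tricube weights on the nearest neighbours — can be written compactly. The smoother below is a minimal, non-iterative sketch on synthetic data (no robustness iterations), not the authors' implementation.

```python
# A compact LOWESS-style smoother: tricube-weighted local linear fits,
# evaluated at each observed x. Data and span are illustrative.
import numpy as np

def lowess(x, y, frac=0.3):
    """Locally weighted linear regression at each x (single pass)."""
    n = len(x)
    k = max(2, int(frac * n))            # number of neighbours per window
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        h = np.sort(d)[k - 1] or 1e-12   # bandwidth = k-th nearest distance
        w = np.clip(1 - (d / h) ** 3, 0, None) ** 3   # tricube weights
        A = np.column_stack([np.ones(n), x - x[i]])   # local linear design
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        fitted[i] = beta[0]              # local intercept = fit at x[i]
    return fitted

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 120))
y = np.sin(x) + rng.normal(0, 0.2, 120)
smooth = lowess(x, y, frac=0.2)
```

Because each window fits its own line, the fitted curve can bend differently in different intervals of the covariate — exactly the "local relations" the paper exploits.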

Findings

The model based on LOWESS curves provided a comprehensive description about the complexities characterizing the relationship between disclosure and its determinants. The results show that in some cases, the extent of disclosure is influenced by multi-faceted local relations.

Practical implications

The exemplificative study provides evidence useful for standard setters to improve their comprehension of companies’ inclination to disclose information on goodwill impairment.

Originality/value

The adoption of smooth curves is coherent with an inductive research approach, in which empirical evidence is generalized and evolves into theoretical explanations. The proposed method is replicable in all fields of study where extant research arrives at unclear and contradictory results as a consequence of the complex relations investigated.

Details

Meditari Accountancy Research, vol. 27 no. 3
Type: Research Article
ISSN: 2049-372X

Article
Publication date: 5 February 2018

Marcelo Cajias and Sebastian Ertl

Abstract

Purpose

The purpose of this paper is to test the asymptotic properties and prediction accuracy of two innovative methods proposed along the hedonic debate: the geographically weighted regression (GWR) and the generalized additive model (GAM).

Design/methodology/approach

The authors assess the asymptotic properties of linear, spatial and non-linear hedonic models based on a very large dataset in Germany. The employed functional forms are based on OLS, GWR and the GAM, while the estimation methodology is iterative: the fitted rents for each quarter are forecast from the functional form estimated one quarter prior. Performance accuracy is measured by traditional indicators such as the error variance and the mean squared (percentage) error.
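The rolling one-quarter-ahead scheme described above can be sketched as follows, with OLS standing in for all three estimators (GWR and GAM are not implemented here) and simulated hedonic data in place of the German sample.

```python
# Sketch of the rolling one-quarter-ahead scheme: refit each quarter on the
# prior quarter's data and forecast the next quarter. Data are simulated.
import numpy as np

rng = np.random.default_rng(3)
quarters = 8
n_per_q = 300
# Simulated hedonic data: rent = f(area) + noise, with a drifting intercept.
data = []
for q in range(quarters):
    area = rng.uniform(20, 200, n_per_q)
    rent = 4.0 + 0.02 * q + 0.08 * area + rng.normal(0, 1, n_per_q)
    data.append((area, rent))

mse_by_quarter = []
for q in range(1, quarters):
    a_tr, r_tr = data[q - 1]             # fit on quarter q-1
    a_te, r_te = data[q]                 # forecast quarter q
    A = np.column_stack([np.ones(n_per_q), a_tr])
    beta, *_ = np.linalg.lstsq(A, r_tr, rcond=None)
    pred = np.column_stack([np.ones(n_per_q), a_te]) @ beta
    mse_by_quarter.append(np.mean((r_te - pred) ** 2))

print([round(m, 2) for m in mse_by_quarter])
```

Comparing the per-quarter forecast errors across competing estimators is what allows the paper to rank OLS, GWR and GAM out of sample.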

Findings

The results provide evidence for a clear disadvantage of the GWR model in out-of-sample forecasts. There exists a strong out-of-sample discrepancy between the GWR and the GAM models, whereas the simplicity of the OLS approach is not substantially outperformed by the GAM approach.

Practical implications

For policymakers, a more accurate knowledge on market dynamics via hedonic models leads to a more precise market control and to a better understanding of the local factors affecting current and future rents. For institutional researchers, instead, the findings are essential and might be used as a guide when valuing residential portfolios and forecasting cashflows. Even though this study analyses residential real estate, the results should be of interest to all forms of real estate investments.

Originality/value

Sample size is essential when deriving the asymptotic properties of hedonic models. Covering more than 570,000 observations, this study constitutes – to the authors’ knowledge – one of the largest datasets used for spatial real estate analysis.

Details

Journal of Property Investment & Finance, vol. 36 no. 1
Type: Research Article
ISSN: 1463-578X

Book part
Publication date: 5 April 2024

Feng Yao, Qinling Lu, Yiguo Sun and Junsen Zhang

Abstract

The authors propose to estimate a varying coefficient panel data model with different smoothing variables and fixed effects using a two-step approach. The pilot step estimates the varying coefficients by a series method. The pilot estimates are then used to perform a one-step backfitting through local linear kernel smoothing, which is shown to be oracle efficient in the sense of being asymptotically equivalent to the estimate obtained knowing the other components of the varying coefficients. In both steps, the authors remove the fixed effects through properly constructed weights. The authors obtain the asymptotic properties of both the pilot and efficient estimators. Monte Carlo simulations show that the proposed estimator performs well. The authors illustrate the method’s applicability by estimating a varying coefficient production frontier using panel data, without assuming distributions for the efficiency and error terms.
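A stripped-down version of the two-step idea — a polynomial series pilot followed by a local linear kernel step — can be sketched for a single varying coefficient without fixed effects (both simplifications are mine; in the general multi-component case, the pilot estimates of the other components would be profiled out before the local step). Data and bandwidth are illustrative, not the authors' code.

```python
# Illustrative two-step estimation of y_i = beta(z_i) * x_i + e_i:
# step 1 fits beta(z) by a polynomial series; step 2 refines it by a
# local linear kernel fit. Fixed effects are omitted for brevity.
import numpy as np

rng = np.random.default_rng(4)
n = 3000
z = rng.uniform(-1, 1, n)
x = rng.normal(0, 1, n)
beta_true = np.sin(np.pi * z)
y = beta_true * x + rng.normal(0, 0.3, n)

# Step 1: series (polynomial) pilot estimate of beta(z).
deg = 5
S = np.column_stack([x * z ** j for j in range(deg + 1)])
coef, *_ = np.linalg.lstsq(S, y, rcond=None)
def beta_pilot(zz):
    return sum(c * zz ** j for j, c in enumerate(coef))

# Step 2: local linear kernel fit of beta at a point z0. With a single
# varying coefficient this step stands alone; with several, the pilot
# estimates of the other components would be subtracted from y first.
def beta_local(z0, h=0.1):
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)        # Gaussian kernel weights
    A = np.column_stack([x, x * (z - z0)])        # local linear in z
    sw = np.sqrt(w)
    b, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return b[0]

grid = np.linspace(-0.8, 0.8, 9)
est = np.array([beta_local(g) for g in grid])
err = np.max(np.abs(est - np.sin(np.pi * grid)))
print(f"max abs error on grid: {err:.3f}")
```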

Details

Essays in Honor of Subal Kumbhakar
Type: Book
ISBN: 978-1-83797-874-8

Book part
Publication date: 13 December 2013

Ivan Jeliazkov

Abstract

For over three decades, vector autoregressions have played a central role in empirical macroeconomics. These models are general, can capture sophisticated dynamic behavior, and can be extended to include features such as structural instability, time-varying parameters, dynamic factors, threshold-crossing behavior, and discrete outcomes. Building upon growing evidence that the assumption of linearity may be undesirable in modeling certain macroeconomic relationships, this article seeks to add to recent advances in VAR modeling by proposing a nonparametric dynamic model for multivariate time series. In this model, the problems of modeling and estimation are approached from a hierarchical Bayesian perspective. The article considers the issues of identification, estimation, and model comparison, enabling nonparametric VAR (or NPVAR) models to be fit efficiently by Markov chain Monte Carlo (MCMC) algorithms and compared to parametric and semiparametric alternatives by marginal likelihoods and Bayes factors. Among other benefits, the methodology allows for a more careful study of structural instability while guarding against the possibility of unaccounted nonlinearity in otherwise stable economic relationships. Extensions of the proposed nonparametric model to settings with heteroskedasticity and other important modeling features are also considered. The techniques are employed to study the postwar U.S. economy, confirming the presence of distinct volatility regimes and supporting the contention that certain nonlinear relationships in the data can remain undetected by standard models.

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Article
Publication date: 30 September 2014

Chihiro Shimizu, Koji Karato and Kiyohiko Nishimura

Abstract

Purpose

The purpose of this article, starting from linear regression, was to estimate a switching regression model, a nonparametric model and a generalized additive model as a semi-parametric model, perform function estimation with multiple nonlinear estimation methods and conduct a comparative analysis of their predictive accuracy. The theoretical importance of estimating hedonic functions using a nonlinear functional form has been pointed out in ample previous research (e.g. Heckman et al., 2010).

Design/methodology/approach

The distinctive features of this study include not only our estimation of multiple nonlinear model function forms but also the method of verifying predictive accuracy. Using out-of-sample testing, we predicted and verified predictive accuracy by performing random sampling 500 times without replacement for 9,682 data items (the same number used in model estimation), based on data for the years before and after the year used for model estimation.

Findings

As a result of estimating multiple models, we believe that when it comes to hedonic function estimation, nonlinear models are superior based on the strength of predictive accuracy viewed in statistical terms and on graphic comparisons. However, when we examined predictive accuracy using out-of-sample testing, we found that the predictive accuracy was inferior to linear models for all nonlinear models.

Research limitations/implications

In terms of the reason why the predictive accuracy was inferior, it is possible that there was an overfitting in the function estimation. Because this research was conducted for a specific period of time, it needs to be developed by expanding it to multiple periods over which the market fluctuates dynamically and conducting further analysis.

Practical implications

Many studies compare predictive accuracy by separating the estimation model and verification model using data at the same point in time. However, when attempting practical application for auto-appraisal systems and the like, it is necessary to estimate a model using past data and make predictions with respect to current transactions. It is possible to apply this study to auto-appraisal systems.

Social implications

It is recognized that housing price fluctuations caused by the subprime crisis had a massive impact on the financial system. The findings of this study are expected to serve as a tool for measuring housing price fluctuation risks in the financial system.

Originality/value

While the importance of nonlinear estimation when estimating hedonic functions has been pointed out in theoretical terms, there is a noticeable lag when it comes to testing based on actual data. Given this, we believe that our verification of nonlinear estimation’s validity using multiple nonlinear models is significant not just from an academic perspective – it may also have practical applications.

Details

International Journal of Housing Markets and Analysis, vol. 7 no. 4
Type: Research Article
ISSN: 1753-8270

Book part
Publication date: 16 December 2009

Jeffrey S. Racine

Abstract

The R environment for statistical computing and graphics (R Development Core Team, 2008) offers practitioners a rich set of statistical methods, ranging from random number generation and optimization methods through regression, panel data and time series methods. The standard R distribution (base R) comes preloaded with a rich variety of functionality useful for applied econometricians. This functionality is enhanced by user-supplied packages made available via R servers that are mirrored around the world. Of interest in this chapter are methods for estimating nonparametric and semiparametric models. We summarize many of the facilities in R and consider some tools for those who wish to work with nonparametric methods while avoiding programming in C or Fortran, yet need the speed of compiled code as opposed to interpreted code such as Gauss or Matlab. We encourage those working in the field to consider implementing their methods in the R environment, thereby making their work accessible to the widest possible audience via an open collaborative forum.

Details

Nonparametric Econometric Methods
Type: Book
ISBN: 978-1-84950-624-3

Article
Publication date: 6 November 2017

Lungile Ntsalaze and Sylvanus Ikhide

Abstract

Purpose

The purpose of this paper is to assess the existence of critical tipping points for explanatory variables (age, government grants, education and household size) – in particular, household debt service-to-income on multidimensional poverty.

Design/methodology/approach

The paper applies a generalized additive model (GAM) using regression splines on National Income Dynamics Study data to establish threshold effects of the explanatory variables on multidimensional poverty.
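A threshold of the kind described above can be located with a simple spline regression sketch: fit a smooth curve of a welfare measure on the debt service-to-income ratio and take the turning point of the fitted curve. The snippet below uses a hand-rolled cubic spline basis on simulated data with a known turning point at 0.4 — it illustrates the GAM idea only and is not the paper's model, data or estimated threshold.

```python
# Hedged sketch of locating a "tipping point" with a spline regression:
# regress a poverty score on a cubic spline basis of the debt
# service-to-income ratio and take the argmin of the fitted curve.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
dsti = rng.uniform(0, 1, n)                           # debt service-to-income
poverty = (dsti - 0.4) ** 2 + rng.normal(0, 0.05, n)  # U-shape around 0.4

# Cubic truncated-power spline basis with a few interior knots.
knots = [0.25, 0.5, 0.75]
B = np.column_stack(
    [np.ones(n), dsti, dsti ** 2, dsti ** 3]
    + [np.clip(dsti - k, 0, None) ** 3 for k in knots]
)
coef, *_ = np.linalg.lstsq(B, poverty, rcond=None)

# Evaluate the fit on a grid; the argmin is the estimated tipping point.
g = np.linspace(0, 1, 501)
G = np.column_stack(
    [np.ones_like(g), g, g ** 2, g ** 3]
    + [np.clip(g - k, 0, None) ** 3 for k in knots]
)
tipping = g[np.argmin(G @ coef)]
print(f"estimated tipping point: {tipping:.2f}")
```

A full GAM adds a smoothness penalty and automatic knot/penalty selection, but the turning-point logic is the same.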

Findings

The results show that the tipping point at which debt is associated with improved household welfare is 42.5 percent (the level of debt service-to-income). Among the significant findings, households headed by persons younger than 60 years of age and households with more children are associated with lower multidimensional poverty. Government grants may suffer from fungibility, as they do not seem to be an effective tool for multidimensional poverty eradication. The ideal household size, with a significant negative correlation to multidimensional poverty, is fewer than four members. Lastly, education proves to be the best instrument for households to escape multidimensional poverty.

Social implications

High household indebtedness is a severe social problem. Its effects include deteriorating physical and mental health, relationship difficulties and breakdown. Significant social costs arise such as medical treatment and indirectly, reduction of productivity. Further effects on society include rising criminal behavior, children dropping out of school thereby transferring poverty to succeeding generations. Non-performing loans increase and in turn lead to reduced credit availability. The overall health of the economy is impacted due to reduced aggregate demand.

Originality/value

Macro studies have demonstrated the presence of thresholds in debt analyses. However, no such thresholds have been established in micro-level analyses; this paper attempts to bridge this knowledge gap by applying GAM to the debt-poverty nexus at the micro level.

Details

International Journal of Social Economics, vol. 44 no. 11
Type: Research Article
ISSN: 0306-8293
