Search results

1 – 10 of over 1000
Article
Publication date: 11 June 2018

Antonis Pavlou, Michalis Doumpos and Constantin Zopounidis


Abstract

Purpose

The optimization of investment portfolios is a topic of major importance in financial decision making, with many models available in the literature. The purpose of this paper is to perform a thorough comparative assessment of different bi-objective models, as well as a multi-objective one, in terms of the performance and robustness of the whole set of Pareto optimal portfolios.

Design/methodology/approach

In this study, three bi-objective models are considered (mean-variance (MV), mean absolute deviation, conditional value-at-risk (CVaR)), as well as a multi-objective model. An extensive comparison is performed using data from the Standard and Poor’s 500 index, over the period 2005–2016, through a rolling-window testing scheme. The results are analyzed using novel performance indicators representing the deviations between historical (estimated) efficient frontiers, actual out-of-sample efficient frontiers and realized out-of-sample portfolio results.
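The rolling-window testing scheme described above can be sketched as follows. The minimum-variance rule and the simulated returns below are illustrative stand-ins only; the paper compares MV, MAD, CVaR and multi-objective optimizers on S&P 500 data:

```python
import numpy as np

def rolling_window_backtest(returns, window, optimizer):
    """Re-estimate the portfolio on each trailing window, then record its
    realized return in the following out-of-sample period."""
    realized = []
    for t in range(window, len(returns)):
        estimation_sample = returns[t - window:t]   # in-sample history
        weights = optimizer(estimation_sample)      # e.g. an MV-type rule
        realized.append(returns[t] @ weights)       # next-period outcome
    return np.array(realized)

def min_variance(sample):
    """Toy stand-in optimizer: minimum-variance weights from the sample
    covariance (not one of the paper's actual models)."""
    inv = np.linalg.pinv(np.cov(sample, rowvar=False))
    w = inv @ np.ones(sample.shape[1])
    return w / w.sum()

rng = np.random.default_rng(0)
fake_returns = rng.normal(0.0005, 0.01, size=(500, 5))  # 500 days, 5 assets
realized = rolling_window_backtest(fake_returns, window=250,
                                   optimizer=min_variance)
```

Each window re-estimation yields one out-of-sample observation, so the realized series covers every period after the first estimation window.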

Findings

The obtained results indicate that the well-known MV model provides quite robust results compared to other bi-objective optimization models. On the other hand, the CVaR model appears to be the least robust model. The multi-objective approach offers results which are well balanced and quite competitive against simpler bi-objective models, in terms of out-of-sample performance.

Originality/value

This is the first comparative study of portfolio optimization models that examines the performance of the whole set of efficient portfolios, proposing analytical ways to assess their stability and robustness over time. Moreover, extensive out-of-sample testing of a multi-objective portfolio optimization model is performed through a rolling-window scheme, in contrast to the static results in prior works. The insights derived from the obtained results could be used to design improved and more robust portfolio optimization models, focusing on a multi-objective setting.

Details

Management Decision, vol. 57 no. 2
Type: Research Article
ISSN: 0025-1747


Article
Publication date: 29 November 2019

A. George Assaf and Mike G. Tsionas


Abstract

Purpose

This paper aims to present several Bayesian specification tests for both in- and out-of-sample situations.

Design/methodology/approach

The authors focus on the Bayesian equivalents of the frequentist approach for testing heteroskedasticity, autocorrelation and functional form specification. For out-of-sample diagnostics, the authors consider several tests to evaluate the predictive ability of the model.

Findings

The authors demonstrate the performance of these tests using an application on the relationship between price and occupancy rate from the hotel industry. For purposes of comparison, the authors also provide evidence from traditional frequentist tests.

Research limitations/implications

There certainly exist other issues and diagnostic tests that are not covered in this paper. The issues that are addressed, however, are critically important and can be applied to most modeling situations.

Originality/value

With the increased use of the Bayesian approach in various modeling contexts, this paper serves as an important guide for diagnostic testing in Bayesian analysis. Diagnostic analysis is essential and should always accompany the estimation of regression models.

Details

International Journal of Contemporary Hospitality Management, vol. 32 no. 4
Type: Research Article
ISSN: 0959-6119


Article
Publication date: 28 September 2021

Olga Filippova, Jeremy Gabe and Michael Rehm


Abstract

Purpose

Automated valuation models (AVMs) are statistical asset pricing models omnipresent in residential real estate markets, where they inform property tax assessment, mortgage underwriting and marketing. Use of these asset pricing models outside of residential real estate is rare. The purpose of the paper is to explore key characteristics of commercial office lease contracts and test an application in estimating office market rental prices using an AVM.

Design/methodology/approach

The authors apply a semi-log ordinary least squares hedonic regression approach to estimate either contract rent or the total costs of occupancy (TOC) ("grossed up" rent). Furthermore, the authors adopt a training/test split of the observed leasing data to evaluate the accuracy of using these pricing models for prediction: 80% of the sample is randomly selected to train the AVM and 20% is held back to test accuracy out of sample. A naive prediction model is used to establish prediction-accuracy benchmarks for the AVM on the out-of-sample test data. To evaluate the performance of the AVM, the authors use a Monte Carlo simulation to run the selection process 100 times and calculate the test dataset's mean error (ME), mean absolute error (MAE), mean absolute percentage error (MAPE), median absolute percentage error (MdAPE), coefficient of dispersion (COD) and the training model's r-squared statistic (R2) for each run.
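The accuracy measures listed above can be sketched as follows. The rents and predictions are simulated placeholders, and the COD formula shown is the standard assessment-ratio version, which may differ in detail from the authors' implementation:

```python
import numpy as np

def accuracy_stats(actual, predicted):
    """Out-of-sample accuracy measures for an AVM test set."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    err = predicted - actual
    pct = err / actual
    ratio = predicted / actual
    med = np.median(ratio)
    return {
        "ME": err.mean(),                       # mean error (bias)
        "MAE": np.abs(err).mean(),              # mean absolute error
        "MAPE": 100 * np.abs(pct).mean(),       # mean absolute % error
        "MdAPE": 100 * np.median(np.abs(pct)),  # median absolute % error
        # coefficient of dispersion: average absolute deviation of the
        # predicted/actual ratio from its median, in per cent
        "COD": 100 * np.abs(ratio - med).mean() / med,
    }

# One run of the 80/20 split; the paper repeats this 100 times (Monte Carlo).
rng = np.random.default_rng(1)
rent = rng.uniform(400, 1200, 200)           # hypothetical contract rents
fitted = rent * rng.normal(1.0, 0.08, 200)   # hypothetical AVM predictions
idx = rng.permutation(200)
test_stats = accuracy_stats(rent[idx[160:]], fitted[idx[160:]])
```

Repeating the split-fit-score loop and averaging each statistic across runs gives the Monte Carlo figures the paper reports.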

Findings

Using a sample of office lease transactions in Sydney CBD (Central Business District), Australia, the authors demonstrate accuracy statistics that are comparable to those used in residential valuation and outperform a naive model.

Originality/value

AVMs in an office leasing context have significant implications for practice. First, an AVM can act as an impartial arbiter in market rent review disputes. Second, the technology may enable frequent market rent reviews as a lease negotiation strategy that allows tenants and property owners to share market risk by limiting concerns over high costs and adversarial litigation that can emerge in a market rent review dispute.

Details

Property Management, vol. 40 no. 2
Type: Research Article
ISSN: 0263-7472


Article
Publication date: 29 April 2021

Saba Haider, Mian Sajid Nazir, Alfredo Jiménez and Muhammad Ali Jibran Qamar


Abstract

Purpose

In this paper the authors examine evidence on exchange rate predictability through commodity prices for a set of countries categorized as commodity import- and export-dependent developed and emerging countries.

Design/methodology/approach

The authors perform in-sample and out-of-sample forecasting analyses. Commodity prices are modeled to predict the exchange rate, in order to analyze whether this commodity price model can outperform the random walk model (RWM). The two models are compared and evaluated in terms of exchange rate forecasting ability, based on the mean squared forecast error and the Theil inequality coefficient.
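The two forecast-evaluation measures can be sketched as follows. The Theil U1 variant is shown here, which may differ from the exact version the authors used, and the exchange-rate series is simulated:

```python
import numpy as np

def msfe(actual, forecast):
    """Mean squared forecast error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((f - a) ** 2)

def theil_u(actual, forecast):
    """Theil's U1 inequality coefficient: 0 is a perfect forecast, 1 the worst."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((f - a) ** 2)) / (
        np.sqrt(np.mean(f ** 2)) + np.sqrt(np.mean(a ** 2)))

# Random-walk benchmark: the forecast for tomorrow is simply today's rate.
rng = np.random.default_rng(2)
rate = 1.5 + np.cumsum(rng.normal(0, 0.01, 300))   # simulated exchange rate
rw_forecast = rate[:-1]
u_rw = theil_u(rate[1:], rw_forecast)
```

A candidate model beats the RWM when its MSFE (or U) on the same out-of-sample span is smaller than the random-walk benchmark's.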

Findings

The authors find that primary commodity prices better predict exchange rates in almost two-thirds of export-dependent developed countries. In contrast, the RWM shows superior performance in the majority of export-dependent emerging, import-dependent emerging and developed countries.

Originality/value

Previous studies mainly examined the exchange rates of commodity export-dependent developed countries. This study examines both developed and emerging countries and identifies those for which changes in the prices of export commodities (for commodity export-dependent countries) or of major imported commodities (for import-dependent countries) can significantly predict the exchange rate.

Details

International Journal of Emerging Markets, vol. 18 no. 1
Type: Research Article
ISSN: 1746-8809


Article
Publication date: 8 February 2016

Petros Messis and Achilleas Zapranis


Abstract

Purpose

The purpose of this paper is to examine the predictive ability of different well-known models for capturing time variation in betas against a novel approach where the beta coefficient is treated as a function of market return.

Design/methodology/approach

Different GARCH models, the Kalman filter algorithm and the Schwert and Seguin model are used against our novel approach. The mean square error, the mean absolute error and the Diebold and Mariano test statistic constitute the measures of forecast accuracy. All models are tested over nine consecutive years and three different samples.
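The Diebold and Mariano test statistic mentioned above can be sketched for one-step forecasts under squared-error loss. This is a minimal version: the small-sample correction and the HAC variance needed for multi-step horizons are omitted:

```python
import math
import numpy as np

def diebold_mariano(e1, e2):
    """One-step Diebold-Mariano statistic under squared-error loss.
    e1, e2 are the forecast-error series of two competing models;
    a positive statistic favours the second model (smaller losses)."""
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2
    dm = d.mean() / math.sqrt(d.var(ddof=1) / len(d))
    p_value = math.erfc(abs(dm) / math.sqrt(2))   # two-sided normal p-value
    return dm, p_value
```

Together with the MSE and MAE of each beta model's forecasts, the statistic indicates whether an accuracy difference between two models is significant rather than sampling noise.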

Findings

The results show substantial differences in predictive accuracy among the samples. The new approach of modelling systematic risk outperforms the rest of the models over the longer samples; in the smallest sample, the Kalman filter random walk model prevails. An examination of the parameters for the two groups of stocks with the best and worst accuracy results reveals significant variations. For these stocks, the assumption of iid returns is rejected and large differences appear in the diagnostic tests.

Originality/value

This study contributes to the literature in several ways. First, it examines the predictive accuracy of betas using different well-known models and introduces a novel approach. Second, after betas are constructed from the estimated models' parameters, they are used for out-of-sample rather than in-sample forecasts over nine consecutive years and three different samples. Third, a closer examination of the models' parameters can signal at an early stage the candidate models with the lowest expected forecasting errors. Finally, the study carries out diagnostic tests to examine whether the existence of iid normal returns is accompanied by better performance.

Details

Managerial Finance, vol. 42 no. 2
Type: Research Article
ISSN: 0307-4358


Article
Publication date: 13 November 2018

Rangga Handika and Dony Abdul Chalid


Abstract

Purpose

This paper aims to investigate whether the best statistical model also corresponds to the best empirical performance in the volatility modeling of financialized commodity markets.

Design/methodology/approach

The authors use various p and q values in Value-at-Risk (VaR) GARCH(p, q) estimation and perform backtesting at different confidence levels, different out-of-sample periods and different data frequencies for eight financialized commodities.
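The backtesting bookkeeping described above can be sketched as follows. The GARCH(p, q) estimation itself is omitted; this only compares realized VaR violations with the count expected at the chosen confidence level, and the example numbers are hypothetical:

```python
import numpy as np

def var_backtest(returns, var_forecasts, alpha=0.99):
    """Compare realized VaR violations with the expected count.
    var_forecasts holds predicted loss quantiles as positive numbers,
    e.g. one-day 99% VaR figures from a fitted GARCH(p, q) model."""
    r = np.asarray(returns, dtype=float)
    hits = r < -np.asarray(var_forecasts, dtype=float)  # loss beyond VaR
    return {"violations": int(hits.sum()),
            "expected": (1 - alpha) * len(r),
            "hit_rate": hits.mean()}
```

A well-specified model produces a violation count close to the expected one; repeating the check at several confidence levels, sample periods and data frequencies mirrors the paper's design.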

Findings

The authors find that the best-fitted GARCH(p,q) model tends to generate the best empirical performance for most financialized commodities, and the findings are consistent across different confidence levels and out-of-sample periods. The strong results hold for both the daily and weekly return series, whereas the monthly series yields only weak results.

Research limitations/implications

The research method is limited to the GARCH(p,q) model and the eight financialized commodities discussed.

Practical implications

The authors conclude that the log-likelihood statistical criterion remains a reliable basis for choosing a GARCH(p,q) model in financialized commodity markets for daily and weekly forecasting horizons.

Social implications

The log-likelihood statistical criterion has strong predictive power for high-frequency (daily and weekly) GARCH series. This finding justifies the importance of using statistical criteria in financial market modeling.

Originality/value

First, this paper investigates whether the best statistical model corresponds to the best empirical performance. Second, this paper provides an indirect test for evaluating the accuracy of volatility modeling by using the VaR approach.

Details

Review of Accounting and Finance, vol. 17 no. 4
Type: Research Article
ISSN: 1475-7702


Content available
Article
Publication date: 5 April 2022

Wenhui Li, Anthony Loviscek and Miki Ortiz-Eggenberg


Abstract

Purpose

In the search for alternative income-generating assets, the paper addresses the following question, one that the literature has yet to answer: what is a reasonable allocation, if any, to asset-backed securities within a 60–40% stock-bond balanced portfolio of mutual funds?

Design/methodology/approach

The authors apply the Black–Litterman model of Modern Portfolio Theory to test the efficacy of adding asset-backed securities to the classic 60–40% stock-bond portfolio of mutual funds. The authors use out-of-sample tests of one, three, five, and ten years to determine a reasonable asset allocation. The data are monthly and range from January 2000 through September 2021.

Findings

The statistical evidence indicates a modest reward-risk added value from the addition of asset-backed securities, as measured by the Sharpe “reward-to-variability” ratio, in holding periods of three, five, and ten years. Based on the findings, the authors conclude that a reasonable asset allocation for income-seeking, risk-averse investors who follow the classic 60%–40% stock-bond allocation is 8%–10%.
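The Sharpe ratio comparison underlying these findings can be sketched as follows. All return series below are simulated placeholders, not the funds the authors analyzed, and the 9% ABS sleeve is only an example within the reported 8%–10% range:

```python
import numpy as np

def sharpe_ratio(monthly_returns, monthly_rf=0.0):
    """Annualized Sharpe 'reward-to-variability' ratio from monthly returns."""
    excess = np.asarray(monthly_returns, float) - monthly_rf
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

# Simulated placeholder series (120 months = one ten-year holding period).
rng = np.random.default_rng(3)
stocks = rng.normal(0.008, 0.040, 120)
bonds = rng.normal(0.003, 0.010, 120)
abs_fund = rng.normal(0.004, 0.012, 120)

classic = 0.60 * stocks + 0.40 * bonds                     # 60/40 benchmark
with_abs = 0.60 * stocks + 0.31 * bonds + 0.09 * abs_fund  # 9% ABS sleeve
sr_classic, sr_abs = sharpe_ratio(classic), sharpe_ratio(with_abs)
```

Comparing the two ratios over each out-of-sample holding period is the reward-risk test the paper applies; the allocation decision rests on which sleeve size improves the ratio consistently.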

Research limitations/implications

The findings apply to a stock-bond balanced portfolio of mutual funds. Other fund combinations could produce different results.

Practical implications

Investors and money managers can use the findings to improve portfolio performance.

Originality/value

For investors seeking higher income-generating securities in the current record-low interest rate environment, the authors determine a reasonable asset allocation range on asset-backed securities. This study is the first to provide such direction to these investors.

Details

Managerial Finance, vol. 48 no. 6
Type: Research Article
ISSN: 0307-4358


Article
Publication date: 1 April 2001

Clarence N.W. Tan and Herlina Dihardjo



Abstract

Outlines previous research on company failure prediction and discusses some of the methodological issues involved. Extends an earlier study (Tan 1997) using artificial neural networks (ANN) to predict financial distress in Australian credit unions by extending the forecast period of the models, presents the results and compares them with probit model results. Finds the ANN models generally at least as good as the probit, although both types improved their accuracy rates (for Type I and Type II errors) when early warning signals were included. Believes ANN “is a promising technique” although more research is required, and suggests some avenues for this.

Details

Managerial Finance, vol. 27 no. 4
Type: Research Article
ISSN: 0307-4358


Article
Publication date: 28 February 2023

Isabel Abinzano, Harold Bonilla and Luis Muga


Abstract

Purpose

Using data from business reorganization processes under Act 1116 of 2006 in Colombia during the period 2008 to 2018, a model for predicting the success of these processes is proposed. The paper validates the model in two different periods: the first, in 2019, characterized by stability, and the second, in 2020, marked by the uncertainty generated by the COVID-19 pandemic.

Design/methodology/approach

Five financial variables comprising indebtedness, profitability and solvency proxies, together with firm age, macroeconomic conditions, and industry and regional dummies, are used as independent variables in a logit model to predict the failure of reorganization processes. In addition, an out-of-sample analysis is carried out for the 2019 and 2020 periods.
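The logit estimation described above can be sketched with a minimal gradient-ascent fit. The predictors and outcomes below are simulated stand-ins for the paper's indebtedness, profitability and solvency proxies, not its actual data:

```python
import numpy as np

def fit_logit(X, y, iters=2000, lr=0.5):
    """Logistic regression fitted by plain gradient ascent (illustrative)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend an intercept
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)     # log-likelihood gradient
    return beta

def predict_proba(X, beta):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ beta))

# Simulated stand-ins for the paper's predictors and failure outcomes.
rng = np.random.default_rng(4)
X = rng.normal(size=(400, 5))
true_beta = np.array([1.0, -0.8, 0.5, 0.0, 0.3])
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta_hat = fit_logit(X, y)
accuracy = ((predict_proba(X, beta_hat) > 0.5) == y).mean()
```

Scoring the fitted model on observations from a later period, as the authors do for 2019 and 2020, is the out-of-sample analysis; here only the in-sample accuracy is computed.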

Findings

The results show a high predictive power of the estimated model. Even the results of the out-of-sample analysis are satisfactory during the unstable pandemic period. However, industry and regional effects add no predictive power for 2020, probably due to subsidies for economic activity and the relaxation of insolvency legislation in Colombia during that year.

Originality/value

In a context of global reform in insolvency laws, the consistent predictive ability shown by the model, even during periods of uncertainty, can guide regulatory changes to ensure the survival of companies entering into reorganization processes, and reduce the observed high failure rate.

Details

The Journal of Risk Finance, vol. 24 no. 3
Type: Research Article
ISSN: 1526-5943


Article
Publication date: 7 March 2016

Marian Alexander Dietzel


Abstract

Purpose

Recent research has found significant relationships between internet search volume and real estate markets. This paper aims to examine whether Google search volume data can serve as a leading sentiment indicator and are able to predict turning points in the US housing market. One of the main objectives is to find a model based on internet search interest that generates reliable real-time forecasts.

Design/methodology/approach

Starting from seven individual real-estate-related Google search volume indices, a multivariate probit model is derived by following a selection procedure. The best model is then tested for its in- and out-of-sample forecasting ability.
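The probit estimation itself is beyond a short sketch, but the evaluation of such binary direction forecasts can be illustrated. The function and data below are hypothetical, showing only the hit-rate bookkeeping behind the in- and out-of-sample accuracy figures:

```python
import numpy as np

def directional_accuracy(actual_changes, predicted_up_prob, cutoff=0.5):
    """Share of periods in which the forecast direction matched the
    realized sign of the monthly price change."""
    pred_up = np.asarray(predicted_up_prob, float) >= cutoff
    actual_up = np.asarray(actual_changes, float) > 0
    return (pred_up == actual_up).mean()
```

Computing this share on held-out months, rather than the estimation sample, gives the out-of-sample forecasting ability the paper tests.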

Findings

The results show that the model predicts the direction of monthly price changes correctly in over 89 per cent of cases in-sample and in just above 88 per cent of one- to four-month out-of-sample forecasts. The out-of-sample tests demonstrate that although the Google model is not always accurate in terms of timing, its signals are always correct when it comes to foreseeing an upcoming turning point. Thus, as signals are generated up to six months early, it functions as a satisfactory and timely indicator of future house price changes.

Practical implications

The results suggest that Google data can serve as an early market indicator and that the application of this data set in binary forecasting models can produce useful predictions of changes in upward and downward movements of US house prices, as measured by the Case–Shiller 20-City House Price Index. This implies that real estate forecasters, economists and policymakers should consider incorporating this free and very current data set into their market forecasts or when performing plausibility checks for future investment decisions.

Originality/value

This is the first paper to apply Google search query data as a sentiment indicator in binary forecasting models to predict turning points in the housing market.

Details

International Journal of Housing Markets and Analysis, vol. 9 no. 1
Type: Research Article
ISSN: 1753-8270

