Search results
Wenhui Li, Anthony Loviscek and Miki Ortiz-Eggenberg
Abstract
Purpose
In the search for alternative income-generating assets, the paper addresses the following question, one that the literature has yet to answer: what is a reasonable allocation, if any, to asset-backed securities within a 60–40% stock-bond balanced portfolio of mutual funds?
Design/methodology/approach
The authors apply the Black–Litterman model of Modern Portfolio Theory to test the efficacy of adding asset-backed securities to the classic 60–40% stock-bond portfolio of mutual funds. The authors use out-of-sample tests of one, three, five, and ten years to determine a reasonable asset allocation. The data are monthly and range from January 2000 through September 2021.
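A minimal sketch of the Black–Litterman step at the core of this design, assuming a hypothetical three-asset universe (stocks, bonds, asset-backed securities); the covariance matrix, the single view, and the parameter values below are invented for illustration and are not the study's inputs:

```python
import numpy as np

# Black-Litterman posterior-return step on illustrative inputs only.
# Assets: stocks, bonds, asset-backed securities (ABS); all numbers hypothetical.
w_mkt = np.array([0.60, 0.40, 0.00])             # starting 60/40 with no ABS
Sigma = np.array([[0.0400, 0.0040, 0.0060],      # annualized covariance (assumed)
                  [0.0040, 0.0025, 0.0015],
                  [0.0060, 0.0015, 0.0090]])
delta, tau = 2.5, 0.05                           # risk aversion, uncertainty scalar
Pi = delta * Sigma @ w_mkt                       # implied equilibrium excess returns

# One investor view (hypothetical): ABS outperform bonds by 1% per year.
P = np.array([[0.0, -1.0, 1.0]])
Q = np.array([0.01])
Omega = tau * P @ Sigma @ P.T                    # a common choice of view uncertainty

# Posterior expected returns (Black-Litterman master formula).
inv = np.linalg.inv
mu_bl = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (
        inv(tau * Sigma) @ Pi + P.T @ inv(Omega) @ Q)

w_opt = inv(delta * Sigma) @ mu_bl               # unconstrained mean-variance weights
print("posterior returns:", mu_bl.round(4))
print("implied weights  :", (w_opt / w_opt.sum()).round(3))
```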
Findings
The statistical evidence indicates a modest reward-risk added value from the addition of asset-backed securities, as measured by the Sharpe “reward-to-variability” ratio, in holding periods of three, five, and ten years. Based on the findings, the authors conclude that a reasonable asset allocation for income-seeking, risk-averse investors who follow the classic 60%–40% stock-bond allocation is 8%–10%.
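The Sharpe "reward-to-variability" ratio on which this conclusion rests reduces to a one-line computation on monthly data; a minimal sketch on a synthetic return series:

```python
import numpy as np

# Annualized Sharpe ratio from monthly returns: the criterion used to compare
# the 60/40 and 60/40-plus-ABS portfolios. The series is synthetic; only the
# formula is the point.
rng = np.random.default_rng(0)
monthly_returns = rng.normal(0.006, 0.025, 120)   # 10 years of monthly returns
monthly_rf = 0.001                                # monthly risk-free rate (assumed)

excess = monthly_returns - monthly_rf
sharpe_annual = excess.mean() / excess.std(ddof=1) * np.sqrt(12)
print(f"annualized Sharpe ratio: {sharpe_annual:.2f}")
```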
Research limitations/implications
The findings apply to a stock-bond balanced portfolio of mutual funds. Other fund combinations could produce different results.
Practical implications
Investors and money managers can use the findings to improve portfolio performance.
Originality/value
For investors seeking higher income-generating securities in the current record-low interest rate environment, the authors determine a reasonable asset allocation range on asset-backed securities. This study is the first to provide such direction to these investors.
Joseph F. Hair, Pratyush N. Sharma, Marko Sarstedt, Christian M. Ringle and Benjamin D. Liengaard
Abstract
Purpose
The purpose of this paper is to assess the appropriateness of equal weights estimation (sumscores) and the application of the composite equivalence index (CEI) vis-à-vis differentiated indicator weights produced by partial least squares structural equation modeling (PLS-SEM).
Design/methodology/approach
The authors rely on prior literature as well as empirical illustrations and a simulation study to assess the efficacy of equal weights estimation and the CEI.
Findings
The results show that the CEI lacks discriminatory power, and its use can lead to major differences in structural model estimates, conceals measurement model issues and almost always leads to inferior out-of-sample predictive accuracy compared to differentiated weights produced by PLS-SEM.
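A rough sketch of the out-of-sample comparison, with ordinary least squares standing in for PLS-SEM's differentiated weights (an intentional simplification) and entirely synthetic data:

```python
import numpy as np

# Equal weights (sumscores) vs. differentiated indicator weights, compared on
# out-of-sample predictive accuracy. OLS stands in for PLS-SEM here purely
# for illustration.
rng = np.random.default_rng(1)
n, k = 400, 4
X = rng.normal(size=(n, k))                     # four standardized indicators
true_w = np.array([0.7, 0.5, 0.1, 0.05])        # unequal "true" weights
y = X @ true_w + rng.normal(scale=0.5, size=n)  # outcome driven by the composite

X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Sumscore composite: every indicator weighted equally.
coef = np.polyfit(X_tr.mean(axis=1), y_tr, 1)   # regress y on the sumscore
pred_eq = np.polyval(coef, X_te.mean(axis=1))

# Differentiated weights estimated from the training data.
w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred_dw = X_te @ w_hat

rmse = lambda p: np.sqrt(np.mean((y_te - p) ** 2))
print(f"out-of-sample RMSE, equal weights       : {rmse(pred_eq):.3f}")
print(f"out-of-sample RMSE, differentiated wts. : {rmse(pred_dw):.3f}")
```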
Research limitations/implications
In light of its manifold conceptual and empirical limitations, the authors advise against the use of the CEI. Its adoption and the routine use of equal weights estimation could adversely affect the validity of measurement and structural model results and understate structural model predictive accuracy. Although this study shows that the CEI is an unsuitable metric to decide between equal weights and differentiated weights, it does not propose another means for such a comparison.
Practical implications
The results suggest that researchers and practitioners should prefer differentiated indicator weights such as those produced by PLS-SEM over equal weights.
Originality/value
To the best of the authors’ knowledge, this study is the first to provide a comprehensive assessment of the CEI’s usefulness. The results provide guidance for researchers considering using equal indicator weights instead of PLS-SEM-based weighted indicators.
Jun-Hwa Cheah, Wolfgang Kersten, Christian M. Ringle and Carl Wallenburg
D.M.K.N. Seneviratna and R.M. Kapila Tharanga Rathnayaka
Abstract
Purpose
The Coronavirus disease (COVID-19) is one of the major pandemic diseases, caused by a newly discovered virus that directly affects the human respiratory system. Because of the steadily increasing magnitude of the pandemic, it has been creating emergencies and critical issues in healthcare systems around the world. Predicting the exact number of daily reported new COVID-19 cases, however, is one of the most serious problems governments face today. The purpose of this study is therefore to propose a novel hybrid grey exponential smoothing model (HGESM) for properly predicting the transmission dynamics of the COVID-19 outbreak.
Design/methodology/approach
Because of the complications associated with traditional time series approaches, the proposed HGESM is designed to handle exponential data patterns in multidisciplinary systems. The methodology consists of two parts: a double exponential smoothing approach and a grey exponential smoothing modeling approach. The empirical analysis was carried out on the third outbreak of COVID-19 cases in Sri Lanka, from 1 March 2021 to 15 June 2021. Of the 90 daily observations, the first 85% of daily confirmed cases were used for training and the remaining 15% for testing.
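The abstract names the two components but not their combination rule, so the sketch below shows only the building blocks, each producing a one-step-ahead forecast on illustrative case counts: Holt's double exponential smoothing and a GM(1,1) grey model.

```python
import numpy as np

# The two components the HGESM combines; the combination rule itself is not
# given in the abstract. Case counts are illustrative.
cases = np.array([105, 120, 151, 180, 234, 290, 361, 447], dtype=float)

def holt(x, alpha=0.5, beta=0.3):
    """One-step-ahead forecast from Holt's double exponential smoothing."""
    level, trend = x[0], x[1] - x[0]
    for obs in x[1:]:
        level, prev = alpha * obs + (1 - alpha) * (level + trend), level
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend

def gm11(x):
    """One-step-ahead forecast from a GM(1,1) grey model."""
    ago = np.cumsum(x)                               # accumulated (AGO) series
    z = 0.5 * (ago[1:] + ago[:-1])                   # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # grey parameters
    n = len(x)
    return (x[0] - b / a) * np.exp(-a * n) * (1 - np.exp(a))

print(f"Holt forecast   : {holt(cases):.1f}")
print(f"GM(1,1) forecast: {gm11(cases):.1f}")
```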
Findings
The proposed HGESM is highly accurate (forecast error below 10%), with the lowest root mean square error values in one-step-ahead forecasting. Moreover, mean absolute deviation testing confirmed that the proposed model yields significantly better results than other time-series predictions on these limited samples.
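The accuracy criteria cited here are standard; a small worked example on hypothetical actual and forecast values:

```python
import numpy as np

# The three accuracy criteria cited in the findings, on hypothetical values.
# A MAPE under 10% is conventionally read as "highly accurate" forecasting.
actual   = np.array([361.0, 447.0, 512.0, 598.0])
forecast = np.array([350.0, 460.0, 500.0, 575.0])

err  = actual - forecast
rmse = np.sqrt(np.mean(err ** 2))             # root mean square error
mape = np.mean(np.abs(err / actual)) * 100    # mean absolute percentage error
mad  = np.mean(np.abs(err))                   # mean absolute deviation
print(f"RMSE {rmse:.1f}  MAPE {mape:.1f}%  MAD {mad:.1f}")
```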
Originality/value
The findings suggest that the proposed HGESM is more suitable and effective for short-term forecasting of time series with an exponential trend.
Abstract
Purpose
This paper aims to test three parametric models in pricing and hedging higher-order moment swaps. Using vanilla option prices from the volatility surface of the Euro Stoxx 50 Index, the paper shows that the pricing accuracy of these models is very satisfactory under four different pricing error functions. The result is that taking a position in a third moment swap considerably improves the performance of the standard hedge of a variance swap based on a static position in the log-contract and a dynamic trading strategy. The position in the third moment swap is taken by running a Monte Carlo simulation.
Design/methodology/approach
This paper undertook empirical tests of three parametric models. The aim of the paper is twofold: assess the pricing accuracy of these models and show how the classical hedge of the variance swap in terms of a position in a log-contract and a dynamic trading strategy can be significantly enhanced by using third-order moment swaps. The pricing accuracy was measured under four different pricing error functions. A Monte Carlo simulation was run to take a position in the third moment swap.
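A minimal sketch of the variance-swap pricing idea, assuming plain geometric Brownian motion rather than the Heston and Levy dynamics the paper actually tests: the fair strike is estimated both directly from Monte Carlo realized variance and via the classical log-contract replication.

```python
import numpy as np

# Fair variance-swap strike two ways: directly from Monte Carlo realized
# variance, and via the log-contract replication K = (2/T)(rT - E[log(S_T/S_0)]).
# Plain GBM is assumed here for brevity; the paper works under Heston/Levy.
rng = np.random.default_rng(42)
r, sigma, T = 0.01, 0.20, 1.0
steps, paths = 252, 10_000
dt = T / steps

z = rng.standard_normal((paths, steps))
log_ret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z

realized_var = (log_ret**2).sum(axis=1) / T        # annualized realized variance
strike_mc = realized_var.mean()                    # fair strike by simulation
log_ST = log_ret.sum(axis=1)                       # log(S_T / S_0) per path
strike_replic = 2.0 / T * (r * T - log_ST.mean())  # log-contract replication

print(f"MC strike           : {strike_mc:.4f}  (sigma^2 = {sigma**2:.4f})")
print(f"log-contract strike : {strike_replic:.4f}")
```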
Findings
The results of the paper are twofold: the pricing accuracy of the Heston (1993) model and that of two Levy models with stochastic time and stochastic volatility are satisfactory; taking a position in third-order moment swaps can significantly improve the performance of the standard hedge of a variance swap.
Research limitations/implications
The limitation is that these empirical tests are conducted on three existing parametric models. More critical insights might have been revealed had the tests been conducted on a brand-new derivatives pricing model.
Originality/value
This work is entirely original; it undertakes empirical tests of the pricing and hedging accuracy of three existing parametric models.
Abstract
Classification techniques have been applied to many applications in various fields of science. There are several ways of evaluating classification algorithms, and such metrics and their significance must be interpreted correctly when evaluating different learning algorithms. Most of these measures are scalar metrics; some are graphical methods. This paper introduces a detailed overview of classification assessment measures, with the aim of providing the basics of these measures and showing how they work, to serve as a comprehensive source for researchers interested in this field. The overview starts by highlighting the definition of the confusion matrix in binary and multi-class classification problems. Many classification measures are then explained in detail, and the influence of balanced and imbalanced data on each metric is presented. An illustrative example shows (1) how to calculate these measures in binary and multi-class classification problems and (2) the robustness of some measures against balanced and imbalanced data. Graphical measures such as receiver operating characteristic (ROC), precision-recall (PR), and detection error trade-off (DET) curves are also presented in detail. Additionally, different numerical examples demonstrate, step by step, the preprocessing steps for plotting ROC, PR, and DET curves.
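A compact illustration of the binary confusion matrix, the scalar metrics derived from it, and the (FPR, TPR) pairs that trace a ROC curve, on made-up labels and scores:

```python
import numpy as np

# Binary confusion matrix and derived scalar metrics, plus ROC points.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.4, 0.7, 0.8, 0.3, 0.6, 0.2, 0.1, 0.85, 0.5])
y_pred  = (y_score >= 0.5).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))

precision = tp / (tp + fp)
recall    = tp / (tp + fn)          # = TPR / sensitivity
f1        = 2 * precision * recall / (precision + recall)
print(f"precision {precision:.2f}  recall {recall:.2f}  F1 {f1:.2f}")

# ROC curve: sweep the threshold and collect (FPR, TPR) points.
for t in np.unique(y_score)[::-1]:
    pred = (y_score >= t).astype(int)
    tpr = np.sum(pred[y_true == 1]) / np.sum(y_true == 1)
    fpr = np.sum(pred[y_true == 0]) / np.sum(y_true == 0)
    print(f"threshold {t:.2f}: FPR {fpr:.2f}, TPR {tpr:.2f}")
```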
Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel
Abstract
Purpose
This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.
Design/methodology/approach
A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Logistic regression and random forest classification models applied to quantitative data (historical schedule transactions and internal data) enable testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
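A minimal sketch of the two classifier families named above, using scikit-learn on synthetic data; the generated columns stand in for the study's actual variables (product complexity, order life cycle, planning horizon and so on):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Logistic regression and random forest on synthetic data, as a stand-in for
# predicting schedule-inaccuracy events from planning variables.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"logistic regression accuracy: {logit.score(X_te, y_te):.2f}")
print(f"random forest accuracy      : {forest.score(X_te, y_te):.2f}")
# Feature importances hint at which variables carry predictive difference.
print("forest importances:", forest.feature_importances_.round(2))
```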
Findings
The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.
Practical implications
The findings provide guidelines for exploring and finding patterns in specific variables to reduce material delivery schedule inaccuracies and to provide input into predictive forecasting models.
Originality/value
The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects, and conceptualizing features that cause and moderate inaccuracies in relation to the decoupling management and complexity theory literature.
Stefan Colza Lee and William Eid Junior
Abstract
Purpose
This paper aims to identify a possible mismatch between the theory found in academic research and the practices of investment managers in Brazil.
Design/methodology/approach
The chosen approach is a field survey. This paper considers 78 survey responses from 274 asset management companies. Data obtained are analyzed using independence tests between two variables and multiple regressions.
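A minimal sketch of an independence test between two categorical survey variables; the contingency table below is invented for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Chi-square independence test between two categorical survey answers.
# This table is invented (rows: uses quantitative optimization yes/no;
# columns: firm size small/large).
table = np.array([[12, 26],
                  [25, 15]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p-value = {p:.3f}")
```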
Findings
The results show that most Brazilian investment managers have not adopted current best practices recommended by the financial academic literature and that there is a significant gap between academic recommendations and asset management practices. Modern portfolio theory is still more widely used than post-modern portfolio theory, and quantitative portfolio optimization is less often used than the simple rule of defining a maximum concentration limit for any single asset. Moreover, the results show that the normal distribution is used more often than parametric distributions with asymmetry and kurtosis to estimate value at risk, among other findings.
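One standard way to relax the normality assumption flagged here is the Cornish-Fisher expansion, which folds skewness and excess kurtosis into the VaR quantile; a sketch on synthetic fat-tailed returns (a generic illustration, not necessarily what any surveyed manager uses):

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

# Parametric 99% VaR under normality vs. a Cornish-Fisher quantile that folds
# in skewness and excess kurtosis. Returns are synthetic and fat-tailed.
rng = np.random.default_rng(7)
ret = rng.standard_t(df=4, size=2500) * 0.01      # fat-tailed daily returns

mu, sd = ret.mean(), ret.std(ddof=1)
S, K = skew(ret), kurtosis(ret)                   # kurtosis() is excess by default
z = norm.ppf(0.01)

z_cf = (z + (z**2 - 1) * S / 6
          + (z**3 - 3 * z) * K / 24
          - (2 * z**3 - 5 * z) * S**2 / 36)

print(f"99% VaR, normal        : {-(mu + sd * z):.4f}")
print(f"99% VaR, Cornish-Fisher: {-(mu + sd * z_cf):.4f}")
```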
Originality/value
This study may be considered a pioneering work in portfolio construction, risk management and performance evaluation in Brazil. Although academia in Brazil and abroad has thoroughly researched portfolio construction, risk management and performance evaluation, little is known about the actual implementation and utilization of this research by Brazilian practitioners.
Jaewon Choi and Jieun Lee
Abstract
The authors estimate systemic risk in the Korean economy using the econometric measures of commonality and connectedness applied to stock returns. To assess potential systemic risk concerns arising from the high concentration of the economy in large business groups and a few export-oriented sectors, the authors perform three levels of estimation using individual stocks, business groups, and industry returns. The results show that the measures perform well over the study’s sample period by indicating heightened levels of commonality and interconnectedness during crisis periods. In out-of-sample tests, the measures can predict future losses in the stock market during the crises. The authors also provide the recent readings of their measures at the market, chaebol, and industry levels. Although the measures indicate systemic risk is not a major concern in Korea, as they tend to be at the lowest level since 1998, there is an increasing trend in commonality and connectedness since 2017. Samsung and SK exhibit increasing degrees of commonality and connectedness, perhaps because of their heavy dependence on a few major member firms. Commonality in the finance industry has not subsided since the financial crisis, suggesting that systemic risk is still a concern in the banking sector.
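The abstract does not spell out the estimators, but one common proxy for return commonality is the share of variance absorbed by the first principal components of the return panel (an "absorption ratio"); the sketch below is a generic illustration on synthetic returns, not necessarily the authors' measure:

```python
import numpy as np

# Absorption-ratio-style commonality proxy: the share of total variance
# captured by the top principal components of a return panel.
rng = np.random.default_rng(3)
common = rng.normal(size=(500, 1))                    # one market-wide factor
returns = 0.7 * common + rng.normal(size=(500, 20))   # 500 days x 20 stocks

eigvals = np.linalg.eigvalsh(np.cov(returns, rowvar=False))[::-1]
absorption = eigvals[:2].sum() / eigvals.sum()        # top-2 eigenvalue share
print(f"absorption ratio (top 2 PCs): {absorption:.2f}")
```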