Search results
Abstract
The increasing availability of financial data has opened new opportunities for quantitative modeling. It has also exposed limitations of existing frameworks, such as the low accuracy of simplified analytical models and the insufficient interpretability and stability of adaptive data-driven algorithms. I make the case that boosting (a novel ensemble learning technique) can serve as a simple and robust framework for combining the best features of analytical and data-driven models. Boosting-based frameworks for typical financial and econometric applications are outlined. The implementation of a standard boosting procedure is illustrated in the context of symbolic volatility forecasting for the IBM stock time series. It is shown that a boosted collection of generalized autoregressive conditional heteroskedastic (GARCH)-type models is systematically more accurate than both the best single model in the collection and the widely used GARCH(1,1) model.
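For readers unfamiliar with the baseline model, the GARCH(1,1) conditional-variance recursion that the chapter benchmarks against can be sketched in a few lines. The parameter values below are illustrative defaults, not values fitted to the IBM series:

```python
# Minimal GARCH(1,1) recursion: sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1].
# Parameter values are illustrative, not fitted to any real return series.
def garch11_variance(returns, omega=0.05, alpha=0.10, beta=0.85):
    """Return the conditional-variance series for a list of returns."""
    # Initialize at the unconditional variance omega / (1 - alpha - beta).
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2
```

A boosted collection, as the abstract describes, would combine the forecasts of several such models rather than relying on any single parameterization.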
Vincent A. Schmidt and Jane M. Binner
Abstract
Divisia component data are used to train an Aggregate Feedforward Neural Network (AFFNN), a general-purpose connectionist system designed to assist with data mining activities. The neural network is able to learn the money-price relationship, defined as the relationship between the rate of growth of the money supply and inflation. Learned relationships are expressed as an automatically generated series of human-readable and machine-executable rules, shown to meaningfully and accurately describe inflation in terms of the original values of the Divisia component dataset.
Nathan Lael Joseph, David S. Brée and Efstathios Kalyvas
Abstract
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) on high-frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is then undertaken by the ANNs to forecast daily excess stock returns. No ANN architecture was able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when drawing inferences from ANN results based on high-frequency data.
D. K. Malhotra, Kunal Malhotra and Rashmi Malhotra
Abstract
Traditionally, loan officers use credit scoring models to complement judgmental methods when classifying consumer loan applications. This study explores the use of decision trees, AdaBoost, and support vector machines (SVMs) to identify potentially bad loans. Our results show that AdaBoost improves on both simple decision trees and SVM models in predicting good and bad credit clients. To cross-validate our results, we use k-fold cross-validation.
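The boosting loop the study applies can be illustrated with a toy AdaBoost on one-dimensional decision stumps (the study uses full decision trees and real credit data; this is a schematic sketch only, not the authors' implementation):

```python
import math

# Toy AdaBoost: each round fits the best weighted decision stump, then
# reweights the sample so misclassified points matter more next round.
def stump_predict(x, threshold, polarity):
    return polarity if x > threshold else -polarity

def adaboost_train(X, y, n_rounds=5):
    n = len(X)
    w = [1.0 / n] * n                      # uniform sample weights to start
    ensemble = []
    for _ in range(n_rounds):
        # Exhaustively pick the stump with the lowest weighted error.
        best = None
        for t in sorted(set(X)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(xi, t, pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        alpha = 0.5 * math.log((1.0 - err) / max(err, 1e-10))
        # Reweight: misclassified points gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, t, pol))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, t, pol))
    return ensemble

def adaboost_predict(ensemble, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

In a credit-scoring setting, x would be a feature vector for an applicant and the stumps would be replaced by trees, but the weight-update logic is the same.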
Vincent A. Schmidt and Jane M. Binner
Abstract
This chapter introduces a mechanism for generating a series of rules that characterize the money-price relationship for the United States, defined as the relationship between the rate of growth of the money supply and inflation. Monetary Services Indicator (MSI) component data are used to train a selection of candidate feedforward neural networks. The selected network is mined for rules, expressed in human-readable and machine-executable form. Rule and network accuracy are compared, and expert commentary is given on the readability and reliability of the extracted rule set. The ultimate goal of this research is to produce rules that meaningfully and accurately describe inflation in terms of the MSI component dataset. (Paper cleared for public release: AFRL/WS–07–0848.)
Ying L. Becker, Lin Guo and Odilbek Nurmamatov
Abstract
Value at risk (VaR) and expected shortfall (ES) are popular market risk measurements. The former is not coherent but robust, whereas the latter is coherent but less interpretable, only conditionally backtestable, and less robust. In this chapter, we compare an innovative artificial neural network (ANN) model with a time series model in the context of forecasting VaR and ES for the univariate time series of four asset classes: a US large-capitalization equity index, a European large-cap equity index, a US bond index, and a US dollar versus euro exchange rate price index, for the period January 4, 1999, to December 31, 2018. In general, the ANN model has more favorable backtesting results than the autoregressive moving average–generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) time series model. In terms of forecasting accuracy, the ANN model produces far fewer in-sample and out-of-sample exceptions than the ARMA-GARCH model.
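To make the two risk measures concrete, a back-of-the-envelope empirical estimator shows what the backtested "exceptions" refer to: returns falling beyond the forecast VaR. This is a definitional sketch, not the chapter's ANN or ARMA-GARCH models:

```python
# Empirical (historical-simulation) VaR and ES at a given tail level.
# Both are reported as positive loss magnitudes.
def historical_var_es(returns, level=0.05):
    ordered = sorted(returns)               # worst returns first
    k = max(1, int(len(ordered) * level))   # number of tail observations
    var = -ordered[k - 1]                   # VaR: the k-th worst return
    es = -sum(ordered[:k]) / k              # ES: mean loss at and beyond VaR
    return var, es
```

A backtesting exception is then any realized return below -VaR; a well-calibrated 5% VaR should be exceeded on roughly 5% of days.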
Abstract
Across disciplines, researchers and practitioners employ decision tree ensembles such as random forests and XGBoost with great success. What explains their popularity? This chapter showcases how marketing scholars and decision-makers can harness the power of decision tree ensembles for academic and practical applications. The author discusses the origin of decision tree ensembles, explains their theoretical underpinnings, and illustrates them empirically using a real-world telemarketing case, with the objective of predicting customer conversions. Readers unfamiliar with decision tree ensembles will learn to appreciate them for their versatility, competitive accuracy, ease of application, and computational efficiency, and will gain a comprehensive understanding of why decision tree ensembles belong in every data scientist's methodological toolbox.
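The core idea behind random forests, bootstrap aggregation (bagging), can be sketched minimally: resample the data with replacement, fit a simple "tree" to each resample, and predict by majority vote. One-dimensional threshold stumps stand in for real decision trees here; random forests additionally subsample features, and XGBoost boosts rather than bags, so this is only the shared starting point, not the telemarketing application:

```python
import random

def train_stump(X, y):
    """Return the threshold t minimizing errors of 'predict 1 if x > t'."""
    best_t, best_err = None, None
    for t in sorted(set(X)):
        err = sum(1 for xi, yi in zip(X, y) if (1 if xi > t else -1) != yi)
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(X, y, x_new, n_trees=25, seed=0):
    """Bagging: majority vote over stumps fit to bootstrap resamples."""
    rng = random.Random(seed)
    n = len(X)
    votes = 0
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        t = train_stump([X[i] for i in idx], [y[i] for i in idx])
        votes += 1 if x_new > t else -1
    return 1 if votes >= 0 else -1
```

Averaging many high-variance trees fit to perturbed samples is what gives these ensembles their stability relative to a single tree.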
Abstract
The authors develop a novel forecast combination approach based on the order statistics of individual predictability from panel data forecasts. To this end, the authors define the notion of forecast depth, which provides a ranking among different forecasts based on their normalized forecast errors during the training period. The forecast combination takes the form of a depth-weighted trimmed mean. The authors derive the limiting distribution of the depth-weighted forecast combination, from which prediction intervals can readily be constructed. Using this novel forecast combination, the authors predict the national level of new COVID-19 cases in the United States and compare it with other approaches, including the ensemble forecast from the Centers for Disease Control and Prevention (CDC). The authors find that the depth-weighted forecast combination yields more accurate and robust predictions than other popular forecast combinations and produces much narrower prediction intervals.
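The overall shape of a depth-weighted trimmed mean can be sketched schematically: rank the models by training-period error, trim the worst fraction, and average the survivors with weights that decay in rank. The authors' depth weights are derived formally from normalized forecast errors; the inverse-rank weights below are only an illustrative stand-in:

```python
# Schematic rank-based trimmed combination (NOT the authors' exact weights:
# their forecast depth comes from normalized forecast errors; inverse-rank
# weights here are an illustrative placeholder).
def depth_weighted_combination(forecasts, train_errors, trim=0.2):
    n = len(forecasts)
    order = sorted(range(n), key=lambda i: train_errors[i])  # best model first
    keep = order[: max(1, round(n * (1.0 - trim)))]          # trim the worst
    weights = [1.0 / (rank + 1) for rank in range(len(keep))]
    total = sum(weights)
    return sum(w * forecasts[i] for w, i in zip(weights, keep)) / total
```

Trimming removes unstable forecasters outright, while the decaying weights let the historically best models dominate the combination.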