Search results

1 – 10 of 53
Book part
Publication date: 24 March 2006

Valeriy V. Gavrishchaka

Abstract

Increasing availability of financial data has opened new opportunities for quantitative modeling. It has also exposed limitations of existing frameworks, such as the low accuracy of simplified analytical models and the insufficient interpretability and stability of adaptive data-driven algorithms. I make the case that boosting, a novel ensemble-learning technique, can serve as a simple and robust framework for combining the best features of analytical and data-driven models. Boosting-based frameworks for typical financial and econometric applications are outlined. The implementation of a standard boosting procedure is illustrated in the context of symbolic volatility forecasting for the IBM stock time series. It is shown that the boosted collection of generalized autoregressive conditional heteroskedasticity (GARCH)-type models is systematically more accurate than both the best single model in the collection and the widely used GARCH(1,1) model.
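The boosting procedure described above can be sketched in a few lines. This is a minimal illustration, not the chapter's implementation: the return series is synthetic, and simple rolling-volatility threshold rules stand in for the collection of GARCH-type models; the symbolic target is whether tomorrow's absolute return exceeds its median.

```python
import math
import random

random.seed(0)

# Synthetic return series with shifting volatility regimes -- an invented
# stand-in for the IBM series analyzed in the chapter.
returns, vol = [], 0.01
for t in range(600):
    if t % 100 == 0:
        vol = random.choice([0.01, 0.03])
    returns.append(random.gauss(0, vol))

absr = [abs(r) for r in returns]
median_abs = sorted(absr)[len(absr) // 2]

W = 30  # burn-in so every rolling window is full
idx = list(range(W, len(returns) - 1))
# Symbolic target: +1 if tomorrow's absolute return is above the median
y = [1 if absr[t + 1] > median_abs else -1 for t in idx]

def rolling_std(w, t):
    win = returns[t - w:t]
    m = sum(win) / w
    return math.sqrt(sum((v - m) ** 2 for v in win) / w)

def make_learner(w):
    # "High recent volatility predicts high volatility tomorrow" rule;
    # a crude proxy for one GARCH-type model in the collection.
    stds = [rolling_std(w, t) for t in idx]
    thr = sorted(stds)[len(stds) // 2]
    return [1 if s > thr else -1 for s in stds]

learners = {w: make_learner(w) for w in (5, 10, 20, 30)}

# Standard AdaBoost over the fixed pool of weak learners.
weights = [1.0 / len(idx)] * len(idx)
ensemble = []
for _ in range(10):
    w_best, err_best, preds_best = None, None, None
    for w, preds in learners.items():
        err = sum(wt for wt, p, lab in zip(weights, preds, y) if p != lab)
        if err_best is None or err < err_best:
            w_best, err_best, preds_best = w, err, preds
    err_best = min(max(err_best, 1e-9), 1 - 1e-9)
    alpha = 0.5 * math.log((1 - err_best) / err_best)
    ensemble.append((alpha, preds_best))
    weights = [wt * math.exp(-alpha * p * lab)
               for wt, p, lab in zip(weights, preds_best, y)]
    s = sum(weights)
    weights = [wt / s for wt in weights]

def boosted_pred(i):
    return 1 if sum(a * p[i] for a, p in ensemble) > 0 else -1

acc = sum(boosted_pred(i) == y[i] for i in range(len(y))) / len(y)
print(f"in-sample accuracy of the boosted collection: {acc:.2f}")
```

The weighted vote over the pool is the key idea: each round re-weights the observations the current ensemble gets wrong, so successive members specialize on the hard cases.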

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-1-84950-388-4

Book part
Publication date: 12 November 2018

Fabian Mundt and Kenneth Horvath

Abstract

Relational thinking and spatial analyses have become highly relevant for higher education research. However, choices of research methods and specifically of statistical procedures do not often correspond to the epistemological underpinnings implied by relational perspectives. Against this background, this chapter illustrates the uses and challenges of geometric data analysis (GDA) for studying the complexities and dynamics of current spaces of higher education. GDA can be described as a set of statistical techniques that allow the identification, assessment and visualisation of complex relations in social science data. Using an investigation into the social topologies of first-year students as an example, we discuss the mathematical foundations, the step-by-step procedures of data analysis, the interpretation of results and strategies for integrating GDA into multimethod research designs. In sum, we argue that GDA not only entails a comprehensive set of statistical instruments that permit visual analysis of relational structures, but also enables the systematic integration of qualitative and quantitative methods, hence supporting the development of innovative and coherent research designs and analytical strategies.
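The core computational step behind GDA techniques such as multiple correspondence analysis can be sketched as a singular value decomposition of a standardized indicator matrix. The toy survey below is invented for illustration and is not the chapter's data:

```python
import numpy as np

# Toy survey of six first-year students, two categorical questions
# (invented data, purely illustrative).
answers = [
    ("urban", "full-time"), ("rural", "part-time"), ("urban", "full-time"),
    ("rural", "part-time"), ("urban", "part-time"), ("rural", "full-time"),
]
cats = sorted({c for row in answers for c in row})
Z = np.array([[1.0 if c in row else 0.0 for c in cats] for row in answers])

# Correspondence-analysis standardization of the indicator matrix
P = Z / Z.sum()
r = P.sum(axis=1, keepdims=True)  # row masses
c = P.sum(axis=0, keepdims=True)  # column masses
S = (P - r @ c) / np.sqrt(r @ c)
U, sing, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of the individuals on the first two axes:
# the "social space" one would go on to interpret and visualise.
coords = (U[:, :2] * sing[:2]) / np.sqrt(r)
print(np.round(coords, 3))
```

The resulting coordinates place individuals (and, symmetrically, answer categories) in a low-dimensional space whose axes are then interpreted substantively, which is the visual-relational analysis the abstract refers to.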

Details

Theory and Method in Higher Education Research
Type: Book
ISBN: 978-1-78769-277-0

Book part
Publication date: 1 December 2008

Zhen Wei

Abstract

Survival (default) data are frequently encountered in financial (especially credit risk), medical, educational, and other fields, where the “default” can be interpreted as a company's failure to fulfill its debt payments, the death of a patient in a medical study, or the inability to pass an educational test.

This paper introduces the basic ideas of Cox's original proportional hazards model and extends the model within a general framework of statistical data-mining procedures. By employing regularization, basis expansion, boosting, bagging, Markov chain Monte Carlo (MCMC) and many other tools, we effectively calibrate a large and flexible class of proportional hazards models.

The proposed methods have important applications in the setting of credit risk. For example, the model for the default correlation through regularization can be used to price credit basket products, and the frailty factor models can explain the contagion effects in the defaults of multiple firms in the credit market.
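The simplest instance of the regularized fits discussed above is a Cox partial likelihood with a ridge penalty. A minimal sketch on invented data (one covariate, no censoring, crude finite-difference optimization rather than the paper's machinery):

```python
import math
import random

random.seed(1)

# Tiny synthetic survival dataset: hazard proportional to exp(beta * x),
# all events observed (invented for illustration).
n, true_beta = 50, 0.8
x = [random.gauss(0, 1) for _ in range(n)]
times = [random.expovariate(math.exp(true_beta * xi)) for xi in x]
order = sorted(range(n), key=lambda i: times[i])

def penalized_nll(beta, lam=0.1):
    # Negative log partial likelihood with a ridge (L2) penalty on beta.
    ll = 0.0
    for k, i in enumerate(order):
        risk = sum(math.exp(beta * x[j]) for j in order[k:])  # risk set
        ll += beta * x[i] - math.log(risk)
    return -ll + lam * beta * beta

# Crude gradient descent via central finite differences
beta, eps, lr = 0.0, 1e-5, 0.002
for _ in range(500):
    g = (penalized_nll(beta + eps) - penalized_nll(beta - eps)) / (2 * eps)
    beta -= lr * g
print(f"estimated beta: {beta:.2f} (true value 0.8)")
```

The partial likelihood compares each failing unit against its risk set, so the baseline hazard never needs to be specified; the penalty term is where regularization, and by extension the larger model classes in the paper, enters.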

Details

Econometrics and Risk Management
Type: Book
ISBN: 978-1-84855-196-1

Details

Machine Learning and Artificial Intelligence in Marketing and Sales
Type: Book
ISBN: 978-1-80043-881-1

Book part
Publication date: 18 October 2019

Mohammad Arshad Rahman and Shubham Karnawat

Abstract

This article is motivated by the lack of flexibility in Bayesian quantile regression for ordinal models where the error follows an asymmetric Laplace (AL) distribution. The inflexibility arises because the skewness of the distribution is completely specified once a quantile is chosen. To overcome this shortcoming, we derive the cumulative distribution function (and the moment-generating function) of the generalized asymmetric Laplace (GAL) distribution – a generalization of the AL distribution that separates the skewness from the quantile parameter – and construct a working likelihood for the ordinal quantile model. The resulting framework is termed flexible Bayesian quantile regression for ordinal (FBQROR) models. However, its estimation is not straightforward. We address estimation issues and propose an efficient Markov chain Monte Carlo (MCMC) procedure based on Gibbs sampling and a joint Metropolis–Hastings algorithm. The advantages of the proposed model are demonstrated in multiple simulation studies and applied to analyze public opinion on homeownership as the best long-term investment in the United States following the Great Recession.
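The inflexibility the article starts from can be checked numerically: for the standard AL distribution (location 0, scale 1), fixing the quantile level p also fixes the skewness. A minimal sketch using the standard AL density and numeric integration (not the article's GAL derivation):

```python
import math

def al_pdf(u, p):
    # Standard asymmetric Laplace density at quantile level p:
    # f(u; p) = p * (1 - p) * exp(-u * (p - 1{u < 0}))
    return p * (1 - p) * math.exp(-u * (p - (1.0 if u < 0 else 0.0)))

def moments(p, lo=-200.0, hi=200.0, n=200000):
    # Trapezoid-rule raw moments, converted to mean / variance / skewness
    h = (hi - lo) / n
    m0 = m1 = m2 = m3 = 0.0
    for i in range(n + 1):
        u = lo + i * h
        w = h * (0.5 if i in (0, n) else 1.0)
        f = al_pdf(u, p) * w
        m0 += f; m1 += f * u; m2 += f * u * u; m3 += f * u ** 3
    mean = m1 / m0
    var = m2 / m0 - mean ** 2
    skew = (m3 / m0 - 3 * mean * var - mean ** 3) / var ** 1.5
    return mean, var, skew

# Each choice of p pins down one, and only one, skewness value:
for p in (0.1, 0.25, 0.5, 0.75):
    mean, var, skew = moments(p)
    print(f"p = {p:.2f}  skewness = {skew:.2f}")
```

Because skewness is a deterministic function of p here, modeling an extreme quantile forces an extreme error skewness; separating the two parameters is exactly what the GAL construction in the article provides.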

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Details

Mathematical and Economic Theory of Road Pricing
Type: Book
ISBN: 978-0-08-045671-3

Book part
Publication date: 20 July 2017

Jack Andersen

Abstract

The purpose of the chapter is to argue for a twofold understanding of knowledge organization: the organization of knowledge as a form of communicative action in digital culture and the organization of knowledge as an analytical means to address features of digital culture.

The approach taken is an interpretative text-based form of argumentation.

The chapter suggests that by putting forward such a twofold understanding of knowledge organization, new directions are given as to how to situate and understand the activity and practice of the organization of knowledge in digital culture.

By offering the twofold understanding of the organization of knowledge, a tool of reflection is provided when users and the public at large try to make sense of, for example, data, archives, search engines, or algorithms.

The originality of the chapter is its demonstration of how to conceive of knowledge organization as a form of communicative action and as an analytical means for understanding issues in digital culture.

Details

The Organization of Knowledge
Type: Book
ISBN: 978-1-78714-531-3

Book part
Publication date: 30 May 2018

Francesco Moscone, Veronica Vinciotti and Elisa Tosetti

Abstract

This chapter reviews graphical modeling techniques for estimating large covariance matrices and their inverse. The chapter provides a selective survey of different models and estimators proposed by the graphical modeling literature and offers some practical examples where these methods could be applied in the area of health economics.
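The connection between the inverse covariance matrix and a graphical model can be shown in a few lines: in a Gaussian graphical model, a zero entry in the precision matrix means the two variables are conditionally independent given the rest. A toy three-variable chain (invented numbers):

```python
import numpy as np

# Precision (inverse covariance) matrix for a chain X1 - X2 - X3:
# K[0, 2] = 0 encodes "X1 independent of X3 given X2".
K = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.0, 0.4],
              [0.0, 0.4, 1.0]])
Sigma = np.linalg.inv(K)

# Marginally, X1 and X3 are still correlated through X2...
print(f"marginal cov(X1, X3): {Sigma[0, 2]:.3f}")

# ...but the partial correlation read off the precision matrix is zero:
pcor = -K[0, 2] / np.sqrt(K[0, 0] * K[2, 2])
print(f"partial correlation(X1, X3 | X2): {pcor:.3f}")
```

Estimators surveyed in such literature (e.g., penalized likelihood methods) exploit exactly this fact: recovering which precision entries are zero recovers the graph of direct dependencies, which is what makes large covariance matrices tractable.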

Book part
Publication date: 1 January 2008

Michiel de Pooter, Francesco Ravazzolo, Rene Segers and Herman K. van Dijk

Abstract

Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
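The model-averaging idea for parameters near the boundary can be sketched with the simplest pair of models: an AR(1) with estimated coefficient versus a random walk (coefficient fixed at the boundary value 1), with weights from a BIC approximation to the marginal likelihoods. The series below is simulated; this is an illustration of the generic BMA mechanic, not the chapter's procedure:

```python
import math
import random

random.seed(2)

# Simulated near-random-walk series: y_t = 0.97 * y_{t-1} + e_t
n, phi = 200, 0.97
y = [0.0]
for _ in range(n):
    y.append(phi * y[-1] + random.gauss(0, 1))

xs, ys = y[:-1], y[1:]
# Model 1: AR(1) with OLS-estimated coefficient
phi_hat = sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

def gauss_loglik(resid):
    # Concentrated Gaussian log-likelihood of the residuals
    s2 = sum(r * r for r in resid) / len(resid)
    return -0.5 * len(resid) * (math.log(2 * math.pi * s2) + 1)

ll1 = gauss_loglik([b - phi_hat * a for a, b in zip(xs, ys)])  # AR(1), k=2
ll2 = gauss_loglik([b - a for a, b in zip(xs, ys)])            # RW,    k=1

# BIC approximation to the log marginal likelihood -> posterior model weights
bic1 = ll1 - 0.5 * 2 * math.log(n)
bic2 = ll2 - 0.5 * 1 * math.log(n)
w1 = 1.0 / (1.0 + math.exp(bic2 - bic1))
print(f"phi_hat = {phi_hat:.3f}, weight on AR(1): {w1:.2f}")

# Model-averaged one-step-ahead forecast
forecast = w1 * phi_hat * y[-1] + (1 - w1) * y[-1]
```

When the posterior for the autoregressive coefficient piles up near 1, neither model dominates and the averaged forecast hedges between the stationary and unit-root views, which is the situation the abstract describes.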

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 15 March 2021

Brett Lantz

Abstract

Machine learning and artificial intelligence (AI) have risen to prominence as larger data sources, statistical methods, and computing power have rapidly and simultaneously evolved. The transformation is leading to a revolution that will affect virtually every industry. Businesses that are slow to adopt modern data practices are likely to be left behind with little chance to catch up.

The purpose of this chapter is to provide a brief overview of machine learning and AI in the business setting. In addition to providing historical context, the chapter also provides justification for AI investment, even in industries in which data is not the core business function. The means by which computers learn is de-mystified and various algorithms and evaluation methods are presented. Lastly, the chapter considers various ethical and practical consequences of machine learning algorithms after implementation.

Details

The Machine Age of Customer Insight
Type: Book
ISBN: 978-1-83909-697-6
