Search results

1 – 10 of 836
Book part
Publication date: 19 November 2014

Enrique Martínez-García and Mark A. Wynne

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world, with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model against the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
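
As a rough illustration of how posterior model probabilities are formed from competing specifications, the sketch below converts log marginal likelihoods into posterior probabilities under equal prior model odds. The numbers are hypothetical placeholders, not estimates from the chapter, and the code is a generic sketch rather than the authors' procedure.

```python
import numpy as np

def posterior_model_probs(log_marginal_likelihoods, prior_probs=None):
    """Posterior model probabilities from log marginal likelihoods.

    Uses the log-sum-exp trick for numerical stability; equal prior
    model probabilities are assumed when none are supplied.
    """
    logml = np.asarray(log_marginal_likelihoods, dtype=float)
    if prior_probs is None:
        prior_probs = np.full(logml.shape, 1.0 / logml.size)
    logpost = logml + np.log(prior_probs)
    logpost -= logpost.max()                # stabilize the exponentials
    post = np.exp(logpost)
    return post / post.sum()

# Hypothetical log marginal likelihoods for three competing specifications,
# e.g. open-economy, autarky, and no monetary frictions.
print(posterior_model_probs([-1041.2, -1043.9, -1052.7]))
```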

Book part
Publication date: 18 October 2019

Hedibert Freitas Lopes, Matthew Taddy and Matthew Gardner

Abstract

Heavy-tailed distributions present a tough setting for inference. They are also common in industrial applications, particularly with internet transaction datasets, and machine learners often analyze such data without considering the biases and risks associated with the misuse of standard tools. This chapter outlines a procedure for inference about the mean of a (possibly conditional) heavy-tailed distribution that combines nonparametric analysis for the bulk of the support with Bayesian parametric modeling – motivated from extreme value theory – for the heavy tail. The procedure is fast and massively scalable. The work should find application in settings wherever correct inference is important and reward tails are heavy; we illustrate the framework in causal inference for A/B experiments involving hundreds of millions of users of eBay.com.
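
The following sketch illustrates the general idea of splitting the support at a high threshold, using the empirical mean for the bulk and a generalized Pareto fit for the exceedances. It is a simplified stand-in for the chapter's procedure: the tail is fitted by maximum likelihood rather than Bayesian modeling, the data are simulated, and the 0.95 threshold quantile is an arbitrary choice.

```python
import numpy as np
from scipy.stats import genpareto

def semiparametric_mean(x, tail_quantile=0.95):
    """Estimate E[X]: empirical mean for the bulk, a generalized Pareto
    fit for exceedances over a high threshold (ML fit, for illustration)."""
    x = np.asarray(x, dtype=float)
    u = np.quantile(x, tail_quantile)
    bulk, tail = x[x <= u], x[x > u]
    p_tail = tail.size / x.size
    # Fit a GPD to the exceedances with the location pinned at the threshold.
    c, loc, scale = genpareto.fit(tail, floc=u)
    if c >= 1:
        raise ValueError("Fitted tail index implies an infinite mean.")
    tail_mean = u + scale / (1.0 - c)        # GPD mean above the threshold
    return (1.0 - p_tail) * bulk.mean() + p_tail * tail_mean

rng = np.random.default_rng(0)
heavy = rng.pareto(2.5, size=100_000) + 1.0   # heavy-tailed toy "reward" data
print(semiparametric_mean(heavy), heavy.mean())
```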

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Article
Publication date: 12 June 2009

Claude Gagnadre, Armand Caron, Hervé Guézénoc and Yves Grohens

Abstract

Purpose

The purpose of this paper is to compare a picture obtained by means of electron microscopy, resulting from the interaction of an electron beam with the material surface, with a numerical mapping of the material surface potentials. The new method has been successfully applied to a composite material and will be tested on other complex materials.

Design/methodology/approach

The surface potential function is calculated by a numerical approximation of Laplace's equation, with the three-variable problem reduced to two variables using continuity assumptions on the potential.
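
The operator reported in the paper is not reproduced here; the sketch below only illustrates the kind of two-variable numerical approximation of Laplace's equation involved, using simple Jacobi relaxation with Dirichlet boundary potentials on an arbitrary grid.

```python
import numpy as np

def solve_laplace_2d(boundary, n_iter=5000, tol=1e-6):
    """Jacobi relaxation for Laplace's equation on a 2-D grid.

    `boundary` is a 2-D array whose edge values hold the prescribed
    potentials; interior values are iterated until the update is small.
    """
    v = boundary.astype(float).copy()
    for _ in range(n_iter):
        v_new = v.copy()
        v_new[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1] +
                                    v[1:-1, :-2] + v[1:-1, 2:])
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

grid = np.zeros((50, 50))
grid[0, :] = 1.0          # one edge held at a fixed potential, others grounded
potential = solve_laplace_2d(grid)
```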

Findings

The results are particularly satisfactory and allow future developments in the analysis of electron microscopy pictures to be anticipated.

Originality/value

This paper demonstrates a new approximate operator for the surface potential, with good agreement between experimental and calculated values.

Details

Kybernetes, vol. 38 no. 5
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 27 May 2022

John Galakis, Ioannis Vrontos and Panos Xidonas

Abstract

Purpose

This study aims to introduce a tree-structured linear and quantile regression framework to the analysis and modeling of equity returns, within the context of asset pricing.

Design/methodology/approach

The approach is based on the idea of a binary tree, where every terminal node parameterizes a local regression model for a specific partition of the data. A Bayesian stochastic method is developed for model selection and estimation of the tree-structure parameters. The framework is applied to numerous U.S. asset pricing models, using alternative mimicking factor portfolios, data frequencies, market indices, and equity portfolios.
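
As a loose, non-Bayesian stand-in for this construction, the sketch below lets a small CART tree propose partitions (thresholds) of a single simulated factor and then fits a median (quantile) regression within each terminal node. The data, tree size, and quantile level are illustrative assumptions, not the authors' stochastic search.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 1))          # a single simulated common factor
y = np.where(X[:, 0] > 0, 1.2, 0.4) * X[:, 0] + 0.5 * rng.standard_t(5, size=2000)

# Step 1: let a small tree find partitions (thresholds) of the factor space.
tree = DecisionTreeRegressor(max_leaf_nodes=3, min_samples_leaf=200).fit(X, y)
leaves = tree.apply(X)

# Step 2: fit a separate quantile regression within each terminal node.
for leaf in np.unique(leaves):
    mask = leaves == leaf
    res = sm.QuantReg(y[mask], sm.add_constant(X[mask])).fit(q=0.5)
    print(f"leaf {leaf}: intercept={res.params[0]:.3f}, beta={res.params[1]:.3f}")
```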

Findings

The findings reveal strong evidence that asset returns exhibit asymmetric effects and non-linear patterns in response to different common factors but, more importantly, that there are multiple thresholds that create several partitions in the common factor space.

Originality/value

To the best of the authors' knowledge, this paper is the first to explore and apply a tree-structured and quantile regression framework in an asset pricing context.

Details

Review of Accounting and Finance, vol. 21 no. 3
Type: Research Article
ISSN: 1475-7702

Book part
Publication date: 1 December 2016

Roman Liesenfeld, Jean-François Richard and Jan Vogler

Abstract

We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
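
Plain importance sampling on a one-dimensional latent-Gaussian probit gives a feel for the likelihood-evaluation problem the authors tackle. The sketch below is that toy case with an arbitrary Gaussian proposal, checked against the known closed form; it is not the paper's sparse, high-dimensional EIS.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 0.3, 1.5                      # latent Gaussian mean and std (toy values)

# Target: p(y=1) = ∫ Φ(x) N(x; mu, sigma²) dx, known in closed form for a probit.
exact = norm.cdf(mu / np.sqrt(1.0 + sigma**2))

# Importance sampling with a deliberately different Gaussian proposal q.
q_mu, q_sigma = 1.0, 2.0
x = rng.normal(q_mu, q_sigma, size=200_000)
weights = norm.pdf(x, mu, sigma) / norm.pdf(x, q_mu, q_sigma)
estimate = np.mean(norm.cdf(x) * weights)

print(exact, estimate)
```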

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Article
Publication date: 1 August 1999

Gh. Juncu

Abstract

The paper analyses the preconditioning of non-linear nonsymmetric equations with approximations of the discrete Laplace operator. The test problems are non-linear 2-D elliptic equations that describe natural convection (Darcy flow) in a porous medium. The standard second-order accurate finite difference scheme is used to discretize the model equations. The discrete approximations are solved with a double iterative process using the Newton method as the outer iteration and preconditioned generalised conjugate gradient (PGCG) methods as the inner iteration. Three PGCG algorithms, CGN, CGS and GMRES, are tested. Preconditioning with discrete Laplace operator approximations consists of replacing the solution of the preconditioner equation by a few iterations of an appropriate iterative scheme. Two iterative algorithms are tested: incomplete Cholesky (IC) and multigrid (MG). The numerical results show that MG preconditioning leads to mesh independence. CGS is the most robust algorithm, but its efficiency is lower than that of GMRES.
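
A generic flavour of the inner linear solves can be sketched with SciPy: a nonsymmetric operator built from a 2-D discrete Laplacian plus a convection-like term is solved with GMRES, preconditioned by an incomplete LU factorization of the Laplacian alone. ILU here stands in for the paper's incomplete Cholesky or multigrid preconditioners, and the grid size, coefficients, and drop tolerance are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                     # interior grid points per direction
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
L = sp.kron(I, T) + sp.kron(T, I)          # 2-D discrete Laplace operator

# A nonsymmetric operator: Laplacian plus a first-order (convection-like) term.
C = sp.diags([-1, 1], [-1, 1], shape=(n, n))
A = (L + 0.5 * sp.kron(I, C)).tocsr()
b = np.ones(n * n)

# Preconditioner built from the Laplacian alone, applied approximately via ILU.
ilu = spla.spilu(L.tocsc(), drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("gmres info =", info, "residual =", np.linalg.norm(A @ x - b))
```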

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 9 no. 5
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 1 February 1984

B.C. Brookes

Abstract

Haitun has recently shown that empirical distributions are of two types—‘Gaussian’ and ‘Zipfian’—characterized by the presence or absence of moments. Gaussian‐type distributions arise only in physical contexts: Zipfian only in social contexts. As the whole of modern statistical theory is based on Gaussian distributions, Haitun thus shows that its application to social statistics, including cognitive statistics, is ‘inadmissible’. A new statistical theory based on ‘Zipfian’ distributions is therefore needed for the social sciences. Laplace's notorious ‘law of succession’, which has evaded derivation by classical probability theory, is shown to be the ‘Zipfian’ frequency analogue of the Bradford law. It is argued that these two laws together provide the most convenient analytical instruments for the exploration of social science data. Some implications of these findings for the quantitative analysis of information systems are briefly discussed.
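
For context, Laplace's law of succession referred to here is the standard rule giving the probability of a further success after s successes in n trials (quoted for reference, not taken from the article):

```latex
P(\text{success on trial } n+1 \mid s \text{ successes in } n \text{ trials}) = \frac{s+1}{n+2}
```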

Details

Journal of Documentation, vol. 40 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 14 June 2011

Sana Abu‐Gurra, Vedat Suat Ertürk and Shaher Momani

Abstract

Purpose

The purpose of this paper is to find a semi‐analytic solution to the fractional oscillator equations. In this paper, the authors apply the modified differential transform method to find approximate analytical solutions to fractional oscillators.

Design/methodology/approach

The modified differential transform method is used to obtain the solutions of the systems. This approach rests on the recently developed modification of the differential transform method. Some examples are given to illustrate the ability and reliability of the modified differential transform method for solving fractional oscillators.

Findings

The main conclusion is that the proposed method is well suited to solving such problems. The results are compared with those obtained by the fourth-order Runge-Kutta method, and the modified differential transform method in many instances gives better results.

Originality/value

The paper demonstrates that a hybrid of the differential transform method, the Laplace transform and Padé approximants provides approximate solutions of the oscillatory systems.
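
Only the Padé-acceleration step of that hybrid is easy to illustrate in isolation. The sketch below builds a [4/4] Padé approximant from the Taylor coefficients of cos(t), the classical harmonic oscillator solution standing in for a series produced by the differential transform method, and compares it with the truncated series well away from the origin; none of this reproduces the paper's fractional models.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Taylor coefficients of cos(t) (solution of x'' + x = 0, x(0)=1, x'(0)=0),
# used here in place of a series generated by the differential transform method.
coeffs = [(-1) ** (k // 2) / factorial(k) if k % 2 == 0 else 0.0 for k in range(9)]

p, q = pade(coeffs, 4)                 # [4/4] Padé approximant
t = 6.0                                # well outside the truncated series' useful range
series = sum(c * t ** k for k, c in enumerate(coeffs))
print(np.cos(t), p(t) / q(t), series)
```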

Details

Kybernetes, vol. 40 no. 5/6
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 27 July 2018

Taiwo S. Yusuf and Basant K. Jha

Abstract

Purpose

The purpose of this paper is to present a semi-analytical solution for time-dependent natural convection flow with heat generation/absorption in an annulus partially filled with porous material.

Design/methodology/approach

The governing partial differential equations are transformed into ordinary differential equations using the Laplace transform technique. The exact solution obtained is inverted from the Laplace domain to the time domain using the Riemann-sum approximation approach. This approach is justified by comparing the values obtained with those of the implicit finite difference method, both at the transient state and at the steady state reached at large time.
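
The Riemann-sum approximation commonly used for this kind of numerical Laplace inversion can be sketched as below; the formula and the choice εt ≈ 4.7 follow standard practice in this literature, and the example inverts a transform with a known inverse rather than the paper's velocity or temperature fields.

```python
import numpy as np

def riemann_sum_inversion(F, t, N=500, eps_t=4.7):
    """Riemann-sum approximation of the inverse Laplace transform:
    f(t) ≈ (e^{εt}/t) [ F(ε)/2 + Re Σ_{n=1..N} F(ε + i n π/t) (-1)^n ],
    with ε chosen so that ε t ≈ 4.7."""
    eps = eps_t / t
    n = np.arange(1, N + 1)
    total = 0.5 * F(eps) + np.sum(np.real(F(eps + 1j * n * np.pi / t)) * (-1.0) ** n)
    return np.exp(eps * t) / t * total

# Check against a transform pair with a known inverse: F(s) = 1/(s+1) ↔ e^{-t}.
F = lambda s: 1.0 / (s + 1.0)
t = 2.0
print(riemann_sum_inversion(F, t), np.exp(-t))
```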

Findings

It is found that the peak axial velocity always occurs in the clear fluid region. In addition, there is an indication that a heat-generating fluid is desirable for optimum mass flux in the annular gap, most importantly when the convection current is enhanced by a constant heat flux.

Originality/value

In view of the body of work on natural convection with internal heat generation/absorption, it is of interest to investigate the influence of this mechanism on natural convection flow in a vertical annulus partially filled with porous material, where the outer surface of the inner cylinder is heated either isothermally or with a constant heat flux.

Details

Multidiscipline Modeling in Materials and Structures, vol. 14 no. 5
Type: Research Article
ISSN: 1573-6105

Article
Publication date: 4 January 2021

Ben Mansour Dia

Abstract

Purpose

The author examines the sequestration of CO2 in abandoned geological formations where leakages are permitted only up to a certain threshold to meet the international CO2 emissions standards. Technically, the author addresses a Bayesian experimental design problem to optimally mitigate uncertainties and to perform risk assessment on a CO2 sequestration model, where the parameters to be inferred are random subsurface properties while the quantity of interest is to be kept within safety margins.

Design/methodology/approach

The author starts with a probabilistic formulation of learning the leakage rate and later relaxes it to a Bayesian experimental design for learning the formation's geophysical properties. The injection rate is the design parameter, and the learned properties are used to estimate the leakage rate by means of a nonlinear operator. The forward model governs a two-phase, two-component flow in a porous medium with no solubility of CO2 in water. The Laplace approximation is combined with Monte Carlo sampling to estimate the expectation of the Kullback–Leibler divergence, which serves as the objective function.
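
The sketch below conveys that objective on a toy linear-Gaussian model, where the expected Kullback–Leibler divergence (expected information gain) has a closed form. It uses a plain nested Monte Carlo estimator rather than the paper's Laplace-accelerated estimator, and the design scalar g is only a loose analogue of the injection rate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy model: y = g*theta + noise, theta ~ N(0, s_th^2), noise ~ N(0, s_n^2).
g, s_th, s_n = 2.0, 1.0, 0.5
N, M = 2000, 2000

theta = rng.normal(0.0, s_th, size=N)
y = g * theta + rng.normal(0.0, s_n, size=N)

# Nested Monte Carlo estimate of the expected information gain
# E_y[ KL(posterior || prior) ] = E_{theta, y}[ log p(y|theta) - log p(y) ].
theta_inner = rng.normal(0.0, s_th, size=M)
log_lik = norm.logpdf(y, g * theta, s_n)
evidence = norm.pdf(y[:, None], g * theta_inner[None, :], s_n).mean(axis=1)
eig_mc = np.mean(log_lik - np.log(evidence))

# Closed form for this linear-Gaussian case, for comparison.
eig_exact = 0.5 * np.log(1.0 + (g * s_th) ** 2 / s_n ** 2)
print(eig_mc, eig_exact)
```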

Findings

Different scenarios of confining CO2 while measuring the risk of harmful leakages are analyzed numerically. The efficiency of the inversion for the CO2 leakage rate improves with the injection rate, as marked gains in the accuracy of the estimated formation properties are observed. However, this study shows that those results do not imply that the learned value of the CO2 leakage rate should exhibit the same behavior. This study also enhances the implementation of CO2 sequestration by extending the duration given by the reservoir capacity, controlling the injection while the emissions remain in agreement with the international standards.

Originality/value

Uncertainty quantification of the reservoir properties is addressed. The nonlinear goal-oriented inverse problem for the estimation of the leakage rate is known to be very challenging. This study presents a relaxation of the probabilistic design of learning the leakage rate to a Bayesian experimental design of learning the reservoir geophysical properties.

Details

Engineering Computations, vol. 38 no. 3
Type: Research Article
ISSN: 0264-4401
