Search results
1 – 10 of 96
Abstract
Climate control needs have gained momentum. While scientists call for stabilizing the climate and regulators structure climate change mitigation and adaptation efforts around the globe, economists are concerned with finding proper and fair financing mechanisms. In an overlapping-generations framework, Sachs (2014) solves the climate change predicament that seems to pit today's generation against future ones. Sachs (2014) proposes that the current generation mitigate climate change, financed through bonds, so that it remains financially as well off as without mitigation, while improving the environmental well-being of future generations through ensured climate stability. This intergenerational tax-and-transfer policy turns climate change mitigation into a Pareto-improving strategy. Sachs' (2014) discrete model is integrated into contemporary growth and resource theories. The following article analyzes how climate bonds can be phased in, in a model for a socially optimal solution and in a laissez-faire economy. Optimal trajectories are derived partially analytically (e.g., by using the Pontryagin maximum principle to define the optimal equilibrium), partially data-driven (e.g., by the use of modern big market data), and partially by novel cutting-edge methods, for example, nonlinear model predictive control (NMPC), which solves complex dynamic optimization problems with different nonlinearities over infinite and finite decision horizons. NMPC is programmed with a terminal condition in order to determine appropriate numeric solutions converging to some optimal equilibrium. The analysis then tests whether the climate change debt-adjusted growth model stays within the bounds of a sustainable fiscal policy.
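As a rough illustration of the receding-horizon logic behind NMPC (not the article's actual climate-bond model), the sketch below steers a single scalar stock toward zero under a terminal penalty. Because this toy problem is linear-quadratic, each horizon solve reduces to least squares; a genuinely nonlinear model would call a nonlinear solver at the same point. All dynamics and parameter values are invented for illustration.

```python
import numpy as np

def nmpc_first_control(x0, N=10, dt=0.1, a=0.05, r=0.1, terminal=10.0):
    """One finite-horizon solve for the scalar system
    x_{k+1} = (1 + a*dt) * x_k - dt * u_k,
    minimizing sum_k dt*(x_k**2 + r*u_k**2) + terminal*x_N**2.
    Linear-quadratic, so it reduces to exact least squares."""
    g = 1.0 + a * dt
    rows, rhs = [], []
    for k in range(1, N + 1):
        # x_k = g**k * x0 - dt * sum_{j < k} g**(k-1-j) * u_j
        coeff = np.array([-dt * g ** (k - 1 - j) if j < k else 0.0
                          for j in range(N)])
        w = np.sqrt(dt + (terminal if k == N else 0.0))
        rows.append(w * coeff)
        rhs.append(-w * g ** k * x0)
    for j in range(N):                        # control-penalty rows
        e = np.zeros(N)
        e[j] = np.sqrt(dt * r)
        rows.append(e)
        rhs.append(0.0)
    u, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return u[0]

# Receding horizon: apply the first control, step the system, re-solve.
x, dt, a = 1.0, 0.1, 0.05
for _ in range(50):
    x = (1.0 + a * dt) * x - dt * nmpc_first_control(x)
```

Re-solving at every step, rather than committing to the whole open-loop sequence, is what makes the scheme predictive control; the terminal penalty plays the role of the terminal condition mentioned in the abstract.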
Sergio M. Focardi and Frank J. Fabozzi
Abstract
Fat‐tailed distributions have been found in many financial and economic variables ranging from forecasting returns on financial assets to modeling recovery distributions in bankruptcies. They have also been found in numerous insurance applications such as catastrophic insurance claims and in value‐at‐risk measures employed by risk managers. Financial applications include:
Willi Semmler and Christian R. Proaño
Abstract
The recent financial and sovereign debt crises around the world have sparked a growing literature on models and empirical estimates of defaultable debt. Frequently households and firms come under default threat, local governments can default, and recently sovereign default threats were imminent for Greece and Spain in 2012–2013. Moreover, Argentina experienced an actual default in 2001. What causes sovereign default risk, and what are the escape routes from default risk? Previous studies such as Arellano (2008), Roch and Uhlig (2013), and Arellano et al. (2014) have provided theoretical models to explore the main dynamics of sovereign defaults. These models can be characterized as threshold models in which there is a convergence toward a good no-default equilibrium below the threshold and a default equilibrium above the threshold. However, in these models aggregate output is exogenous, so that important macroeconomic feedback effects are not taken into account. In this chapter, we (1) propose alternative model variants suitable for certain types of countries in the EU where aggregate output is endogenously determined and where financial stress plays a key role, (2) show how these model variants can be solved through the Nonlinear Model Predictive Control numerical technique, and (3) present some empirical evidence on the nonlinear dynamics of output, sovereign debt, and financial stress in some euro area countries and other industrialized countries.
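A minimal sketch of the threshold mechanism described above, with invented parameters rather than the chapter's calibrated models: below the threshold, the debt ratio converges to a no-default equilibrium; above it, a risk premium feeds back into the interest rate and debt runs away.

```python
def debt_path(b0, periods=300, r0=0.02, gamma=0.05, deficit=0.02,
              phi=0.5, threshold=1.0):
    """Iterate b' = (1 + r(b) - gamma) * b + deficit, where the fiscal
    surplus responds to debt (gamma) and the interest rate carries a
    premium above the threshold. Returns the final debt ratio, or inf
    once debt runs away (the default region)."""
    b = b0
    for _ in range(periods):
        r = r0 + phi * max(0.0, b - threshold)   # premium above threshold
        b = (1.0 + r - gamma) * b + deficit
        if b > 50.0:
            return float("inf")
    return b

low = debt_path(0.9)    # converges toward b* = deficit / (gamma - r0)
high = debt_path(1.2)   # premium feedback: debt explodes
```

The bifurcation is entirely driven by the kink in r(b): the same fiscal rule that stabilizes debt at low levels becomes destabilizing once the premium pushes the effective interest rate above the adjustment speed.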
José Azevedo‐Pereira, Gualter Couto and Cláudia Nunes
Abstract
Purpose
This paper aims to focus on the problem of the optimal relocation policy for a firm that faces two types of uncertainty: one about the moments in which new (and more efficient) sites will become available; and the other regarding the degree of efficiency improvement inherent to each one of these new, yet to be known, potential location places.
Design/methodology/approach
The paper considers the relocation issue as an optimal stopping decision problem. It uses Poisson jump processes to model the increase in the efficiency process, where these jumps occur according to a homogeneous Poisson process, but the magnitude of these jumps can have special distributions. In particular, it assumes that the magnitudes can be gamma‐distributed or truncated‐exponentially distributed.
Findings
Particular characteristics concerning the expected optimal timing for relocation, the corresponding volatility, and the value of the firm under the optimal relocation policy are derived. These results also lead to the conjecture that the optimal relocation policy is robust with respect to the assumed distribution of the efficiency improvement, as long as the expected values are the same.
Originality/value
The paper provides an innovative approach to relocation problems, using stochastic tools. Moreover, the use of the truncated exponential and the gamma distribution functions to model the Poisson jumps is particularly suitable, given the situation under study. To the authors' knowledge, this is the first time that this type of setting is used to tackle a real options problem.
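The efficiency process described in the design section can be simulated minimally as a compound Poisson process; the arrival rate, gamma shape, and scale below are invented for illustration, and only the gamma case of the two magnitude distributions is shown.

```python
import random

def simulate_efficiency(T=10.0, rate=1.0, shape=2.0, scale=0.5, seed=42):
    """Compound Poisson efficiency process: exponential inter-arrival
    times (homogeneous Poisson) and gamma-distributed jump magnitudes.
    Returns the final efficiency level and the list of (time, jump)."""
    rng = random.Random(seed)
    t, level, jumps = 0.0, 1.0, []
    while True:
        t += rng.expovariate(rate)               # next Poisson arrival
        if t > T:
            break
        jump = rng.gammavariate(shape, scale)    # gamma-distributed magnitude
        level += jump
        jumps.append((t, jump))
    return level, jumps

level, jumps = simulate_efficiency()   # mean level ~ 1 + T*rate*shape*scale
```

Swapping `rng.gammavariate(...)` for a truncated-exponential draw changes only the magnitude line, which is one way to probe the robustness conjecture stated under Findings.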
Xin Gu, Qing Zhang and Erdogan Madenci
Abstract
Purpose
This paper aims to review the existing bond-based peridynamic (PD) and state-based PD heat conduction models, and further propose a refined bond-based PD thermal conduction model by using the PD differential operator.
Design/methodology/approach
The general refined bond-based PD model is established by replacing the local spatial derivatives in the classical heat conduction equations with their corresponding nonlocal integral forms obtained by the PD differential operator. This modeling approach is representative of the state-based PD models, whereas the resulting governing equations take the form of bond-based PD models.
Findings
The refined model can be reduced to the existing bond-based PD heat conduction models by specifying particular influence functions. Also, the refined model does not require any calibration procedure, unlike the bond-based PD models. A systematic explicit dynamic solver is introduced to validate 1D, 2D and 3D heat conduction in domains with and without a crack, subjected to a combination of Dirichlet, Neumann and convection boundary conditions. All of the PD predictions are in excellent agreement with the classical solutions and demonstrate the nonlocal feature and advantage of PD in dealing with heat conduction in discontinuous domains.
Originality/value
The existing PD heat conduction models are reviewed. A refined bond-based PD thermal conduction model using the PD differential operator is proposed, and 3D thermal conduction in intact or cracked structures is simulated.
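A toy 1D version of the nonlocal idea: the local Laplacian is replaced by a weighted sum over bonds within a horizon, advanced with an explicit step. The bond weight and its decay are assumed for illustration only, not a calibrated peridynamic microconductivity or influence function.

```python
import numpy as np

def pd_heat_step(T, dx, horizon=3, kappa=1.0, dt=1e-4):
    """One explicit step of dT_i/dt = sum over bonds of w(m)*(T_j - T_i),
    summing over neighbors up to `horizon` grid spacings away. w(m) is an
    assumed decaying kernel; bonds cut at the ends act like insulated
    (zero-flux) boundaries. Pairwise exchange conserves total heat."""
    n = len(T)
    dT = np.zeros(n)
    for m in range(1, horizon + 1):
        w = kappa / ((m * dx) ** 2 * m)          # assumed decaying kernel
        for i in range(n):
            for j in (i - m, i + m):
                if 0 <= j < n:
                    dT[i] += w * (T[j] - T[i])
    return T + dt * dT

T = np.zeros(50)
T[25] = 100.0                                    # initial hot spot
for _ in range(200):
    T = pd_heat_step(T, dx=0.02)
```

Because every bond exchanges heat antisymmetrically between its two endpoints, the total remains constant as the hot spot diffuses, which is the same conservation property the explicit PD solver in the paper relies on.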
Diep Duong and Norman R. Swanson
Abstract
The topic of volatility measurement and estimation is central to financial and, more generally, time-series econometrics. In this chapter, we begin by surveying models of volatility, both discrete and continuous, and then we summarize some selected empirical findings from the literature. In particular, in the first sections of this chapter, we discuss important developments in volatility models, with a focus on time-varying and stochastic volatility as well as nonparametric volatility estimation. The models discussed share the common feature that volatilities are unobserved and belong to the class of missing variables. We then provide empirical evidence on “small” and “large” jumps from the perspective of their contribution to overall realized variation, using high-frequency price return data on 25 stocks in the Dow 30. Our “small” and “large” jump variations are constructed at three truncation levels, using the extant methodology of Barndorff-Nielsen and Shephard (2006), Andersen, Bollerslev, and Diebold (2007), and Aït-Sahalia and Jacod (2009a, 2009b, 2009c). Evidence of jumps is found in around 22.8% of the days during the 1993–2000 period, much higher than the corresponding figure of 9.4% during the 2001–2008 period. Although the overall role of jumps is lessening, the role of large jumps has not decreased, and indeed the relative role of large jumps, as a proportion of overall jumps, has actually increased in the 2000s.
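Simplified versions of the quantities involved can be sketched as follows: realized variance, bipower variation (robust to jumps), and a truncation-based jump component. The cutoff rule `alpha * sigma` below is an illustrative stand-in, not the exact threshold tuning of the cited papers, and the return series is invented.

```python
import math

def realized_measures(returns, alpha=3.0):
    """Realized variance RV = sum r^2, bipower variation
    BV = (pi/2) * sum |r_i||r_{i-1}| (jump-robust), and a truncation-based
    jump variation: squared returns exceeding an alpha-sigma cutoff."""
    n = len(returns)
    rv = sum(r * r for r in returns)
    bv = (math.pi / 2.0) * sum(abs(returns[i]) * abs(returns[i - 1])
                               for i in range(1, n))
    sigma = math.sqrt(bv / n) if bv > 0 else 0.0   # local scale proxy
    cutoff = alpha * sigma
    jump_var = sum(r * r for r in returns if abs(r) > cutoff)
    return rv, bv, jump_var

rets = [0.001, -0.002, 0.0015, 0.05, -0.001, 0.002]  # one large "jump" return
rv, bv, jv = realized_measures(rets)
```

The gap between RV and BV is itself a jump signal: BV damps the contribution of any single outsized return because it multiplies adjacent absolute returns, while RV squares it.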
Michiel de Pooter, Francesco Ravazzolo, Rene Segers and Herman K. van Dijk
Abstract
Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
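A toy version of the boundary issue: a grid posterior for the AR(1) coefficient under a flat prior on [0, 1] with known unit innovation variance, applied to simulated near-random-walk data. This sketches the phenomenon only; it is not the chapter's canonical models or its model-averaging procedure.

```python
import math
import random

def ar1_posterior(y, grid):
    """Grid posterior for the AR(1) coefficient: flat prior over the grid,
    known unit innovation variance, Gaussian likelihood conditional on y[0]."""
    logpost = [sum(-0.5 * (y[t] - rho * y[t - 1]) ** 2
                   for t in range(1, len(y))) for rho in grid]
    m = max(logpost)                       # stabilize the exponentials
    w = [math.exp(lp - m) for lp in logpost]
    s = sum(w)
    return [wi / s for wi in w]

rng = random.Random(0)
y = [0.0]
for _ in range(300):                       # near-random-walk data, rho = 0.98
    y.append(0.98 * y[-1] + rng.gauss(0.0, 1.0))

grid = [i / 100.0 for i in range(101)]
post = ar1_posterior(y, grid)
mass_near_one = sum(p for rho, p in zip(grid, post) if rho > 0.9)
```

With the true coefficient this close to unity, most posterior mass piles up near the boundary of the stationary region, which is exactly the situation in which point estimates mislead and averaging over models near and at the boundary becomes attractive.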
Abstract
The EOQ formula is definitely the oldest and best known single stage lot sizing technique. Its use reportedly dates back to 1904, even though it was not published until a later date. It is often looked upon with scepticism by practitioners and academicians alike, although the reasons for this may differ; it seems, however, to be the most widely used lot sizing technique overall.
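The formula itself, for reference: with demand rate D, fixed cost per order S, and per-unit holding cost H, the cost-minimizing lot size is Q* = sqrt(2DS/H). A minimal sketch with invented numbers:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Economic order quantity: Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def annual_cost(q, demand_rate, order_cost, holding_cost):
    """Ordering cost D/Q * S plus average holding cost Q/2 * H."""
    return demand_rate / q * order_cost + q / 2.0 * holding_cost

q = eoq(demand_rate=1200, order_cost=50.0, holding_cost=6.0)  # invented inputs
```

At the optimum, annual ordering cost equals annual holding cost, which is why the total-cost curve is flat near Q* and why the formula is forgiving of imprecise inputs in practice.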