Search results
1 – 10 of 22
Abstract
“It should also be noted that the objective of convergence and equal distribution, including across under-performing areas, can hinder efforts to generate growth. Contrariwise, the objective of competitiveness can exacerbate regional and social inequalities, by targeting efforts on zones of excellence where projects achieve greater returns (dynamic major cities, higher levels of general education, the most advanced projects, infrastructures with the heaviest traffic, and so on). If cohesion policy and the Lisbon Strategy come into conflict, it must be borne in mind that the former, for the moment, is founded on a rather more solid legal foundation than the latter.” European Commission (2005, p. 9), Adaptation of Cohesion Policy to the Enlarged Europe and the Lisbon and Gothenburg Objectives.
Abstract
This paper explores the use of some stochastic models for traffic assignment in the case of homogeneous traffic and simple networks. For non-dynamic routing we obtain asymptotic results in the form of paths representing time dependent evolution of traffic over routes. A functional limit theorem gives integral equations for the limiting fluid path which converges to an assignment satisfying Wardrop's first principle as time goes to infinity. For linear cost functions we are able to use the theory of large deviations to examine the way in which rare network overload events occur. In the case of dynamic assignment, we discuss the use of heavy traffic limits and Brownian models to examine the efficiency of network capacity usage when drivers choose routes according to conditions obtaining on entrance to the network. In particular we discuss the phenomenon of resource pooling.
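Wardrop's first principle, which the limiting assignment satisfies, says that at equilibrium no driver can lower their travel cost by switching routes, so all used routes have equal cost. For the linear cost functions discussed above, a minimal sketch of the resulting flow split on two parallel routes (the function name and parameterisation are illustrative, not from the paper):

```python
def wardrop_two_routes(a1, b1, a2, b2, d):
    """Equilibrium split of total demand d across two parallel routes
    with linear costs c_i(x) = a_i + b_i * x.

    At a Wardrop equilibrium with both routes in use, the route costs
    are equal: a1 + b1*x1 = a2 + b2*(d - x1). Solving for x1 and
    clipping to [0, d] handles the corner case where one route is
    cheap enough to carry all the traffic.
    """
    x1 = (a2 - a1 + b2 * d) / (b1 + b2)  # cost-equalising flow on route 1
    x1 = min(max(x1, 0.0), d)            # corner solution: one route unused
    return x1, d - x1
```

For example, with costs 10 + x and 20 + 2x and demand 30, the split equalises both route costs at 100/3.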
Tze Leung Lai and Haipeng Xing
Abstract
This paper shows that volatility persistence in GARCH models and spurious long memory in autoregressive models may arise if the possibility of structural changes is not incorporated in the time series model. It also describes a tractable hidden Markov model (HMM) in which the regression parameters and error variances may undergo abrupt changes at unknown time points, while staying constant between adjacent change-points. Applications to real and simulated financial time series are given to illustrate the issues and methods.
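The core of such a hidden Markov model is a forward filter that tracks the posterior probability of each regime as observations arrive; abrupt changes show up as the filtered probability switching regimes. A minimal two-state Gaussian sketch of that filter, not the authors' estimator (the sticky transition matrix and parameter values below are illustrative assumptions):

```python
import math

def hmm_filter(data, means, sds, trans, init):
    """Forward filter for a 2-state Gaussian HMM.

    Returns, for each time t, the filtered probabilities
    P(state = k | y_1, ..., y_t). A regime change at an unknown
    time point appears as a jump in these probabilities.
    """
    def lik(y, k):
        # Gaussian density of observation y under state k
        z = (y - means[k]) / sds[k]
        return math.exp(-0.5 * z * z) / (sds[k] * math.sqrt(2 * math.pi))

    p = list(init)
    path = []
    for y in data:
        # predict: propagate state probabilities through the transition matrix
        pred = [sum(p[j] * trans[j][k] for j in range(2)) for k in range(2)]
        # update: reweight by the likelihood of the new observation
        post = [pred[k] * lik(y, k) for k in range(2)]
        s = sum(post)
        p = [v / s for v in post]
        path.append(p[:])
    return path
```

Run on a series that starts calm and turns volatile, the filter assigns the low-variance state early and flips decisively to the high-variance state after the break.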
Dirk Zumkeller, Jean-Loup Madre, Bastian Chlond and Jimmy Armoogum
Garland Durham and John Geweke
Abstract
Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to exploit these benefits fully, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
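The "simulate from the prior plus evaluate densities" recipe can be illustrated with a toy sequential simulator: draw particles from the prior, then absorb one observation at a time by likelihood reweighting and resampling. This is a serial sketch under assumed model choices (a normal mean with known unit variance and a diffuse normal prior), not the parallel algorithm developed in the paper:

```python
import math
import random

def smc_posterior(data, n_particles=20000, prior_sd=10.0, seed=1):
    """Toy sequential posterior simulator for y_i ~ N(theta, 1)
    with prior theta ~ N(0, prior_sd^2).

    Only two model-specific ingredients are used: simulation from
    the prior, and evaluation of the data density. Observations are
    absorbed one at a time via importance reweighting followed by
    multinomial resampling (each reweight/resample step is
    embarrassingly parallel across particles).
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, prior_sd) for _ in range(n_particles)]
    for y in data:
        # weight each particle by the likelihood of the new observation
        # (the normalising constant cancels in the resampling step)
        w = [math.exp(-0.5 * (y - th) ** 2) for th in particles]
        particles = rng.choices(particles, weights=w, k=n_particles)
    return particles
```

For this conjugate model the exact posterior mean is available in closed form, so the particle average can be checked against it; a serious implementation would also rejuvenate particles between steps to limit degeneracy.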
Abstract
In a dynamic environment where underlying competition is “for the market,” this chapter examines what happens when entrants and incumbents can instead negotiate for the market. For instance, this might arise when an entrant innovator can choose to license to or be acquired by an incumbent firm (i.e., engage in cooperative commercialization). It is demonstrated that, depending upon the level of firms’ potential dynamic capabilities, there may or may not be gains to trade between incumbents and entrants in a cumulative innovation environment; that is, entrants may not be adequately compensated for losses in future innovative potential. This stands in contrast to static analyses that overwhelmingly identify positive gains to trade from such cooperation.
Igor Vaynman and Brendan K. Beare
Abstract
The variance targeting estimator (VTE) for generalized autoregressive conditionally heteroskedastic (GARCH) processes has been proposed as a computationally simpler and misspecification-robust alternative to the quasi-maximum likelihood estimator (QMLE). In this paper we investigate the asymptotic behavior of the VTE when the stationary distribution of the GARCH process has infinite fourth moment. Existing studies of historical asset returns indicate that this may be a case of empirical relevance. Under suitable technical conditions, we establish a stable limit theory for the VTE, with the rate of convergence determined by the tails of the stationary distribution. This rate is slower than that achieved by the QMLE. The limit distribution of the VTE is nondegenerate but singular. We investigate the use of subsampling techniques for inference, but find that finite sample performance is poor in empirically relevant scenarios.
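The computational simplification behind variance targeting is that in a GARCH(1,1) model, sigma^2_t = omega + alpha * e^2_{t-1} + beta * sigma^2_{t-1}, the unconditional variance is omega / (1 - alpha - beta), so omega can be pinned down by the sample variance rather than optimised over. A minimal sketch of that targeting step (the function name is illustrative; the remaining parameters (alpha, beta) would still be estimated, e.g. by quasi-maximum likelihood):

```python
def variance_target_omega(returns, alpha, beta):
    """Variance targeting step for a GARCH(1,1) model.

    Fixes omega so that the implied unconditional variance
    omega / (1 - alpha - beta) equals the sample variance of the
    returns, reducing the optimisation to (alpha, beta) only.
    Assumes alpha + beta < 1 (covariance stationarity).
    """
    n = len(returns)
    mean = sum(returns) / n
    sample_var = sum((r - mean) ** 2 for r in returns) / n
    return sample_var * (1.0 - alpha - beta)
```

The paper's point is that this shortcut has a cost when the stationary distribution has infinite fourth moment: the sample variance then converges slowly, dragging the VTE's rate of convergence below that of the QMLE.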