Search results
1–10 of over 2000

Yi-Ling Chen, Hong-Yu Luo, Wei-Che Tsai and Hang Zhang
Abstract
This research applies a static hedging portfolio (SHP) method derived from Derman, Ergener, and Kani (1995) (henceforth Derman's SHP method) and a new SHP method with European cash-or-nothing binary options developed by Chung, Shih, and Tsai (2013) to price European continuous double barrier (ECDB) options and the rebates of the ECDB options. Our numerical results indicate that the new SHP method outperforms Derman's SHP method in terms of efficiency and effectiveness under all circumstances.
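To make the instrument concrete, the following is a minimal Monte Carlo sketch of the contract being priced: a European double knock-out call under Black–Scholes dynamics, with daily monitoring as a discrete proxy for the continuous barriers. All parameter values are illustrative assumptions, and this baseline simulator is not the SHP method of the paper.

```python
import math, random

def mc_double_barrier_call(S0, K, L, U, r, sigma, T,
                           n_steps=252, n_paths=5000, seed=0):
    """Monte Carlo price of a European double knock-out call.
    The continuous barriers L (lower) and U (upper) are approximated
    by monitoring the path at n_steps discrete dates."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        S = S0
        alive = True
        for _ in range(n_steps):
            S *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if S <= L or S >= U:   # knocked out on touching either barrier
                alive = False
                break
        if alive:
            payoff_sum += max(S - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

# Illustrative parameters only: at-the-money call, barriers at 80 and 120.
price = mc_double_barrier_call(S0=100, K=100, L=80, U=120,
                               r=0.05, sigma=0.2, T=1.0)
```

Because the knock-out feature can only destroy value, the estimate must lie below the corresponding vanilla call price; discrete monitoring slightly overstates the continuously monitored price, which is one reason static-hedging approaches are attractive here.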
Garland Durham and John Geweke
Abstract
Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
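The "simulation from the prior plus evaluation of prior and data densities" recipe can be illustrated with a minimal sequential Monte Carlo sketch: particles drawn from the prior are reweighted as data arrive in batches, resampled, and refreshed with a Metropolis move. The conjugate normal model, batch size, particle count, and step scale are all illustrative assumptions, not the paper's algorithm.

```python
import math, random

random.seed(1)

# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 10^2). Conjugacy gives
# an analytic posterior against which the particle estimate can be checked.
data = [random.gauss(2.0, 1.0) for _ in range(50)]

def log_prior(theta):
    return -0.5 * theta ** 2 / 100.0

def log_lik(theta, ys):
    return sum(-0.5 * (y - theta) ** 2 for y in ys)

N = 2000
particles = [random.gauss(0.0, 10.0) for _ in range(N)]  # simulate from the prior
log_w = [0.0] * N
batch = 10  # introduce observations in batches (data tempering)

for start in range(0, len(data), batch):
    ys = data[start:start + batch]
    # Correction: incremental weight is the likelihood of the new batch.
    log_w = [lw + log_lik(p, ys) for lw, p in zip(log_w, particles)]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    tot = sum(w)
    # Selection: multinomial resampling to equal weights.
    particles = random.choices(particles, weights=[x / tot for x in w], k=N)
    log_w = [0.0] * N
    # Mutation: one random-walk Metropolis step targeting the partial posterior.
    seen = data[:start + batch]
    moved = []
    for p in particles:
        q = p + random.gauss(0.0, 0.3)
        a = (log_prior(q) + log_lik(q, seen)) - (log_prior(p) + log_lik(p, seen))
        moved.append(q if math.log(random.random()) < a else p)
    particles = moved

post_mean = sum(particles) / N
```

Each pass is embarrassingly parallel across particles, which is what makes this class of simulator a natural fit for massively parallel hardware; the paper's contribution lies in the theory, implementation recommendations, and accuracy guarantees built on top of this basic structure.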
Abstract
The difficulties that MR poses for comparativists were anticipated 40 years ago in Sidney Verba's essay “Some Dilemmas of Comparative Research”, in which he called for a “disciplined configurative approach…based on general rules, but on complicated combinations of them” (Verba, 1967, p. 115). Charles Ragin's (1987) book The Comparative Method eloquently spelled out the mismatch between MR and causal explanation in comparative research. At the most basic level, like most other methods of multivariate statistical analysis, MR works by rendering the cases invisible, treating them simply as the source of a set of empirical observations on dependent and independent variables. However, even when scholars embrace the analytical purpose of generalizing about relationships between variables, as opposed to dwelling on specific differences between entities with proper names, the cases of interest in comparative political economy are limited in number and occupy a bounded universe. They are thus both knowable and manageable. Consequently, retaining named cases in the analysis is an efficient way of conveying information and letting readers evaluate it. Moreover, in practice most producers and consumers of comparative political economy are intrinsically interested in specific cases. Why not cater to this interest by keeping our cases visible?
Noel Scott, Rodolfo Baggio and Chris Cooper
Abstract
This chapter discusses the emerging network science approach to the study of complex adaptive systems and applies tools derived from statistical physics to the analysis of tourism destinations. The authors provide a brief history of network science and the characteristics of a network as well as different models such as small-world and scale-free networks, and dynamic properties such as resilience and information diffusion. The Italian resort island of Elba is used as a case study allowing comparison of the communication network of tourist organizations and the virtual network formed by the websites of these organizations. The study compares the parameters of these networks to networks from the literature and to randomly created networks. The analyses include computer simulations to assess the dynamic properties of these networks. The results indicate that the Elba tourism network has a low degree of collaboration between members. These findings provide a quantitative measure of network performance. In general, the application of network science to the study of social systems offers opportunities for better management of tourism destinations and complex social systems.
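One standard quantitative measure behind claims like "a low degree of collaboration" is the average local clustering coefficient, which captures how often a node's neighbours are themselves connected. A minimal sketch on a hypothetical toy graph (not the Elba data) follows.

```python
import itertools

def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as a dict mapping each node to the set of its neighbours."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering is undefined for degree < 2; count as 0
        # Count edges among the node's neighbours.
        links = sum(1 for u, v in itertools.combinations(nbrs, 2)
                    if v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# Hypothetical toy network: a fully connected triangle plus one pendant node.
adj = {
    'a': {'b', 'c'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd'},
    'd': {'c'},
}
coefficient = avg_clustering(adj)
```

In a destination network, a value well below that of comparable random or small-world benchmarks is the kind of evidence the chapter uses to argue that collaboration among tourism organizations is weak.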
Abstract
In empirical research, panel (and multinomial) probit models are leading examples for the use of maximum simulated likelihood estimators. The Geweke–Hajivassiliou–Keane (GHK) simulator is the most widely used technique for this type of problem. This chapter suggests an algorithm that is based on GHK but uses an adaptive version of sparse-grids integration (SGI) instead of simulation. It is adaptive in the sense that it uses an automated change-of-variables to make the integration problem numerically better behaved along the lines of efficient importance sampling (EIS) and adaptive univariate quadrature. The resulting integral is approximated using SGI that generalizes Gaussian quadrature in a way such that the computational costs do not grow exponentially with the number of dimensions. Monte Carlo experiments show an impressive performance compared to the original GHK algorithm, especially in difficult cases such as models with high intertemporal correlations.
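The GHK simulator that serves as the chapter's baseline estimates rectangle probabilities of a multivariate normal by factoring the covariance and drawing each dimension from a truncated normal conditional on the previous draws. Below is a minimal pure-Python sketch of that baseline (not the chapter's adaptive sparse-grids algorithm); the bisection-based inverse CDF is a deliberately crude illustrative choice.

```python
import math, random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # Inverse standard-normal CDF by bisection (adequate for a sketch).
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cholesky(S):
    n = len(S)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = S[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

def ghk(lower, upper, Sigma, n_draws=2000, seed=0):
    """GHK estimate of P(lower <= X <= upper) for X ~ N(0, Sigma)."""
    rng = random.Random(seed)
    L = cholesky(Sigma)
    n = len(Sigma)
    total = 0.0
    for _ in range(n_draws):
        w, e = 1.0, []
        for i in range(n):
            mu = sum(L[i][k] * e[k] for k in range(i))
            a = (lower[i] - mu) / L[i][i]
            b = (upper[i] - mu) / L[i][i]
            pa, pb = norm_cdf(a), norm_cdf(b)
            w *= pb - pa                      # mass of the admissible slice
            u = pa + rng.random() * (pb - pa)
            e.append(norm_ppf(u))             # truncated-normal draw, dim i
        total += w
    return total / n_draws

# Orthant probability with correlation 0.5 (exact value is 1/3);
# bounds of +/-8 stand in for infinite limits.
p = ghk([-8.0, -8.0], [0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]])
```

Replacing the uniform draws `rng.random()` above with deterministic sparse-grid nodes, after a change of variables that smooths the integrand, is the essence of the substitution the chapter studies.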