Search results

1 – 10 of over 5000
Article
Publication date: 17 March 2014

Joël Wagner

Abstract

Purpose

The concept of value at risk is used in the risk-based calculation of solvency capital requirements in the Basel II/III banking regulations and in the Solvency II insurance regulation framework planned in the European Union. While this measure controls the ruin probability of a financial institution, the expected policyholder deficit (EPD) and expected shortfall (ES) measures, which are relevant from the customer's perspective because they quantify the magnitude of the shortfall, are not controlled at the same time. Hence, if there are variations in or changes to the asset-liability situation, financial companies may still comply with the capital requirement while the EPD or ES reach unsatisfactory levels. This is a significant drawback of the solvency frameworks. The paper aims to discuss these issues.

Design/methodology/approach

The author develops a model framework in which the relevant risk measures are evaluated using the distribution-free approach of the normal power approximation. This allows analytical approximations of the risk measures to be derived solely from the first three central moments of the underlying distributions. For the case of a reference insurance company, the author calculates the required capital using the ruin probability and EPD approaches, performing sensitivity analyses that consider different asset allocations and different liability characteristics.
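
The normal power approximation mentioned above yields closed-form quantiles from the first three moments. As a rough illustration only (the paper's asset-liability model and parameter values are not reproduced here), a minimal Python sketch of the ruin-probability capital requirement under the common second-order form of the approximation:

```python
# Minimal sketch of a normal power (NP) quantile approximation, assuming the
# second-order form x_p ~ mu + sigma * (z_p + (gamma / 6) * (z_p**2 - 1)).
# The moments below are illustrative and not taken from the paper.
from scipy.stats import norm

def np_quantile(mu, sigma, gamma, p):
    """Approximate p-quantile of a loss distribution from its mean (mu),
    standard deviation (sigma) and skewness (gamma)."""
    z = norm.ppf(p)
    return mu + sigma * (z + (gamma / 6.0) * (z * z - 1.0))

# Capital such that the ruin probability stays below 0.5% (hypothetical moments):
print(np_quantile(mu=0.0, sigma=100.0, gamma=0.8, p=0.995))
```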

Findings

The author concludes that only a simultaneous monitoring of the ruin probability and the EPD can lead to satisfactory results guaranteeing a constant level of customer protection. For the reference firm, the author evaluates the relative changes in the capital requirement when applying the EPD approach alongside the ruin probability approach. Depending on the development of the assets and liabilities, and in the cases the author illustrates, the reference company would need to provide substantial amounts of additional equity capital.

Originality/value

A comparative assessment of alternative risk measures is relevant given the debate among regulators, industry representatives and academics about how adequately they are used. The author borrows the approach in part from the work of Barth, who compares the ruin probability and EPD approaches when discussing the RBC formulas introduced by the US National Association of Insurance Commissioners in the 1990s. The author reconsiders several of these findings and discusses them in the light of the new regulatory frameworks. More precisely, the author first performs sensitivity analyses for the risk measures using different parameter configurations. Such analyses are relevant since, in practice, parameter values may differ from the estimates used in the model and have a significant impact on the values of the risk measures. Second, the author goes beyond a simple discussion of the outcomes for each risk measure by deriving the firm conclusion that both the frequency and the magnitude of shortfalls need to be controlled.

Details

The Journal of Risk Finance, vol. 15 no. 2
Type: Research Article
ISSN: 1526-5943


Article
Publication date: 1 June 2000

A. Savini

Abstract

Gives introductory remarks on chapter 1 of this group of 31 papers from the ISEF 1999 Proceedings, which covers methodologies for field analysis in the electromagnetic community. Observes that the implementation of theory in computer packages contributes to its clarification. Discusses the areas covered by some of the papers, such as artificial intelligence using fuzzy logic; includes applications such as permanent magnets; and looks at eddy current problems. States that the finite element method is currently the most popular method used for field computation. Closes by pointing out the amalgam of topics covered.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649


Open Access
Article
Publication date: 5 December 2023

Liqun Hu, Tonghui Wang, David Trafimow, S.T. Boris Choy, Xiangfei Chen, Cong Wang and Tingting Tong

Abstract

Purpose

The authors’ conclusions are based on mathematical derivations that are supported by computer simulations and three worked examples in applications of economics and finance. Finally, the authors provide a link to a computer program so that researchers can perform the analyses easily.

Design/methodology/approach

Based on a parameter estimation goal, the present work is concerned with determining the minimum sample size researchers should collect so their sample medians can be trusted as good estimates of corresponding population medians. The authors derive two solutions, using a normal approximation and an exact method.
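
As a rough illustration of the normal-approximation route (assuming a normally distributed population, which is a simplification of the paper's skew-normal setting), the asymptotic variance of the sample median, 1 / (4 n f(m)^2), gives a closed-form minimum sample size:

```python
# Minimal sketch, assuming normal data: the sample median is asymptotically
# N(m, 1 / (4 * n * f(m)**2)) with f(m) = 1 / (sigma * sqrt(2 * pi)), so requiring
# |sample median - m| < eps_in_sd * sigma with the given confidence yields
# n >= (pi / 2) * (z / eps_in_sd)**2. The exact (order-statistic) method derived
# in the paper is not reproduced here.
import math
from scipy.stats import norm

def n_for_median_normal_approx(eps_in_sd, confidence):
    z = norm.ppf((1.0 + confidence) / 2.0)
    return math.ceil((math.pi / 2.0) * (z / eps_in_sd) ** 2)

# Hypothetical precision and confidence, not taken from the paper:
print(n_for_median_normal_approx(eps_in_sd=0.2, confidence=0.95))  # about 151
```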

Findings

The exact method provides more accurate answers than the normal approximation method. The authors show that the minimum sample size necessary for estimating the median using the exact method is substantially smaller than that using the normal approximation method. Therefore, researchers can use the exact method to enjoy a sample size savings.

Originality/value

In this paper, the a priori procedure is extended to estimating the population median under skew-normal settings. The mathematical derivation of the exact method of using the sample median to estimate the population median, supported by computer simulations, is new, and a link to a free and user-friendly computer program is provided so that researchers can make their own calculations.

Details

Asian Journal of Economics and Banking, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2615-9821


Article
Publication date: 1 June 2000

K. Wiak

Abstract

Discusses the 27 papers in the ISEF 1999 Proceedings on the subject of electromagnetism. States that the groups of papers cover such subjects within the discipline as: induction machines; reluctance motors; PM motors; transformers and reactors; and special problems and applications. Discusses all of these in detail and itemizes each, with more in-depth treatment of the various technical applications and areas. Concludes that the recommendations made should be adhered to.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 1 August 2001

René Gélinas

Abstract

Discusses the problem of the joint determination of the parameters for X̄ and R control charts. A simple heuristic, called a power approximation, is presented. The power approximation is based on three regression equations, which are used to estimate the sample size and the control limits for the X̄ chart and the R chart. Some developments of, and discussion about, the proposed power approximation are then presented, and the method's performance is tested and assessed using a set of problems previously studied in various scientific publications, as well as a specific data set from a previously published study.
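
The regression equations that define the heuristic are not given in the abstract; as context only, here is a minimal sketch of the standard X̄ and R chart limits that such a design procedure ultimately has to produce (Shewhart factors for subgroup size n = 5):

```python
# Generic X-bar / R control limits for subgroup size n = 5 (A2 = 0.577, D3 = 0,
# D4 = 2.114). This is NOT the paper's regression-based power approximation,
# only the kind of limits its sample-size and limit estimates feed into.
def xbar_r_limits(xbar_bar, r_bar, a2=0.577, d3=0.0, d4=2.114):
    xbar_limits = (xbar_bar - a2 * r_bar, xbar_bar + a2 * r_bar)
    r_limits = (d3 * r_bar, d4 * r_bar)
    return xbar_limits, r_limits

# Hypothetical process averages, not from the study:
print(xbar_r_limits(xbar_bar=10.0, r_bar=1.2))
```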

Details

International Journal of Quality & Reliability Management, vol. 18 no. 6
Type: Research Article
ISSN: 0265-671X


Book part
Publication date: 21 November 2014

Jan F. Kiviet and Jerzy Niemczyk

Abstract

IV estimation is examined when some instruments may be invalid. This is relevant because the initial just-identifying orthogonality conditions are untestable, whereas their validity is required when testing the orthogonality of additional instruments by so-called overidentification restriction tests. Moreover, these tests have limited power when samples are small, especially when instruments are weak. Distinguishing between conditional and unconditional settings, we analyze the limiting distribution of inconsistent IV estimators and examine normal first-order asymptotic approximations to their density in finite samples. For simple classes of models we compare these approximations with their simulated empirical counterparts over almost the full parameter space, which is expressed in measures of model fit, simultaneity, instrument invalidity, and instrument weakness. Our major findings are that, for the accuracy of large-sample asymptotic approximations, instrument weakness is much more detrimental than instrument invalidity, and that IV estimators obtained from strong but possibly invalid instruments are usually much closer to the true parameter values than those obtained from valid but weak instruments.
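
A minimal simulation sketch of the trade-off described in the last sentence, with a just-identified IV estimator and illustrative coefficients that are not taken from the chapter:

```python
# Compare a strong but mildly invalid instrument with a valid but weak one in a
# simple simultaneous model y = beta * x + u, where x is endogenous through u.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 10_000, 1.0
u = rng.normal(size=n)                        # structural error
v = rng.normal(size=n)                        # exogenous variation in x
x = 0.5 * u + v
y = beta * x + u

z_strong_invalid = v + 0.1 * u                # strong, but correlated with u
z_weak_valid = 0.05 * v + rng.normal(size=n)  # orthogonal to u, but weak

def iv(z):
    return (z @ y) / (z @ x)                  # just-identified IV estimator

print("strong but invalid:", iv(z_strong_invalid))
print("weak but valid:   ", iv(z_weak_valid))
```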

Book part
Publication date: 21 November 2014

Jin Seo Cho and Halbert White

Abstract

We provide a new characterization of the equality of two positive-definite matrices A and B, and we use this to propose several new computationally convenient statistical tests for the equality of two unknown positive-definite matrices. Our primary focus is on testing the information matrix equality (e.g. White, 1982, 1994). We characterize the asymptotic behavior of our new trace-determinant information matrix test statistics under the null and the alternative and investigate their finite-sample performance for a variety of models: linear regression, exponential duration, probit, and Tobit. The parametric bootstrap suggested by Horowitz (1994) delivers critical values that provide admirable level behavior, even in samples as small as n = 50. Our new tests often have better power than the parametric-bootstrap version of the traditional IMT; when they do not, they nevertheless perform respectably.
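
One convenient way to see why trace and determinant together pin down equality of positive-definite matrices (not necessarily the papers' own characterization or test statistic): for k x k symmetric positive-definite A and B, tr(A^-1 B) - log det(A^-1 B) >= k, with equality if and only if A = B. A minimal numerical sketch:

```python
# Trace/determinant discrepancy: zero iff the two positive-definite matrices
# are equal, strictly positive otherwise. Illustrates the matrix identity only,
# not the asymptotic tests or critical values discussed in the chapter.
import numpy as np

def trace_logdet_discrepancy(A, B):
    M = np.linalg.solve(A, B)                  # A^{-1} B
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - A.shape[0]

A = np.array([[2.0, 0.3], [0.3, 1.0]])
print(trace_logdet_discrepancy(A, A))          # ~0.0
print(trace_logdet_discrepancy(A, np.eye(2)))  # > 0
```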

Details

Essays in Honor of Peter C. B. Phillips
Type: Book
ISBN: 978-1-78441-183-1


Book part
Publication date: 18 September 2006

Joel A.C. Baum and Bill McKelvey

Abstract

The potential advantage of extreme value theory in modeling management phenomena is the central theme of this paper. The statistics of extremes have played only a very limited role in management studies despite the disproportionate emphasis on unusual events in the world of managers. An overview of this theory and related statistical models is presented, and illustrative empirical examples provided.
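
For readers unfamiliar with the statistics of extremes, a minimal sketch of the workhorse model, fitting a generalized extreme value (GEV) distribution to block maxima with SciPy; the data are simulated, not the chapter's empirical examples:

```python
# Fit a GEV distribution to simulated block maxima and read off a return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
block_maxima = rng.normal(size=(50, 250)).max(axis=1)   # e.g. 50 yearly maxima

shape, loc, scale = genextreme.fit(block_maxima)
# Level exceeded, on average, once every 100 blocks:
print(genextreme.ppf(0.99, shape, loc=loc, scale=scale))
```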

Details

Research Methodology in Strategy and Management
Type: Book
ISBN: 978-0-76231-339-6

Article
Publication date: 1 June 1992

J.C. CAVENDISH, C.A. HALL and T.A. PORSCHING

Abstract

We describe a novel mathematical approach to deriving and solving covolume models of the incompressible 2‐D Navier‐Stokes flow equations. The approach integrates three technical components into a single modelling algorithm:
1. Automatic grid generation. An algorithm is described and used to automatically discretize the flow domain into a Delaunay triangulation and a dual Voronoi polygonal tessellation.
2. Covolume finite difference equation generation. Three covolume discretizations of the Navier‐Stokes equations are presented. The first scheme conserves mass over triangular control volumes, the second over polygonal control volumes, and the third over both. Simple consistent finite difference equations are derived in terms of the primitive variables of velocity and pressure.
3. Dual variable reduction. A network-theoretic technique is used to transform each of the finite difference systems into an equivalent system that is considerably smaller than the original primitive finite difference system.
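
Step 1 can be reproduced with off-the-shelf routines; a minimal sketch using SciPy's generic Delaunay and Voronoi constructions (not the authors' own grid generator):

```python
# Build a Delaunay triangulation and its dual Voronoi tessellation for a set of
# nodes, the two complementary control-volume families used in covolume schemes.
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(2)
nodes = rng.random((30, 2))        # illustrative nodes in the unit square

tri = Delaunay(nodes)              # triangular control volumes
vor = Voronoi(nodes)               # dual polygonal (covolume) control volumes

print(len(tri.simplices), "triangles;", len(vor.regions), "Voronoi regions")
```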

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 2 no. 6
Type: Research Article
ISSN: 0961-5539


Article
Publication date: 7 November 2016

João Paulo Pascon

Abstract

Purpose

The purpose of this paper is to deal with large deformation analysis of plane beams composed of functionally graded (FG) elastic material with a variable Poisson’s ratio.

Design/methodology/approach

The material is assumed to be linear elastic, with a Poisson's ratio varying according to a power law along the thickness direction. The finite element used is a plane beam with any order of approximation along the axis and with four transverse enrichment schemes, which can describe constant, linear, quadratic and cubic variation of the strain along the thickness direction. Regarding the constitutive law, five materials are adopted: two homogeneous limiting cases and three intermediate FG cases. The effect of both the finite element kinematics and the distribution of Poisson's ratio on the mechanical response of a cantilever is investigated.
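
The abstract does not give the grading law explicitly; one common power-law form for a property varying through the thickness, shown here only as an assumed illustration, is:

```python
# Assumed power-law grading of Poisson's ratio through the thickness,
# nu(z) = nu_bot + (nu_top - nu_bot) * ((z / h) + 0.5)**n for z in [-h/2, h/2].
# The exponent and limiting values are hypothetical; the paper's law may differ.
def poisson_ratio(z, h, nu_bot, nu_top, n):
    frac = (z / h) + 0.5            # through-thickness coordinate mapped to [0, 1]
    return nu_bot + (nu_top - nu_bot) * frac ** n

for z in (-0.5, 0.0, 0.5):          # thickness h = 1.0, nu from 0.20 to 0.35
    print(z, round(poisson_ratio(z, 1.0, 0.20, 0.35, n=2.0), 4))
```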

Findings

In accordance with the scientific literature, the second scheme, in which the transverse strain varies linearly, is sufficient for homogeneous long (or thin) beams under bending. However, for FG short (or moderately thick) beams, the third scheme, in which the transverse strain variation is quadratic, is needed for a reliable strain or stress distribution.

Originality/value

In the scientific literature, there are several studies regarding nonlinear analysis of functionally graded materials (FGMs) via finite elements, analysis of FGMs with constant Poisson’s ratio, and geometrically linear problems with gradually variable Poisson’s ratio. However, very few deal with finite element analysis of flexible beams with gradually variable Poisson’s ratio. In the present study, a reliable formulation for such beams is presented.
