Search results

1 – 10 of 414
Open Access
Article
Publication date: 5 December 2016

Sang Sup Cho

This study aims to estimate the firm size distributions that belong to the service sector and manufacturing sector in Korea.


Abstract

Purpose

This study aims to estimate the firm size distributions that belong to the service sector and manufacturing sector in Korea.

Design/methodology/approach

When estimating the firm size distribution, the author considers two major factors. First, the firm size distribution can follow a gamma distribution rather than traditionally accepted distributions such as the Pareto or log-normal distribution. In particular, firms in different industries can have gamma-type size distributions with different parameters. Second, the firm size distribution applied to this study’s data set should reflect the composition of the data. For example, a mixture gamma distribution should be estimated and compared, because the data set comprises small businesses, medium-sized companies and large companies.

Findings

Using data on 8,230 firms in 2013, the author estimates a mixture gamma distribution for firm size.
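As an illustration of the estimation idea (not taken from the article), the following Python sketch fits a three-component gamma mixture to synthetic firm-size data with an EM-style algorithm; the M-step uses weighted moment matching rather than a full weighted gamma MLE, and all data and starting values are assumptions.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
# Synthetic "firm size" data: three gamma components standing in for
# small, medium-sized and large firms (purely illustrative).
x = np.concatenate([
    gamma.rvs(2.0, scale=1.0, size=6000, random_state=rng),
    gamma.rvs(5.0, scale=4.0, size=2000, random_state=rng),
    gamma.rvs(8.0, scale=20.0, size=230, random_state=rng),
])

K = 3
w = np.full(K, 1.0 / K)                  # mixing proportions
shape = np.array([1.0, 2.0, 4.0])        # initial shape parameters
scale = np.array([1.0, 5.0, 20.0])       # initial scale parameters

for _ in range(200):
    # E-step: responsibility of each component for each firm
    dens = np.stack([w[k] * gamma.pdf(x, shape[k], scale=scale[k]) for k in range(K)])
    resp = dens / dens.sum(axis=0, keepdims=True)
    # M-step: update weights; update shape/scale by weighted moment matching
    w = resp.mean(axis=1)
    for k in range(K):
        r = resp[k]
        m = np.average(x, weights=r)
        v = np.average((x - m) ** 2, weights=r)
        shape[k], scale[k] = m ** 2 / v, v / m

print("weights:", np.round(w, 3))
print("shapes:", np.round(shape, 2))
print("scales:", np.round(scale, 2))
```

A full EM would replace the moment-matching step with numerical maximization of the weighted gamma log-likelihood for each component.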

Originality/value

From the comparison, the following characteristics of the firm size distributions emerge: first, the firm size distribution of the manufacturing sector has a longer tail than that of the service sector. Second, the manufacturing firm size distribution dominates the firm size distribution of the country as a whole. Third, one of the three components that make up the mixed gamma firm size distribution accounts for 99 per cent of the firm size distribution. From the estimated firm size distributions of the service and manufacturing sectors in Korea, the author draws strategy and policy implications for start-up firms.

Details

Asia Pacific Journal of Innovation and Entrepreneurship, vol. 10 no. 1
Type: Research Article
ISSN: 2071-1395

Keywords

Book part
Publication date: 1 January 2008

S.T. Boris Choy, Wai-yin Wan and Chun-man Chan

The normal error distribution for the observations and log-volatilities in a stochastic volatility (SV) model is replaced by the Student-t distribution for robustness…

Abstract

The normal error distribution for the observations and log-volatilities in a stochastic volatility (SV) model is replaced by the Student-t distribution for robustness consideration. The model is then called the t-t SV model throughout this paper. The objectives of the paper are twofold. First, we introduce the scale mixtures of uniform (SMU) and the scale mixtures of normal (SMN) representations to the Student-t density and show that the setup of a Gibbs sampler for the t-t SV model can be simplified. For example, the full conditional distribution of the log-volatilities has a truncated normal distribution that enables an efficient Gibbs sampling algorithm. These representations also provide a means for outlier diagnostics. Second, we consider the so-called t SV model with leverage, where the observations and log-volatilities follow a bivariate t distribution. Returns on exchange rates of the Australian dollar against 10 major currencies are fitted by the t-t SV model and by the t SV model with leverage.
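For orientation (not from the chapter), here is a minimal Python sketch of simulating data from a t-t SV model, i.e. Student-t errors in both the observation and log-volatility equations; parameter values are illustrative and the Gibbs sampler itself is not reproduced.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(1)

def simulate_tt_sv(n, mu=-0.5, phi=0.97, sigma=0.15, nu_y=8.0, nu_h=8.0):
    """Simulate returns y_t and log-volatilities h_t from a t-t SV model:
       h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,  eta_t ~ t(nu_h)
       y_t = exp(h_t / 2) * eps_t,                   eps_t ~ t(nu_y)"""
    h = np.empty(n)
    y = np.empty(n)
    h_prev = mu
    for i in range(n):
        h[i] = mu + phi * (h_prev - mu) + sigma * student_t.rvs(nu_h, random_state=rng)
        y[i] = np.exp(h[i] / 2.0) * student_t.rvs(nu_y, random_state=rng)
        h_prev = h[i]
    return y, h

y, h = simulate_tt_sv(1000)
print("sample kurtosis of returns:", round(float(((y - y.mean()) ** 4).mean() / y.var() ** 2), 2))
```

In the chapter, the SMU/SMN representations re-express these t densities as mixtures so that the Gibbs sampler has tractable full conditionals; the sketch above only generates data from the model.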

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Article
Publication date: 23 September 2019

Giuseppe Orlando, Rosa Maria Mininni and Michele Bufalo

The purpose of this paper is to model interest rates from observed financial market data through a new approach to the Cox–Ingersoll–Ross (CIR) model. This model is popular among…

Abstract

Purpose

The purpose of this paper is to model interest rates from observed financial market data through a new approach to the Cox–Ingersoll–Ross (CIR) model. This model is popular among financial institutions mainly because it is a rather simple (uni-factorial) model and an improvement on the earlier Vasicek framework. However, there are a number of issues in describing interest rate dynamics within the CIR framework on which focus should be placed. Therefore, a new methodology has been proposed that allows forecasting future expected interest rates from observed financial market data while preserving the structure of the original CIR model, even with negative interest rates. The performance of the new approach, tested on monthly recorded interest rate data, shows a good fit to current data for different term structures.

Design/methodology/approach

To ensure a fit close to current interest rates, the innovative step in the proposed procedure consists in partitioning the entire available market data sample, which usually shows a mixture of probability distributions of the same type, into a suitable number of sub-samples, each having a normal/gamma distribution. An appropriate translation of market interest rates to positive values has been introduced to overcome the issue of negative/near-zero values. The CIR model parameters have then been calibrated to the shifted market interest rates, and the expected values of interest rates have been simulated by a Monte Carlo discretization scheme. The empirical performance of the proposed methodology has been analysed for two different monthly recorded EUR data samples: a money market data set and a long-term data set.
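As a rough sketch of the shift-and-simulate step (not the authors' calibration), the following Python code translates rates by a fixed positive shift, runs a full-truncation Euler Monte Carlo discretization of the CIR dynamics and maps the simulated values back; all parameter values and the size of the shift are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_shifted_cir(r0, kappa, theta, sigma, shift, dt=1/12, n_steps=12, n_paths=10_000):
    """Euler (full-truncation) Monte Carlo for a CIR process applied to
    shifted rates x_t = r_t + shift, then mapped back to r_t = x_t - shift."""
    x = np.full(n_paths, r0 + shift)
    for _ in range(n_steps):
        x_pos = np.maximum(x, 0.0)                      # keep the square root well defined
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + kappa * (theta - x_pos) * dt + sigma * np.sqrt(x_pos) * dw
    return x - shift

# Illustrative parameters only; in the paper they are calibrated to the shifted market rates.
paths = simulate_shifted_cir(r0=-0.003, kappa=0.8, theta=0.004, sigma=0.02, shift=0.01)
print("expected rate in 12 months:", round(float(paths.mean()), 5))
```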

Findings

Better results are shown in terms of the root mean square error when a segmentation of the data sample into normally distributed sub-samples is considered. After assessing the accuracy of the proposed procedure, the implemented algorithm was applied to forecast next-month expected interest rates over a historical period of 12 months (fixed window). An error analysis showed that the algorithm provides a better fit of the predicted expected interest rates to market data than the exponentially weighted moving average model. A further confirmation of the efficiency of the proposed algorithm, and of the quality of the calibration of the CIR parameters to the observed market interest rates, is given by applying the proposed forecasting technique.

Originality/value

This paper has the objective of modelling interest rates from observed financial market data through a new approach to the CIR model. This model is popular among financial institutions mainly because it is a rather simple (uni-factorial) model and an improvement on the earlier Vasicek model (Section 2). However, there are a number of issues in describing short-term interest rate dynamics within the CIR framework on which focus should be placed. A new methodology has been proposed that allows us to forecast future expected short-term interest rates from observed financial market data while preserving the structure of the original CIR model. The performance of the new approach, tested on monthly data, provides a good fit for different term structures. It is shown how the proposed methodology overcomes both the usual challenges (e.g. simulating regime switching, clustered volatility and skewed tails) and the new ones added by the current market environment (particularly the need to model a downward trend to negative interest rates).

Details

The Journal of Risk Finance, vol. 20 no. 4
Type: Research Article
ISSN: 1526-5943

Keywords

Book part
Publication date: 18 October 2019

Hedibert Freitas Lopes, Matthew Taddy and Matthew Gardner

Heavy-tailed distributions present a tough setting for inference. They are also common in industrial applications, particularly with internet transaction datasets, and machine…

Abstract

Heavy-tailed distributions present a tough setting for inference. They are also common in industrial applications, particularly with internet transaction datasets, and machine learners often analyze such data without considering the biases and risks associated with the misuse of standard tools. This chapter outlines a procedure for inference about the mean of a (possibly conditional) heavy-tailed distribution that combines nonparametric analysis for the bulk of the support with Bayesian parametric modeling – motivated from extreme value theory – for the heavy tail. The procedure is fast and massively scalable. The work should find application in settings wherever correct inference is important and reward tails are heavy; we illustrate the framework in causal inference for A/B experiments involving hundreds of millions of users of eBay.com.
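A simplified, non-Bayesian sketch of the bulk-plus-tail idea (not the chapter's actual procedure): observations below a threshold are handled empirically, exceedances are modelled by a generalized Pareto distribution, and the two pieces are combined into one mean estimate; the threshold choice and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
# Synthetic heavy-tailed "reward" data (illustrative only).
x = rng.pareto(1.8, size=200_000) * 5.0

u = np.quantile(x, 0.99)                 # tail threshold (arbitrary choice here)
bulk, tail = x[x <= u], x[x > u]

# Fit a generalized Pareto distribution to the exceedances over u.
xi, loc, beta = genpareto.fit(tail - u, floc=0.0)

p_tail = tail.size / x.size
tail_mean = u + beta / (1.0 - xi)        # GPD mean above u, valid for xi < 1
est_mean = (1.0 - p_tail) * bulk.mean() + p_tail * tail_mean

print("plain sample mean:", round(float(x.mean()), 3))
print("bulk + GPD-tail mean:", round(float(est_mean), 3))
```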

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Article
Publication date: 17 April 2023

Ashlyn Maria Mathai and Mahesh Kumar

In this paper, a mixture of exponential and Rayleigh distributions in the proportions α and 1 − α and all the parameters in the mixture distribution are estimated based on fuzzy…

Abstract

Purpose

In this paper, a mixture of exponential and Rayleigh distributions in the proportions α and 1 − α is considered, and all the parameters of the mixture distribution are estimated based on fuzzy data.

Design/methodology/approach

Maximum likelihood estimation (MLE) and the method of moments (MOM) are applied for estimation. Fuzzy data in the form of triangular fuzzy numbers and Gaussian fuzzy numbers, for different sample sizes, are considered to illustrate the resulting estimation and to compare the two methods. In addition, the obtained results are compared with existing results for crisp data in the literature.
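For reference (not from the paper), here is a minimal sketch of maximum likelihood estimation for the exponential-Rayleigh mixture on crisp data; the fuzzy-data likelihood used in the paper, which integrates the density against membership functions, is not reproduced, and the true parameter values and starting points are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon, rayleigh

rng = np.random.default_rng(4)
# Crisp synthetic sample from the mixture (alpha = 0.4, lambda = 2.0, sigma = 3.0).
n = 5000
from_exp = rng.random(n) < 0.4
x = np.where(from_exp,
             expon.rvs(scale=1 / 2.0, size=n, random_state=rng),
             rayleigh.rvs(scale=3.0, size=n, random_state=rng))

def neg_log_lik(params):
    # Mixture density: alpha*lam*exp(-lam*x) + (1-alpha)*(x/sigma^2)*exp(-x^2/(2*sigma^2))
    alpha, lam, sigma = params
    dens = (alpha * lam * np.exp(-lam * x)
            + (1 - alpha) * (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2)))
    return -np.sum(np.log(dens))

res = minimize(neg_log_lik, x0=[0.5, 1.0, 1.0],
               bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
print("alpha, lambda, sigma:", np.round(res.x, 3))
```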

Findings

Accounting for fuzziness in the data is useful for obtaining reliable results in the presence of vagueness. Mean square errors (MSEs) of the resulting estimators are computed using crisp data and fuzzy data. In terms of MSEs, the maximum likelihood estimators are observed to perform better than the moment estimators.

Originality/value

Classical methods of obtaining estimators of unknown parameters fail to give realistic estimators, since these methods assume the data collected to be crisp or exact. Such precise data are not always feasible or realistic in practice. Most data are incomplete and are sometimes expressed in linguistic variables. Such data can be handled by generalizing the classical inference methods using fuzzy set theory.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 19 July 2021

Farzaneh Khayat, Lemir Teron and Farzin Rasoulyan

The purpose of this paper is to evaluate socioeconomic factors related to COVID-19 mortality rates in New York City (NYC) to understand the connections between socioeconomic…

Abstract

Purpose

The purpose of this paper is to evaluate socioeconomic factors related to COVID-19 mortality rates in New York City (NYC) to understand the connections between socioeconomic variables, including race and income, and the disease.

Design/methodology/approach

Using multivariable negative binomial regression, the association between health and mortality disparities related to COVID-19 and socioeconomic conditions is evaluated. The authors obtained ZIP code-level data from the NYC Department of Health and Mental Hygiene and the US Census Bureau.
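A hedged illustration (not the authors' analysis or data) of a multivariable negative binomial regression at the ZIP-code level using statsmodels; the covariate names, the fixed dispersion parameter and the synthetic counts are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_zip = 177  # roughly the number of NYC ZIP-code areas (illustrative)
df = pd.DataFrame({
    "pct_black": rng.uniform(0, 0.8, n_zip),
    "pct_hispanic": rng.uniform(0, 0.7, n_zip),
    "median_income": rng.uniform(30_000, 150_000, n_zip),
    "population": rng.integers(10_000, 110_000, n_zip),
})
# Synthetic death counts with a log link and a population offset.
lin = -7.5 + 1.2 * df.pct_black + 1.0 * df.pct_hispanic - 6e-6 * df.median_income
df["deaths"] = rng.poisson(np.exp(lin) * df.population)

X = sm.add_constant(df[["pct_black", "pct_hispanic", "median_income"]])
model = sm.GLM(df["deaths"], X,
               family=sm.families.NegativeBinomial(alpha=0.5),
               offset=np.log(df["population"]))
print(model.fit().summary())
```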

Findings

Drawing on over 18,000 confirmed deaths in NYC, this study concludes that the mortality rate rises in areas with a higher proportion of Hispanic and Black residents, whereas areas with higher incomes had lower mortality associated with COVID-19.

Originality/value

The paper highlights the impacts of social, racial and wealth disparities in mortality rates. It brings to focus the importance of targeted policies regarding these disparities to alleviate health inequality among marginalized communities and to reduce disease mortality.

Details

International Journal of Human Rights in Healthcare, vol. 15 no. 4
Type: Research Article
ISSN: 2056-4902

Keywords

Article
Publication date: 27 March 2018

Dror Parnes

The purpose of this paper is to analyze the differences between the actual mortgage prompt and late payments and their respective expected measures from 2004 to 2010 to spot early…

Abstract

Purpose

The purpose of this paper is to analyze the differences between the actual mortgage prompt and late payments and their respective expected measures from 2004 to 2010 to spot early symptoms of a housing crisis.

Design/methodology/approach

This paper explores these discrepancies across the entire US market and along various delinquency lengths of 30, 60 and 90 days. This paper constructs a Bayesian forecasting model that relies on prior distributional properties of diverse time horizons.

Findings

Abnormal mortgage delinquency rates are identified in real time and can serve as early symptoms of a housing crisis.

Practical implications

The statistical scheme proposed in this paper can function as a valuable predictive tool for lending institutions, bank audit companies, regulatory bodies and real estate professional investors who examine changes in economic settings and trends in short sale leads.

Social implications

The abnormal mortgage delinquencies can serve as indicators of changes in economic fundamentals and early signs of a mounting housing crisis.

Originality/value

This paper presents a unique statistical technique in the context of mortgage delinquencies.

Details

International Journal of Housing Markets and Analysis, vol. 11 no. 2
Type: Research Article
ISSN: 1753-8270

Keywords

Book part
Publication date: 30 September 2014

Abdoul Aziz Ndoye and Michel Lubrano

We provide a Bayesian inference for a mixture of two Pareto distributions which is then used to approximate the upper tail of a wage distribution. The model is applied to the data…

Abstract

We provide a Bayesian inference for a mixture of two Pareto distributions, which is then used to approximate the upper tail of a wage distribution. The model is applied to data from the CPS Outgoing Rotation Group to analyze the recent structure of top wages in the United States from 1992 through 2009. We find an enormous earnings inequality between the very highest wage earners (the “superstars”) and the other high wage earners. These findings are largely in accordance with alternative explanations combining the model of superstars and the model of tournaments in hierarchical organization structures. The approach can be used to analyze the recent pay gaps among top executives in large firms so as to exhibit the “superstar” effect.
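As a non-Bayesian analogue (not the authors' estimator), here is a short EM sketch for a two-component Pareto mixture above a fixed wage threshold; the threshold, the synthetic wages and the starting values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
x_m = 50.0   # lower bound of the upper tail (e.g. a high wage threshold)
# Synthetic top wages: many "ordinary" high earners plus a few superstars.
x = np.concatenate([
    x_m * (1 + rng.pareto(3.0, 9500)),   # shape 3: thinner tail
    x_m * (1 + rng.pareto(1.2, 500)),    # shape 1.2: superstar tail
])

K, p, a = 2, np.array([0.5, 0.5]), np.array([2.0, 1.0])
log_ratio = np.log(x / x_m)

for _ in range(500):
    # E-step: responsibilities under each Pareto component
    dens = np.stack([p[k] * a[k] * x_m**a[k] / x**(a[k] + 1) for k in range(K)])
    resp = dens / dens.sum(axis=0, keepdims=True)
    # M-step: update mixing proportions; closed-form weighted Pareto MLE for the shapes
    p = resp.mean(axis=1)
    a = resp.sum(axis=1) / (resp * log_ratio).sum(axis=1)

print("mixing proportions:", np.round(p, 3))
print("Pareto shapes:", np.round(a, 2))
```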

Details

Economic Well-Being and Inequality: Papers from the Fifth ECINEQ Meeting
Type: Book
ISBN: 978-1-78350-556-2

Keywords

Book part
Publication date: 2 December 2021

Edwin Fourrier-Nicolaï and Michel Lubrano

The growth incidence curve of Ravallion and Chen (2003) is based on the quantile function. Its distribution-free estimator behaves erratically with usual sample sizes leading to…

Abstract

The growth incidence curve of Ravallion and Chen (2003) is based on the quantile function. Its distribution-free estimator behaves erratically with usual sample sizes, leading to problems in the tails. The authors propose a series of parametric models in a Bayesian framework. A first solution consists in modeling the underlying income distribution using simple densities for which the quantile function has a closed analytical form. This solution is extended by considering a mixture model for the underlying income distribution. However, in this case, the quantile function is semi-explicit and has to be evaluated numerically. The last solution consists in adjusting directly a functional form for the Lorenz curve and deriving its first-order derivative to find the corresponding quantile function. The authors compare these models by Monte Carlo simulations and using UK data from the Family Expenditure Survey. The authors devote particular attention to the analysis of subgroups.
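To illustrate the first, closed-form solution (not the chapter's full Bayesian treatment), a sketch that fits a log-normal income distribution in each of two periods and reads the growth incidence curve off the ratio of the two quantile functions; the data are synthetic and the single-density choice is an assumption.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
# Synthetic incomes in two periods (illustrative): growth plus rising inequality.
y1 = rng.lognormal(mean=9.8, sigma=0.55, size=4000)
y2 = rng.lognormal(mean=9.9, sigma=0.60, size=4000)

def lognormal_quantile(y, p):
    """Quantile function of a log-normal fitted to y by the moments of log(y)."""
    mu, s = np.log(y).mean(), np.log(y).std()
    return np.exp(mu + s * norm.ppf(p))

# Growth incidence curve: growth rate of the p-th quantile between the two periods.
p = np.linspace(0.05, 0.95, 19)
gic = lognormal_quantile(y2, p) / lognormal_quantile(y1, p) - 1.0

for pi, gi in zip(p, gic):
    print(f"p = {pi:.2f}  growth rate of quantile = {gi:+.3f}")
```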

Details

Research on Economic Inequality: Poverty, Inequality and Shocks
Type: Book
ISBN: 978-1-80071-558-5

Keywords

Article
Publication date: 1 July 2014

Lysa Porth, Wenjun Zhu and Ken Seng Tan

The purpose of this paper is to address some of the fundamental issues surrounding crop insurance ratemaking, from the perspective of the reinsurer, through the development of a…

Abstract

Purpose

The purpose of this paper is to address some of the fundamental issues surrounding crop insurance ratemaking, from the perspective of the reinsurer, through the development of a scientific pricing framework.

Design/methodology/approach

The generating process of the historical loss cost ratios (LCRs) is reviewed, and the Erlang mixture distribution is proposed. A modified credibility approach is developed based on the Erlang mixture distribution and the liability-weighted LCR, and information from the observed data of the individual region/province is integrated with the collective experience of the entire crop reinsurance program in Canada.

Findings

A comprehensive data set representing the entire crop insurance sector in Canada is used to show that the Erlang mixture distribution captures the tails of the data more accurately than conventional distributions. Further, the heterogeneous credibility premium based on the liability-weighted LCRs is more conservative and provides a more scientific approach to enhancing the reinsurance pricing.

Research limitations/implications

Credibility models are in the early stages of application in the area of agricultural insurance; therefore, the credibility models presented in this paper could be verified with data from other geographical regions.

Practical implications

The credibility-based Erlang mixture model proposed in this paper should be useful for crop insurers and reinsurers to enhance their ratemaking frameworks.

Originality/value

This is the first paper to introduce the Erlang mixture model in the context of agricultural risk modeling. Two modified versions of the Bühlmann-Straub credibility model are also presented, based on the liability-weighted LCR, to enhance the reinsurance pricing framework.
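For context (not the paper's modified estimators), a minimal sketch of the standard Bühlmann-Straub credibility calculation applied to liability-weighted LCRs; the regions, liabilities and LCR values are toy numbers, and the Erlang mixture severity model is not reproduced.

```python
import numpy as np

# Toy liability-weighted loss cost ratios (LCRs) for a few regions over years.
# Rows: regions, columns: years; values are illustrative only.
lcr = np.array([[0.80, 1.10, 0.95, 1.30],
                [0.60, 0.70, 0.65, 0.75],
                [1.40, 1.90, 1.10, 1.60]])
liab = np.array([[10., 12., 13., 15.],     # liabilities used as weights
                 [40., 42., 45., 50.],
                 [ 5.,  6.,  6.,  7.]])

I, n = lcr.shape
w_i = liab.sum(axis=1)
w = w_i.sum()
xbar_i = (liab * lcr).sum(axis=1) / w_i                # liability-weighted region means
xbar = (w_i * xbar_i).sum() / w                        # overall weighted mean

# Buhlmann-Straub variance components (within and between regions)
s2 = (liab * (lcr - xbar_i[:, None]) ** 2).sum() / (I * (n - 1))
tau2 = ((w_i * (xbar_i - xbar) ** 2).sum() - (I - 1) * s2) / (w - (w_i ** 2).sum() / w)
tau2 = max(tau2, 0.0)

z = w_i / (w_i + s2 / tau2)                            # credibility factors
premium = z * xbar_i + (1 - z) * xbar
print("credibility factors:", np.round(z, 3))
print("credibility LCRs:", np.round(premium, 3))
```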

Details

Agricultural Finance Review, vol. 74 no. 2
Type: Research Article
ISSN: 0002-1466

Keywords
