Search results

1–10 of over 10,000
Article
Publication date: 17 April 2023

Ashlyn Maria Mathai and Mahesh Kumar

In this paper, a mixture of exponential and Rayleigh distributions in proportions α and 1 − α is considered, and all the parameters of the mixture distribution are estimated based on fuzzy…

Abstract

Purpose

In this paper, a mixture of exponential and Rayleigh distributions in proportions α and 1 − α is considered, and all the parameters of the mixture distribution are estimated based on fuzzy data.

Design/methodology/approach

Methods such as maximum likelihood estimation (MLE) and the method of moments (MOM) are applied for estimation. Fuzzy data in the form of triangular and Gaussian fuzzy numbers, for different sample sizes, are considered to illustrate the resulting estimation and to compare these methods. In addition, the obtained results are compared with existing results for crisp data in the literature.
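As a minimal sketch of the crisp-data estimation step, the snippet below fits the exponential-Rayleigh mixture by maximum likelihood on simulated data; the paper's fuzzy-data likelihood additionally weights each observation by its membership function (triangular or Gaussian), which this sketch omits, and all parameter values and sample sizes here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def mixture_pdf(x, alpha, lam, sigma):
    # Mixture of exponential(lam) and Rayleigh(sigma) in proportions alpha, 1 - alpha.
    expo = lam * np.exp(-lam * x)
    rayl = (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))
    return alpha * expo + (1 - alpha) * rayl

def neg_log_lik(theta, x):
    alpha, lam, sigma = theta
    return -np.sum(np.log(mixture_pdf(x, alpha, lam, sigma)))

# Simulated crisp sample: 60% exponential (rate 1.5), 40% Rayleigh (scale 2.0).
rng = np.random.default_rng(0)
n = 500
from_expo = rng.random(n) < 0.6
x = np.where(from_expo, rng.exponential(1 / 1.5, n), rng.rayleigh(2.0, n))

res = minimize(neg_log_lik, x0=[0.5, 1.0, 1.0], args=(x,),
               bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
print(res.x)  # estimates of (alpha, lambda, sigma)
```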

Findings

Modelling the fuzziness in the data is very useful for obtaining precise results in the presence of vagueness. Mean square errors (MSEs) of the resulting estimators are computed using crisp data and fuzzy data. In terms of MSEs, the maximum likelihood estimators are observed to perform better than the moment estimators.

Originality/value

Classical methods of obtaining estimators of unknown parameters fail to give realistic estimators, since they assume the collected data to be crisp or exact. In practice, such precise data are not always feasible or realistic; data are often incomplete and sometimes expressed in linguistic variables. Such data can be handled by generalizing the classical inference methods using fuzzy set theory.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 9 January 2009

Andrea Vocino

The purpose of this article is to present an empirical analysis of complex sample data with regard to the biasing effect of non‐independence of observations on standard error…

Abstract

Purpose

The purpose of this article is to present an empirical analysis of complex sample data with regard to the biasing effect of non‐independence of observations on standard error (SE) estimates of parameters. Using field data structured in the form of repeated measurements, it is shown, for a two‐factor confirmatory factor analysis model, how the bias in SEs arises when the non‐independence is ignored.

Design/methodology/approach

Three estimation procedures are compared: normal asymptotic theory (maximum likelihood); non‐parametric standard error estimation (naïve bootstrap); and sandwich (robust covariance matrix) estimation (pseudo‐maximum likelihood).
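This is not the paper's two-factor confirmatory factor analysis model; as a minimal illustration of the effect the abstract describes, the sketch below compares a naïve bootstrap with a cluster bootstrap for the standard error of a mean computed from simulated repeated measurements (the cluster structure and all numbers are assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clustered data: 50 subjects with 10 repeated measurements each;
# a subject-level random effect induces non-independence within clusters.
n_clusters, m = 50, 10
subject_effect = rng.normal(0, 1, n_clusters)
y = subject_effect[:, None] + rng.normal(0, 1, (n_clusters, m))

def naive_bootstrap_se(y, B=2000):
    # Resamples individual observations, ignoring the cluster structure.
    flat = y.ravel()
    means = [flat[rng.integers(0, flat.size, flat.size)].mean() for _ in range(B)]
    return np.std(means)

def cluster_bootstrap_se(y, B=2000):
    # Resamples whole clusters, preserving within-cluster correlation.
    means = []
    for _ in range(B):
        idx = rng.integers(0, y.shape[0], y.shape[0])
        means.append(y[idx].mean())
    return np.std(means)

print(naive_bootstrap_se(y))    # understates the SE under non-independence
print(cluster_bootstrap_se(y))  # noticeably larger, and closer to the truth
```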

Findings

The study reveals that, when using either normal asymptotic theory or non‐parametric standard error estimation, the SE bias produced by the non‐independence of observations can be noteworthy.

Research limitations/implications

Considering the methodological constraints in employing field data, the three analyses examined must be interpreted independently, and as a result taxonomic generalisations are limited. However, the study still provides “case study” evidence suggesting a relationship between non‐independence of observations and bias in standard error estimates.

Originality/value

Given the increasing popularity of structural equation models in the social sciences and in particular in the marketing discipline, the paper provides a theoretical and practical insight into how to treat repeated measures and clustered data in general, adding to previous methodological research. Some conclusions and suggestions for researchers who make use of partial least squares modelling are also drawn.

Details

Asia Pacific Journal of Marketing and Logistics, vol. 21 no. 1
Type: Research Article
ISSN: 1355-5855


Book part
Publication date: 4 November 2021

Chaido Dritsaki and Melina Dritsaki

The term “economic growth” refers to the increase of real gross national product or gross domestic product or per capita income. National income or else national product is…

Abstract

The term “economic growth” refers to the increase of real gross national product, gross domestic product or per capita income. National income, or national product, is usually expressed as a measure of the total value added of a domestic economy, known as gross domestic product (GDP). Generally, GDP measures the value of economic activity within a country during a specific time period. The current study aims to find the most suitable model that fits a time-series data set using the Box-Jenkins methodology and to examine the forecasting ability of this model. The analysis used quarterly data for Greece from the first quarter of 1995 until the third quarter of 2019. Nonlinear maximum likelihood (ML) estimation was applied to estimate the model using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, while the covariance matrix was estimated using the negative of the matrix of log-likelihood second derivatives (the observed Hessian). Forecasting of the time series was carried out with both dynamic and static procedures, using all forecasting criteria.
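A minimal sketch of that workflow in Python follows, assuming a synthetic stand-in series (the study's quarterly Greek GDP data are not reproduced here) and an illustrative (1, 1, 1) order; the BFGS optimizer and observed-Hessian covariance match the estimation choices described above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical stand-in series on the study's sample period, 1995Q1-2019Q3.
rng = np.random.default_rng(2)
idx = pd.period_range("1995Q1", "2019Q3", freq="Q")
y = pd.Series(np.cumsum(rng.normal(0.5, 1.0, idx.size)), index=idx)

# Box-Jenkins: choose (p, d, q) from ACF/PACF diagnostics, then estimate by
# ML with BFGS; cov_type="oim" uses the observed-information (Hessian) matrix.
fit = SARIMAX(y, order=(1, 1, 1)).fit(method="bfgs", cov_type="oim", disp=False)
print(fit.summary())

# Static forecasts condition on observed lags at each step; dynamic forecasts
# feed earlier forecasts back in as lagged values.
static = fit.predict(start="2015Q1", end="2019Q3", dynamic=False)
dynamic = fit.predict(start="2015Q1", end="2019Q3", dynamic=True)
```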

Details

Modeling Economic Growth in Contemporary Greece
Type: Book
ISBN: 978-1-80071-123-5


Book part
Publication date: 10 April 2019

Iraj Rahmani and Jeffrey M. Wooldridge

We extend Vuong’s (1989) model-selection statistic to allow for complex survey samples. As a further extension, we use an M-estimation setting so that the tests apply to general…

Abstract

We extend Vuong’s (1989) model-selection statistic to allow for complex survey samples. As a further extension, we use an M-estimation setting so that the tests apply to general estimation problems – such as linear and nonlinear least squares, Poisson regression and fractional response models, to name just a few – and not only to maximum likelihood settings. With stratified sampling, we show how the difference in objective functions should be weighted in order to obtain a suitable test statistic. Interestingly, the weights are needed in computing the model-selection statistic even in cases where stratification is appropriately exogenous, in which case the usual unweighted estimators for the parameters are consistent. With cluster samples and panel data, we show how to combine the weighted objective function with a cluster-robust variance estimator in order to expand the scope of the model-selection tests. A small simulation study shows that the weighted test is promising.
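As a rough sketch of the idea under stated assumptions, the snippet below computes an (unclustered) weighted Vuong-type statistic from per-observation objective-function values and sampling weights; the chapter's cluster-robust variant would aggregate the weighted differences within clusters before estimating the variance.

```python
import numpy as np

def weighted_vuong(q1, q2, w):
    """Weighted Vuong-type model-selection statistic (a sketch).

    q1, q2: per-observation objective values of the two non-nested models
    (log-likelihoods, or negated least-squares losses, etc.); w: sampling
    weights, e.g. inverse inclusion probabilities under stratified sampling.
    """
    d = w * (q1 - q2)  # weighted differences in the objective function
    n = d.size
    # Under the null that the models fit equally well, roughly N(0, 1).
    return np.sqrt(n) * d.mean() / d.std(ddof=1)
```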

Details

The Econometrics of Complex Survey Data
Type: Book
ISBN: 978-1-78756-726-9


Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult…

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. The existing methods are either computationally costly or inefficient, or both. In this paper, we propose computationally simple estimators for the SV model, which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) test or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns for three major stock prices (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
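The full ARMA-SV estimators are not reproduced here; as a hedged illustration of the closed-form idea, the sketch below recovers the persistence parameter of a canonical SV model from two autocovariances of log y_t² (using the fact that this series has ARMA(1, 1)-type autocovariances), with optional winsorization. The simplification to a single moment ratio, and all simulation settings, are assumptions.

```python
import numpy as np

def sv_moment_phi(y, winsor=None):
    # For y_t = exp(h_t / 2) * e_t with AR(1) log-volatility h_t, the series
    # z_t = log(y_t**2) has autocovariances gamma(k) = gamma_h(0) * phi**k for
    # k >= 1, so phi = gamma(2) / gamma(1) in closed form (no optimization).
    z = np.log(y**2)
    if winsor is not None:  # optional winsorization for stability
        lo, hi = np.quantile(z, [winsor, 1 - winsor])
        z = np.clip(z, lo, hi)
    z = z - z.mean()
    acov = lambda k: np.mean(z[k:] * z[:-k])
    return acov(2) / acov(1)

# Demo on simulated data with true phi = 0.95 (all settings are assumptions).
rng = np.random.default_rng(6)
T, phi = 5000, 0.95
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi * h[t - 1] + 0.3 * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)
print(sv_moment_phi(y, winsor=0.005))
```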

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2


Article
Publication date: 20 January 2023

Sakshi Soni, Ashish Kumar Shukla and Kapil Kumar

This article aims to develop procedures for estimation and prediction in case of Type-I hybrid censored samples drawn from a two-parameter generalized half-logistic distribution…

Abstract

Purpose

This article aims to develop procedures for estimation and prediction in case of Type-I hybrid censored samples drawn from a two-parameter generalized half-logistic distribution (GHLD).

Design/methodology/approach

The GHLD is a versatile model which is useful in lifetime modelling, and hybrid censoring is a time- and cost-effective censoring scheme which is widely used in the literature. The authors derive the maximum likelihood estimates, the maximum product of spacings estimates and the Bayes estimates under a squared error loss function for the unknown parameters, the reliability function and the stress-strength reliability. The Bayesian estimation is performed under an informative prior set-up using the “importance sampling technique”. Afterwards, the authors discuss the Bayesian prediction problem under one- and two-sample frameworks and obtain the predictive estimates and intervals with corresponding average interval lengths. Applications of the developed theory are illustrated with the help of two real data sets.
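A minimal sketch of the Type-I hybrid-censored maximum likelihood step follows, assuming (for illustration only, not necessarily the paper's parameterization) a GHLD with survival function S(x; λ, θ) = [2e^(−λx)/(1 + e^(−λx))]^θ.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed GHLD: S(x; lam, th) = (2*exp(-lam*x) / (1 + exp(-lam*x)))**th.
def log_S(x, lam, th):
    return th * np.log(2 * np.exp(-lam * x) / (1 + np.exp(-lam * x)))

def log_f(x, lam, th):
    # Density f = -dS/dx = lam * th * S(x) / (1 + exp(-lam*x)).
    return np.log(lam * th) + log_S(x, lam, th) - np.log(1 + np.exp(-lam * x))

def neg_ll(params, fails, n, t_stop):
    # Failures contribute log f; the (n - d) unfailed units are censored at t_stop.
    lam, th = params
    return -(np.sum(log_f(fails, lam, th)) +
             (n - fails.size) * log_S(t_stop, lam, th))

def mle_hybrid(x, r, T):
    # Type-I hybrid censoring: observation stops at T* = min(x_(r), T).
    x = np.sort(x)
    t_stop = min(x[r - 1], T)
    fails = x[x <= t_stop]
    res = minimize(neg_ll, [1.0, 1.0], args=(fails, x.size, t_stop),
                   bounds=[(1e-4, None), (1e-4, None)])
    return res.x, t_stop

# Demo on data drawn from the assumed model (parameter values are assumptions).
rng = np.random.default_rng(7)
u, lam0, th0 = rng.random(80), 1.2, 1.5
w = (1 - u) ** (1 / th0)
x = -np.log(w / (2 - w)) / lam0  # inverse-CDF draws from the assumed GHLD
print(mle_hybrid(x, r=60, T=2.0))
```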

Findings

The performances of these estimators and prediction methods are examined under the Type-I hybrid censoring scheme with different combinations of sample sizes and time points, using Monte Carlo simulation techniques. The simulation results show that the developed estimates are quite satisfactory. Bayes estimates and predictive intervals estimate the reliability characteristics efficiently.

Originality/value

The proposed methodology may be used to estimate future observations when the available data are Type-I hybrid censored. This study would help in estimating and predicting the mission time as well as stress-strength reliability when the data are censored.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 9
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 8 August 2024

Samson Edo

The study investigates the role of macroeconomic policies in driving capital market development in emerging African countries where the markets are relatively active. It aims to…

Abstract

Purpose

The study investigates the role of macroeconomic policies in driving capital market development in emerging African countries where the markets are relatively active. It aims to determine the effects of these policies in the pre-pandemic period vis-à-vis the post-pandemic period.

Design/methodology/approach

The generalized method of moments (GMM) and the auto-regressive distributed lag (ARDL) model are employed in estimating this role over the period 2012Q1–2023Q3. The panel unit root test is used to ascertain the stationarity status of the variables, while the maximum likelihood estimator is employed to determine the structural stability of the model.
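As a hedged sketch of the time-series leg of this workflow for a single country, with entirely hypothetical stand-in series and lag orders (the GMM step and the panel unit root test are not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL
from statsmodels.tsa.stattools import adfuller

# Hypothetical quarterly stand-ins over the study period, 2012Q1-2023Q3.
rng = np.random.default_rng(3)
idx = pd.period_range("2012Q1", "2023Q3", freq="Q")
mktcap = pd.Series(np.cumsum(rng.normal(0.2, 1.0, idx.size)), index=idx)
policy = pd.DataFrame({"fiscal": rng.normal(size=idx.size),
                       "monetary": rng.normal(size=idx.size)}, index=idx)

# Stationarity check on first differences (ADF p-value).
print(adfuller(mktcap.diff().dropna())[1])

# ARDL(p, q): p lags of the dependent variable, q lags of each regressor.
fit = ARDL(mktcap, lags=2, exog=policy, order=2).fit()
print(fit.summary())
```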

Findings

The empirical results reveal that fiscal and monetary policies played a significant positive role in capital market development in both the pre- and post-pandemic periods. On the other hand, trade policy and investment return had a significant impact in the pre-pandemic period that was not sustained in the post-pandemic period. Only exchange rate policy remained insignificant in both periods. The findings therefore suggest that capital market development slowed in the post-pandemic period due to the reduced performance of macroeconomic policies. Furthermore, the unit root test reveals that all the variables satisfy the empirical properties that ensure estimation results are consistent and non-spurious. The maximum likelihood estimator showed that there was a long-term structural break, hence short-term impacts were used in the comparative analysis.

Originality/value

Macroeconomic policies are fundamental to financial market development in developing countries. Their role in resuscitating capital markets in the post-pandemic period has yet to be adequately investigated in African countries. This study is carried out to fill this void.

Details

African Journal of Economic and Management Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2040-0705



Details

Machine Learning and Artificial Intelligence in Marketing and Sales
Type: Book
ISBN: 978-1-80043-881-1

Article
Publication date: 6 February 2019

Sanku Dey and Fernando Antonio Moala

The purpose of this paper is to deal with the Bayesian and non-Bayesian estimation methods of multicomponent stress-strength reliability by assuming the Chen distribution.

Abstract

Purpose

The purpose of this paper is to deal with the Bayesian and non-Bayesian estimation methods of multicomponent stress-strength reliability by assuming the Chen distribution.

Design/methodology/approach

The reliability of a multicomponent stress-strength system is obtained by maximum likelihood estimation (MLE) and Bayesian methods, and the results are compared using the MCMC technique, for both small and large samples.
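These are not the paper's estimators; as a small illustrative check under assumed parameter values, the multicomponent stress-strength reliability R_{s,k} (the probability that at least s of k strengths exceed the common stress) under the Chen distribution, F(x) = 1 − exp{λ(1 − e^(x^β))}, can be approximated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(4)

def chen_rvs(beta, lam, size):
    # Inverse-CDF sampling from the Chen distribution,
    # F(x) = 1 - exp(lam * (1 - exp(x**beta))).
    u = rng.random(size)
    return np.log(1 - np.log(1 - u) / lam) ** (1 / beta)

def multicomponent_R(s, k, beta, lam_x, lam_y, reps=100_000):
    # R_{s,k} = P(at least s of k strengths X exceed the common stress Y),
    # with X and Y sharing the shape beta (an assumption of this sketch).
    y = chen_rvs(beta, lam_y, (reps, 1))
    x = chen_rvs(beta, lam_x, (reps, k))
    return np.mean((x > y).sum(axis=1) >= s)

print(multicomponent_R(s=2, k=4, beta=0.8, lam_x=0.5, lam_y=1.0))
```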

Findings

The simulation study shows that Bayes estimates based on a gamma prior, in the absence of prior information, perform slightly better than the MLE with regard to both biases and mean squared errors. The Bayes credible intervals for reliability are also shorter than the confidence intervals, with competitive coverage percentages. Further, the coverage probability is quite close to the nominal value in all sets of parameters when both sample sizes n and m increase.

Originality/value

The lifetime distributions commonly used in reliability analysis, such as the exponential, gamma, lognormal and Weibull, only exhibit monotonically increasing, monotonically decreasing or constant hazard rates. However, in many applications in reliability and survival analysis, the most realistic hazard rate is bathtub-shaped, as found in the Chen distribution. Therefore, the authors have studied multicomponent stress-strength reliability under the Chen distribution by comparing the MLE and Bayes estimators.

Details

International Journal of Quality & Reliability Management, vol. 36 no. 2
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 1 February 2005

Ahmed Hurairah, Noor Akma Ibrahim, Isa Bin Daud and Kassim Haron

The extreme value model is one of the most important models applicable to air pollution data. This paper aims at introducing a new extreme value model that is more…


Abstract

Purpose

The extreme value model is one of the most important models applicable to air pollution data. This paper aims at introducing a new extreme value model that is more suitable for environmental studies.

Design/methodology/approach

The parameters of the new model were estimated by the method of maximum likelihood. To assess air pollution impacts, the new extreme value model was applied to carbon monoxide (CO) concentrations, in parts per million (ppm), at several places in Malaysia. The objective of this analysis is to fit the extreme values with the new model and to examine its performance. A comparison of the new model with others is shown to illustrate its applicability.
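The paper's new model is not specified in this abstract; as a stand-in sketch of the general workflow it follows (block maxima fitted by maximum likelihood), using a classical generalized extreme value (GEV) fit and hypothetical data:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical stand-in for a year of hourly CO readings (ppm) at one station.
rng = np.random.default_rng(5)
hourly = rng.lognormal(mean=0.5, sigma=0.4, size=24 * 365)

# Block maxima (daily maxima), then a GEV fit by maximum likelihood.
daily_max = hourly.reshape(365, 24).max(axis=1)
shape, loc, scale = genextreme.fit(daily_max)

# Exceedance probability for a threshold of interest, e.g. 9 ppm.
print(genextreme.sf(9.0, shape, loc=loc, scale=scale))
```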

Findings

The results show that the new model gives the best fit under maximum likelihood estimation, yielding significant estimates for the CO data with the smallest standard errors and p‐values. The new extreme value model is able to identify significant air pollution problems. The results it produces can be used as an air quality management tool, providing decision makers with a means to determine the required reduction at source.

Originality/value

Extreme value models have mostly been applied in environmental studies for the statistical treatment of air pollution. The results for the numerical and simulated CO data indicate that the new model is both easy to use and can achieve higher accuracy than other models.

Details

Management of Environmental Quality: An International Journal, vol. 16 no. 1
Type: Research Article
ISSN: 1477-7835

