Search results
1 – 10 of 502
S.T.A. Niaki and Majid Khedmati
Abstract
Purpose
The purpose of this paper is to propose two control charts to monitor multi-attribute processes and then a maximum likelihood estimator for the change point of the parameter vector (process fraction non-conforming) of multivariate binomial processes.
Design/methodology/approach
The performance of the proposed estimator is evaluated for both control charts using simulation experiments. Finally, the applicability of the proposed method is illustrated using a real case.
Findings
The proposed estimator provides accurate and useful estimates of the change point for almost all shift magnitudes, regardless of the process dimension. Moreover, based on the results obtained, the estimator is robust to different correlation values.
Originality/value
To the best of the authors’ knowledge, no work is available in the literature that estimates the change point of multivariate binomial processes.
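The change-point idea described above can be sketched in the univariate binomial case (the paper itself treats the multivariate setting, and its exact estimator is not reproduced here): after a chart signals, scan every candidate change point, plug in segment-wise maximum likelihood estimates of the fraction nonconforming, and keep the candidate that maximises the joint log-likelihood. All sample sizes and shift magnitudes below are hypothetical.

```python
import math
import random

def binom_loglik(xs, n, p):
    # binomial log-likelihood, dropping the constant C(n, x) terms
    p = min(max(p, 1e-9), 1.0 - 1e-9)
    return sum(x * math.log(p) + (n - x) * math.log(1.0 - p) for x in xs)

def mle_change_point(xs, n):
    # scan every candidate change point tau; plug in the segment-wise
    # fraction-nonconforming MLEs and keep the tau maximising the likelihood
    best_tau, best_ll = 1, float("-inf")
    for tau in range(1, len(xs)):
        left, right = xs[:tau], xs[tau:]
        p0 = sum(left) / (n * len(left))
        p1 = sum(right) / (n * len(right))
        ll = binom_loglik(left, n, p0) + binom_loglik(right, n, p1)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

rng = random.Random(7)
n = 50
# 30 in-control samples (fraction nonconforming 0.05), then a shift to 0.30
xs = [sum(rng.random() < 0.05 for _ in range(n)) for _ in range(30)]
xs += [sum(rng.random() < 0.30 for _ in range(n)) for _ in range(10)]
tau_hat = mle_change_point(xs, n)
print(tau_hat)
```

With a shift this pronounced the estimate lands at, or adjacent to, the true change point of 30; for small shifts the scan behaves the same way but with more estimation error, which is what the simulation study in the abstract quantifies.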
Abstract
Copula modeling enables the analysis of multivariate count data that has previously required imposition of potentially undesirable correlation restrictions or has limited attention to models with only a few outcomes. This article presents a method for analyzing correlated counts that is appealing because it retains well-known marginal distributions for each response while simultaneously allowing for flexible correlations among the outcomes. The proposed framework extends the applicability of the method to settings with high-dimensional outcomes and provides an efficient simulation method to generate the correlation matrix in a single step. Another open problem that is tackled is that of model comparison. In particular, the article presents techniques for estimating marginal likelihoods and Bayes factors in copula models. The methodology is implemented in a study of the joint behavior of four categories of US technology patents. The results reveal that patent counts exhibit high levels of correlation among categories and that joint modeling is crucial for eliciting the interactions among these variables.
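The key appeal named in the abstract, well-known marginals combined with flexible correlation, can be illustrated with a minimal Gaussian-copula sketch for a pair of Poisson counts (the article's own Bayesian estimation and marginal-likelihood machinery is not reproduced; the rates and copula correlation below are hypothetical):

```python
import math
import random
import statistics

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_ppf(u, lam):
    # invert the Poisson(lam) CDF by cumulative summation
    k = 0
    pmf = math.exp(-lam)
    cdf = pmf
    while cdf < u:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

def correlated_counts(lam1, lam2, rho, rng):
    # Gaussian copula: induce dependence on the latent normal scale, then
    # push each coordinate through its own Poisson inverse CDF, so each
    # marginal stays exactly Poisson
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return (poisson_ppf(std_normal_cdf(z1), lam1),
            poisson_ppf(std_normal_cdf(z2), lam2))

rng = random.Random(1)
pairs = [correlated_counts(4.0, 9.0, 0.8, rng) for _ in range(20000)]
xs = [x for x, _ in pairs]
ys = [y for _, y in pairs]
mx, my = statistics.fmean(xs), statistics.fmean(ys)
cov = statistics.fmean((x - mx) * (y - my) for x, y in pairs)
corr = cov / (statistics.pstdev(xs) * statistics.pstdev(ys))
print(round(mx, 2), round(my, 2), round(corr, 2))
```

The sample means stay at the Poisson rates while the count correlation tracks (slightly below) the latent copula correlation, which is exactly the separation of marginals from dependence that the article exploits.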
Dominique Lord and Srinivas Reddy Geedipally
Abstract
Purpose – This chapter provides an overview of issues related to analysing crash data characterised by excess zero responses and/or long tails and how to overcome these problems. Factors affecting excess zeros and/or long tails are discussed, as well as how they can bias the results when traditional distributions or models are used. Recently introduced multi-parameter distributions and models developed specifically for such datasets are described. The chapter is intended to guide readers on how to properly analyse crash datasets with excess zeros and long or heavy tails.
Methodology – Key references from the literature are summarised and discussed, and two examples detailing how multi-parameter distributions and models compare with the negative binomial distribution and model are presented.
Findings – In the event that the characteristics of the crash dataset cannot be changed or modified, recently introduced multi-parameter distributions and models can be used efficiently to analyse datasets characterised by excess zero responses and/or long tails. They offer a simpler way to interpret the relationship between crashes and explanatory variables, while providing better statistical performance in terms of goodness-of-fit and predictive capabilities.
Research implications – Multi-parameter models are expected to become the next series of traditional distributions and models. The research on these models is still ongoing.
Practical implications – With the advancement of computing power and Bayesian simulation methods, multi-parameter models can now be easily coded and applied to analyse crash datasets characterised by excess zero responses and/or long tails.
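The excess-zero problem the chapter addresses is easy to demonstrate: a single-parameter fit cannot match both the mean and the zero count of zero-inflated data. The sketch below (hypothetical zero-inflation weight and Poisson mean; a deliberately simple mean-matched Poisson fit rather than any model from the chapter) shows the gap such datasets open up:

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's multiplication method for a small Poisson mean
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(3)
pi0, lam = 0.4, 2.0   # hypothetical zero-inflation weight and Poisson mean
crashes = [0 if rng.random() < pi0 else poisson_draw(lam, rng)
           for _ in range(10000)]

# fit a plain Poisson by matching the mean, then compare zero fractions
lam_hat = sum(crashes) / len(crashes)
poisson_zero = math.exp(-lam_hat)   # zero share a Poisson fit predicts
observed_zero = crashes.count(0) / len(crashes)
print(round(observed_zero, 3), round(poisson_zero, 3))
```

The observed zero share substantially exceeds what the mean-matched Poisson predicts; multi-parameter distributions close this gap by decoupling the zero mass (or the tail) from the mean, which is the chapter's central argument.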
Abstract
Purpose
The purpose of this paper is to investigate the properties of the classical goodness-of-fit test statistics X², G², GM², and NM² in testing the quality of a process represented as a trinomial distribution with a dip null hypothesis, and to devise a control chart for the trinomial distribution with a dip null hypothesis based on the demerit control chart.
Design/methodology/approach
The research involves the linear form of the test statistics, i.e. linear functions of the counts, since the marginal distribution of the counts in any category is binomial or approximately Poisson, for which the uniformly minimum variance unbiased estimator is a linear function of the counts. A control chart is used for monitoring student characteristics in Thailand. The control chart statistic, based on the average of the demerit values computed for each student as a weighted average, leads marginally to a uniformly most powerful unbiased test. The two‐sided control limits were obtained from percentile estimates of the empirical distribution of the average demerit values.
Findings
The demerit control chart with weight set (1, 25, 50) shows generally good performance, is robust to the direction of the out‐of‐control shift, mostly outperforms the GM² chart, and is recommended. The X² and NM² statistics are not recommended because of inconsistency and bias. The performance of the demerit control chart with weight set (1, 25, 50) does not change dramatically between the two directions.
Practical implications
None of the multivariate control charts for counts presented in the literature deals with a trinomial distribution representing a practical index of production/process quality, in which the classification of production outputs into the three categories of “good”, “defective”, and “reworked” is common. The demerit‐based control chart presented here can be applied directly to this situation.
Originality/value
The research considers how to deal with the trinomial distribution with a dip null hypothesis, which no prior study has addressed. The study shows that the classical Pearson's X², the log-likelihood, the modified log-likelihood, and the Neyman-modified X² statistics could fail to detect an out‐of‐control condition. This research provides an alternative control chart methodology based on demerit values, with the weight set (1, 25, 50) recommended for general use.
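The chart logic described in the abstract, a weighted-average demerit per sample with two-sided limits taken from empirical percentiles of in-control averages, can be sketched as follows. The mapping of the weight set (1, 25, 50) onto the three categories, the sample size, and the category probabilities are all hypothetical:

```python
import random

WEIGHTS = (1, 25, 50)   # the weight set recommended by the study;
                        # the category ordering here is an assumption

def demerit(counts, n):
    # weighted-average demerit value over the n inspected units
    return sum(w * c for w, c in zip(WEIGHTS, counts)) / n

def trinomial_sample(n, probs, rng):
    # classify n units into the three categories
    counts = [0, 0, 0]
    for _ in range(n):
        u = rng.random()
        if u < probs[0]:
            counts[0] += 1
        elif u < probs[0] + probs[1]:
            counts[1] += 1
        else:
            counts[2] += 1
    return counts

rng = random.Random(11)
n = 200
in_control = (0.90, 0.07, 0.03)   # hypothetical in-control probabilities

# Phase I: two-sided limits from the 0.135% / 99.865% empirical percentiles
phase1 = sorted(demerit(trinomial_sample(n, in_control, rng), n)
                for _ in range(5000))
lcl = phase1[int(0.00135 * len(phase1))]
ucl = phase1[int(0.99865 * len(phase1))]

# Phase II: a shifted process should plot above the upper limit
shifted = [demerit(trinomial_sample(n, (0.80, 0.10, 0.10), rng), n)
           for _ in range(50)]
print(round(lcl, 2), round(ucl, 2), sum(d > ucl for d in shifted))
```

Because the limits come from percentiles of the empirical distribution rather than a normal approximation, the chart sidesteps the inconsistency the paper reports for the classical X²-type statistics under the dip null hypothesis.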
Ana Pedreño-Santos and Jesus Garcia-Madariaga
Abstract
Purpose
The purpose of this research is to determine the relationship between frequency and recall in radio advertising by studying the main features of reach and frequency.
Design/methodology/approach
The authors consider the outcome of a frequency model specifically designed for radio campaigns that gives the probability distribution of recall as a function of weekly exposures and GRPs over a dataset of 1,117 radio campaigns broadcast in Spain.
Findings
Factors such as advertising format and creativity are more significant in achieving effective recall than an increase in the number of advertising exposures.
Practical implications
This study has important managerial implications for the planning of radio campaigns: (1) effective frequency is a range between 4 and 17 impressions, with 7 being the optimal average; (2) the campaign can be optimized through the following factors: live-read format (∆ 4.4%), good creativity (∆ 2.8%), endorsement format (∆ 2%), sponsorship format (∆ 1.8%), increasing the length of the spot (∆ 1.5%), and placing the ad first (∆ 0.8%) or last (∆ 0.7%) in the pod. From the results, the authors conclude that the format is at least as important as the creativity itself.
Originality/value
This study contributes to the effective repetition literature in two ways: giving specific clues to the effective frequency in the radio medium and setting advertising factors that predict the effective frequency in radio.
Abstract
Purpose
The purpose of this paper is to provide an analysis of the dependence structure between returns from real estate investment trusts (REITS) and a stock market index. Further, the aim is to illustrate how copula approaches can be applied to model the complex dependence structure between the assets and for risk measurement of a portfolio containing investments in REIT and equity indices.
Design/methodology/approach
The usually suggested multivariate normal, or variance‐covariance, approach is applied, as well as various copula models, in order to investigate the dependence structure between returns of Australian REITS and the Australian stock market. Different models, including the Gaussian, Student t, Clayton, and Gumbel copulas, are estimated, and goodness‐of‐fit tests are conducted. For the return series, both a Gaussian and a non‐parametric estimate of the distribution are applied. A risk analysis is provided based on Monte Carlo simulations for the different models. The value‐at‐risk measure is also applied to quantify the risks of a portfolio combining investments in real estate and stock markets.
Findings
The findings suggest that the multivariate normal model is not appropriate for measuring the complex dependence structure between the returns of the two asset classes. Instead, a model using non‐parametric estimates for the return series in combination with a Student t copula is clearly more suitable. The analysis further illustrates that the usually applied variance‐covariance approach leads to a significant underestimation of the actual risk for a portfolio consisting of investments in REITS and equity indices. The nature of the risk is better captured by the suggested copula models.
Originality/value
To the authors' knowledge, this is one of the first studies to apply and test different copula models in real estate markets. The results help international investors and portfolio managers to deepen their understanding of the dependence structure between returns from real estate and equity markets. Additionally, the results should be helpful for implementing more adequate risk management for portfolios containing investments in both REITS and equity indices.
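The core finding, that a variance-covariance VaR understates the risk of a heavy-tailed, tail-dependent portfolio, can be illustrated with a minimal Monte Carlo sketch. The bivariate Student t draws, the 0.6 correlation, the equal portfolio weights, and the 1% level below are all hypothetical stand-ins, not the paper's fitted models:

```python
import math
import random
import statistics

def bivariate_t(rho, nu, rng):
    # correlated standard normals divided by a common sqrt(chi2/nu)
    # factor give a bivariate Student t with nu degrees of freedom
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    w = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu)) / nu
    s = 1.0 / math.sqrt(w)
    return z1 * s, z2 * s

rng = random.Random(42)
returns = []
for _ in range(50000):
    r_reit, r_stock = bivariate_t(0.6, 3, rng)   # hypothetical asset pair
    returns.append(0.5 * r_reit + 0.5 * r_stock)

alpha = 0.01
# empirical (simulation-based) VaR: the 1% quantile of portfolio returns
empirical_var = -sorted(returns)[int(alpha * len(returns))]
# variance-covariance VaR: Gaussian quantile from sample mean and sd
mu, sd = statistics.fmean(returns), statistics.pstdev(returns)
normal_var = -(mu - 2.326 * sd)
print(round(empirical_var, 2), round(normal_var, 2))
```

The empirical VaR exceeds the Gaussian one because the Student t dependence puts more mass on joint extreme losses than any normal model with the same covariance, mirroring the underestimation the paper reports.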
Abstract
Purpose
To discuss subcopula estimation for discrete models.
Design/methodology/approach
The convergence of estimators is considered under the weak convergence of distribution functions and its equivalent properties established in prior work.
Findings
The domain of the true subcopula associated with discrete random variables is found to be discrete on the interior of the unit hypercube. An estimator whose domain has the same form as that of the true subcopula is constructed for the case in which the marginal distributions are binomial.
Originality/value
To the best of our knowledge, this is the first time such an estimator has been defined and proved to converge to the true subcopula.
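The discrete-domain point can be made concrete with a naive empirical subcopula for a pair of binomial variables (this is an illustration of the domain structure, not the paper's estimator; the binomial parameters are hypothetical, and the pair is taken independent so the true subcopula is known to be the product u·v restricted to the grid of marginal CDF values):

```python
import random

rng = random.Random(5)
n1, p1, n2, p2 = 3, 0.4, 2, 0.5   # hypothetical binomial marginals
N = 20000
# independent binomial pair: the true subcopula is S(u, v) = u * v on
# the finite grid ran(F1) x ran(F2) inside the unit square
data = [(sum(rng.random() < p1 for _ in range(n1)),
         sum(rng.random() < p2 for _ in range(n2))) for _ in range(N)]

def ecdf1(k): return sum(x <= k for x, _ in data) / N
def ecdf2(k): return sum(y <= k for _, y in data) / N
def ejoint(i, j): return sum(x <= i and y <= j for x, y in data) / N

# the estimator's domain is the empirical marginal grid, which is
# discrete in the interior of the unit square, just like the true domain
subcop = {(ecdf1(i), ecdf2(j)): ejoint(i, j)
          for i in range(n1 + 1) for j in range(n2 + 1)}
for (u, v), s in sorted(subcop.items()):
    print(round(u, 3), round(v, 3), round(s, 3))
```

With 20,000 samples, every value on the grid sits close to u·v, and the grid itself has (n1+1)·(n2+1) points, a finite set, which is the form of domain the paper shows the true subcopula must have.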
Abstract
Purpose
The purpose of this paper is to investigate how to incorporate market price risk into investment decisions. The investigation focuses on investments to expand ethanol production facilities. The model is used to determine if such a real option approach can explain recent changes in the level of plant investment activity.
Design/methodology/approach
The paper demonstrates how real option analysis and Monte Carlo simulation can be used to evaluate ethanol plant investments using available historical industry and market price data. The focus is on existing small‐to‐medium dry-milling plants and the real option to expand the scale of operations. The binomial option pricing model is used to identify optimal strategies.
Findings
Increasing profitability and volatility appear to favor the strategy of investing during 2005‐2007. However, as the prices of corn and natural gas rose and plant profitability declined during 2007‐2008, the best strategy increasingly became either to postpone the investment or to reject the decision to expand.
Originality/value
This paper is a first application of real option analysis to ethanol plant expansion decisions. The methodology used in the paper can be adapted by analysts, investors, and lenders in the ethanol industry to improve their investment analyses.
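The binomial option pricing model named in the methodology can be sketched as a standard Cox-Ross-Rubinstein lattice, viewing the expansion option as an American call: exercising pays the present value of the expanded plant's extra cash flows minus the expansion cost. All of the numbers below (plant value, cost, volatility, horizon) are hypothetical placeholders, not figures from the paper:

```python
import math

def binomial_american_call(S0, K, r, sigma, T, steps):
    # Cox-Ross-Rubinstein lattice; early exercise checked at every node
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at maturity
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0)
              for j in range(steps + 1)]
    # roll back, taking the better of continuing or exercising now
    for step in range(steps - 1, -1, -1):
        values = [max(disc * (q * values[j + 1] + (1 - q) * values[j]),
                      S0 * u**j * d**(step - j) - K)
                  for j in range(step + 1)]
    return values[0]

# hypothetical expansion option: PV of extra cash flows 120 vs cost 100,
# 35% annual volatility, a three-year decision window, 200 lattice steps
price = binomial_american_call(120.0, 100.0, 0.05, 0.35, 3.0, 200)
print(round(price, 2))
```

Because the option value exceeds the immediate exercise payoff of 20, waiting has value: this is exactly the mechanism by which rising volatility and falling profitability push the optimal strategy toward postponing the expansion, as the findings describe.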