Search results
11 – 20 of over 42,000
Abstract
Purpose
The purpose of this paper is to analyze the mean-variance capital asset pricing model (CAPM) and the downside risk-based CAPM (DR-CAPM) developed by Bawa and Lindenberg (1977), Harlow and Rao (1989), and Estrada (2002) to assess which downside beta better explains expected stock returns. The paper also explores whether investors respond differently to stocks that co-vary with a declining market than to those that co-vary with a rising market.
Design/methodology/approach
The paper uses monthly data of closing prices of stocks listed at the Karachi Stock Exchange (KSE). The data cover the period from January 2000 to December 2012. The standard, downside, and upside betas are estimated for different sub-periods, and then their validity to quantify the risk premium is tested for subsequent sub-periods in a cross-sectional regression framework. Though our empirical methodology is similar to that of Fama and MacBeth (1973) for testing the CAPM and the DR-CAPM, our approach to estimating the downside beta is different from earlier studies. In particular, we follow Estrada's (2002) suggestions and obtain a correct and unbiased estimate of the downside beta by running the time series regression through the origin. The authors carry out the two-pass regression analysis using the generalized method of moments (GMM) in the first pass and the generalized least squares (GLS) estimation method in the second pass.
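The downside-beta estimation described above, a regression through the origin in the style of Estrada (2002), can be sketched in a few lines. The return series below are hypothetical placeholders, not the KSE data used in the paper:

```python
import numpy as np

def downside_beta_estrada(stock: np.ndarray, market: np.ndarray) -> float:
    """Estrada-style downside beta: a regression through the origin of
    the stock's downside deviations (returns below their own mean,
    zero otherwise) on the market's downside deviations."""
    y = np.minimum(stock - stock.mean(), 0.0)
    x = np.minimum(market - market.mean(), 0.0)
    # Regression through the origin: slope = sum(x*y) / sum(x*x),
    # with no intercept term, as Estrada suggests.
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical monthly returns, purely for illustration
market = np.array([0.02, -0.03, 0.05, -0.01, 0.00, 0.04, -0.06])
stock = np.array([0.03, -0.05, 0.06, -0.02, 0.01, 0.05, -0.09])
print(downside_beta_estrada(stock, market))
```

Because the regression is forced through the origin, the slope is just the ratio of the co-moment of downside deviations to the market's downside semivariance, which is what the closed form computes.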
Findings
The results indicate that the mean-variance CAPM shows a negative risk premium for monthly returns of the selected stocks. However, the results for the DR-CAPM of Bawa and Lindenberg (1977) and Harlow and Rao (1989) provide evidence of a positive risk premium for the downside beta. In contrast, the DR-CAPM of Estrada (2002) shows a negative risk premium in some sub-periods and a positive premium in others. By comparing the risk premia for downside and upside risk in a single-equation framework, the authors show that stocks that co-vary with a declining market are compensated with a positive premium for bearing the downside risk. Yet, the risk premium for stocks that are negatively correlated with declining market returns is negative for all three downside betas in all the examined sub-periods.
Practical implications
The empirical findings of the paper are of great significance for investors designing effective investment strategies. Specifically, the results help investors identify an appropriate measure of risk and construct well-diversified portfolios. The results are also useful for firm managers in the capital budgeting decision-making process, as they enable them to estimate the cost of equity appropriately. The results further suggest that the risk-return relationship implied by the mean-variance CAPM is negative, and therefore this model is not suitable for gauging the risk associated with stocks traded on the KSE. The authors show that the DR-CAPM outperforms it in quantifying the risk premium.
Originality/value
Unlike prior empirical studies, the authors follow Estrada's (2002) suggestion that the downside beta be calculated using a regression through the origin to obtain a correct and unbiased beta. Departing from the existing literature, the authors estimate three different versions of the DR-CAPM along with the standard CAPM for comparison purposes. Finally, the authors apply econometric methods that help lessen the problem of non-synchronous trading and the issue of non-normality of the returns distribution.
Shuran Zhao, Jinchen Li, Yaping Jiang and Peimin Ren
Abstract
Purpose
The purpose of this paper is twofold. The first aim is to improve the traditional conditional autoregressive Wishart (CAW) and heterogeneous autoregressive (HAR)-CAW models to account for the heterogeneous leverage effect and to adjust the high-frequency volatility. The second is to confirm whether CAW-type models that have statistical advantages also have economic advantages.
Design/methodology/approach
Based on high-frequency data, this study proposes a new model to describe the volatility process in accordance with the heterogeneous market hypothesis. The authors accordingly acquire the needed, credible high-frequency data.
Findings
By designing two mean-variance frameworks and considering several economic performance measures, the authors find that, compared with five other models based on daily data, CAW-type models, especially LHAR-CAW and HAR-CAW, indeed generate substantial economic value, and that the matrix adjustment method significantly improves the performance of the three CAW-type models.
Research limitations/implications
The findings in this study suggest that, from an economic perspective, the LHAR-CAW model more accurately captures the dynamic processes of returns and their covariance matrix, and that the matrix adjustment reduces the bias of realized volatility as a covariance matrix estimator of returns, greatly improving the performance of the unadjusted CAW-type models.
Practical implications
Compared with traditional low-frequency models, investors should allocate assets according to the LHAR-CAW model so as to obtain greater economic value.
Originality/value
This study proposes the LHAR-CAW model with the matrix adjustment to account for the heterogeneous leverage effect and empirically shows its economic advantage. The new model and the new bias adjustment approach are pioneering and promote the evolution of financial econometrics based on high-frequency data.
Hyeong-Uk Park, Jae-Woo Lee, Joon Chung and Kamran Behdinan
Abstract
Purpose
The purpose of this paper is to study the consideration of uncertainty from analysis modules in aircraft conceptual design by implementing uncertainty-based design optimization methods. Reliability-Based Design Optimization (RBDO), Possibility-Based Design Optimization (PBDO) and Robust Design Optimization (RDO) methods were developed to handle the uncertainties of design optimization. The RBDO method is found suitable for uncertain parameters when sufficient information is available, whereas the PBDO method is proposed when uncertain parameters have insufficient information; the RDO method can apply to both cases. The RBDO, PBDO and RDO methods were combined with the Multidisciplinary Design Optimization (MDO) method to generate conservative design results when low-fidelity analysis tools are used.
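As a rough sketch of how a reliability constraint enters an optimization, consider a deliberately simplified, hypothetical two-variable sizing problem. Everything here, including the normal uncertainty model and the cost function, is an assumption for illustration and not the paper's aircraft design case:

```python
from scipy.optimize import minimize
from scipy.stats import norm

# Minimise cost x1 + x2 subject to the reliability constraint
# P(xi * x1 * x2 >= 1) >= 0.99, where the uncertain capacity factor
# xi ~ N(1.0, 0.1^2). Because xi is normal, the chance constraint is
# equivalent to the deterministic constraint x1 * x2 >= 1 / t, where
# t = mu + sigma * Phi^{-1}(1 - 0.99) is the usable lower quantile.
mu, sigma, p_target = 1.0, 0.1, 0.99
t = mu + sigma * norm.ppf(1.0 - p_target)

res = minimize(
    lambda x: x[0] + x[1],                       # cost to minimise
    x0=[2.0, 2.0],
    method="SLSQP",
    bounds=[(0.1, None), (0.1, None)],
    constraints=[{"type": "ineq",
                  "fun": lambda x: x[0] * x[1] - 1.0 / t}],
)
print(res.x, res.fun)
```

When less distributional information is available, PBDO replaces the probability measure with a possibility (membership-function) measure, but the overall constrained-optimization shape stays the same.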
Design/methodology/approach
Methods combining MDO with RBDO, PBDO and RDO were developed and have been applied to a numerical analysis and an aircraft conceptual design. This research evaluates and compares the characteristics of each method in both cases.
Findings
The RBDO result can be improved when the amount of data concerning uncertain parameters is increased. Conversely, increasing information regarding uncertain parameters does not improve the PBDO result. The PBDO provides a conservative result when less information about uncertain parameters is available.
Research limitations/implications
The formulation of RDO is more complex than that of the other methods. If the uncertainty information is increased in the aircraft conceptual design case, the accuracy of RBDO will be enhanced.
Practical implications
This research increases the probability of obtaining a feasible design when uncertainty is considered, yielding more practical optimization results at the conceptual design level for fabrication.
Originality/value
The RBDO, PBDO and RDO methods combined with MDO satisfy the target probability when the uncertainties of low-fidelity analysis models are considered.
Mary P. Mindak, Pradyot K. Sen and Jens Stephan
Abstract
Purpose
The purpose of this paper is to document at the firm-specific level whether firms manage earnings up or down to barely miss or meet/beat three common earnings threshold targets, namely, analysts’ forecasts (AFs), last year’s earnings and zero earnings, and whether the market rewards or punishes up versus down earnings management.
Design/methodology/approach
The authors assign each firm to its most likely earnings target using an algorithm that reflects management's economic incentives to manage earnings. The authors place reported (managed) earnings in standard-width intervals surrounding the earnings target. Jacob and Jorgensen's (2007) proxy for unmanaged earnings is also placed into the intervals. Thus, a firm with unmanaged earnings in the interval just below the target and reported earnings in the interval just above the target would be deemed to have managed earnings up. The authors also document whether the market rewarded or punished the earnings management strategy with three-day cumulative abnormal returns.
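The interval assignment above can be sketched as follows. The interval width, the earnings figures, and the classification labels are hypothetical placeholders; the paper's actual widths and algorithm are not reproduced here:

```python
import math

def interval(earnings: float, target: float, width: float = 0.01) -> int:
    """Index of the standard-width interval containing `earnings`,
    measured from the target: 0 is the interval just at/above the
    target (meet/beat), -1 the interval just below (barely miss)."""
    return math.floor((earnings - target) / width)

def classify(unmanaged: float, reported: float, target: float,
             width: float = 0.01) -> str:
    """Compare the interval of the unmanaged-earnings proxy with the
    interval of reported earnings to infer the direction of
    earnings management around the target."""
    u = interval(unmanaged, target, width)
    r = interval(reported, target, width)
    if r == 0 and u < 0:
        return "managed up to meet/beat"
    if r == 0 and u > 0:
        return "managed down to meet/beat"   # cookie-jar reserves
    if r == -1 and u >= 0:
        return "managed down to barely miss"
    return "no suspect interval change"

print(classify(unmanaged=0.008, reported=0.012, target=0.010))
```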
Findings
The authors find that most firms which barely meet/beat their target did so by managing earnings up. The market rewarded this earnings management strategy. The market did not, however, reward firms that managed earnings down (i.e. created a cookie jar of reserves) to barely meet/beat their target. Thus, the meet/beat premium does not apply to all firms. The authors’ explanation is that most earnings targets are set by AFs; that these are usually the highest of the three targets; and that these are, therefore, considered to be “good” firms by the market because they have the ability to find that extra penny to meet/beat the target. Firms that were assigned to the last year’s earnings and/or zero earnings thresholds are not as “good” because they usually do not target the highest threshold and must manage earnings down, as they are more likely to have to reverse income-increasing accruals booked during interim quarters.
Research limitations/implications
The primary limitation in this study is the algorithm used to assign firms to their threshold target. It is ad hoc in nature, but relies on reasonable assumptions about the management’s incentives to manage earnings.
Practical implications
This study has practical implications because investors and regulators can adopt this methodology to identify potential candidates for earnings management that would allow further insight into accounting and reporting practices. This methodology may also be useful to the auditor who wants to understand the tendencies of a new client. It may also be a useful tool for framing auditing hypotheses in a way that would be appropriate for clients who manage earnings.
Originality/value
This paper documents for the first time at the firm-specific level the market reaction to upward versus downward earnings management designed to barely meet/beat the earnings threshold. It also documents the frequency with which firms target the three earnings thresholds and the frequency with which firms miss or meet/beat their threshold.
Abstract
AT THIS PERIOD of British industrial history, executives from the highest echelons of management down to the ordinary worker on the shop floor must be wondering what the future has in store for them. What with takeovers and Government sell-outs, the position of anybody can no longer be regarded as safe.
Tariq Aziz and Valeed Ahmad Ansari
Abstract
Purpose
The purpose of this paper is to examine the role of value-at-risk (VaR) in the cross-section of stock returns in the Indian stock market during the period 1999-2014.
Design/methodology/approach
The paper follows the methodology of Bali and Cakici (2004) to investigate the relationship between VaR and stock returns and employs Fama and French's (1993) and Fama and MacBeth's (1973) methods to find out the predictive power of VaR in time-series and cross-section settings. Further, it follows Fama and French (2008) to estimate separate cross-section regressions for small, medium and big stocks to verify the pervasiveness of the anomaly.
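A common nonparametric way to compute VaR, in the spirit of the methodology above, is the empirical quantile of historical returns (Bali and Cakici consider several variants). The return series below is a hypothetical placeholder, not the Indian market data:

```python
import numpy as np

def historical_var(returns: np.ndarray, level: float = 0.05) -> float:
    """Historical value-at-risk at the given tail probability,
    reported as a positive loss figure: the negative of the
    empirical `level`-quantile of the return series."""
    return float(-np.quantile(returns, level))

# Hypothetical monthly returns, purely for illustration
r = np.array([0.04, -0.02, 0.01, -0.08, 0.03, -0.01, 0.05, -0.05, 0.02, 0.00])
print(historical_var(r, level=0.10))
```

Stocks can then be sorted into portfolios by this VaR measure, and the return spread across portfolios tested with Fama-MacBeth cross-sectional regressions.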
Findings
This study finds a positive risk premium associated with VaR in the Indian stock market during 2001-2008, the period of the short-selling constraint for institutional investors. This premium is confined to small stocks and stocks with low institutional holdings. The positive premium can be attributed to short-selling constraints.
Practical implications
The risk-return tradeoff can be utilized by investors and fund managers. As it is confined to small stocks, transaction costs may affect the profitability of the investment strategy.
Originality/value
The study contributes to the scanty empirical literature on the role of VaR in the cross-section of expected stock returns. Moreover, this is the first study that explores the relationship between VaR and stock returns in the asset pricing context for the Indian stock market.
Lasse Mertins and Lourdes Ferreira White
Abstract
Purpose
This study examines the impact of different Balanced Scorecard (BSC) formats (table, graph without summary measure, graph with a summary measure) on various decision outcomes: performance ratings, perceived informativeness, and decision efficiency.
Methodology/approach
Using an original case developed by the researchers, a total of 135 individuals participated in the experiment and rated the performance of carwash managers in two different scenarios: one manager excelled financially but failed to meet targets for the other three BSC perspectives, while the other manager had the opposite results.
Findings
The evaluators rated managerial performance significantly lower in the graph format than in the table presentation of the BSC. Performance ratings were significantly higher for the scenario in which the manager failed to meet only the financial perspective targets but exceeded targets for all other nonfinancial BSC perspectives, contrary to the usual predictions based on the financial measure bias. The evaluators reported that the informativeness of the BSC was highest in the table format and the graph format without a summary measure; surprisingly, adding a summary measure to the graph significantly reduced perceived informativeness compared to the table format. Decision efficiency was better for the graph formats (with or without a summary measure) than for the table format.
Originality/value
Ours is the first study to compare tables and graphs, with and without a summary measure, in the context of managerial performance evaluations and to examine their impact on ratings, informativeness, and efficiency. We developed an original case to test the boundaries of the financial measure bias.
Abstract
This study analyzes the variability of rates of return for 11,772 U.S. commercial banks from 1979 through 1985. The objective is to determine whether variability that is not explained by exogenous variables can be explained by prospect theory. Below target, strong correlations are shown, consistent with prospect theory. When regression analysis is applied, the results are confirmed.
M.A. Rahim and Khaled S. Al‐Sultan
Abstract
Recently, there has been a lot of interest in the economics of quality control. Many researchers have considered the problem of determining the optimal target mean for a process, but almost all of them have assumed that the process variance is fixed and known in advance. The problem of simultaneously determining the optimal target mean and target variance for a process is considered. This might result in a reduction in variability and in the total cost of the production process. A reduction in variability upholds the modern concept of Taguchi’s loss function, which states that any deviation from the target value incurs economic loss, even when the quality characteristic lies within the specification limits. Taguchi’s loss function is incorporated to extend this study further to jointly determine the optimal target mean and variance.
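The joint role of the target mean and variance can be made concrete with Taguchi's quadratic loss: for a quality characteristic Y with mean mu and standard deviation sigma, the expected loss k*E[(Y - T)^2] decomposes into a mean-offset term and a variance term. A minimal sketch, with an illustrative loss constant k and process figures:

```python
def expected_taguchi_loss(k: float, mu: float, sigma: float,
                          target: float) -> float:
    """Expected quadratic (Taguchi) loss E[k * (Y - T)^2]
    = k * ((mu - T)^2 + sigma^2): loss accrues from both the mean
    offset and the variance, even inside specification limits."""
    return k * ((mu - target) ** 2 + sigma ** 2)

# Shifting the mean onto the target removes only the first term;
# the variance term remains, which is why the mean and variance
# should be chosen jointly.
print(expected_taguchi_loss(k=2.0, mu=10.5, sigma=0.4, target=10.0))
```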
Jami Kovach, Byung Rae Cho and Jiju Antony
Abstract
Purpose
Robust design is a well‐known quality improvement method that focuses on building quality into the design of products and services. Yet, most well established robust design models only consider a single performance measure and their prioritization schemes do not always address the inherent goal of robust design. This paper aims to propose a new robust design method for multiple quality characteristics where the goal is to first reduce the variability of the system under investigation and then attempt to locate the mean at the desired target value.
Design/methodology/approach
The paper investigates the use of a response surface approach and a sequential optimization strategy to create a flexible and structured method for modeling multiresponse problems in the context of robust design. Nonlinear programming is used as an optimization tool.
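The "reduce variability first, then bring the mean to target" sequencing can be sketched as a two-stage nonlinear program over fitted response surfaces. The quadratic surfaces, bounds, target, and tolerance below are hypothetical stand-ins, not the paper's models:

```python
from scipy.optimize import minimize

# Hypothetical fitted response surfaces in two design variables.
def mean_hat(x):
    return 50.0 + 2.0 * x[0] - 3.0 * x[1] + 0.5 * x[0] * x[1]

def std_hat(x):
    return 1.0 + (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 0.5) ** 2

bounds = [(-2.0, 2.0), (-2.0, 2.0)]

# Stage 1: minimise variability alone.
step1 = minimize(std_hat, x0=[0.0, 0.0], bounds=bounds)
s_min = step1.fun

# Stage 2: bring the mean to target while keeping variability within
# a small tolerance eps of the minimum found in stage 1.
target, eps = 52.0, 0.25
step2 = minimize(lambda x: (mean_hat(x) - target) ** 2,
                 x0=step1.x, bounds=bounds, method="SLSQP",
                 constraints=[{"type": "ineq",
                               "fun": lambda x: s_min + eps - std_hat(x)}])
print(step2.x, mean_hat(step2.x), std_hat(step2.x))
```

The sequencing encodes the robust-design priority: the stage-2 constraint prevents the mean-targeting step from undoing the variability reduction achieved in stage 1.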
Findings
The proposed methodology is demonstrated through a numerical example. The results obtained from this example are compared to those of the traditional robust design method. For comparison purposes, the traditional robust design optimization models are reformulated within the nonlinear programming framework developed here. The proposed methodology consistently provides enhanced optimal robust design solutions.
Originality/value
This paper is perhaps the first study of prioritized-response robust design that considers multiple quality characteristics. The findings and key observations of this paper will be of significant value to the quality and reliability engineering/management community.