Search results

1 – 10 of 60
Article
Publication date: 26 September 2011

Samih Azar

Abstract

Purpose

This paper seeks to reconsider the Euler equation of the Consumption Capital Asset Pricing Model (CCAPM), to derive a regression‐based model to test it, and to present evidence that the model is consistent with reasonable values for the coefficient of relative risk aversion (CRRA). This runs contrary to the findings of the literature on the equity premium puzzle, but is in agreement with the literature that estimates the CRRA for the purpose of computing the social discount rate, and is in line with the research on labor supply. Tests based on the Generalized Method of Moments (GMM) for the same sample produce results that are extremely disparate and unstable. The paper aims to check and find support for the robustness of the regression‐based tests. Habit formation models are also to be evaluated with regression‐based and GMM tests. However, the validity of the regression‐based models depends critically on their functional forms.

Design/methodology/approach

The paper presents empirical evidence that the conventional use of GMM fails because of four pathological features of GMM that are referred to under the general caption of “weak identification”. In addition to GMM, the paper employs linear regression analysis to test the CCAPM, and it is found that the regression residuals follow well‐behaved distributional properties, making valid all statistical inferences, while GMM estimates are highly unstable.
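The regression‐based alternative to GMM can be illustrated with a minimal sketch. Under lognormality, the CCAPM Euler equation log‐linearizes so that expected returns are affine in expected log consumption growth, with the CRRA as the slope. The sketch below estimates that slope by OLS on synthetic data; the series, the noise levels and the specific functional form are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

# Log-linearized CCAPM Euler equation: under lognormality,
#   E[r_{t+1}] ~ const + gamma * E[dlog C_{t+1}],
# so the CRRA (gamma) can be read off an OLS slope of returns on
# consumption growth. Synthetic data; not the paper's series.
rng = np.random.default_rng(0)
T = 600
gamma_true = 3.0                                  # a "reasonable" CRRA, below 4
dc = 0.005 + 0.01 * rng.standard_normal(T)        # log consumption growth
r = 0.01 + gamma_true * dc + 0.02 * rng.standard_normal(T)  # excess returns

X = np.column_stack([np.ones(T), dc])             # intercept + regressor
beta_hat, *_ = np.linalg.lstsq(X, r, rcond=None)
gamma_hat = beta_hat[1]                           # estimated CRRA
print(round(gamma_hat, 2))
```

With well‐behaved residuals, standard OLS inference on `gamma_hat` is valid, which is the robustness property the paper contrasts with unstable GMM estimates.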

Findings

Four unexpected findings are reported. The first is that the regression‐based models are consistent with reasonable values for the CRRA, i.e. estimates that are below 4. The second is that the regression‐based tests are robust, while the GMM‐based tests are not. The third is that regression‐based tests with habit formation depend crucially on the specification of the model. The fourth is that there is evidence that market stock returns are sensitive to both consumption and dividends. The author calls the latter “extra sensitivity of market stock returns”, and it is described as a new puzzle.

Originality/value

The regression‐based models of the CCAPM Euler equation are novel. The comparison between GMM and regression‐based models for the same sample is original. The regression‐based models with habit formation are new. The equity premium puzzle disappears because the estimates of the CRRA are reasonable. But another puzzle is documented, which is the “extra sensitivity of market stock returns” to consumption and dividends together.

Open Access
Article
Publication date: 21 March 2023

Eunyoung Cho

Abstract

This paper aims to examine the time-varying preferences for environment, social and corporate governance (ESG) investing in an emerging market. The investors seek ESG-conscious investments during a positive economic outlook, reflecting the time-varying nature of ESG demand. Specifically, the author shows that high-ESG stocks have negative abnormal returns during bad economic times but turn into positive abnormal returns in good economic times. The author also suggests that the alpha spread between high-ESG and low-ESG stocks is larger in good economic times than in bad times. Furthermore, individual investors prefer high ESG scoring stocks in good economic times. The author highlights that this ESG premium is shaped by economic projection and the households' financial wealth.

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 31 no. 2
Type: Research Article
ISSN: 1229-988X

Keywords

Article
Publication date: 1 November 2003

Richard Heaney

Abstract

Are share markets too volatile? While it is difficult to ignore share market volatility it is important to determine whether volatility is excessive. This paper replicates the Shiller (1981) test as well as applying standard time series analysis to annual Australian stock market data for the period 1883 to 1999. While Shiller’s test suggests the possibility of excess volatility, time series analysis identifies a long‐run relationship between share market value and dividends, consistent with the share market reverting to its fundamental discounted cash flow value over time.
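The Shiller (1981) test replicated here compares the variance of the actual price with that of the ex‐post rational price, the discounted value of realized future dividends. A minimal sketch of the construction, on synthetic annual dividends standing in for the 1883–1999 Australian series (discount rate and terminal value are illustrative assumptions):

```python
import numpy as np

# Shiller-style excess-volatility check: the ex-post rational price
# p*_t is the discounted value of realized future dividends, and under
# efficiency Var(p) <= Var(p*). Synthetic dividend series.
rng = np.random.default_rng(1)
T, disc = 120, 0.05
d = 1.0 + np.cumsum(0.01 * rng.standard_normal(T))   # dividends (random walk)

p_star = np.empty(T)
p_star[-1] = d[-1] / disc          # terminal value: perpetuity of last dividend
for t in range(T - 2, -1, -1):     # backward recursion through realized dividends
    p_star[t] = (d[t + 1] + p_star[t + 1]) / (1 + disc)

# a price series noisier than p* violates the bound ("excess volatility")
p_noisy = p_star + 2.0 * rng.standard_normal(T)
print(np.var(p_noisy) > np.var(p_star))
```

The complementary time‐series step in the paper is a cointegration test between price and dividends, which asks whether the market reverts to this fundamental value in the long run.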

Details

Managerial Finance, vol. 29 no. 10
Type: Research Article
ISSN: 0307-4358

Keywords

Article
Publication date: 1 February 1994

Ray Ball

Abstract

The nature and extent of our knowledge of stock market efficiency are examined. The development of “efficiency”, as a way of thinking about stock markets, is traced from Roberts (1959) and Fama (1965) onward. The early work successfully introduced competitive economic theory to the study of stock markets and paved the way for a flood of empirical research on the relation between information and stock prices. This literature irreversibly altered our views on stock market behavior. The theory and evidence of seemingly‐rational use of information lay in sharp contrast to prior beliefs. It was associated with a widespread increase in respect for stock markets, financial markets, and markets in general, at the time. Researchers began developing and using a variety of formal models of security prices. Nevertheless, “efficiency” has its limitations, both theoretically (as a way of characterizing markets) and empirically (by stretching the quality of the data, the estimation techniques used, and our knowledge of price behavior in competitive markets). Extensive evidence of anomalies suggests either that the market systematically misprices securities or that the theoretical or empirical limitations are binding, or both. The less interesting research question now is whether markets are efficient, and the more interesting question is how we can learn more about price and transactions behavior in competitive stock markets. The concept of an “efficient stock market” has stimulated both insight and controversy since Fama (1965) introduced it to the financial economics literature. As a construct, “efficiency” models the stock market in terms of the reaction of prices to the flow of information. Like all theory choices, modelling the market in this fashion involved tradeoffs. 
The benefits included opening the literature to an abundance of high‐quality researchable data, covering a variety of information, and the resulting insights obtained on the role of information in setting prices. The opportunity costs included temporarily closing the literature to alternative ways of viewing stock markets, for example by modelling public information as a homogenous good and thus ignoring factors such as differences in beliefs among investors, differences in information processing costs, and the “animal spirits” that might drive group behavior. The costs also included reliance on particular asset‐pricing models of how an “efficient” market would set prices. Not surprisingly, the ensuing deluge of research has produced some startling evidence, for and against the proposition that financial markets are “efficient”. Strongly‐conflicting views and puzzling anomalies remain. The early evidence seemed unexpectedly consistent with the theory. The theory, and its implications, also seemed clear at the time. After a period that seems short in retrospect, the growing body of evidence in favor of the efficient market hypothesis emerged as one of the most influential empirical areas of economics. Fama's (1970) review described a flourishing, coherent and confident literature. This research had an irreversible effect on our knowledge of and attitude toward stock markets, and financial markets generally. It coincided with an emergence of interest in, and respect for, all markets among economists and politicians, and influenced the worldwide trend toward “liberalizing” financial and other markets. The research consistently appeared to show an unbiased reaction of stock prices to public information. The property of “unbiased reaction” to public information, which formed the basis of the early definitions of “efficiency”, was seen to be an implication of rational, maximizing investor behavior in competitive securities markets (Fama 1965, p.4). 
Reduced to a basic level, the reasoning was that any systematically biased reaction to public information is costlessly publicly observable, and thus provides pure profit opportunities to be competed away. Characterizing the market in terms of its reaction to information is only one of many feasible ways of modelling stock price behavior, but it introduced economic theory to the empirical study of stock prices, which had received little serious attention from economists prior to that point. Despite the subsequent spate of anomalies, the early efficiency literature not only adapted standard economic theory to provide the first formal economic insights into how stock prices behave, but it helped pave the way for an outpouring of theoretical and empirical work on stock markets and capital markets in general. Subsequent empirical research was not as consistent with the theory. Evidence of “anomalous” return behavior now is widespread and well‐known. It generally takes the form of variables (for example, size, day‐of‐the‐week, P/E ratio, market/book value ratio, rank of scaled earnings change, dividend yield) that are significantly but inexplicably related to subsequent abnormal stock returns. Much of this evidence has defied rational economic explanation to date and appears to have caused many researchers to strongly qualify their views on market efficiency. Disagreement has not been confined to the evidence. The literature has produced a variety of research designs, ranging from the “market model” of Fama, Fisher, Jensen and Roll (FFJR, 1969) to Shiller's (1981a,b) variance‐bounds tests. The very term “efficiency” has engendered controversy: there is a modest literature on precisely what efficiency means, on the role of transaction costs, and on whether efficient markets are logically feasible. Making sense of this literature requires careful definition of “efficiency” in this context and careful analysis of the type of evidence that has been offered in relation to it.
This involves an assessment of the strengths and weaknesses of both the theory of efficient markets, as a way of characterizing stock markets, and of the data and research designs used in testing it. Not surprisingly, a mixed conclusion emerges. While the concept of efficient markets was an audacious departure from the comparative ignorance and suspicion among economists of stock markets that preceded it, and provides valuable insights into their behavior, the concept has its limitations, in terms of both its internal logical coherence and its fit with the data. Section 1 of this survey sketches the development of the efficient market theory, reviewing the principal contributions in terms of their usefulness in guiding and evaluating empirical research. Section 2 addresses the limitations inherent in what is knowable about stock market efficiency, given the present state of theory about how security prices might behave in an “efficient” market. It argues that there are binding limitations in the theory of asset pricing, some of which are known and others of which are unknown or even unknowable. These limitations must be borne in mind when choosing whether to interpret the data as evidence of: (1) market efficiency, under the maintained hypothesis that a specific research design, including a specific model of asset pricing used to benchmark price behavior, correctly describes pricing in an efficient market; or (2) the ability of our models and research designs to encapsulate how prices behave in an efficient market, under the maintained hypothesis of efficiency. Against this background, section 3 then provides an assessment of the accomplishments of the theory of stock market efficiency, including an interpretation of the evidence. It focuses on the nature and influence of the evidence and does not attempt to provide a comprehensive literature taxonomy. The final section offers conclusions.
The principal conclusion is that the theory of efficient markets has irreversibly enhanced our knowledge of and respect for stock markets (and perhaps for all financial markets, or even markets in general) but that, like all theories, it is fundamentally flawed.

Details

Managerial Finance, vol. 20 no. 2
Type: Research Article
ISSN: 0307-4358

Article
Publication date: 18 August 2021

Thomas D. Willett

Abstract

Purpose

This study aims to critically review recent contributions to the methodology of financial economics and discuss how they relate to one another and directions for further research.

Design/methodology/approach

A critical review of recent literature on new methodologies for financial economics.

Findings

Recent books have made important contributions to the study of financial economics. They suggest new approaches that include an emphasis on radical uncertainty, adaptive markets, agent-based modeling and narrative economics, as well as extensions of behavioral finance to include concepts such as diagnostic expectations. Many of these contributions can be seen more as complements than substitutes and provide fruitful directions for further research. Efficient markets can be seen as holding under particular circumstances. A major theme of most of these contributions is that the study of financial crises and other aspects of financial economics requires the use of multiple theories and approaches. No one approach will be sufficient.

Research limitations/implications

There are great opportunities for further research in financial economics making use of these new approaches.

Practical implications

These recent contributions can be quite useful for improved analysis by researchers, private participants in the financial sector and macroeconomic and regulatory officials.

Originality/value

Provides an introduction to these new approaches and highlights fruitful areas for their extensions and applications.

Article
Publication date: 6 March 2009

John H. Huston and Roger W. Spencer

Abstract

Purpose

The purpose of this paper is to develop a single variable indicative of the state of market speculation; to determine whether the Federal Reserve has attempted to quell speculation when it has been most rampant and whether such attempts were successful.

Design/methodology/approach

The paper examines the literature on market “bubbles” and the Federal Reserve's treatment of them; determines a single variable reflective of market speculation via principal components analysis; and examines the Federal Reserve's interaction with market speculation by estimating a vector autoregression version of the Taylor rule.

Findings

It is possible to construct a single variable representative of market speculation, termed the index of speculative excess that correlates well with standard views of market excess; the Federal Reserve did attempt to retard market speculation during the three major bull markets of the past century; monetary policy did little to inhibit market speculation.
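A composite index of this kind is typically the first principal component of several standardized speculation proxies. The sketch below shows the mechanics on synthetic data; the proxy names and the latent-factor structure are illustrative assumptions, not the paper's actual inputs.

```python
import numpy as np

# Build a single "index of speculative excess" as the first principal
# component of several standardized speculation proxies (placeholder
# proxies driven by one latent speculative factor; not the paper's data).
rng = np.random.default_rng(2)
T = 200
common = rng.standard_normal(T)                   # latent speculative factor
proxies = np.column_stack([common + 0.3 * rng.standard_normal(T)
                           for _ in range(4)])    # four noisy proxies

Z = (proxies - proxies.mean(0)) / proxies.std(0)  # standardize each proxy
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
index = Z @ Vt[0]                                 # first principal component

# the index should track the latent factor closely (up to sign)
corr = abs(np.corrcoef(index, common)[0, 1])
print(round(corr, 2))
```

Because the first principal component pools the common variation across proxies, it filters out proxy‐specific noise, which is what makes a single index line up with standard views of market excess.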

Originality/value

Highly original in the construction of a single variable reflective of market speculation; joins the ongoing debate as to the extent of Federal Reserve concern with speculative activity and the Fed's poor record of accomplishment in this area.

Details

Studies in Economics and Finance, vol. 26 no. 1
Type: Research Article
ISSN: 1086-7376

Keywords

Article
Publication date: 6 May 2021

Chung Yim Edward Yiu and Ka Shing Cheung

Abstract

Purpose

The repeat sales house price index (HPI) has been widely used to measure house price movements on the assumption that the quality of properties does not change over time. This study aims to develop a novel improvement-value adjusted repeat sales (IVARS) HPI to remedy the bias owing to the constant-quality assumption.

Design/methodology/approach

This study compares the performance of the IVARS model with the traditional hedonic price model and the repeat sales model by using half a million repeated sales pairs of housing transactions in the Auckland Region of New Zealand, and by a simulation approach.
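The baseline being improved here is the standard repeat‐sales estimator: regress each pair's log price change on time dummies (−1 at the first sale, +1 at the second), and read the log index off the coefficients. A minimal sketch on synthetic sale pairs; the IVARS adjustment, which additionally nets out appraised improvement value between the two sales, is not reproduced.

```python
import numpy as np

# Minimal repeat-sales index: for each sale pair, the log price change
# equals (log index at second sale) - (log index at first sale) + noise.
# Synthetic pairs with a true 3% log growth per period; period 0 is the base.
rng = np.random.default_rng(3)
periods, n_pairs = 6, 400
log_index = np.cumsum(0.03 * np.ones(periods))
log_index -= log_index[0]                         # normalize base period to 0

t1 = rng.integers(0, periods - 1, n_pairs)        # first-sale periods
t2 = rng.integers(1, periods, n_pairs)            # second-sale periods
keep = t2 > t1                                    # keep valid resale pairs
t1, t2 = t1[keep], t2[keep]

dy = log_index[t2] - log_index[t1] + 0.02 * rng.standard_normal(t1.size)
X = np.zeros((t1.size, periods - 1))              # dummies for periods 1..5
for i, (a, b) in enumerate(zip(t1, t2)):
    if a > 0:
        X[i, a - 1] -= 1.0                        # -1 at first sale
    X[i, b - 1] += 1.0                            # +1 at second sale

coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
print(np.round(coef, 2))                          # estimated log index, periods 1..5
```

The constant‐quality assumption enters through `dy`: if improvement work raises the second sale price, the estimated index absorbs it, which is the upward bias (2.7% per annum in the paper's simulation) that subtracting improvement values removes.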

Findings

The results demonstrate that using the information on improvement values from mass appraisal can significantly mitigate the time-varying attribute bias. Simulation analysis further reveals that if the improvement work done is not considered, the repeat sales HPI may be overestimated by 2.7% per annum. The more quality enhancement a property has, the more likely it is that the property will be resold.

Practical implications

This novel index may have the potential to enable the inclusion of home condition reporting in property value assessments prior to listing open market sales.

Originality/value

The novel IVARS index can help gauge house price movements with housing quality changes.

Details

International Journal of Housing Markets and Analysis, vol. 15 no. 2
Type: Research Article
ISSN: 1753-8270

Keywords

Article
Publication date: 29 August 2023

Yves Gendron, Luc Paugam and Hervé Stolowy

Abstract

Purpose

This essay takes issue with the incommensurability thesis, which assumes that meaningful research work across different paradigms cannot occur. Could it be that the thesis understates the case for meaningful relationships to develop across paradigms? Is it possible that researchers can authentically and rewardingly collaborate across paradigms and create joint studies published in established journals?

Design/methodology/approach

Based on the observation that interparadigmatic research exists, the authors investigate two questions. How is interparadigmatic research expressed in the accounting research literature? How can we comprehend the process that underlies the development and publication of interparadigmatic research, focusing on cohabitation involving the positivist and interpretive paradigms of research?

Findings

To deal with the first question, the authors focus on two interparadigmatic articles: Greenwood et al. (2002) and Paugam et al. (2021). The authors find each article showcases a dominant paradigm – whereas the role of the other paradigm is represented as secondary; that is, complementing and enriching the dominant paradigm. To address the second question, the authors rely especially on their involvement as coauthors of three interparadigmatic studies, published between 2019 and 2022 in FT50 journals. The authors’ analysis brings to the fore a range of facilitators that fit their experiences, such as the development of cross-paradigmatic agreement within the authorship to cope with the complexity surrounding the object of study, the crafting of methodological compromises (e.g. regarding the number of documents to analyze) and the strategizing that the authorship enacted in dealing with journal gatekeepers.

Originality/value

From the authors’ experiences, they develop a model, which provides a tentative template to make sense of the process by which interparadigmatic research takes place. The model highlights the role of what the authors call “epistemic mediation” in producing interparadigmatic studies.

Details

Qualitative Research in Accounting & Management, vol. 20 no. 5
Type: Research Article
ISSN: 1176-6093

Keywords

Article
Publication date: 20 February 2020

Xiyang Li, Bin Li, Tarlok Singh and Kan Shi

Abstract

Purpose

This study aims to draw on a less explored predictor – the average correlation of pairwise returns on industry portfolios – to predict stock market returns (SMRs) in the USA.

Design/methodology/approach

This study uses the average correlation approach of Pollet and Wilson (2010) and predicts the SMRs in the USA. The model is estimated using monthly data for a long time horizon, from July 1963 to December 2018, for the portfolios comprising 48 Fama-French industries. The model is extended to examine the effects of a longer lag structure of one-month to four-month lags and to control for the effects of a number of variables – average variance (AV), cyclically adjusted price-to-earnings ratio (CAPE), term spread (TS), default spread (DS), risk-free rate returns (R_f) and lagged excess market returns (R_s).
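The predictor itself is mechanical: within each month, compute the equal‐weighted average of all pairwise correlations of daily industry returns, then use its lag in a predictive regression. A sketch on synthetic data standing in for the 48 Fama-French industries (the factor structure, dimensions and coefficients are illustrative assumptions):

```python
import numpy as np

# Pollet-Wilson style predictor: monthly equal-weighted average pairwise
# correlation of daily industry returns, used with a two-month lag to
# predict market excess returns. Synthetic industries, not the FF48 data.
rng = np.random.default_rng(4)
n_months, days, n_ind = 60, 21, 10

avg_corr = np.empty(n_months)
for m in range(n_months):
    common = rng.standard_normal(days)[:, None]           # market-wide shock
    rets = 0.5 * common + rng.standard_normal((days, n_ind))
    C = np.corrcoef(rets, rowvar=False)                   # industry x industry
    iu = np.triu_indices(n_ind, k=1)
    avg_corr[m] = C[iu].mean()                            # mean off-diagonal corr

# two-month-lag predictive regression (illustrative synthetic target)
mkt = 0.01 + 0.1 * np.r_[0.0, 0.0, avg_corr[:-2]] + 0.02 * rng.standard_normal(n_months)
X = np.column_stack([np.ones(n_months - 2), avg_corr[:-2]])
b, *_ = np.linalg.lstsq(X, mkt[2:], rcond=None)
print(round(b[1], 2))                                     # slope on lagged avg corr
```

Controls such as average variance, CAPE or the term spread would enter as additional columns of `X` in the same regression.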

Findings

The study finds that the two-month lagged average correlation of returns on individual industry portfolios, used individually and collectively with financial predictors and economic factors, predicts excess returns on the stock market in an effective manner.

Research limitations/implications

The methodology and results are of interest to academics as they could further explore the use of average correlation to improve the predictive powers of their models.

Practical implications

Market practitioners could include the average correlation in their asset pricing models to improve the predictions for the future trend in stock market returns. Investors could consider including average correlation in their forecasting models, along with the traditional financial ratios and economic indicators. They could adjust their expected returns to a lower level when the average correlation increases during a recession.

Social implications

The finding that recession periods have effects on the SMRs would be useful for the policymakers. The understanding of the co-movement of returns on industry portfolios during a recession would be useful for the formulation of policies aimed at ensuring the stability of the financial markets.

Originality/value

The study contributes to the literature on three counts. First, the study uses industry portfolio returns – as compared to individual stock returns used in Pollet and Wilson (2010) – in constructing average correlation. When the stock market becomes more volatile, individual stocks diverge more in their performance; the comovement between individual stock returns might be dominated by the idiosyncratic component, which may not have any implications for future SMRs. Using the industry portfolio returns can potentially reduce such an effect to a large extent, and thus can provide more reliable estimates. Second, the effects of business cycles could be better identified in a long sample period and through several sub-sample tests. This study uses a data set which spans the period from July 1963 to December 2018. This long sample period covers multiple phases of business cycles. The daily data are used to compute the monthly and equally-weighted average correlation of returns on 48 Fama-French industry portfolios. Third, previous studies have often ignored the use of investors’ sentiments in their prediction models, while investors’ irrational decisions could have an important impact on expected returns (Huang et al., 2015). This study extends the analysis and incorporates investors’ sentiments in the model.

Details

Accounting Research Journal, vol. 33 no. 2
Type: Research Article
ISSN: 1030-9616

Keywords

Article
Publication date: 6 November 2017

Nicholas Apergis and Chi Keung Marco Lau

Abstract

Purpose

This paper aims to provide fresh empirical evidence on how Federal Open Market Committee (FOMC) monetary policy decisions from a benchmark monetary policy rule affect the profitability of US banking institutions.

Design/methodology/approach

It thereby provides a link between the literature on central bank monetary policy implementation through monetary rules and banks’ profitability. It uses a novel data set from 11,894 US banks, spanning the period 1990 to 2013.

Findings

The empirical findings show that deviations of FOMC monetary policy decisions from a number of benchmark linear and non-linear monetary (Taylor type) rules exert a negative and statistically significant impact on banks’ profitability.
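The simplest such benchmark is the original Taylor (1993) rule, under which the deviation is just the gap between the observed policy rate and the rule‐implied rate. A sketch with illustrative numbers (the neutral rate, inflation target and sample values are assumptions; the paper also uses non‐linear variants not shown here):

```python
import numpy as np

# Taylor (1993) benchmark rule:
#   i_t = r* + pi_t + 0.5*(pi_t - pi*) + 0.5*gap_t
# Deviations are the observed policy rate minus the rule-implied rate.
# Illustrative inputs, not the paper's FOMC data.
r_star, pi_star = 2.0, 2.0          # neutral real rate and inflation target (%)
pi = np.array([1.5, 2.5, 3.0])      # inflation (%)
gap = np.array([-1.0, 0.0, 1.0])    # output gap (%)
actual = np.array([3.0, 4.0, 6.5])  # observed policy rate (%)

taylor = r_star + pi + 0.5 * (pi - pi_star) + 0.5 * gap
deviation = actual - taylor         # the regressor in the profitability tests
print(taylor, deviation)
```

In the paper's panel, it is a measure of this `deviation` series, not the rate level itself, that enters the bank profitability regressions with a negative sign.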

Originality/value

The results are expected to have substantial implications for the capacity of banking institutions to more readily interpret monetary policy information and accordingly to reshape and hedge their lending behaviour. This would make the monetary policy decision process less noisy and, thus, enhance their capability to attach the correct weight to this information.

Details

Journal of Financial Economic Policy, vol. 9 no. 4
Type: Research Article
ISSN: 1757-6385

Keywords
