Search results

1 – 10 of over 10000
Book part
Publication date: 14 November 2011

Michael Lacina, B. Brian Lee and Randall Zhaohui Xu

We evaluate the performance of financial analysts versus naïve models in making long-term earnings forecasts. Long-term earnings forecasts are generally defined as third-…

Abstract

We evaluate the performance of financial analysts versus naïve models in making long-term earnings forecasts. Long-term earnings forecasts are generally defined as third-, fourth-, and fifth-year earnings forecasts. We find that for the fourth and fifth years, analysts' forecasts are no more accurate than naïve random walk (RW) forecasts or naïve RW with economic growth forecasts. Furthermore, naïve model forecasts contain a large amount of incremental information over analysts' long-term forecasts in explaining future actual earnings. Tests based on subsamples show that the performance of analysts' long-term forecasts declines relative to naïve model forecasts for firms with high past earnings growth and low analyst coverage. Furthermore, a model that combines a naïve benchmark (last year's earnings) with the analyst long-term earnings growth forecast does not perform better than analysts' forecasts or naïve model forecasts. Our findings suggest that analysts' long-term earnings forecasts should be used with caution by researchers and practitioners. Also, when analysts' earnings forecasts are unavailable, naïve model earnings forecasts may be sufficient for measuring long-term earnings expectations.
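The naïve benchmarks used in this comparison are simple to state: a random walk (RW) repeats last year's earnings, and RW with growth compounds them at an economy-wide growth rate. A minimal sketch (the 3% growth rate below is an illustrative assumption, not a figure from the study):

```python
def rw_forecast(last_eps, horizon):
    """Naive random walk: every future year's earnings equal last year's."""
    return [last_eps] * horizon

def rw_growth_forecast(last_eps, horizon, growth=0.03):
    """Random walk with growth: earnings compound at an assumed growth rate."""
    return [last_eps * (1 + growth) ** h for h in range(1, horizon + 1)]

# Five-year forecasts from last year's EPS of 2.00
print(rw_forecast(2.00, 5))   # [2.0, 2.0, 2.0, 2.0, 2.0]
print([round(f, 3) for f in rw_growth_forecast(2.00, 5)])
```

The study's finding is that, at four- and five-year horizons, analyst forecasts are no more accurate than benchmarks of this form.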

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-0-85724-959-3

Article
Publication date: 30 March 2012

Marcelo Mendoza

Abstract

Purpose

Automatic text categorization has applications in several domains, for example e‐mail spam detection, sexual content filtering, directory maintenance, and focused crawling. Most information retrieval systems contain several components which use text categorization methods. One of the first text categorization methods was designed using a naïve Bayes representation of the text. Since then, a number of variations of naïve Bayes have been proposed. The purpose of this paper is to evaluate naïve Bayes approaches to text categorization, introducing new competitive extensions to previous approaches.

Design/methodology/approach

The paper focuses on introducing a new Bayesian text categorization method based on an extension of the naïve Bayes approach. Some modifications to document representations are introduced based on the well‐known BM25 text information retrieval method. The performance of the method is compared to several extensions of naïve Bayes using benchmark datasets designed for this purpose. The method is also compared to training‐based methods such as support vector machines and logistic regression.

Findings

The proposed text categorizer outperforms state‐of‐the‐art methods without introducing new computational costs. It also achieves performance results very similar to more complex methods based on criterion function optimization, such as support vector machines or logistic regression.

Practical implications

The proposed method scales well regarding the size of the collection involved. The presented results demonstrate the efficiency and effectiveness of the approach.

Originality/value

The paper introduces a novel naïve Bayes text categorization approach based on the well‐known BM25 information retrieval model, which offers a set of good properties for this problem.
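The BM25-weighted representation is the paper's contribution and is not reproduced here, but the multinomial naïve Bayes baseline it extends can be sketched in plain Python with Laplace smoothing (the toy spam/ham documents are invented for illustration):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns log priors and
    Laplace-smoothed log likelihoods per class."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        likelihoods[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                          for w in vocab}
    return priors, likelihoods

def classify(tokens, priors, likelihoods):
    """Pick the class with the highest posterior log probability."""
    scores = {c: priors[c] + sum(likelihoods[c].get(w, 0.0) for w in tokens)
              for c in priors}
    return max(scores, key=scores.get)

docs = [("cheap pills buy now".split(), "spam"),
        ("meeting agenda attached".split(), "ham"),
        ("buy cheap meds".split(), "spam"),
        ("project meeting tomorrow".split(), "ham")]
model = train_nb(docs)
print(classify("buy cheap".split(), *model))   # spam
```

Working in log space avoids the numeric underflow that multiplying many small probabilities would cause.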

Details

International Journal of Web Information Systems, vol. 8 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 29 July 2021

Rick Neil Francis

Abstract

Purpose

The purpose of this paper is to enlarge the exposure of the Theil–Sen (TS) methodology to the academic, analyst and practitioner communities using an earnings forecast setting. The study includes an appendix that describes the TS model in very basic terms and SAS code to assist readers in the implementation of the TS model. The study also presents an alternative approach to deflating or scaling variables.

Design/methodology/approach

The study is archival in nature, using a combination of regression analysis and binomial tests.

Findings

The binomial test results support the hypothesis that the forecasting performance of the naïve no-change model is at least equal to or better than the ordinary least squares (OLS) model when earnings volatility is low. However, the results do not support the same hypothesis for the TS model nor do the results support the hypothesis that the OLS and TS models will outperform the naïve no-change model when cash flow volatility is high. Nevertheless, the study makes notable contributions to the literature, as the results indicate that the performance of the naïve model is at least as good as the OLS and TS models across 18 of the 20 binomial tests. Moreover, the results indicate that the performance of the TS model is always superior to the OLS model.

Research limitations/implications

The results are generalizable to US firms and may not extend to non-US firms.

Practical implications

The TS methodology is advantageous to OLS in that the results are robust to outlier observations, and there is no heteroscedasticity. Researchers will find this study to be useful given the use of a model (i.e. TS) which has to date received little attention, and the provision of the details for the mechanics of the model. A bonus for researchers is that the study includes SAS code for implementing the procedure.

Social implications

Awareness of alternative forecast methodologies could lead to improved forecasting results in certain contexts. The study also helps the financial community in general, as improved forecasting abilities are important for all capital market participants as they improve market efficiency.

Originality/value

Although a healthy literature exists on out-of-sample earnings forecasts, it lacks an answer to a simple question that should precede additional analyses: are the results any better than those from a naïve no-change forecast? The current study emphasizes that the naïve no-change forecast is the most elementary model possible, and that the researcher must first establish the superiority of a more complex model before conducting further analyses.
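The study's appendix provides SAS code; for orientation, the heart of the TS model, taking the slope as the median of all pairwise slopes, can be sketched as follows (a minimal illustration, not the study's implementation):

```python
from itertools import combinations
from statistics import median

def theil_sen(x, y):
    """Theil-Sen regression: the slope is the median of all pairwise slopes;
    the intercept is the median of the implied per-point intercepts."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    slope = median(slopes)
    intercept = median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

# The outlier in the last observation barely moves the TS fit
x = [1, 2, 3, 4, 5]
y = [2.0, 4.1, 5.9, 8.0, 30.0]   # last point is an outlier
slope, intercept = theil_sen(x, y)
print(round(slope, 2))   # 2.1, despite the outlier
```

An OLS fit of the same data would be pulled strongly toward the outlier; this robustness is the advantage the abstract highlights.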

Details

Journal of Applied Accounting Research, vol. 23 no. 2
Type: Research Article
ISSN: 0967-5426

Article
Publication date: 1 March 2006

Neil Hartnett

Abstract

Purpose

This paper aims to extend the research into company financial forecasts by modelling naïve earnings forecasts derived from normalised historic accounting data disclosed during Australian initial public offerings (IPOs). It seeks to investigate naïve forecast errors and compare them against their management forecast counterparts. It also seeks to investigate determinants of differential error behaviour.

Design/methodology/approach

IPOs were sampled and their prospectus forecasts, historic financial data and subsequent actual financial performance were analysed. Directional and absolute forecast error behaviour was analysed using univariate and multivariate techniques.

Findings

Systematic factors associated with error behaviour were observed across the management forecasts and the naïve forecasts, the most notable being audit quality. In certain circumstances, the naïve forecasts performed at least as well as management forecasts. In particular, forecast interval was an important discriminator for accuracy, with the superiority of management forecasts only observed for shorter forecast intervals.

Originality/value

The results imply a level of “disclosure management” regarding company IPO forecasts and normalised historic accounting data, with forecast overestimation and error size more extreme in the absence of higher quality third‐party monitoring services via the audit process. The results also raise questions regarding the serviceability of normalised historic financial information disclosed in prospectuses, in that many of those data do not appear to enhance the forecasting process, particularly when accompanied by published management forecasts and shorter forecast intervals.

Details

Asian Review of Accounting, vol. 14 no. 1/2
Type: Research Article
ISSN: 1321-7348

Article
Publication date: 20 January 2012

Jingqiu Chen, Lei Wang, Minyan Huang and Julie Spencer‐Rodgers

Abstract

Purpose

This research aims to examine the relationships among employee commitment to change, naïve dialecticism, and performance change in the context of change in Chinese state‐owned enterprises (SOEs).

Design/methodology/approach

A total of 287 matched employee‐supervisor questionnaire responses were collected from three Chinese SOEs implementing a major sector‐restructuring change.

Findings

Results showed that affective commitment to change was related to performance change. Change thinking was positively related to all three components of commitment to change, whereas contradictory thinking was negatively related to affective commitment to change. Affective commitment to change fully mediated the association between contradictory thinking and performance change.

Originality/value

This research integrates “outside in” and “inside out” approaches to contextualize commitment to change studies in China. An “outside in” approach was followed to investigate the relationship between commitment to change and performance change, whereas an “inside out” approach was followed to add valuable insights to the commitment to change model from the point of view of Chinese naïve dialecticism.

Details

Journal of Managerial Psychology, vol. 27 no. 1
Type: Research Article
ISSN: 0268-3946

Article
Publication date: 4 December 2020

Marcel Grein, Annika Wiecek and Daniel Wentzel

Abstract

Purpose

Existing research on product design has found that a design’s complexity is an important antecedent of consumers’ aesthetic and behavioural responses. This paper aims to shed new light on the relationship between design complexity and perceptions of design quality by taking the effects of consumers’ naïve theories into account.

Design/methodology/approach

The hypotheses of this paper are tested in a series of three experiments.

Findings

The findings from three studies show that the extent to which consumers prefer more complex product designs to simpler ones depends on the extent to which they believe that the complexity of a design is indicative of the effort or of the talent of the designers involved in the design process. These competing naïve theories, in turn, are triggered by contextual information that consumers have at their disposal, such as the professional background of a designer or the brand that is associated with a particular design.

Research limitations/implications

This research was limited to a design's complexity as the central design element and to the effects of two naïve theories. Future research may also take other design factors and consumer heuristics into account.

Practical implications

This research reveals that the extent to which managers may successfully introduce both complex and simple designs may depend on the reputation of a company’s designers and the prestige of a brand.

Originality/value

This research examines design complexity from a novel theoretical perspective and shows that the effect of design complexity on perceptions of design quality is contingent on two specific naïve theories of consumers.

Details

European Journal of Marketing, vol. 55 no. 5
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 6 February 2023

Francina Malan and Johannes Lodewyk Jooste

Abstract

Purpose

The purpose of this paper is to compare the effectiveness of the various text mining techniques that can be used to classify maintenance work-order records into their respective failure modes, focussing on the choice of algorithm and preprocessing transforms. Three algorithms are evaluated, namely Bernoulli Naïve Bayes, multinomial Naïve Bayes and support vector machines.

Design/methodology/approach

The paper has both a theoretical and experimental component. In the literature review, the various algorithms and preprocessing techniques used in text classification are considered from three perspectives: the domain-specific maintenance literature, the broader short-form literature and the general text classification literature. The experimental component consists of a 5 × 2 nested cross-validation with an inner optimisation loop performed using a randomised search procedure.

Findings

From the literature review, the aspects most affected by short document length are identified as the feature representation scheme, higher-order n-grams, document length normalisation, stemming, stop-word removal and algorithm selection. However, from the experimental analysis, the selection of preprocessing transforms seemed more dependent on the particular algorithm than on short document length. Multinomial Naïve Bayes performs marginally better than the other algorithms, but overall, the performances of the optimised models are comparable.

Originality/value

This work highlights the importance of model optimisation, including the selection of preprocessing transforms. Not only did the optimisation improve the performance of all the algorithms substantially, but it also affects model comparisons, with multinomial Naïve Bayes going from the worst to the best performing algorithm.
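The difference between the Bernoulli and multinomial variants compared above lies in the document representation: Bernoulli naïve Bayes models binary term presence, while multinomial naïve Bayes models term counts. A minimal illustration (the work-order tokens are hypothetical):

```python
from collections import Counter

tokens = "pump seal leak seal".split()

# Multinomial NB uses term counts; Bernoulli NB uses presence/absence
multinomial_features = Counter(tokens)            # 'seal' counted twice
bernoulli_features = {t: 1 for t in set(tokens)}  # 'seal' present once

print(multinomial_features["seal"], bernoulli_features["seal"])   # 2 1
```

For very short documents such as work-order records, counts rarely exceed one, which is one reason the short-form literature treats the choice between the two representations as non-obvious.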

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 3
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 10 January 2023

Atul Rawal and Bechoo Lal

Abstract

Purpose

The uncertainty of getting admission into universities/institutions is one of the global problems in an academic environment. Students may have good marks and strong credentials yet remain unsure of gaining admission. In this research study, the researcher builds a predictive model using Naïve Bayes classifiers, a machine learning algorithm, to extract and analyze hidden patterns in students' academic records and credentials. The main purpose of this research study is to reduce the uncertainty of admission into universities/institutions based on students' previous credentials and other essential parameters.

Design/methodology/approach

This research study combines Naïve Bayes classification with kernel density estimation (KDE) to predict students' admission into universities or other higher institutions. The researcher collected data from Kaggle data sets covering grade point average (GPA), Graduate Record Examinations (GRE) scores and university RANK, attributes that are central to admission into higher education.

Findings

The classification model is built on a training data set of students' examination scores, such as GPA, GRE and RANK, together with other essential features that determine admission, and achieves an experimentally verified predictive accuracy of 72%. To improve accuracy, the researcher applied the Shapiro–Wilk normality test and a Gaussian distribution to large data sets.

Research limitations/implications

The limitation of this research study is that the developed predictive model does not apply to admission into all courses: the researcher used only three admission attributes, namely GRE, GPA and RANK, which do not capture the requirements of every possible course.

Practical implications

The researcher used the Naïve Bayes classifier and KDE machine learning algorithms to develop a predictive model that reliably and efficiently classifies the admission category (Admitted/Not Admitted) for universities/institutions. During the research study, the researcher found that the accuracy of predictive Model 1 and predictive Model 2 are very close to each other, with predictive Model 1 achieving true and false prediction rates of 70.46% and 29.53%, respectively.

Social implications

The study makes a significant contribution to society: students and parents can obtain prior information about the likelihood of admission into higher academic institutions and universities.

Originality/value

The classification model can reduce admission uncertainty and enhance a university's decision-making capabilities. The significance of this research study is to reduce human intervention in decisions about student admission; many universities and higher-level institutions could use this predictive model to improve their admission processes.
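As a rough illustration of the naïve Bayes component only (the KDE extension and the Kaggle data set are not reproduced; the rows and labels below are invented), a Gaussian naïve Bayes classifier over GRE, GPA and RANK can be sketched as:

```python
import math

def gaussian_nb_fit(rows, labels):
    """Fit per-class prior plus per-feature mean/variance (Gaussian naive Bayes)."""
    model = {}
    for c in set(labels):
        cols = list(zip(*[r for r, l in zip(rows, labels) if l == c]))
        stats = []
        for col in cols:
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-9
            stats.append((mean, var))
        model[c] = (labels.count(c) / len(labels), stats)
    return model

def gaussian_nb_predict(model, row):
    """Score each class by log prior plus per-feature Gaussian log density."""
    def log_pdf(v, mean, var):
        return -0.5 * (math.log(2 * math.pi * var) + (v - mean) ** 2 / var)
    scores = {c: math.log(prior) + sum(log_pdf(v, m, s)
                                       for v, (m, s) in zip(row, stats))
              for c, (prior, stats) in model.items()}
    return max(scores, key=scores.get)

# Invented (GRE, GPA, university RANK) rows; not the paper's data
rows = [(330, 3.9, 1), (320, 3.7, 2), (300, 3.0, 4),
        (290, 2.8, 5), (325, 3.8, 1), (295, 2.9, 4)]
labels = ["admit", "admit", "reject", "reject", "admit", "reject"]
model = gaussian_nb_fit(rows, labels)
print(gaussian_nb_predict(model, (328, 3.8, 1)))   # admit
```

The paper's KDE variant replaces the Gaussian density above with a kernel density estimate of each feature's class-conditional distribution.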

Details

Journal of Indian Business Research, vol. 15 no. 2
Type: Research Article
ISSN: 1755-4195

Article
Publication date: 1 April 1999

J. Keith Murnighan, Linda Babcock, Leigh Thompson and Madan Pillutla

Abstract

This paper investigates the information dilemma in negotiations: if negotiators reveal information about their priorities and preferences, more efficient agreements may be reached but the shared information may be used strategically by the other negotiator, to the revealers' disadvantage. We present a theoretical model that focuses on the characteristics of the negotiators, the structure of the negotiation, and the available incentives; it predicts that experienced negotiators will outperform naive negotiators on distributive (competitive) tasks, especially when they have information about their counterpart's preferences and the incentives are high, unless the task is primarily integrative, in which case information will contribute to the negotiators maximizing joint gain. Two experiments (one small, one large) showed that the revelation of one's preferences was costly and that experienced negotiators outperformed their naive counterparts by a wide margin, particularly when the task and issues were distributive and incentives were large. Our results help to identify the underlying dynamics of the information dilemma and lead to a discussion of the connections between information and social dilemmas and the potential for avoiding inefficiencies.

Details

International Journal of Conflict Management, vol. 10 no. 4
Type: Research Article
ISSN: 1044-4068

Article
Publication date: 7 August 2017

Alexandre Carneiro and Ricardo Leal

Abstract

Purpose

The purpose of this paper is to contrast three investment choices within the reach of individual investors: naive portfolios of Brazilian stocks; actively managed stock funds; and the Ibovespa index, which represents passive management. The paper also offers insights on the performance of professional asset managers in this large emerging market.

Design/methodology/approach

Equally weighted portfolios contained between 5 and 30 stocks to keep transaction costs low. Stock selection used the Ibovespa constituents and considered value (dividend yield (DY) and price-to-book ratio), momentum (past returns), and liquidity, as well as the Sharpe ratio (SR) over the 2003-2012 period, rebalancing three times a year.

Findings

Cumulative returns of naive portfolios are large. They frequently outperform the index for all values of n. They also outperform stock funds, particularly when the invested amount exceeds US$25,000, due to transaction costs. Yet, expected out-of-sample SRs corrected for errors in estimates are very low, suggesting that one should not count on this historical performance in the future. Naive portfolios may simply be more exposed to additional value, size, and momentum risks. Results are sensitive to time period selection.

Practical implications

Naive portfolios may be attractive to individual investors in Brazil relative to stock funds, which seem to strive to keep volatility low and may be better when the investment amount is low. There may be merit for value or momentum stock selection strategies when forming small equally weighted portfolios.

Originality/value

The paper contrasts realistic stock investing alternatives for individuals, it provides a view of stock fund performance in Brazil, and offers practical implications that may be pertinent in other emerging stock markets.
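The two building blocks of the naive strategy, the 1/N period return and the Sharpe ratio, can be sketched as follows (illustrative numbers, not the study's data):

```python
from statistics import mean, stdev

def equal_weight_return(asset_returns):
    """Period return of a 1/N portfolio: the simple average of asset returns."""
    return sum(asset_returns) / len(asset_returns)

def sharpe_ratio(portfolio_returns, risk_free=0.0):
    """Mean excess return per unit of (sample) volatility."""
    excess = [r - risk_free for r in portfolio_returns]
    return mean(excess) / stdev(excess)

# Three periods of returns for a five-stock portfolio (invented numbers)
periods = [[0.02, 0.01, -0.01, 0.03, 0.00],
           [0.01, 0.02, 0.02, -0.02, 0.01],
           [-0.01, 0.00, 0.03, 0.02, 0.02]]
port = [equal_weight_return(p) for p in periods]
print([round(r, 3) for r in port])   # [0.01, 0.008, 0.012]
print(round(sharpe_ratio(port), 2))
```

The study's caution applies here too: an in-sample Sharpe ratio overstates what an investor should expect, which is why the authors correct expected out-of-sample SRs for estimation error.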
