Search results

1 – 10 of over 5000
Article
Publication date: 1 June 2010

Eleftherios Giovanis

Abstract

Purpose

The purpose of this paper is to examine two different approaches in the prediction of the economic recession periods in the US economy.

Design/methodology/approach

A logit regression was applied and its prediction performance was examined over two out‐of‐sample periods, 2007‐2009 and 2010. In addition, feed‐forward neural networks with the Levenberg‐Marquardt error backpropagation algorithm were applied, and a self‐organizing map (SOM) neural network was then estimated on the training outputs.
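
As a rough illustration of the logit step described above (not the paper's actual specification or data), one could fit a recession dummy on lagged indicators and score a held-out window; all variables below are simulated placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical quarterly predictors (e.g. lagged yield spread, stock returns)
# and a hypothetical NBER-style recession dummy.
X = rng.normal(size=(n, 2))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n) < -0.5).astype(int)

# Fit on the in-sample window, then score an out-of-sample window,
# mirroring the 2007-2009 / 2010 evaluation described above.
Xc = sm.add_constant(X)
train, test = slice(0, 160), slice(160, None)
fit = sm.Logit(y[train], Xc[train]).fit(disp=0)
p_hat = fit.predict(Xc[test])

# A value of p_hat above 0.5 acts as a recession warning signal.
print(fit.params, np.mean((p_hat > 0.5) == y[test]))
```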

Findings

The paper presents the cluster results from the SOM training in order to identify the patterns of economic recessions and expansions. It is concluded that the logit model forecasts the current financial crisis period with 75 percent accuracy, and that the model is useful because it provides a warning signal three quarters before the current financial crisis officially started. It is also estimated that, even though the financial crisis reached its peak in 2009, the economic recession will continue into 2010. Furthermore, the patterns generated by the SOM neural networks show various possible scenarios with one common characteristic: the financial crisis is not over in 2009, and the economic recession will continue in the USA even up to 2011‐2012 if the government does not apply direct, drastic measures.
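
A minimal sketch of the SOM clustering idea, assuming the third-party minisom package and simulated quarterly feature vectors; the paper's own SOM implementation and inputs are not reproduced here.

```python
import numpy as np
from minisom import MiniSom  # third-party package: pip install minisom

rng = np.random.default_rng(1)
quarters = rng.normal(size=(200, 4))  # hypothetical quarterly indicator vectors

# A small 5x5 map: each quarter is assigned to its best-matching unit, and
# neighbouring units group into recession-like and expansion-like regions.
som = MiniSom(5, 5, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(quarters, 1000)
clusters = [som.winner(q) for q in quarters]
print(clusters[:5])  # grid coordinates of the first five quarters
```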

Originality/value

Both logistic regression (logit) and SOMs procedures are useful. The first one is useful to examine the significance and the magnitude of each variable, while the second one is useful for clustering and identifying patterns in economic recessions and expansions.

Details

Journal of Financial Economic Policy, vol. 2 no. 2
Type: Research Article
ISSN: 1757-6385

Article
Publication date: 25 October 2013

Iris Stuart, Yong-Chul Shin, Donald P. Cram and Vijay Karan

Abstract

The use of choice-based, matched, and other stratified sample designs is common in auditing research. However, it is not widely appreciated that the data analysis for these studies has to take into account the non-random nature of sample selection in these designs. A choice-based, matched, or otherwise stratified sample is a non-random sample that must be analyzed using conditional analysis techniques. We review five research streams in the auditing area: work on the determinants of audit litigation, audit fees, auditor reporting in financially distressed firms, audit quality, and auditor switches. Cram, Karan, and Stuart (CKS) (2009) demonstrated the accuracy of conditional analysis, compared to unconditional analysis, of non-random samples through the use of simulations, replications, and mathematical proofs. Subsequently published papers have nevertheless continued to rely upon questionable methods, and it is hard for researchers to judge the reliability of a given work. We complement and extend CKS (2009) by identifying audit papers in selected research streams whose results would likely differ if the data gathered were analyzed using conditional analysis techniques. Research can thus be advanced either by replication and reanalysis, or by refocusing new research on issues that should no longer be viewed as settled.
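
One standard remedy the abstract alludes to is the prior correction for choice-based samples: a logit fitted to a case-control sample yields consistent slopes, but its intercept must be adjusted using the population event rate. A minimal sketch, with hypothetical rates (not taken from CKS 2009):

```python
import numpy as np

# Prior correction for a logit estimated on a choice-based (case-control)
# sample: slope estimates are consistent, but the intercept must be
# adjusted for the over-sampling of one outcome.
# tau: event rate in the population; ybar: event rate in the sample.
def corrected_intercept(b0_hat, tau, ybar):
    return b0_hat - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

# Hypothetical numbers: litigation occurs in 2% of firm-years, but a
# matched design sampled litigated and non-litigated firms 50/50.
print(corrected_intercept(b0_hat=-0.1, tau=0.02, ybar=0.5))
```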

Article
Publication date: 17 February 2012

Deniz Tudor and Bolong Cao

Abstract

Purpose

The purpose of this paper is to examine the ability of hedge funds and funds of hedge funds to generate absolute returns using fund level data.

Design/methodology/approach

The absolute return profiles are identified using properties of the empirical distributions of fund returns. The authors use both Bayesian multinomial probit and frequentist multinomial logit regressions to examine the relationship between the return profiles and fund characteristics.
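
A sketch of the frequentist multinomial logit step using statsmodels' MNLogit; the profile coding and fund characteristics below are hypothetical placeholders, and the Bayesian multinomial probit would require a separate MCMC implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300

# Hypothetical fund characteristics: lockup dummy, log size, incentive fee.
X = sm.add_constant(np.column_stack(
    [rng.integers(0, 2, n), rng.normal(size=n), rng.normal(size=n)]))

# Hypothetical outcome: three absolute-return profiles coded 0, 1, 2.
y = rng.integers(0, 3, size=n)

res = sm.MNLogit(y, X).fit(disp=0)
print(res.params)  # one coefficient column per non-base profile
```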

Findings

Some evidence is found that some hedge fund strategies, but not all of them, demonstrate a higher tendency to produce absolute returns. Some investment provisions and fund characteristics that can influence the chance of generating absolute returns are also identified. Finally, no evidence is found of performance persistence in terms of absolute returns for hedge funds, but there is some limited evidence for funds of funds.

Practical implications

This paper is the first attempt to examine hedge fund return profiles based on the notion of absolute return in great detail. Investors and managers of funds of funds can utilize the identification method in this paper to evaluate the performance of hedge funds of interest from a new angle.

Originality/value

Using the properties of the empirical distribution of the hedge fund returns to classify them into different absolute return profiles is the unique contribution of this paper. The application of the multinomial probit and multinomial logit models in the fund performance and fund characteristics literature is also new since the dependent variable in the authors' regressions is multinomial.

Article
Publication date: 16 March 2010

Cataldo Zuccaro

Abstract

Purpose

The purpose of this paper is to discuss and assess the structural characteristics (conceptual utility) of the most popular classification and predictive techniques employed in customer relationship management and customer scoring and to evaluate their classification and predictive precision.

Design/methodology/approach

A sample of customers' credit ratings and socio‐demographic profiles is employed to evaluate the analytic and classification properties of discriminant analysis, binary logistic regression, artificial neural networks, the C5 algorithm, and regression trees employing the Chi‐squared Automatic Interaction Detector (CHAID).
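
A sketch of how such a comparison could be set up with scikit-learn stand-ins; CHAID and C5 have no scikit-learn implementation, so a plain decision tree substitutes for both, and the credit data is simulated.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))            # simulated socio-demographic profiles
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simulated binary credit rating

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(max_iter=2000),
    "decision tree (CHAID/C5 stand-in)": DecisionTreeClassifier(max_depth=4),
}
for name, m in models.items():
    # Five-fold cross-validated classification accuracy per technique.
    print(name, cross_val_score(m, X, y, cv=5).mean().round(3))
```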

Findings

With regard to interpretability and the conceptual utility of the parameters generated by the five techniques, logistic regression provides easily interpretable parameters through its logits. The logits can be interpreted in the same way as regression slopes. In addition, the logits can be converted to odds, providing a common-sense evaluation of the relative importance of each independent variable, and the technique provides robust statistical tests to evaluate the model parameters. Finally, both CHAID and the C5 algorithm provide visual tools (regression trees) and semantic rules (rule sets for classification) to facilitate the interpretation of the model parameters. These can be highly desirable properties when the researcher attempts to explain the conceptual and operational foundations of the model.
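
The logit-to-odds conversion mentioned above is a one-line computation: exponentiating a coefficient gives the multiplicative change in the odds per unit increase in that variable.

```python
import numpy as np

# Exponentiating a logit coefficient gives the odds ratio: a coefficient
# of 0.7 multiplies the odds of the positive class by exp(0.7) ~ 2.01
# per unit increase in the variable (hypothetical coefficients below).
coefs = np.array([0.7, -0.2])
print(np.exp(coefs))  # odds ratios: [2.013..., 0.818...]
```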

Originality/value

Most treatments of complex classification procedures have been undertaken idiosyncratically, that is, evaluating only one technique. This paper evaluates and compares the conceptual utility and predictive precision of five different classification techniques on a moderate sample size and provides clear guidelines in technique selection when undertaking customer scoring and classification.

Details

Journal of Modelling in Management, vol. 5 no. 1
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 1 August 2006

O. Nicolis and G. Tondini

Abstract

Purpose

The objective of this study is to formulate a model for forecasting the performance of firms in terms of trends in turnover, investments, exports, employment and flexibility, and to identify the principal correlations with selected dependent variables, such as the level of computerization, the extent of collaboration with competitors and the characteristics of the product.

Design/methodology/approach

This paper uses the logit model to analyse data from a survey conducted on a sample of 89 firms from the Treviso province in the north-east of Italy.

Findings

From the application of the logit models it emerges that the most important variables contributing to the economic success of the firms are technological flexibility, collaboration with competitors, and investments in certain areas such as research and development, marketing and fixed technology. Moreover, the findings show that the factors which contribute most significantly to technological flexibility (a key factor for the growth of the firm) are flexibility to demand, the level of computerization and staff training.

Practical implications

From the application of logit models it emerges that the most important variables influencing the good performance of firms are flexibility in keeping pace with technology, collaboration with competitors, and the choice of certain types of investment. Moreover, the variables which contribute most to greater flexibility are investments in human capital and in information technology, as well as website use, the technological characteristics of the product and the firm's flexibility in following the demand trend.

Originality/value

In this study logit models are analysed from both a theoretical and applied point of view.

Details

Managerial Finance, vol. 32 no. 8
Type: Research Article
ISSN: 0307-4358

Article
Publication date: 22 February 2013

Samir K. Srivastava and Avishek Ray

Abstract

Purpose

The purpose of this paper is to benchmark the solvency status of Indian general insurance firms.

Design/methodology/approach

The paper collects, compiles and analyses the key financial, operational and business data of eight Indian insurance firms. The authors first decide on initial firm‐specific economic variables and use data from the last five years, drawn from IRDA Reports and Company Annual Reports. The NAIC IRIS ratios method was used to obtain an initial risk classification, which served as a proxy for insolvency risk. Linear regression and logit techniques were thereafter applied to estimate the significant factors (in both direction and magnitude) that influence insurer solvency.
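
A rough sketch of the two-stage design described above: flag observations whose ratios fall outside IRIS-style "usual ranges" to build a risk proxy, then regress the proxy on firm variables with a logit. The ranges, ratios and variables below are hypothetical placeholders, not the actual NAIC IRIS definitions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200  # hypothetical firm-year observations (more than the paper's 8 firms)

# Hypothetical financial ratios with hypothetical "usual ranges" (lo, hi);
# a firm-year is flagged once per ratio falling outside its range.
ratios = rng.normal(loc=1.0, scale=0.4, size=(n, 3))
usual = [(0.5, 1.5), (0.7, 1.3), (0.6, 1.4)]
flags = sum((ratios[:, j] < lo) | (ratios[:, j] > hi)
            for j, (lo, hi) in enumerate(usual))

# Proxy for insolvency risk: more than one ratio out of range.
risky = (flags > 1).astype(int)

# Regress the proxy on hypothetical firm variables (e.g. market share,
# premium growth) with a logit, as in the second stage described above.
X = sm.add_constant(rng.normal(size=(n, 2)))
print(sm.Logit(risky, X).fit(disp=0).params)
```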

Findings

The results suggest that the factors that most significantly influence Indian non‐life insurers are lines of business, the firm's market share, the premium growth rate, the underwriting performance and the claims incurred. Further, the factors which have the strongest effect are market share, change in inflation rate, firm size, lines of business and claims incurred.

Research limitations/implications

The sample of Indian general insurers used is limited with regard to the time span. No holdout sample was used and the entire data set was subjected to statistical analysis. These somewhat limit the findings and implications.

Practical implications

The paper provides insurers with easy‐to‐use operational and marketing indicators to benchmark their solvency risk. It will lead to competitive goal setting for continuous improvement. Estimation of appropriate market/economic parameters can be a useful input for regulators. A few suggested indicators are new.

Originality/value

Previous studies of insurance companies have focused on developed economies (USA, Europe) or the Asian Markets (China and Japan). This paper determines a set of marketing, financial and operational variables to predict benchmark financial strength of general insurance firms in India. It incorporates qualitative inputs from practising managers and industry experts before carrying out quantitative modeling and analysis.

Details

Benchmarking: An International Journal, vol. 20 no. 1
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 6 July 2021

Peter A. Jones, Vincent Reitano, J.S. Butler and Robert Greer

Abstract

Purpose

Public management researchers commonly model dichotomous dependent variables with parametric methods despite their relatively strong assumptions about the data generating process. Without testing those assumptions and considering semiparametric alternatives, such as maximum score, estimates might be biased, or predictions might not be as accurate as possible.

Design/methodology/approach

To guide researchers, this paper provides an evaluative framework for comparing parametric estimators with semiparametric and nonparametric estimators for dichotomous dependent variables. To illustrate the framework, the article estimates the factors associated with the passage of school district bond referenda in all Texas school districts from 1998 to 2015.
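
For intuition, the maximum score estimator (Manski) picks coefficients, normalized to the unit sphere, that maximize the fraction of correctly predicted outcomes; a crude random-search sketch on simulated data (not the authors' implementation) follows.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # hypothetical covariates
y = (X @ np.array([0.2, 1.0, -0.5]) + rng.logistic(size=n) > 0).astype(int)

def score(beta):
    # Fraction of outcomes whose sign of x'beta matches y: the maximum
    # score objective is exactly the in-sample correct-prediction rate.
    return np.mean((X @ beta >= 0) == (y == 1))

# Crude maximizer: evaluate random directions on the unit sphere.
best = max((rng.normal(size=3) for _ in range(20000)),
           key=lambda b: score(b / np.linalg.norm(b)))
best = best / np.linalg.norm(best)
print(best, score(best))
```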

Findings

Estimates show that the rate of correctly predicting a bond's passage increases from 77.2 percent to 78 percent with maximum score estimation relative to a commonly used parametric alternative. While this is a small increase, it is meaningful in comparison to the random-prediction base model.

Originality/value

Future research modeling any dichotomous dependent variable can use the framework to identify the most appropriate estimator and relevant statistical programs.

Details

International Journal of Public Sector Management, vol. 34 no. 6
Type: Research Article
ISSN: 0951-3558

Article
Publication date: 12 September 2008

Nerijus Maciulis

Abstract

Purpose

The purpose of this paper is to propose predictive models of speculative revaluation attacks, which would facilitate currency risk hedging in emerging and developed countries.

Design/methodology/approach

The purpose of this paper is achieved using the methodology of multiple triangulation. The paper combines different theoretical perspectives (three generations of speculative attack models), two sources of data (emerging countries and developed countries) and three methods (logit regression, probit regression and artificial neural networks, ANN) for the identification of leading indicators and the forecasting of speculative attacks. Combining multiple observations (data), underlying theories and methods made it possible to achieve the least biased results.
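
A sketch of the method triangulation on one simulated dataset, comparing in-sample hit rates of logit, probit and a small neural network; the paper's indicators and country data are not reproduced.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
n = 400
X = rng.normal(size=(n, 3))  # hypothetical leading indicators
y = (X[:, 0] - X[:, 2] + rng.normal(size=n) > 0).astype(int)
Xc = sm.add_constant(X)

# Same binary target, three estimators: logit, probit and a small ANN.
logit = sm.Logit(y, Xc).fit(disp=0)
probit = sm.Probit(y, Xc).fit(disp=0)
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)

for name, p in [("logit", logit.predict(Xc)),
                ("probit", probit.predict(Xc)),
                ("ANN", ann.predict_proba(X)[:, 1])]:
    print(name, np.mean((p > 0.5) == y))  # in-sample hit rate
```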

Findings

A list of leading indicators of speculative revaluation attacks was generated based on previous research and the three generations of speculative attack models. Qualitative and quantitative differences between speculative revaluation attacks in emerging and developed countries were identified. A decision matrix for currency risk hedging in the context of speculative devaluation and revaluation attacks was proposed.

Research limitations/implications

Although the sample of this research includes a wide range of countries (65 in total), their separation into developed and emerging countries is somewhat arbitrary (in the course of 35 years, some countries have changed status from emerging to developed). The initial list of leading indicators is limited and includes mostly economic variables. It could be improved by encompassing political variables, credit ratings, and consumer and business confidence indices.

Practical implications

The developed predictive models of speculative revaluation attacks may significantly reduce an important element of risk – uncertainty – and, consequently, the cost of financial hedging.

Originality/value

This paper is one of the first public attempts to apply the alternative methodology of ANN to forecasting speculative attacks. The results showed that the latter method is more accurate than probit and logit regressions. Also, to the author's best knowledge, this is the first public attempt to separately analyse the phenomenon of speculative revaluation attacks.

Details

Baltic Journal of Management, vol. 3 no. 3
Type: Research Article
ISSN: 1746-5265

Article
Publication date: 6 May 2014

Esam-Aldin M. Algebaly, Yusnidah Ibrahim and Nurwati A. Ahmad-Zaluki

Abstract

Purpose

The purpose of this paper is to examine the determinants of involuntary delisting rate for the Egyptian initial public offerings (IPOs) issued over the period 1992-2009.

Design/methodology/approach

The study relies on a definition of survival time that takes into account the date when the new Egyptian listing rules were enforced, tracking the delisting status of each IPO firm over five survival years. Binary logit regression analysis is used to identify the determinants. The total sample is divided into two subsamples: the first covers the period from 1992 to 2004 and is used to estimate the logit equations; these are then used to predict the delisting status of firms in the second subsample, which covers the period from 2005 to 2009.

Findings

The probability of involuntary delisting decreases significantly with increases in firm size, institutional ownership, assets growth rate, operating efficiency, offering size, initial returns and insider ownership. However, it increases significantly in IPO firms with high financial leverage. Based on the estimated logit regression equations, the statuses of the six firms included in the second subsample are correctly predicted.

Practical implications

The results provide several implications for investors, issuing firms and setters of listing rules.

Originality/value

This study uses new variables, such as firm type, institutional ownership and listing variables. In addition, several theories are tested and supported.

Details

Review of Accounting and Finance, vol. 13 no. 2
Type: Research Article
ISSN: 1475-7702

Article
Publication date: 11 February 2019

Stephen Amponsah, Zangina Isshaq and Daniel Agyapong

Abstract

Purpose

The purpose of this study is to examine tax stamp evasion at Twifu Atti-Morkwa and Hemang Lower Denkyira districts in the central region of Ghana.

Design/methodology/approach

A cross-sectional survey design was adopted to sample 305 micro-taxpayers through a multi-stage sampling technique. Primary data were collected from the micro-taxpayers using structured interviews. Binary and multinomial logit regression models were used to regress tax stamp evasion on economic and non-economic factors.

Findings

The study found that the likelihood of micro-taxpayers evading the tax stamp is predicted by age, the application of sanctions, guilt feelings, transportation cost to the tax office and the rate of tax audit. Thus, the study found partial support for expected utility, planned behaviour and attribution theories in explaining the tax evasion behaviour of micro-taxpayers.

Practical implications

There are several measures for addressing the tax evasion behaviour of micro-taxpayers. Evasion behaviour can be deterred by enforcement strategies such as the application of sanctions and regular tax audits, the establishment of more tax offices in the districts and the writing of normative messages on the faces of tax stamp stickers.

Originality/value

This study helps explain the tax evasion behaviour of micro-taxpayers in a developing economy like Ghana using a special type of tax design meant to capture such taxpayers in the tax bracket. To the best of our knowledge, the study is unique in terms of its means of measuring tax evasion and the methodologies used.

Details

International Journal of Law and Management, vol. 61 no. 1
Type: Research Article
ISSN: 1754-243X
