Search results

1 – 10 of over 10,000
Article
Publication date: 2 December 2016

Taylor Boyd, Grace Docken and John Ruggiero

Downloads: 1880

Abstract

Purpose

The purpose of this paper is to improve the estimation of the production frontier in cases where outliers exist. We focus on the case when outliers appear above the true frontier due to measurement error.

Design/methodology/approach

The authors use stochastic data envelopment analysis (SDEA) to allow observed points above the frontier. They supplement SDEA with assumptions on efficiency and show that the true frontier can be derived in the presence of outliers.

Findings

This paper finds that the authors’ maximum likelihood approach outperforms super-efficiency measures. Using simulations, this paper shows that SDEA is a useful model for outlier detection.

Originality/value

The model developed in this paper is original; the authors add distributional assumptions to derive the optimal quantile with SDEA to remove outliers. The authors believe the paper will be widely useful because real-world data are often subject to outliers.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599

Article
Publication date: 13 October 2017

Ümit Erol

Abstract

Purpose

The purpose of this paper is to show that major reversals of an index (specifically BIST-30 index) can be detected uniquely on the date of reversal by checking the extreme outliers in the rate of change series using daily closing prices.

Design/methodology/approach

The extreme outliers are determined by checking whether either the rate of change series or the volatility of the rate of change series deviates by more than two standard deviations on the date of reversal. Furthermore, wavelet analysis is also utilized for this purpose by checking the extreme outlier characteristics of the A1 (approximation level 1) and D3 (detail level 3) wavelet components.
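The closing-price screen described above can be sketched in Python; the two-standard-deviation threshold follows the abstract, while the simple-return definition and the toy price series are illustrative assumptions:

```python
import numpy as np

def extreme_outlier_days(close, k=2.0):
    """Flag days whose rate of change deviates from the mean
    by more than k standard deviations."""
    roc = np.diff(close) / close[:-1]          # daily rate of change
    z = (roc - roc.mean()) / roc.std(ddof=1)   # standardized series
    return np.where(np.abs(z) > k)[0] + 1      # day indices into `close`

prices = np.array([100, 101, 100, 102, 101, 100, 90, 91, 90, 89])
print(extreme_outlier_days(prices))  # flags the large one-day drop
```

An analogous screen on the volatility of the rate-of-change series, or on the A1 and D3 wavelet components, would follow the same pattern.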

Findings

The paper investigates ten major reversals of the BIST-30 index over a five-year period. It conclusively shows that all of these major reversals are characterized by the extreme outliers described above. The paper also checks whether these major reversals are unique in the sense of being observed only on the date of reversal, not before. The empirical results confirm this uniqueness. The paper also demonstrates empirically that extreme outliers are associated only with major reversals, not with minor ones.

Practical implications

The results are important for fund managers, for whom the timely identification of the initial phase of a major bullish or bearish trend is crucial. Such timely identification of major reversals is also important for hedging applications, since a major issue in the practical implementation of stock index futures as a hedging instrument is the correct timing of derivative positions.

Originality/value

To the best of the author's knowledge, this is the first study dealing with the issue of major reversal identification. This is evidently so for the BIST-30 index, and the use of extreme outliers for this purpose is also a novelty: neither rate-of-change extremity nor wavelet decomposition had previously been used for this purpose in the international literature.

Details

Journal of Capital Markets Studies, vol. 1 no. 1
Type: Research Article
ISSN: 2514-4774

Article
Publication date: 2 May 2017

Kannan S. and Somasundaram K.

Abstract

Purpose

Due to the large volume of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed auto-regressive (AR) outlier-based MLD (AROMLD) is to reduce the time required to handle large volumes of non-uniform transactions.

Design/methodology/approach

The AR-based outlier design produces consistent, asymptotically distributed results that enhance demand-forecasting ability. In addition, the inter-quartile range (IQR) formulations proposed in this paper support detailed analysis of time-series data pairs.
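A rough sketch of combining an AR fit with IQR fences for outlier detection; the AR(1) least-squares fit, the Tukey fence factor k = 1.5 and the toy transaction series are assumptions for illustration, not the AROMLD design itself:

```python
import numpy as np

def ar1_iqr_outliers(x, k=1.5):
    """Fit an AR(1) model by least squares, then flag observations whose
    residuals fall outside the IQR fences [Q1 - k*IQR, Q3 + k*IQR]."""
    x = np.asarray(x, dtype=float)
    y, lag = x[1:], x[:-1]
    b, a = np.polyfit(lag, y, 1)     # y_t ≈ a + b * y_{t-1}
    resid = y - (a + b * lag)
    q1, q3 = np.percentile(resid, [25, 75])
    iqr = q3 - q1
    mask = (resid < q1 - k * iqr) | (resid > q3 + k * iqr)
    return np.where(mask)[0] + 1     # positions in the original series

txns = np.array([10, 11, 10, 12, 11, 10, 11, 60, 11, 10, 12, 11], float)
print(ar1_iqr_outliers(txns))        # the spike at position 7 is flagged
```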

Findings

High dimensionality and the difficulty of characterizing the relationships and differences between data pairs make time-series mining a complex task. The presence of domain invariance in time-series mining motivates the regressive formulation for outlier detection. Deep analysis of the time-varying process and the demands of forecasting combine the AR and IQR formulations into an effective outlier detector.

Research limitations/implications

The present research focuses on detecting outliers in previous financial transactions using the AR model. Predicting the possibility of an outlier in future transactions remains a major open issue.

Originality/value

Without prior segmentation, ML detection suffers from high dimensionality, and the absence of a boundary isolating normal from suspicious transactions imposes further limitations. The regression formulation overcomes the lack of deep analysis and the high time consumption.

Details

Journal of Money Laundering Control, vol. 20 no. 2
Type: Research Article
ISSN: 1368-5201

Article
Publication date: 1 March 1996

Robert A. Connor

Abstract

There has been increased interest in expanding the Medicare Prospective Payment System (PPS) to non-Medicare payers to provide incentives for hospitals to contain costs and to concentrate on those Diagnosis-Related Groups (DRGs) which they can provide efficiently. However, this should not force low-volume, low-cost payers to subsidize high-cost payers, and should not penalize low Length-of-Stay (LOS), low-cost hospitals. This article proposes a new method, proportional pricing, to extend PPS incentives to non-Medicare payers with equity for payers and hospitals. It would also allow all-payer rate setting and premium price competition among payers to coexist.

Details

Journal of Public Budgeting, Accounting & Financial Management, vol. 10 no. 3
Type: Research Article
ISSN: 1096-3367

Article
Publication date: 14 March 2016

Gebeyehu Belay Gebremeskel, Chai Yi, Zhongshi He and Dawit Haile

Abstract

Purpose

Among the growing number of data mining (DM) techniques, outlier detection has gained importance in many applications and has attracted much attention in recent times. In the past, outlier detection research in safety care could be viewed as searching for needles in a haystack. However, outliers are not always erroneous. Therefore, the purpose of this paper is to investigate the role of outliers in healthcare services in general and in patient safety care in particular.

Design/methodology/approach

The paper combines DM techniques (clustering and nearest neighbor) for outlier detection, providing a clear understanding of, and meaningful insights into, data behavior for healthcare safety. The knowledge made explicit is vitally important to a proper clinical decision-making process. The method is important semantically, and the novel treatment of patients' events and situations shows that they play a significant role in patient care safety and medication.

Findings

The paper discusses a novel, integrated methodology that can be applied to the analysis of different biological data. Integrated DM techniques are discussed to optimize performance in the field of health and medical science. The integrated outlier-detection method can be extended to search for valuable information and implicit knowledge based on selected patient factors. On this basis, outliers are detected as clusters and point events, and novel ideas are proposed to empower clinical services with customer satisfaction in mind. The work can also serve as a baseline for further healthcare strategy development and research.

Research limitations/implications

This paper mainly focusses on outlier detection. Outlier isolation, which is essential for investigating why an outlier occurred and for communicating how to mitigate it, is not addressed. The research can therefore be extended to cover the hierarchy of patient problems.

Originality/value

DM is a dynamic and successful gateway to discovering useful knowledge for enhancing healthcare performance and patient safety. Outlier detection in clinical data is a basic task in realizing a healthcare strategy. In this paper, therefore, the authors focussed on combined DM techniques for deep analysis of clinical data, supporting an optimal level of clinical decision making. Proper clinical decisions depend on attribute selection, which identifies the influential factors or parameters of healthcare services. Integrated clustering and nearest-neighbor techniques therefore yield a more acceptable search for outliers in such complex data, which could be fundamental to further situational analysis of healthcare and patient safety.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 9 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 12 June 2017

Richard Hauser and John H. Thornton Jr

Downloads: 2858

Abstract

Purpose

The purpose of this paper is to investigate an empirical solution to dividend policy relevance.

Design/methodology/approach

The paper combines measures of firm maturity in a logit regression to define a comprehensive life-cycle model of the likelihood of dividend payment. The valuation of firms that conform to the model is compared with the valuation of firms that do not fit the model. Valuation is measured by the market-to-book (M/B) ratio.
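A minimal sketch of fitting such a life-cycle logit on synthetic data; the maturity proxies (retained earnings over total assets and firm age), the coefficients, and the 0.5 cutoff for classifying a firm as "fitting" the model are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firm-level maturity proxies (assumed for illustration):
# retained earnings / total assets, and firm age in years.
n = 200
re_ta = rng.normal(0.3, 0.2, n)
age = rng.normal(20, 10, n)
X = np.column_stack([np.ones(n), re_ta, age])
true_beta = np.array([-3.0, 4.0, 0.1])
pays = rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))  # dividend payer?

# Fit the logit by Newton-Raphson (maximum likelihood).
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (pays - p)                   # score
    hess = (X * (p * (1 - p))[:, None]).T @ X  # information matrix
    beta += np.linalg.solve(hess, grad)

# A firm "fits" the life-cycle model when its predicted payer status
# matches its actual status.
p = 1 / (1 + np.exp(-X @ beta))
fits = (p > 0.5) == pays
print(f"{fits.mean():.0%} of firms conform to the fitted model")
```

Comparing median M/B ratios of the `fits` and `~fits` groups would then mirror the valuation comparison described above.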

Findings

The analysis indicates that dividend policy is related to firm value. Dividend-paying firms that fit the life-cycle model have a higher median valuation than dividend-paying firms that do not fit the life-cycle model. Similarly, non-paying firms that fit the life-cycle model have a higher median valuation than non-paying firms that do not fit the life-cycle model. The results also provide evidence that the disappearing dividend phenomenon is related to shifts in valuation.

Research limitations/implications

This paper focuses on the payment of dividends. Stock repurchases are not considered.

Practical implications

The results indicate that dividend policy is related to firm value. Approximately 15 percent of sample observations have a dividend policy counter to the life-cycle model.

Originality/value

This paper shows that the relation between a firm’s M/B ratio and dividend policy changes over the firm’s life-cycle. It also shows that the catering motive for dividends is strongest among firms that are outliers in the life-cycle model and firms of intermediate maturity.

Details

Managerial Finance, vol. 43 no. 6
Type: Research Article
ISSN: 0307-4358

Article
Publication date: 1 February 2002

Martin Skitmore and H.P. Lo

Abstract

Construction contract auctions are characterized by (1) a heavy emphasis on the lowest bid, as it is that which usually determines the winner of the auction, (2) anticipated high outliers because of the presence of non-competitive bids, (3) very small samples, and (4) uncertainty about the appropriate underlying density function model of the bids. This paper describes a method for simultaneously identifying outliers and the density function by systematically identifying and removing candidate (high) outliers and examining the composite goodness-of-fit of the resulting reduced samples with censored normal and lognormal density functions. The special importance of the lowest bid value is exploited in the goodness-of-fit test by treating the lowest bid recorded for each auction as a lowest order statistic. Six different identification strategies are tested empirically by application, both independently and in pooled form, to eight sets of auction data gathered from around the world. The results indicate that the most conservative identification strategy is a multiple of the auction standard deviation assuming a lognormal composite density. Surprisingly, the normal density alternative was the second most conservative solution. The method is also used to evaluate some methods used in practice and to identify potential improvements.
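One identification strategy of the kind tested, flagging bids more than a multiple of the standard deviation above the mean of the log-bids under a lognormal assumption, can be sketched as follows; the factor k = 2 and the toy auction are assumptions, and the full method additionally re-checks the censored goodness-of-fit after each removal:

```python
import numpy as np

def high_bid_outliers(bids, k=2.0):
    """Flag candidate high outliers: bids more than k standard deviations
    above the mean of the log-bids (lognormal assumption)."""
    logb = np.log(np.asarray(bids, dtype=float))
    return np.where(logb > logb.mean() + k * logb.std(ddof=1))[0]

auction = [100.0, 104.0, 98.0, 102.0, 101.0, 103.0, 190.0]  # one cover bid
print(high_bid_outliers(auction))  # flags the non-competitive high bid
```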

Details

Engineering, Construction and Architectural Management, vol. 9 no. 2
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 13 July 2015

K. Stephen Haggard, Jeffrey Scott Jones and H. Douglas Witte

Abstract

Purpose

The purpose of this paper is to determine the extent to which outliers have persisted in augmenting the Halloween effect over time and to offer an econometric test of seasonality in return skewness that might provide a partial explanation for the Halloween effect.

Design/methodology/approach

The authors split the Morgan Stanley Capital International data for 37 countries into two subperiods and, using median regression and influence vectors, examine these periods for a possible change in the interplay between outliers and the Halloween effect. The authors perform a statistical assessment of whether outliers are a significant contributor to the overall Halloween effect using a bootstrap test of seasonal differences in return skewness.
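A bootstrap test of a difference in skewness between two seasonal return samples can be sketched as follows; resampling from the pooled sample under the null of equal skewness, and the synthetic return series, are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def skew(x):
    """Sample skewness (third standardized moment)."""
    d = np.asarray(x, float) - np.mean(x)
    return (d**3).mean() / (d**2).mean() ** 1.5

def bootstrap_skew_diff(summer, winter, n_boot=2000, seed=0):
    """Two-sided bootstrap p-value for a difference in skewness,
    resampling both seasons from the pooled sample (the null)."""
    rng = np.random.default_rng(seed)
    observed = skew(summer) - skew(winter)
    pooled = np.concatenate([summer, winter])
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        s = rng.choice(pooled, size=len(summer), replace=True)
        w = rng.choice(pooled, size=len(winter), replace=True)
        diffs[i] = skew(s) - skew(w)
    return observed, (np.abs(diffs) >= abs(observed)).mean()

rng = np.random.default_rng(1)
summer = -rng.lognormal(0, 0.5, 120)  # negatively skewed May-October returns
winter = rng.normal(0, 0.5, 120)      # symmetric November-April returns
obs, p = bootstrap_skew_diff(summer, winter)
print(f"skew difference {obs:.2f}, bootstrap p = {p:.3f}")
```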

Findings

Large returns (positive and negative) persist in being generally favorable to the Halloween effect in most countries. The authors find seasonality in return skewness to be statistically significant in many countries. Returns over the May through October timeframe are negatively skewed relative to returns over the November through April period.

Originality/value

This paper offers the first statistical test of seasonality in return skewness in the context of the Halloween effect. The authors show the Halloween effect to be a more complex phenomenon than the simple seasonality in mean returns documented in prior research.

Details

Managerial Finance, vol. 41 no. 7
Type: Research Article
ISSN: 0307-4358

Article
Publication date: 3 April 2017

Ahmad Hakimi, Amirhossein Amiri and Reza Kamranrad

Downloads: 1669

Abstract

Purpose

The purpose of this paper is to develop some robust approaches to estimate the logistic regression profile parameters in order to decrease the effects of outliers on the performance of the T2 control chart. In addition, the performance of the non-robust and the proposed robust control charts is evaluated in Phase II.

Design/methodology/approach

In this paper, some robust approaches, including weighted maximum likelihood estimation, a redescending M-estimator and a combination of these two approaches (WRM), are used to decrease the effects of outliers on the estimation of the logistic regression parameters, as well as on the performance of the T2 control chart.
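The redescending idea can be illustrated with Tukey biweight weights, which drive the influence of gross outliers to zero; the tuning constant c = 4.685 and the MAD scale estimate are conventional choices, assumed here rather than taken from the paper:

```python
import numpy as np

def tukey_biweight_weights(resid, c=4.685):
    """Redescending M-estimator weights: observations whose scaled
    residuals exceed c receive weight 0, so gross outliers are ignored."""
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD scale
    u = resid / (c * scale)
    w = (1 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0
    return w

resid = np.array([0.1, -0.2, 0.3, -0.1, 0.2, 8.0])  # one gross outlier
w = tukey_biweight_weights(resid)
print(np.round(w, 3))  # the outlier receives weight 0
```

Such weights could then multiply each observation's contribution to the logistic likelihood, yielding a weighted estimate that is insensitive to the outlying points.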

Findings

The results of the simulation studies in both Phases I and II show that the proposed robust control charts outperform the non-robust control chart in estimating the logistic regression profile parameters and in monitoring the logistic regression profiles.

Practical implications

In many practical applications, there are outliers in processes which may affect the estimation of parameters in Phase I and, as a result, deteriorate the statistical performance of control charts in Phase II. The methods developed in this paper are effective in decreasing the effect of outliers in both Phases I and II.

Originality/value

This paper considers monitoring the logistic regression profile in Phase I in the presence of outliers. Three robust approaches are also developed to decrease the effects of outliers on parameter estimation and on the monitoring of logistic regression profiles in both Phases I and II.

Details

International Journal of Quality & Reliability Management, vol. 34 no. 4
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 15 January 2010

Danny Campbell, Stephane Hess, Riccardo Scarpa and John M. Rose

Abstract

The presence of respondents with apparently extreme sensitivities in choice data may have an important influence on model results, yet their role is rarely assessed or even explored. Irrespective of whether such outliers are due to genuine preference expressions, their presence suggests that specifications relying on preference heterogeneity may be more appropriate. In this paper, we compare the potential of discrete and continuous mixture distributions in identifying and accommodating extreme coefficient values. To test our methodology, we use five stated preference datasets (four simulated and one real). The real data were collected to estimate the existence value of rare and endangered fish species in Ireland.

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8
