Search results

1 – 10 of 838
Book part
Publication date: 18 July 2022

Yakub Kayode Saheed, Usman Ahmad Baba and Mustafa Ayobami Raji

Abstract

Purpose: This chapter aims to examine machine learning (ML) models for predicting credit card fraud (CCF).

Need for the study: With the advance of technology, the world increasingly relies on credit cards rather than cash in daily life, creating a slew of new opportunities for fraudsters to abuse these cards. As of December 2020, global card losses reached $28.65 billion, up 2.9% from $27.85 billion in 2018, according to the Nilson Report (2019). To ensure the safety of credit card users, card issuers should provide a service that protects customers from potential risks. CCF has become a severe threat as internet shopping has grown, so studies on automatic and real-time fraud detection are needed. The most recent ones employ a variety of ML algorithms and techniques to construct well-fitting models for detecting fraudulent transactions. Because credit card data are large and high-dimensional, feature selection (FS) is critical for improving classification accuracy and fraud detection.

Methodology/design/approach: The objective of this chapter is to construct a new model for credit card fraud detection (CCFD) based on principal component analysis (PCA) for FS, with supervised ML techniques, namely K-nearest neighbour (KNN), ridge classifier, gradient boosting, quadratic discriminant analysis, AdaBoost, and random forest, used to classify transactions as fraudulent or legitimate. Compared with earlier experiments, the proposed approach demonstrates a high capacity for detecting fraudulent transactions; its robustness comes from using PCA to identify the most useful predictive features. The experimental analysis was performed on the German credit card and Taiwan credit card data sets (a minimal sketch of the pipeline appears after this abstract).

Findings: The experimental findings revealed that KNN was the best performing model on the German data set, with an accuracy of 96.29%, recall of 100%, and precision of 96.29%. The ridge classifier was the best performing model on the Taiwan credit data, with an accuracy of 81.75%, recall of 34.89%, and precision of 66.61%.

Practical implications: The models' weaker performance on the Taiwan data reflects the imbalance of that credit card data set. A comparison of our proposed models with state-of-the-art credit card ML models showed that our results are competitive.
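As a purely illustrative companion to the methodology above, the following sketch wires PCA-based dimensionality reduction into two of the listed classifiers with scikit-learn. The synthetic data, number of components, and hyperparameters are assumptions, not the chapter's actual setup.

```python
# A minimal sketch, on synthetic data, of the PCA + classifier pipeline the
# chapter describes. The data set, 10 components, and k = 5 neighbours are
# illustrative assumptions, not the chapter's actual configuration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))              # stand-in transaction features
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=1000) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    # Scale, reduce dimension with PCA, then classify fraud vs. legitimate.
    model = Pipeline([("scale", StandardScaler()),
                      ("pca", PCA(n_components=10)),
                      ("clf", clf)]).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred, zero_division=0):.3f}",
          f"rec={recall_score(y_te, pred):.3f}")
```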

Details

Prioritization of Failure Modes in Manufacturing Processes
Type: Book
ISBN: 978-1-83982-142-4

Book part
Publication date: 7 December 2016

Arch G. Woodside

Abstract

Chapter 2 describes how the behavioral science research methods that management and marketing scholars apply in studying processes involving decisions and organizational outcomes relate to three principal research objectives: fulfilling generality of findings, achieving accuracy of process actions and outcomes, and capturing complexity of nuances and conditions. The chapter's unique contribution is in advocating and describing the possibility of replacing Thorngate's (1976) “postulate of commensurate complexity” (it is impossible for a theory of social behavior to be simultaneously general, accurate, and simple, so organizational theorists inevitably have to make tradeoffs in their theory development) with a new postulate of disproportionate achievement. This new postulate advocates the building and testing of useful process models that achieve all three principal research objectives. Rather than assuming that a researcher must make tradeoffs that permit achieving any two, but not all three, principal research objectives, as Weick's (1979) clock analogy implies, this chapter advocates a property-space view of research objectives and research methods (a three-dimensional box rather than a clock). Tradeoffs need not be made; having your cake and eating it too is possible. The chapter includes a brief review of the principal criticisms that case study researchers often express of fixed-point surveys of respondents. Likewise, it reviews the principal criticisms that researchers who favor fixed-point surveys express of case study research.

Details

Case Study Research
Type: Book
ISBN: 978-1-78560-461-4

Book part
Publication date: 1 December 2016

Roman Liesenfeld, Jean-François Richard and Jan Vogler

Abstract

We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
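To make the idea concrete, here is a toy likelihood evaluation by importance sampling for a latent Gaussian model with probit-type responses. It uses the Gaussian prior as the proposal on a small dense example; the chapter's EIS algorithm instead constructs an optimized proposal and exploits the sparsity of the spatial precision matrix, neither of which is reproduced here.

```python
# Toy importance-sampling estimate of log p(y) for a latent Gaussian model
# with probit-type responses, using the prior as the proposal. The chapter's
# EIS algorithm builds an optimized proposal and exploits sparse precision
# matrices; this small dense example only conveys the basic idea.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 25                                             # tiny latent process
# Simple tridiagonal precision matrix (sparse in realistic spatial settings).
Q = 2.0 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
L = np.linalg.cholesky(np.linalg.inv(Q))           # real code factors Q sparsely

z_true = L @ rng.normal(size=n)                    # latent Gaussian field
y = (z_true + rng.normal(size=n) > 0).astype(int)  # binary (probit) responses

def loglik_is(y, L, n_draws=20_000):
    """log p(y) = log E_z[ prod_i Phi(z_i)^y_i (1 - Phi(z_i))^(1 - y_i) ]."""
    z = L @ rng.normal(size=(len(y), n_draws))     # draws from the prior
    p = norm.cdf(z)
    logw = (y[:, None] * np.log(p) + (1 - y)[:, None] * np.log1p(-p)).sum(axis=0)
    m = logw.max()
    return m + np.log(np.exp(logw - m).mean())     # stable log-mean-exp

print("importance-sampling log-likelihood:", loglik_is(y, L))
```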

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Book part
Publication date: 19 November 2014

Benjamin J. Gillen, Matthew Shum and Hyungsik Roger Moon

Abstract

Structural models of demand founded on the classic work of Berry, Levinsohn, and Pakes (1995) link variation in aggregate market shares for a product to the influence of product attributes on heterogeneous consumer tastes. We consider implementing these models in settings with complicated products where consumer preferences for product attributes are sparse, that is, where only a small subset of a high-dimensional vector of product characteristics influences consumer tastes. We propose a multistep estimator that performs uniform inference efficiently. Our estimator employs a penalized pre-estimation model specification stage to consistently estimate the nonlinear features of the BLP model. We then perform selection via a Triple-LASSO for explanatory controls, treatment selection controls, and instrument selection. After selecting variables, we use an unpenalized GMM estimator for inference. Monte Carlo simulations verify the performance of these estimators.
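The select-then-refit logic at the core of the multistep estimator can be sketched in a few lines: a LASSO stage picks a sparse support, and an unpenalized second stage is estimated on the selected variables. This toy substitutes OLS for the chapter's GMM stage and ignores the BLP nonlinearities; all data and dimensions are assumptions.

```python
# Toy version of the select-then-refit idea: LASSO chooses a sparse support,
# then an unpenalized regression is estimated on the selected variables.
# OLS stands in for the chapter's GMM stage; data and dimensions are made up.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(2)
n, p = 500, 200                                   # many candidate controls
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [1.5, -2.0, 1.0, 0.5, -1.0]            # sparse true coefficients
y = X @ beta + rng.normal(size=n)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)   # stage 1: penalized selection
selected = np.flatnonzero(lasso.coef_)
print("selected columns:", selected)

ols = LinearRegression().fit(X[:, selected], y)   # stage 2: unpenalized refit
print("post-selection coefficients:", np.round(ols.coef_, 3))
```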

Details

Machine Learning and Artificial Intelligence in Marketing and Sales
Type: Book
ISBN: 978-1-80043-881-1

Book part
Publication date: 24 March 2006

Valeriy V. Gavrishchaka

Abstract

The increasing availability of financial data has opened new opportunities for quantitative modeling. It has also exposed limitations of the existing frameworks, such as the low accuracy of simplified analytical models and the insufficient interpretability and stability of adaptive data-driven algorithms. I make the case that boosting (a novel ensemble-learning technique) can serve as a simple and robust framework for combining the best features of the analytical and data-driven models. Boosting-based frameworks for typical financial and econometric applications are outlined. The implementation of a standard boosting procedure is illustrated in the context of symbolic volatility forecasting for the IBM stock time series. It is shown that the boosted collection of generalized autoregressive conditional heteroskedasticity (GARCH)-type models is systematically more accurate than both the best single model in the collection and the widely used GARCH(1,1) model.
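For readers unfamiliar with the boosting procedure itself, the sketch below runs the standard discrete AdaBoost weight-update loop on a symbolic (binary) volatility label. The weak learners here are decision stumps on lagged absolute returns, purely for self-containment; in the chapter they are GARCH-type volatility models, and the return series below is simulated.

```python
# Discrete AdaBoost on a symbolic (binary) volatility label. Decision stumps
# on lagged absolute returns stand in for the chapter's GARCH-type weak
# learners, and the return series is simulated; only the boosting loop
# itself follows the standard procedure.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
T = 2000
r = 0.01 * rng.standard_t(df=5, size=T)            # fat-tailed returns
vol = np.abs(r)
X = np.column_stack([vol[i:T - 5 + i] for i in range(5)])  # 5 lagged |r|
y = np.where(vol[5:] > np.median(vol), 1, -1)      # high/low volatility label

w = np.full(len(y), 1.0 / len(y))                  # boosting sample weights
learners, alphas = [], []
for _ in range(25):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum()                       # weighted training error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
    w *= np.exp(-alpha * y * pred)                 # upweight the mistakes
    w /= w.sum()
    learners.append(stump)
    alphas.append(alpha)

F = sum(a * m.predict(X) for a, m in zip(alphas, learners))
print("boosted in-sample accuracy:", (np.sign(F) == y).mean())
```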

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-1-84950-388-4

Book part
Publication date: 13 December 2013

Refet S. Gürkaynak, Burçin Kısacıkoğlu and Barbara Rossi

Abstract

Recently, it has been suggested that macroeconomic forecasts from estimated dynamic stochastic general equilibrium (DSGE) models tend to be more accurate out-of-sample than random walk forecasts or Bayesian vector autoregression (VAR) forecasts. Del Negro and Schorfheide (2013) in particular suggest that the DSGE model forecast should become the benchmark for forecasting horse-races. We compare the real-time forecasting accuracy of the Smets and Wouters (2007) DSGE model with that of several reduced-form time series models. We first demonstrate that none of the forecasting models is efficient. Our second finding is that there is no single best forecasting method. For example, simple AR models are typically most accurate at short horizons and DSGE models at long horizons when forecasting output growth, while for inflation forecasts the results are reversed. Moreover, the relative accuracy of all models tends to evolve over time. Third, we show that there is no support for the common practice of using large-scale Bayesian VAR models as the forecast benchmark when evaluating DSGE models. Indeed, low-dimensional unrestricted AR and VAR models may forecast more accurately.
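A stripped-down version of such a forecasting horse-race, comparing recursive out-of-sample RMSEs of an AR(1) against a random-walk (no-change) benchmark on simulated data, might look as follows; the series, expanding window, and one-step horizon are illustrative assumptions only.

```python
# Recursive out-of-sample comparison of an AR(1) against a random-walk
# (no-change) forecast on a simulated persistent series. The data, expanding
# window, and one-step horizon are illustrative assumptions.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)
T = 300
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + rng.normal()           # persistent AR(1) data

errs_ar, errs_rw = [], []
for t in range(150, T):
    fit = AutoReg(y[:t], lags=1).fit()             # re-estimate each period
    errs_ar.append(y[t] - fit.forecast(1)[0])      # one-step AR forecast error
    errs_rw.append(y[t] - y[t - 1])                # no-change forecast error

print("AR(1) RMSE:", np.sqrt(np.mean(np.square(errs_ar))))
print("RW    RMSE:", np.sqrt(np.mean(np.square(errs_rw))))
```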

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Book part
Publication date: 14 July 2006

David Shelby Harrison and Larry N. Killough

Abstract

Activity-based costing (ABC) is presented in accounting textbooks as a costing system that can be used to make valuable managerial decisions. Little experimental or empirical evidence, however, has demonstrated the benefits of ABC under controlled conditions. Similarly, although case studies and business surveys often comment on business environments that appear to favor ABC methods, experimental studies of the actual behavioral issues affecting ABC's usage are limited.

This study used an interactive computer simulation, under controlled laboratory conditions, to test the decision usefulness of ABC information. The effects of presentation format (theory of cognitive fit and decision framing), decision commitment (cognitive dissonance), and their interactions were also examined. ABC information yielded better profitability decisions while requiring no additional decision time. Graphic presentations required less decision time; however, presentation formats did not significantly affect decision quality (simulation profits). Decision commitment beneficially affected profitability decisions, again with no additional time, and was especially helpful in non-ABC decision environments.

Details

Advances in Management Accounting
Type: Book
ISBN: 978-1-84950-447-8

Book part
Publication date: 18 January 2022

Andreas Pick and Matthijs Carpay

Abstract

This chapter investigates the performance of different dimension reduction approaches for large vector autoregressions in multi-step ahead forecasts. The authors consider factor-augmented VAR models using principal components and partial least squares, random subset regression, random projection, random compression, and estimation via LASSO and Bayesian VAR. The authors compare the accuracy of iterated and direct multi-step point and density forecasts. The comparison is based on macroeconomic and financial variables from the FRED-MD database. Our findings suggest that random subspace methods and LASSO estimation deliver the most precise forecasts.
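The compression step common to several of these approaches can be illustrated by comparing principal components against a Gaussian random projection ahead of a direct one-step forecast regression. The simulated panel, dimensions, and number of components below are assumptions and do not reproduce the chapter's FRED-MD exercise.

```python
# Toy comparison of two compression schemes ahead of a direct one-step
# forecast regression: principal components versus a Gaussian random
# projection. The simulated panel and dimensions are assumptions and do
# not reproduce the chapter's FRED-MD exercise.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(5)
T, N, k = 240, 120, 8                              # 120 predictors, 8 factors
X = rng.normal(size=(T, N))
y = np.zeros(T)
y[1:] = 0.2 * X[:-1, :10].sum(axis=1) + rng.normal(size=T - 1)

Xf, yf = X[:-1], y[1:]                             # predict y(t+1) from X(t)
X_tr, y_tr, X_te, y_te = Xf[:-60], yf[:-60], Xf[-60:], yf[-60:]

for name, reducer in [("PCA factors", PCA(n_components=k)),
                      ("Random projection",
                       GaussianRandomProjection(n_components=k, random_state=0))]:
    Z_tr = reducer.fit_transform(X_tr)             # compress the predictors
    Z_te = reducer.transform(X_te)
    pred = LinearRegression().fit(Z_tr, y_tr).predict(Z_te)
    print(f"{name}: out-of-sample RMSE = {np.sqrt(np.mean((y_te - pred) ** 2)):.3f}")
```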

Details

Essays in Honor of M. Hashem Pesaran: Prediction and Macro Modeling
Type: Book
ISBN: 978-1-80262-062-7
