Search results

21 – 30 of 238
Book part
Publication date: 22 July 2021

Chien-Hung Chang

Abstract

This chapter introduces a risk control framework for credit card fraud rather than a purely binary classifier model. An anomaly detection approach is adopted that identifies fraud events as outliers in the reconstruction error of a trained autoencoder (AE). The trained AE is accurate and robust on normal transactions and behaves heterogeneously on fraudulent activities. The cost of falsely flagging normal transactions is controlled, and the loss from undetected frauds can be evaluated, by setting thresholds at percentiles of the AE's reconstruction error on normal transactions. To align with the risk assessment of the economic and financial situation, the risk manager can adjust the threshold to meet risk control requirements. Using the 95th percentile as the threshold, the rate of wrongly flagging normal transactions is controlled at 5% and the true positive rate is 86%. With the 99th percentile threshold, the false positive rate is controlled at around 1% and the true positive rate for fraud is 83%. This false positive and true positive performance is competitive with other supervised learning algorithms.
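The percentile-thresholding step can be sketched in Python. This is not the chapter's actual autoencoder (which is trained on real transaction data); the reconstruction errors and their distributions below are simulated assumptions, purely to illustrate how the threshold controls the false positive rate:

```python
import random

def percentile(values, p):
    """Empirical percentile with linear interpolation (illustrative helper)."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

random.seed(0)
# Stand-in for reconstruction errors of a trained autoencoder:
# normal transactions reconstruct well (small error), frauds poorly.
normal_errors = [abs(random.gauss(0.10, 0.03)) for _ in range(1000)]
fraud_errors = [abs(random.gauss(0.60, 0.15)) for _ in range(50)]

# The risk manager picks the percentile to trade false alarms against misses.
threshold = percentile(normal_errors, 95)
false_positive_rate = sum(e > threshold for e in normal_errors) / len(normal_errors)
true_positive_rate = sum(e > threshold for e in fraud_errors) / len(fraud_errors)
```

By construction, thresholding at the 95th percentile of the normal errors caps the false positive rate near 5%; raising the percentile lowers false alarms at the cost of missed frauds.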

Details

Advances in Pacific Basin Business, Economics and Finance
Type: Book
ISBN: 978-1-80043-870-5

Keywords

Article
Publication date: 31 May 2022

Mark E. Lokanan

Abstract

Purpose

This paper aims to review the literature on applying visualization techniques to detect credit card fraud (CCF) and suspicious money laundering transactions.

Design/methodology/approach

In surveying the literature on visual fraud detection in these two domains, this paper reviews: the current use of visualization techniques, the variations of visual analytics used and the challenges of these techniques.

Findings

The findings reveal how visual analytics is used to detect outliers in CCF detection and identify links to criminal networks in money laundering transactions. Graph methodology and unsupervised clustering analyses are the most dominant types of visual analytics used for CCF detection. In contrast, network and graph analytics are heavily used in identifying criminal relationships in money laundering transactions.

Originality/value

Some common challenges in using visualization techniques to identify fraudulent transactions in both domains relate to data complexity and fraudsters’ ability to evade monitoring mechanisms.

Details

Journal of Money Laundering Control, vol. 26 no. 3
Type: Research Article
ISSN: 1368-5201

Keywords

Article
Publication date: 24 December 2021

Neetika Jain and Sangeeta Mittal

Abstract

Purpose

A cost-effective way to achieve fuel economy is to reinforce positive driving behaviour. Driving behaviour can be controlled if drivers can be alerted to behaviour that results in poor fuel economy. Fuel consumption must be tracked and monitored instantaneously rather than as an average over the entire trip duration. A single-step application of machine learning (ML) is not sufficient to model both the prediction of instantaneous fuel consumption and the detection of anomalous fuel economy. The study designs an ML pipeline to track and monitor instantaneous fuel economy and detect anomalies.

Design/methodology/approach

This research iteratively applies different variations of a two-step ML pipeline to a driving dataset for hatchback cars. The first step addresses the problem of accurate measurement and prediction of fuel economy using time series driving data, and the second step detects abnormal fuel economy in relation to contextual information. The long short-term memory (LSTM) autoencoder learns and uses the most salient features of the time series data to build a regression model. Contextual anomalies are detected using two approaches: a kernel quantile estimator and a one-class support vector machine. The kernel quantile estimator sets a dynamic threshold for detecting anomalous behaviour; any error beyond the threshold is classified as an anomaly. The one-class support vector machine learns the training error pattern and applies the model to test data for anomaly detection. The two-step ML pipeline is further modified by replacing the LSTM autoencoder with a gated recurrent unit autoencoder, and the performance of both models is compared. Speed recommendations and feedback are issued to the driver based on detected anomalies to control aggressive behaviour.
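The dynamic-threshold idea in the second step can be sketched as follows. This is a simplified stand-in, not the paper's method: it uses a rolling empirical quantile rather than a kernel quantile estimator, and the stream of prediction errors is simulated:

```python
import random

def rolling_quantile_flags(errors, window=50, q=0.95, warmup=10):
    """Flag errors above the q-quantile of the preceding window.
    The threshold is dynamic (recomputed per point), but a kernel
    quantile estimator would smooth it instead of using the raw
    empirical quantile used here."""
    flags = []
    for i, e in enumerate(errors):
        hist = errors[max(0, i - window):i]
        if len(hist) < warmup:           # not enough context yet
            flags.append(False)
            continue
        s = sorted(hist)
        thr = s[int(q * (len(s) - 1))]   # dynamic threshold from recent errors
        flags.append(e > thr)
    return flags

random.seed(1)
# Simulated prediction errors from the first-step regression model.
errors = [abs(random.gauss(0.05, 0.02)) for _ in range(200)]
errors[120] = 0.50                       # injected contextual anomaly
flags = rolling_quantile_flags(errors)
```

A large error relative to its recent context is flagged even though a fixed global threshold tuned to quiet periods might miss gradual drifts; that context sensitivity is the point of a dynamic threshold.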

Findings

A composite long short-term memory autoencoder was compared with a gated recurrent unit autoencoder. Both models achieve prediction accuracy in the range of 98%–100% in the first step. Recall and accuracy for anomaly detection using the kernel quantile estimator remain within 98%–100%, whereas the one-class support vector machine approach performs within the range of 99.3%–100%.

Research limitations/implications

The proposed approach does not consider socio-demographic or physiological information about drivers due to privacy concerns. However, it can be extended to correlate the driver's physiological state, such as fatigue, sleep and stress, with driving behaviour and fuel economy. The anomaly detection approach here is limited to providing feedback to the driver; it can be extended to give contextual feedback to the steering or throttle controller. In the future, a controller-based system could be combined with the anomaly detection approach to control the driver's acceleration and braking actions.

Practical implications

The suggested approach is helpful in monitoring and reinforcing fuel-economical driving behaviour among fleet drivers as per different environmental contexts. It can also be used as a training tool for improving driving efficiency for new drivers. It keeps drivers engaged positively by issuing a relevant warning for significant contextual anomalies and avoids issuing a warning for minor operational errors.

Originality/value

This paper contributes to the existing literature by providing an ML pipeline approach to track and monitor instantaneous fuel economy rather than relying on average fuel economy values. The approach is further extended to detect contextual driving behaviour anomalies and to optimise fuel economy. The main contributions of this approach are as follows: (1) a prediction model is applied to fine-grained time series driving data to predict instantaneous fuel consumption; (2) anomalous fuel economy is detected by comparing prediction error against a threshold and analysing error patterns based on contextual information.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 3 June 2019

Tran Khanh Dang, Duc Minh Chau Pham and Duc Dan Ho

Abstract

Purpose

Data crawling in e-commerce for market research often comes with the risk of poor authenticity due to modification attacks. The purpose of this paper is to propose a novel data authentication model for such systems.

Design/methodology/approach

The data modification problem requires careful examination, in which the data are re-collected and the two datasets are overlapped to verify reliability. The approach uses different anomaly detection techniques to determine which data are potentially fraudulent and should be re-collected. The paper also proposes a data selection model that weights data by importance, in addition to anomaly detection. The target is to significantly reduce the amount of data needing verification while still guaranteeing high authenticity. Empirical experiments are conducted on real-world datasets to evaluate the efficiency of the proposed scheme.
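The overlap-and-verify idea can be sketched as follows. The record keys and the price field are hypothetical, not from the paper's datasets, and unlike the paper's scheme, which re-collects only a weighted, anomaly-ranked subset, this sketch simply compares the whole overlap of the two passes:

```python
# Hypothetical crawled records keyed by product id; ids and the price
# values are illustrative assumptions.
crawled = {"p1": 9.99, "p2": 19.99, "p3": 4.50, "p4": 7.25}
recollected = {"p1": 9.99, "p2": 24.99, "p3": 4.50}  # "p2" modified in transit

def suspicious_ids(first_pass, second_pass):
    """Records present in both passes whose values disagree are fraud
    candidates; only the overlap of the two datasets can be compared."""
    return sorted(k for k in first_pass.keys() & second_pass.keys()
                  if first_pass[k] != second_pass[k])

flagged = suspicious_ids(crawled, recollected)
```

Ranking records by anomaly score and importance weight first, then re-collecting only the top-ranked ones, is what lets the paper's scheme cut verification cost while keeping authenticity high.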

Findings

The authors examine several techniques for detecting anomalies in user and product data, which achieve an accuracy of approximately 80 per cent. The integration with the weight selection model is also shown to detect more than 80 per cent of the existing fraudulent records while avoiding accidentally flagging legitimate ones, especially when the proportion of frauds is high.

Originality/value

With the rapid development of e-commerce, fraud detection on its data, as well as in web crawling systems, is a new and necessary area of research. This paper contributes a novel approach to the data authentication problem in crawling systems, which has not been studied much.

Details

International Journal of Web Information Systems, vol. 15 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 22 May 2020

Aryana Collins Jackson and Seán Lacey

Abstract

Purpose

The discrete Fourier transformation (DFT) has been proven to be a successful method for determining whether a discrete time series is seasonal and, if so, for detecting the period. This paper deals exclusively with rare data, in which instances occur periodically at a low frequency.

Design/methodology/approach

Data based on real-world situations is simulated for analysis.

Findings

Cycle number detection is done with spectral analysis, period detection is completed using DFT coefficients and signal shifts in the time domain are found using the convolution theorem. Additionally, a new method for detecting anomalies in binary, rare data is presented: the sum of distances. Using this method, expected events which have not occurred and unexpected events which have occurred can be detected at various sampling frequencies. This allows anomalies which are not considered outliers to be found.
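The period-detection step can be sketched on rare binary data. This is only an illustration under assumptions: a naive O(n^2) DFT rather than the authors' implementation, a toy pulse-train signal, and a simple missing-event check that stands in for, but is not, the paper's sum-of-distances method:

```python
import cmath

def dominant_period(series):
    """Period of the strongest non-DC frequency via a naive O(n^2) DFT;
    fine for short illustrative series."""
    n = len(series)
    mags = []
    for k in range(1, n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(series))
        mags.append(abs(coeff))
    peak = max(mags)
    # A pulse train puts equal energy in every harmonic of the fundamental,
    # so take the lowest frequency reaching the peak (within fp tolerance).
    for k, m in enumerate(mags, start=1):
        if m >= peak - 1e-6:
            return n / k

# Rare binary data: an event every 7 samples, with one expected event missing.
signal = [1 if t % 7 == 0 else 0 for t in range(84)]
signal[35] = 0                                  # the missed event
period = dominant_period(signal)
missing = [t for t in range(len(signal))
           if t % round(period) == 0 and signal[t] == 0]
```

The period survives the missing event, and checking expected slots against the recovered period surfaces the event that should have occurred but did not, which is exactly the kind of anomaly an outlier test would miss.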

Research limitations/implications

Aliasing can contribute to extra frequencies which point to extra periods in the time domain. This can be reduced or removed with techniques such as windowing. In future work, this will be explored.

Practical implications

Applications include determining seasonality and thus investigating the underlying causes of hard drive failure, power outages and other undesired events. This work will also lend itself well to finding patterns among missing desired events, such as a scheduled hard drive backup or an employee's regular login to a server.

Originality/value

This paper has shown how seasonality and anomalies are successfully detected in seasonal, discrete, rare and binary data. Previously, the DFT has only been used for non-rare data.

Details

Data Technologies and Applications, vol. 54 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 23 November 2012

Bailing Zhang, Yungang Zhang and Wenjin Lu

Abstract

Purpose

The task of internet intrusion detection is to detect anomalous network connections caused by intrusive activities. There have been many intrusion detection schemes proposed, most of which apply both normal and intrusion data to construct classifiers. However, normal data and intrusion data are often seriously imbalanced because intrusive connection data are usually difficult to collect. Internet intrusion detection can be considered as a novelty detection problem, which is the identification of new or unknown data, to which a learning system has not been exposed during training. This paper aims to address this issue.

Design/methodology/approach

In this paper, a novelty detection‐based intrusion detection system is proposed by combining the self‐organizing map (SOM) and the kernel auto‐associator (KAA) model proposed earlier by the first author. The KAA model is a generalization of auto‐associative networks by training to recall the inputs through kernel subspace. For anomaly detection, the SOM organizes the prototypes of samples while the KAA provides data description for the normal connection patterns. The hybrid SOM/KAA model can also be applied to classify different types of attacks.
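The novelty-detection idea can be sketched with a nearest-prototype distance and a threshold learned from normal data. This is a simplified stand-in: the paper's model couples SOM-trained prototypes with a kernel auto-associator, neither of which is reproduced here, and the 2-D features and prototype positions are invented for illustration:

```python
import random

def nearest_prototype_distance(x, prototypes):
    """Euclidean distance from x to its closest prototype."""
    return min(sum((a - b) ** 2 for a, b in zip(x, p)) ** 0.5
               for p in prototypes)

random.seed(2)
# Toy 2-D "normal connection" features clustered around two prototypes,
# standing in for nodes of a trained SOM.
prototypes = [(0.0, 0.0), (1.0, 1.0)]
normal = [(random.gauss(px, 0.05), random.gauss(py, 0.05))
          for px, py in prototypes for _ in range(100)]

# Data description of normal traffic: the largest distance seen in training.
threshold = max(nearest_prototype_distance(x, prototypes) for x in normal)

intrusion = (3.0, -2.0)                  # unlike anything seen in training
is_novel = nearest_prototype_distance(intrusion, prototypes) > threshold
```

Training only on normal connections sidesteps the class imbalance the paper highlights: no intrusion examples are needed to learn the boundary of normal behaviour.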

Findings

Using the KDD CUP 1999 dataset, the performance of the proposed scheme in separating normal connection patterns from intrusive connection patterns was compared with state-of-the-art novelty detection methods, showing marked improvements in terms of high intrusion detection accuracy and low false positives. Simulations on the classification of attack categories also demonstrate favourable accuracy, comparable to entries from the KDD CUP 1999 data mining competition.

Originality/value

The hybrid model of SOM and the KAA model can achieve significant results for intrusion detection.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 5 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 14 August 2017

Stephen Mayowa Famurewa, Liangwei Zhang and Matthias Asplund

Abstract

Purpose

The purpose of this paper is to present a framework for maintenance analytics that is useful for the assessment of rail condition and for maintenance decision support. The framework covers three essential maintenance aspects: diagnostic, prediction and prescription. The paper also presents principal component analysis (PCA) and local outlier factor methods for detecting anomalous rail wear occurrences using field measurement data.

Design/methodology/approach

The approach used in this paper includes a review of the concept of analytics and appropriate adaptation to railway infrastructure maintenance. The diagnostics aspect of the proposed framework is demonstrated with a case study using historical rail profile data collected between 2007 and 2016 for nine sharp curves on the heavy haul line in Sweden.

Findings

The framework presented for maintenance analytics is suitable for extracting useful information from condition data, as required for effective rail maintenance decision support. The findings of the case study include: the combination of the two statistics from the PCA model (T² and Q) can help to identify systematic and random variations in rail wear patterns that are beyond normal; the visualisation approach is a better tool for anomaly detection as it categorises wear observations into normal, suspicious and anomalous.
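The complementary roles of the two PCA statistics can be sketched in a toy 2-D setting. Here the principal direction is assumed known rather than fitted to rail profile data, so this only illustrates the distinction: T² flags observations that are extreme but still follow the normal wear pattern (systematic variation), while Q flags observations that depart from the pattern itself (random variation):

```python
import math
import random

u = (1 / math.sqrt(2), 1 / math.sqrt(2))  # assumed principal direction

def t2_and_q(x, var_score):
    """T^2: variation inside the one-component model (on-pattern);
    Q: squared residual off the model (off-pattern)."""
    score = x[0] * u[0] + x[1] * u[1]
    resid = (x[0] - score * u[0], x[1] - score * u[1])
    return score ** 2 / var_score, resid[0] ** 2 + resid[1] ** 2

random.seed(3)
# Toy "rail wear" observations scattered along u with small off-axis noise.
normal = []
for _ in range(200):
    s = random.gauss(0.0, 1.0)
    normal.append((s * u[0] + random.gauss(0.0, 0.05),
                   s * u[1] + random.gauss(0.0, 0.05)))
var_score = sum((p[0] * u[0] + p[1] * u[1]) ** 2 for p in normal) / len(normal)

t2_sys, q_sys = t2_and_q((4.0, 4.0), var_score)    # extreme but on-pattern
t2_off, q_off = t2_and_q((1.0, -1.0), var_score)   # off-pattern
```

Plotting T² against Q with control limits on each axis gives the kind of combined visualisation the paper proposes, separating normal, suspicious and anomalous wear observations.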

Practical implications

A practical implication of this paper is that the framework and the diagnostic tool can be considered an integral part of an e-maintenance solution. They can easily be adapted as an online or on-board maintenance analytics tool using data from an automated vehicle-based measurement system.

Originality/value

This research adapts the concept of analytics to railway infrastructure maintenance for enhanced decision making. It proposes a graphical method for combining and visualising different outlier statistics as a reliable anomaly detection tool.

Details

Journal of Quality in Maintenance Engineering, vol. 23 no. 3
Type: Research Article
ISSN: 1355-2511

Keywords

Abstract

Details

Rutgers Studies in Accounting Analytics: Audit Analytics in the Financial Industry
Type: Book
ISBN: 978-1-78743-086-0

Open Access
Article
Publication date: 28 April 2023

Prudence Kadebu, Robert T.R. Shoniwa, Kudakwashe Zvarevashe, Addlight Mukwazvure, Innocent Mapanga, Nyasha Fadzai Thusabantu and Tatenda Trust Gotora

Abstract

Purpose

Given how smart today’s malware authors have become through employing highly sophisticated techniques, it is only logical that methods be developed to combat the most potent threats, particularly where the malware is stealthy and makes indicators of compromise (IOC) difficult to detect. After the analysis is completed, the output can be employed to detect and then counteract the attack. The goal of this work is to propose a machine learning approach to improve malware detection by combining the strengths of both supervised and unsupervised machine learning techniques. This study is essential as malware has certainly become ubiquitous as cyber-criminals use it to attack systems in cyberspace. Malware analysis is required to reveal hidden IOC, to comprehend the attacker’s goal and the severity of the damage and to find vulnerabilities within the system.

Design/methodology/approach

This research proposes a hybrid approach for dynamic and static malware analysis that combines unsupervised and supervised machine learning algorithms, and goes on to show how malware exploiting steganography can be exposed.

Findings

The tactics used by malware developers to circumvent detection are becoming more advanced with steganography becoming a popular technique applied in obfuscation to evade mechanisms for detection. Malware analysis continues to call for continuous improvement of existing techniques. State-of-the-art approaches applying machine learning have become increasingly popular with highly promising results.

Originality/value

Cyber security researchers globally are grappling with devising innovative strategies to identify and defend against the threat of extremely sophisticated malware attacks on key infrastructure containing sensitive data. The process of detecting the presence of malware requires expertise in malware analysis. Applying intelligent methods to this process can aid practitioners in identifying malware’s behaviour and features. This is especially expedient where the malware is stealthy, hiding IOC.

Details

International Journal of Industrial Engineering and Operations Management, vol. 5 no. 2
Type: Research Article
ISSN: 2690-6090

Keywords

Article
Publication date: 10 April 2017

Raman Singh, Harish Kumar, Ravinder Kumar Singla and Ramachandran Ramkumar Ketti

Abstract

Purpose

The paper addresses various cyber threats and their effects on the internet. A review of the literature on intrusion detection systems (IDSs) as a means of mitigating internet attacks is presented, and gaps in the research are identified. The purpose of this paper is to identify the limitations of current research and to present future directions for intrusion/malware detection research.

Design/methodology/approach

The paper presents a review of the research literature on IDSs, prior to identifying research gaps and limitations and suggesting future directions.

Findings

The popularity of the internet makes it vulnerable to various cyber-attacks. Ongoing research on intrusion detection methods aims to overcome the limitations of earlier approaches to internet security. However, findings from the literature review indicate a number of limitations of existing techniques: poor accuracy, high detection time and low flexibility in detecting zero-day attacks.

Originality/value

This paper provides a review of major issues in intrusion detection approaches. On the basis of a systematic and detailed review of the literature, various research limitations are discovered. Clear and concise directions for future research are provided.

Details

Online Information Review, vol. 41 no. 2
Type: Research Article
ISSN: 1468-4527

Keywords
