Search results

11 – 20 of 216
Article
Publication date: 21 November 2018

Kiran Ahuja and Arun Khosla

Abstract

Purpose

This paper focuses on the data analytic tools and integrated data analysis approaches used on smart energy meters (SEMs). While surveying the diverse techniques and frameworks for SEM data analysis, the authors propose a novel framework for SEMs that uses a gamification approach to increase consumers' involvement in conserving energy and improving efficiency.

Design/methodology/approach

A few research strategies have been reported for analyzing the raw data, but considerable work remains to make them commercially viable. Data analytic tools and integrated data analysis approaches are applied to SEMs. While surveying the diverse techniques and frameworks for SEM data analysis, the authors propose a novel framework for SEMs that uses a gamification approach to increase consumers' involvement in conserving energy and improving efficiency. Advantages of SEMs are also discussed as motivation for consumers, utilities and their respective partners.

Findings

Consumers, utilities and researchers can benefit from the recommended framework by planning their routine activities and enjoying the rewards offered by the gamification approach. Through gamification, consumer engagement increases and less sustainable behaviour is changed on a voluntary basis. Practical implementations of such approaches have shown improved energy efficiency as a result.

Details

International Journal of Energy Sector Management, vol. 13 no. 2
Type: Research Article
ISSN: 1750-6220

Article
Publication date: 8 September 2022

Ziming Zeng, Tingting Li, Jingjing Sun, Shouqiang Sun and Yu Zhang

Abstract

Purpose

The proliferation of bots in social networks has profoundly affected the interactions of legitimate users. Detecting and rejecting these unwelcome bots has become part of the collective Internet agenda. Unfortunately, as bot creators use more sophisticated approaches to avoid being discovered, it has become increasingly difficult to distinguish social bots from legitimate users. Therefore, this paper proposes a novel social bot detection mechanism to adapt to new and different kinds of bots.

Design/methodology/approach

This paper proposes a research framework to enhance the generalization of social bot detection along two dimensions: feature extraction and detection approaches. First, 36 features are extracted from four views for social bot detection. Then, the contribution of each feature is analyzed across different kinds of social bots, and the features with stronger generalization are identified. Finally, outlier detection approaches are introduced to improve the detection of ever-changing social bots.

Findings

The experimental results show that the more important features generalize more effectively to different social bot detection tasks. Compared with a traditional binary-class classifier, the proposed outlier detection approaches adapt better to ever-changing social bots, achieving an F1 score of 89.23 per cent.

Originality/value

Based on a visual interpretation of feature contribution, the features with stronger generalization across different detection tasks are identified. Outlier detection approaches are introduced for the first time to enhance the detection of ever-changing social bots.
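The abstract gives the outlier-detection idea only at a high level. As a hedged illustration of the core shift it describes, treating bot detection as outlier detection rather than binary classification, the sketch below uses scikit-learn's IsolationForest on two invented account features; the paper's 36 features and its particular detectors are not specified here.

```python
# Illustrative sketch only: the features, data and detector choice are
# invented, not taken from the paper.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical account features, e.g. [posts per day, followers/friends ratio]:
# 190 "legitimate" accounts plus 10 bot-like outliers.
legit = rng.normal(loc=[5.0, 1.0], scale=[2.0, 0.3], size=(190, 2))
bots = rng.normal(loc=[60.0, 0.05], scale=[5.0, 0.02], size=(10, 2))
X = np.vstack([legit, bots])
y_true = np.array([1] * 190 + [-1] * 10)  # -1 marks an outlier (bot)

# An outlier detector needs no bot labels at fit time, which is why it can
# adapt to bot behaviours not seen during training.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
y_pred = detector.predict(X)  # +1 = inlier, -1 = outlier

print(f"F1 on the bot class: {f1_score(y_true, y_pred, pos_label=-1):.2f}")
```

Because the detector models only what "normal" looks like, a new species of bot still stands out as long as its feature profile is unusual.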

Details

Data Technologies and Applications, vol. 57 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 1 December 2003

Joseph S. Sherif and Rod Ayers

Abstract

This paper is part II of a previous article of the same title: Intrusion detection. Part II is concerned with intrusion threats, attacks, defense, models, methods and systems.

Details

Information Management & Computer Security, vol. 11 no. 5
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 1 October 2003

Joseph S. Sherif, Rod Ayers and Tommy G. Dearmond

Abstract

Organizations more often than not lack comprehensive security policies and are not adequately prepared to protect their systems against intrusions. This paper puts forward a review of the state of the art and applicability of intrusion detection systems and models, and presents a classification of the literature pertaining to intrusion detection.

Details

Information Management & Computer Security, vol. 11 no. 4
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 19 April 2022

D. Divya, Bhasi Marath and M.B. Santosh Kumar

Abstract

Purpose

This study aims to raise awareness of the development of fault detection systems that use data collected from the sensor devices/physical devices of various systems for predictive maintenance. Opportunities and challenges in developing anomaly detection algorithms for predictive maintenance, and unexplored areas in this context, are also discussed.

Design/methodology/approach

For this systematic review of state-of-the-art fault detection algorithms for predictive maintenance, review papers from 2017–2021 available in the Scopus database were selected; a total of 93 papers were chosen. They are classified under electrical and electronics, civil and construction, automobile, production and mechanical. In addition, the paper provides a detailed discussion of various fault detection algorithms, categorised under supervised, semi-supervised and unsupervised learning and traditional statistical methods, along with an analysis of the forms of anomalies prevalent across different sectors of industry.

Findings

Based on the literature reviewed, seven propositions are presented, focusing on the following areas: the need for a uniform framework when scaling the number of sensors; the need to identify erroneous parameters; the need for new algorithms based on unsupervised and semi-supervised learning; the importance of ensemble learning and data fusion algorithms; the necessity of automatic fault diagnostic systems; concerns about multiple fault detection; and cost-effective fault detection. These propositions shed light on the unsolved issues of predictive maintenance using fault detection algorithms. A novel architecture based on these methodologies and propositions gives the reader more clarity for further exploration of this area.

Originality/value

Papers for this study were selected from the Scopus database for predictive maintenance in the field of fault detection. Review papers published in this area deal only with methods used to detect anomalies, whereas this paper attempts to establish a link between different industrial domains and the methods used in each industry that uses fault detection for predictive maintenance.

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 2
Type: Research Article
ISSN: 1355-2511

Details

Rutgers Studies in Accounting Analytics: Audit Analytics in the Financial Industry
Type: Book
ISBN: 978-1-78743-086-0

Article
Publication date: 18 June 2019

Mauricio Loyola

Abstract

Purpose

The purpose of this paper is to propose a simple, fast, and effective method for detecting measurement errors in data collected with low-cost environmental sensors typically used in building monitoring, evaluation, and automation applications.

Design/methodology/approach

The method combines two unsupervised learning techniques: a distance-based anomaly detection algorithm analyzing temporal patterns in data, and a density-based algorithm comparing data across different spatially related sensors.
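The two-technique combination described above, a per-sensor temporal check plus a cross-sensor comparison, can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the robust-deviation statistics, the thresholds and the synthetic data are all invented here.

```python
# Hypothetical sketch: a reading is flagged only when it is BOTH far from
# its own sensor's history (temporal check) AND far from what spatially
# related sensors report at the same time (cross-sensor check).
import numpy as np

def flag_errors(readings, k_temporal=8.0, k_spatial=8.0):
    """readings: array of shape (n_times, n_sensors), e.g. temperatures."""
    med_t = np.median(readings, axis=0)                 # per-sensor typical value
    mad_t = np.median(np.abs(readings - med_t), axis=0) + 1e-9
    temporal = np.abs(readings - med_t) / mad_t         # deviation from own history

    med_s = np.median(readings, axis=1, keepdims=True)  # per-time consensus
    mad_s = np.median(np.abs(readings - med_s), axis=1, keepdims=True) + 1e-9
    spatial = np.abs(readings - med_s) / mad_s          # deviation from neighbours

    # A valid unusual event (e.g. a heat wave) moves ALL sensors together,
    # so it trips the temporal check but not the cross-sensor check.
    return (temporal > k_temporal) & (spatial > k_spatial)

# 100 time steps, 20 sensors around 22 °C; one faulty spike on sensor 3.
rng = np.random.default_rng(1)
temps = 22 + rng.normal(0, 0.5, size=(100, 20))
temps[50, 3] = 60.0                                     # simulated sensor glitch
errors = flag_errors(temps)
print(np.argwhere(errors))
```

Requiring both checks to fire is what keeps valid unusual events, which affect all sensors at once, from being mistaken for measurement errors.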

Findings

Results of tests using 60,000 observations of temperature and humidity collected from 20 sensors during three weeks show that the method effectively identified measurement errors and was not affected by valid unusual events. Precision, recall, and accuracy were 0.999 or higher for all cases tested.

Originality/value

The method is simple to implement, computationally inexpensive, and fast enough to be used in real-time with modest open-source microprocessors and a wide variety of environmental sensors. It is a robust and convenient approach for overcoming the hardware constraints of low-cost sensors, allowing users to improve the quality of collected data at almost no additional cost and effort.

Details

Smart and Sustainable Built Environment, vol. 8 no. 4
Type: Research Article
ISSN: 2046-6099

Book part
Publication date: 29 May 2023

Divya Nair and Neeta Mhavan

Abstract

A zero-day vulnerability is a complimentary ticket for attackers to gain entry into the network. Thus, it is necessary to devise appropriate threat detection systems and establish innovative, safe solutions that prevent unauthorised intrusions in order to defend the various components of cybersecurity. We present a survey of recent intrusion detection systems (IDS) for detecting zero-day vulnerabilities along the following dimensions: types of cyber-attacks, datasets used and kinds of network detection systems.

Purpose: The study focuses on presenting an exhaustive review on the effectiveness of the recent IDS with respect to zero-day vulnerabilities.

Methodology: Systematic exploration was conducted on IEEE, Elsevier, Springer, RAID, ESORICS, Google Scholar and other relevant platforms for studies published in English between 2015 and 2021, using keywords and combinations of relevant terms.

Findings: It is possible to train IDS for zero-day attacks. The existing IDS have strengths that make them capable of effective detection against zero-day attacks. However, they display certain limitations that reduce their credibility. Novel strategies like deep learning, machine learning, fuzzing technique, runtime verification technique, and Hidden Markov Models can be used to design IDS to detect malicious traffic.

Implication: This paper explores and highlights the advantages and limitations of existing IDS, enabling the selection of the best possible IDS to protect a system. Moreover, the comparison between signature-based and anomaly-based IDS suggests that one viable approach to accurately detecting zero-day vulnerabilities would be the integration of a hybrid mechanism.

Details

Smart Analytics, Artificial Intelligence and Sustainable Performance Management in a Global Digitalised Economy
Type: Book
ISBN: 978-1-80382-555-7

Article
Publication date: 10 June 2022

Yasser Alharbi

Abstract

Purpose

This strategy significantly reduces the computational and storage overhead required when using the kernel density estimation method to calculate the anomaly evaluation value of a test sample.

Design/methodology/approach

To effectively deal with the security threats that botnets pose to the home and personal Internet of Things (IoT), especially the problem of insufficient resources for anomaly detection in the home environment, a federated learning-based lightweight IoT anomaly traffic detection method based on kernel density estimation (KDE-LIATD) is proposed. First, the KDE-LIATD method uses Gaussian kernel density estimation to estimate the probability density function, and the corresponding probability density, of each feature dimension of every normal sample in the training set. Then, a feature selection algorithm based on kernel density estimation obtains the features that make outstanding contributions to anomaly detection, reducing the feature dimension while improving the accuracy of anomaly detection. Finally, the anomaly evaluation value of a test sample is calculated by cubic spline interpolation and anomaly detection is performed.
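The kernel-density core of such an approach can be sketched briefly. This is a minimal illustration only: the federated learning, feature selection and spline-interpolation steps are omitted, and the traffic features and data are invented. It fits one Gaussian KDE per feature on normal traffic and scores a test sample by how improbable it looks under that model.

```python
# Sketch of per-feature kernel density scoring; not the KDE-LIATD method
# itself. Feature names and values are hypothetical.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Synthetic "normal traffic" features, e.g. [packet size, inter-arrival time].
normal = rng.normal(loc=[500.0, 0.10], scale=[50.0, 0.02], size=(300, 2))

# One Gaussian KDE per feature dimension, fitted on normal samples only.
kdes = [gaussian_kde(normal[:, j]) for j in range(normal.shape[1])]

def anomaly_score(x):
    # Low estimated density under the normal model => high anomaly score.
    return -sum(np.log(k(v)[0] + 1e-12) for k, v in zip(kdes, x))

print(anomaly_score([510.0, 0.11]))   # close to normal traffic: low score
print(anomaly_score([5000.0, 2.0]))   # botnet-like burst: high score
```

Because each feature gets its own one-dimensional density estimate, the per-sample cost grows linearly with the number of features, which is why reducing the feature dimension matters for resource-constrained home devices.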

Findings

The simulation experiment results show that the proposed KDE-LIATD method is relatively strong in the detection of abnormal traffic for heterogeneous IoT devices.

Originality/value

With its robustness and compatibility, it can effectively detect abnormal traffic of household and personal IoT botnets.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Book part
Publication date: 22 July 2021

Chien-Hung Chang

Abstract

This chapter introduces a risk control framework for credit card fraud, rather than providing solely a binary classifier model. An anomaly detection approach is adopted to identify fraud events as outliers in the reconstruction error of a trained autoencoder (AE). The trained AE shows fitness and robustness on normal transactions and heterogeneous behavior on fraud activities. The cost of falsely flagging normal transactions is controlled, and the loss from false-negative frauds can be evaluated, by setting thresholds at percentiles of the trained AE's reconstruction error on normal transactions. To align with the risk assessment of the economic and financial situation, the risk manager can adjust the threshold to meet risk control requirements. Using the 95th percentile as the threshold, the rate of wrongly flagging normal transactions is controlled at 5 per cent and the true positive rate is 86 per cent. For the 99th percentile threshold, the false positive rate is well controlled at around 1 per cent, with 83 per cent of fraud activities correctly detected. This false positive/true positive performance is competitive with other supervised learning algorithms.
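The percentile-threshold step described above can be sketched on its own. This is an illustration under stated assumptions: the autoencoder is assumed already trained, so its reconstruction errors are simulated with an arbitrary distribution rather than computed from real transactions.

```python
# Sketch of percentile-based thresholding on reconstruction error; the
# error distributions below are invented stand-ins, not real AE output.
import numpy as np

rng = np.random.default_rng(3)
normal_err = rng.gamma(shape=2.0, scale=1.0, size=10_000)  # errors on normal txns

# Threshold = 95th percentile of reconstruction error on NORMAL transactions,
# which by construction caps the false-positive rate near 5%.
threshold = np.percentile(normal_err, 95)

def is_fraud(reconstruction_error):
    return reconstruction_error > threshold

# Frauds reconstruct poorly under an AE trained on normal behaviour, so
# their (simulated) errors sit well above the normal range.
fraud_err = rng.gamma(shape=2.0, scale=1.0, size=200) + 10.0
print(f"false-positive rate: {is_fraud(normal_err).mean():.3f}")
print(f"fraud detection rate: {is_fraud(fraud_err).mean():.3f}")
```

Moving the threshold to the 99th percentile trades a lower false-positive rate for some missed frauds, which is exactly the knob the chapter hands to the risk manager.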

Details

Advances in Pacific Basin Business, Economics and Finance
Type: Book
ISBN: 978-1-80043-870-5
