Search results

1 – 10 of over 3000
Article
Publication date: 13 June 2016

Muskan Garg and Mukesh Kumar


Abstract

Purpose

Social media is one of the largest platforms for voluntarily communicating thoughts. With the increase in multimedia data on social networking websites, information about human behaviour is growing. These user-generated data are present on the internet in different modalities, including text, images, audio, video, gesture, etc. The purpose of this paper is to consider multiple variables for event detection and analysis, including weather data, temporal data, geo-location data, traffic data, weekday data, etc.

Design/methodology/approach

In this paper, the evolution of different approaches is studied and explored for multivariate event analysis of uncertain social media data.

Findings

Based on bursts of outbreak information from social media, events such as natural disasters and the spread of contagious diseases can be detected and controlled. This can be a path-breaking input for instantly deploying emergency management resources. The topic has received much attention from academic researchers and practitioners studying latent patterns for event detection from social media signals.

Originality/value

This paper provides useful insights into existing methodologies and recommendations for future attempts in this area of research. An overview of the architecture of event analysis is given, and statistical approaches are used to determine which events in social media need attention.

Details

Online Information Review, vol. 40 no. 3
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 20 August 2018

Dharini Ramachandran and Parvathi Ramasubramanian


Abstract

Purpose

“What’s happening?” around you can be spread to everybody through the very pronounced social media. It provides a powerful platform that brings to light the latest news, trends and happenings around the world in “near instant” time. Microblogging is a popular Web service that enables users to post small pieces of digital content, such as text, pictures, videos and links to external resources. The raw data from microblogs prove indispensable for extracting information, offering a way to single out the physical events and popular topics prevalent in social media. This study aims to present and review the varied methods for event detection from microblogs. An event is an activity or action with a clear finite duration in which the target entity plays a key role. Event detection helps in the timely understanding of people’s opinions and the actual condition of the detected events.

Design/methodology/approach

This paper presents a study of various approaches adopted for event detection from microblogs. The approaches are reviewed according to the techniques used, applications and the element detected (event or topic).

Findings

The various ideas explored, the important observations inferred, the corresponding outcomes and the assessments of results from those approaches are discussed.

Originality/value

The approaches and techniques for event detection are studied in two categories: first, based on the kind of event being detected (physical occurrence or emerging/popular topic); and second, within each category, the approaches are further divided into supervised and unsupervised techniques.

Article
Publication date: 5 November 2018

Xiaojuan Zhang, Shuguang Han and Wei Lu


Abstract

Purpose

The purpose of this paper is to predict news intent by exploring contextual and temporal features directly mined from a general search engine query log.

Design/methodology/approach

First, a ground-truth data set with correctly marked news and non-news queries was built. Second, a detailed analysis of the search goals and topics distribution of news/non-news queries was conducted. Third, three news features, that is, the relationship between entity and contextual words extended from query sessions, topical similarity among clicked results and temporal burst point were obtained. Finally, to understand the utilities of the new features and prior features, extensive prediction experiments on SogouQ (a Chinese search engine query log) were conducted.

Findings

News intent can be predicted with high accuracy by using the proposed contextual and temporal features, and the macro average F1 of classification is around 0.8677. Contextual features are more effective than temporal features. All the three new features are useful and significant in improving the accuracy of news intent prediction.
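The macro-averaged F1 quoted above is the unweighted mean of the per-class F1 scores. A minimal sketch of the computation, using invented toy labels rather than the SogouQ data:

```python
# Illustrative sketch (not the paper's code): macro-averaged F1 for a
# binary news / non-news query classifier on made-up toy labels.

def f1(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred, classes=("news", "non-news")):
    """Macro average: compute F1 per class, then take the unweighted mean."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

y_true = ["news", "news", "non-news", "non-news", "news", "non-news"]
y_pred = ["news", "non-news", "non-news", "non-news", "news", "news"]
print(round(macro_f1(y_true, y_pred), 4))  # 0.6667 on this toy example
```

Because each class contributes equally regardless of its frequency, macro F1 is a stricter summary than plain accuracy when news queries are rarer than non-news ones.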

Originality/value

This paper provides a new and different perspective in recognizing queries with news intent without use of such large corpora as social media (e.g. Wikipedia, Twitter and blogs) and news data sets. The research will be helpful for general-purpose search engines to address search intents for news events. In addition, the authors believe that the approaches described here in this paper are general enough to apply to other verticals with dynamic content and interest, such as blog or financial data.

Details

The Electronic Library, vol. 36 no. 5
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 26 March 2021

Hima Bindu Valiveti, Anil Kumar B., Lakshmi Chaitanya Duggineni, Swetha Namburu and Swaraja Kuraparthi


Abstract

Purpose

Road accidents, an inadvertent mishap, can be detected automatically and alerts sent instantly through the collaboration of image processing techniques and on-road video surveillance systems. However, relying exclusively on visual information, especially under adverse conditions such as night time, dark areas and unfavourable weather (snowfall, rain and fog) that result in faint visibility, leads to uncertainty. The main goal of the proposed work is certainty about accident occurrence.

Design/methodology/approach

The authors of this work propose a method for detecting road accidents by analyzing audio signals to identify hazardous situations such as tire skidding and car crashes. The motive of this project is to build a simple and complete audio event detection system that uses signal feature extraction methods to improve detection accuracy. The experimental analysis is carried out on a publicly available real-time data-set consisting of audio samples such as car crashes and tire skidding. The temporal features of the recorded audio signal, such as Energy, Volume and Zero Crossing Rate (ZCR); the spectral features, such as Spectral Centroid, Spectral Spread, Spectral Roll-off factor and Spectral Flux; and the psychoacoustic features, Energy Sub-Bands ratio and Gammatonegram, are computed. The extracted features are pre-processed, then trained and tested using Support Vector Machine (SVM) and K-nearest neighbour (KNN) classification algorithms for exact prediction of accident occurrence over various SNR ranges. The combination of Gammatonegram with the temporal and spectral features proves superior to existing detection techniques.
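Two of the features listed above, ZCR and Spectral Centroid, can be sketched in a few lines. This is an illustrative NumPy computation on a synthetic 440 Hz tone, not the authors' pipeline; the sample rate and signal are invented:

```python
# Hedged sketch of two audio features on a synthetic signal.
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose signs differ."""
    signs = np.sign(x)
    return float(np.mean(signs[:-1] != signs[1:]))

def spectral_centroid(x, sr):
    """Magnitude-weighted mean frequency of the spectrum, in Hz."""
    mags = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

sr = 8000
t = np.arange(sr) / sr               # one second of audio
tone = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone as a stand-in signal

print(zero_crossing_rate(tone))      # ~0.11: ~880 crossings over 8000 samples
print(spectral_centroid(tone, sr))   # centroid sits near 440 Hz
```

A crash or skid would show a much higher ZCR and a broader spectrum than this pure tone, which is what makes such features discriminative for an SVM or KNN classifier.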

Findings

The temporal, spectral and psychoacoustic features and the Gammatonegram of the recorded audio signal are extracted. A high-level vector is generated based on the centroid, and the extracted features are classified with the help of machine learning algorithms such as SVM, KNN and DT. The audio samples collected have varied SNR ranges, and the accuracy of the classification algorithms is thoroughly tested.

Practical implications

Denoising of the audio samples for perfect feature extraction was a tedious chore.

Originality/value

The existing literature cites the extraction of temporal and spectral features followed by the application of classification algorithms. For better classification, the authors have chosen to construct a high-level vector from all four extracted feature sets: temporal, spectral, psychoacoustic and Gammatonegram. The classification algorithms are employed on samples collected at varied SNR ranges.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 3
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 9 February 2022

Abel Yeboah-Ofori, Cameron Swart, Francisca Afua Opoku-Boateng and Shareeful Islam


Abstract

Purpose

Cyber resilience in cyber supply chain (CSC) systems security has become inevitable as attacks, risks and vulnerabilities increase in real-time critical infrastructure systems with little time for system failures. Cyber resilience approaches ensure the ability of a supply chain system to prepare for, absorb, recover from and adapt to adverse effects in the complex cyber-physical systems (CPS) environment. However, threats within the CSC context can pose a severe disruption to overall business continuity. The paper aims to use machine learning (ML) techniques to predict threats to cyber supply chain systems, improve cyber resilience with a focus on critical assets and reduce the attack surface.

Design/methodology/approach

The approach follows two main cyber resilience design principles: focus on common critical assets and reduce the attack surface. ML techniques are applied with various classification algorithms to learn a dataset for performance accuracy and threat prediction based on the CSC resilience design principles. The critical assets include cyber-digital, cyber-physical and physical elements. The authors consider Logistic Regression, Decision Tree, Naïve Bayes and Random Forest classification algorithms in a majority vote to predict the results. Finally, the threats are mapped to known attacks for inferences to improve resilience of the critical assets.
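The majority-voting step described above can be sketched with scikit-learn's hard-voting ensemble. The dataset here is synthetic toy data, not the paper's CSC dataset:

```python
# Illustrative sketch: four classifiers combined by majority (hard) voting
# to predict a binary threat label, as the design/methodology describes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for a labelled threat dataset.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

vote = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="hard",  # each model casts one vote; the majority label wins
)
vote.fit(X_tr, y_tr)
print(round(vote.score(X_te, y_te), 3))
```

Hard voting simply takes the most common predicted label across the four models, which tends to smooth out the individual classifiers' errors.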

Findings

The paper contributes to CSC system resilience based on the understanding and prediction of the threats. The result shows a 70% performance accuracy for the threat prediction with cyber resilience design principles that focus on critical assets and controls and reduce the threat.

Research limitations/implications

There is a need to understand and predict threats so that appropriate control actions can ensure system resilience. However, due to the invisibility and dynamic nature of cyber attacks, there are limited controls and attributions. This poses serious implications for cyber supply chain systems and their cascading impacts.

Practical implications

ML techniques are used on a dataset to analyse and predict the threats based on the CSC resilience design principles.

Social implications

There are no direct social implications; rather, the work has serious implications for organizations and third-party vendors.

Originality/value

The originality of the paper lies in the fact that cyber resilience design principles that focus on common critical assets are used including Cyber Digital, Cyber Physical and physical elements to determine the attack surface. ML techniques are applied to various classification algorithms to learn a dataset for performance accuracies and threats predictions based on the CSC resilience design principles to reduce the attack surface for this purpose.

Details

Continuity & Resilience Review, vol. 4 no. 1
Type: Research Article
ISSN: 2516-7502


Article
Publication date: 14 March 2016

Gebeyehu Belay Gebremeskel, Chai Yi, Zhongshi He and Dawit Haile


Abstract

Purpose

Among the growing number of data mining (DM) techniques, outlier detection has gained importance in many applications and has attracted much attention in recent times. In the past, published outlier detection research in safety care could be viewed as searching for needles in a haystack. However, outliers are not always erroneous. Therefore, the purpose of this paper is to investigate the role of outliers in healthcare services in general and in patient safety care in particular.

Design/methodology/approach

It is a combined DM (clustering and nearest-neighbour) technique for outlier detection, which provides a clear understanding of, and meaningful insights for visualizing, data behaviours relevant to healthcare safety. The outcomes, and the implicit knowledge uncovered, are vitally essential to a proper clinical decision-making process. The method and its novel treatment of patients’ events and situations prove to play a significant role in the process of patient safety care and medication.
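The nearest-neighbour side of the combined technique can be sketched as a simple kNN distance score, where isolated records receive high scores. The 2-D toy data below are invented, not clinical data:

```python
# Sketch of a kNN-distance outlier score: each record is scored by its mean
# distance to its k nearest neighbours, so isolated points score highest.
import numpy as np

def knn_outlier_scores(X, k=3):
    """Mean Euclidean distance from each point to its k nearest neighbours."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    np.fill_diagonal(dists, np.inf)          # exclude self-distance
    nearest = np.sort(dists, axis=1)[:, :k]  # k smallest distances per row
    return nearest.mean(axis=1)

rng = np.random.default_rng(0)
cluster = rng.normal(loc=0.0, scale=0.5, size=(20, 2))  # dense "normal" records
outlier = np.array([[8.0, 8.0]])                        # one isolated record
X = np.vstack([cluster, outlier])

scores = knn_outlier_scores(X)
print(int(np.argmax(scores)))  # 20: the isolated point scores highest
```

Pairing such a score with clustering, as the paper's combined approach does, lets outliers be flagged both as lone points and as small anomalous clusters.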

Findings

The paper discusses a novel, integrated methodology that can be applied to different biological data analyses. DM techniques are integrated to optimize performance in the fields of health and medical science. The integrated outlier-detection method can be extended to search for valuable information and implicit knowledge based on selected patient factors. On this basis, outliers are detected as clusters and point events, and novel ideas are proposed to empower clinical services with customers’ satisfaction in mind. The work can also serve as a baseline for further healthcare strategic development and research.

Research limitations/implications

This paper focusses mainly on outlier detection. Outlier isolation, which is essential for investigating how an outlier arose and communicating how to mitigate it, is not addressed. The research can therefore be extended to the hierarchy of patient problems.

Originality/value

DM is a dynamic and successful gateway for discovering useful knowledge for enhancing healthcare performance and patient safety. Outlier detection based on clinical data is a basic task in achieving healthcare strategy. Therefore, in this paper, the authors focussed on combined DM techniques for a deep analysis of clinical data, which supports an optimal level of clinical decision-making. Proper clinical decisions can be obtained through attribute selection, which is important for identifying the influential factors or parameters of healthcare services. Using integrated clustering and nearest-neighbour techniques therefore gives a more acceptable search for such complex data outliers, which could be fundamental to further analysis of healthcare and patient safety situations.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 9 no. 1
Type: Research Article
ISSN: 1756-378X


Open Access
Article
Publication date: 4 November 2022

Bianca Caiazzo, Teresa Murino, Alberto Petrillo, Gianluca Piccirillo and Stefania Santini


Abstract

Purpose

This work aims at proposing a novel Internet of Things (IoT)-based and cloud-assisted monitoring architecture for smart manufacturing systems, able to evaluate their overall status and detect anomalies occurring in production. A novel artificial intelligence (AI) based technique, able to identify the specific anomalous event and its risk classification for possible intervention, is hence proposed.

Design/methodology/approach

The proposed solution is a five-layer, scalable and modular platform in the Industry 5.0 perspective, in which the crucial layer is the Cloud Cyber one. This embeds a novel anomaly detection solution designed by leveraging control charts, long short-term memory (LSTM) autoencoders (AE) and a Fuzzy Inference System (FIS). The proper combination of these methods allows not only detecting product defects but also recognizing their causes.
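Of the three ingredients named above, the control chart is the simplest to sketch. The LSTM-AE and FIS stages are omitted here, and all sensor values are invented:

```python
# Minimal control-chart sketch: flag samples outside the mean ± 3-sigma
# limits estimated from in-control data. Toy values, not the panel line's.
import numpy as np

rng = np.random.default_rng(1)
in_control = rng.normal(loc=100.0, scale=2.0, size=500)  # nominal sensor data

mu, sigma = in_control.mean(), in_control.std()
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma  # upper / lower control limits

stream = np.array([99.5, 101.2, 100.3, 115.0, 98.7])  # 115.0 is anomalous
alarms = (stream > ucl) | (stream < lcl)
print(alarms.tolist())  # [False, False, False, True, False]
```

In the paper's pipeline, a chart like this provides cheap first-pass alarms, while the LSTM autoencoder captures temporal anomalies and the FIS grades their risk.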

Findings

The proposed architecture, experimentally validated on a manufacturing system involved in the production of a solar thermal high-vacuum flat panel, provides human operators with information about anomalous events, where they occur, and crucial information about their risk levels.

Practical implications

Thanks to the abnormal-risk panel, human operators and business managers are able not only to remotely visualize the real-time status of each production parameter but also to deal properly with anomalous events, and only when necessary. This is especially relevant in an emergency situation such as the COVID-19 pandemic.

Originality/value

The monitoring platform is one of the first attempts to lead modern manufacturing systems toward the Industry 5.0 concept. Indeed, it combines human strengths, IoT technology on machines, cloud-based solutions with AI and zero-defect manufacturing strategies in a unified framework, so as to detect causalities in complex dynamic systems and enable the avoidance of product waste.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 4
Type: Research Article
ISSN: 1741-038X


Article
Publication date: 25 May 2012

Timothy W. Armistead


Abstract

Purpose

The purpose of this paper is to discuss unresolved problems that are reflected in the social scientific research on the linguistic detection of deception in statements, with particular attention to problems of methodology, practical utility for law enforcement statement analysts, and epistemology.

Design/methodology/approach

The author reviewed the design, data, statistical calculations, and findings of English language peer‐reviewed studies of the linguistic detection of deception in statements. In some cases, the author re‐analyzed the study data.

Findings

Social scientific research holds promise for the development of new methods of linguistic detection of deception that are more thoroughly validated than the linguistic methods law enforcement investigators have been using for many years. Nonetheless, published studies reflect one or more of the following sources of weakness in developing and evaluating detection models: the use of analytes (statements) of uncertain validity; the problematic universality and practical utility of linguistic variables; the widespread use of deficient proportion‐of‐stimuli‐correct “hit rate” calculations to assess the accuracy of detection methods; a possibly irresolvable epistemological limit to the ability of any linguistic detection method to prove deception without confirmation by means external to the analysis.
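The deficiency of the simple hit rate can be made concrete with invented numbers: on an imbalanced sample, a degenerate rule that labels every statement deceptive achieves a high hit rate while detecting nothing:

```python
# Sketch of the hit-rate deficiency with toy numbers (not from any study).
truth = ["deceptive"] * 90 + ["truthful"] * 10   # 90/10 imbalance (invented)
pred = ["deceptive"] * 100                       # always-deceptive "method"

# Simple proportion-of-stimuli-correct hit rate.
hit_rate = sum(t == p for t, p in zip(truth, pred)) / len(truth)

def class_accuracy(label):
    """Accuracy restricted to statements whose true label is `label`."""
    pairs = [(t, p) for t, p in zip(truth, pred) if t == label]
    return sum(t == p for t, p in pairs) / len(pairs)

# Averaging per-class accuracy exposes the degenerate rule.
balanced = (class_accuracy("deceptive") + class_accuracy("truthful")) / 2
print(hit_rate)   # 0.9: looks impressive
print(balanced)   # 0.5: no better than chance
```

This is the arithmetic behind the critique: a hit rate computed over an unbalanced stimulus set rewards base-rate guessing, so it cannot by itself validate a detection method.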

Research limitations/implications

The research was limited to English language studies in the linguistic detection of deception literature and to the re‐calculation of data in the research literature. Whether the paper has implications for future studies depends on the success of two arguments that are made: the published research projects in the field reflect one or more of four methodological problems that create doubt about the validity and/or the practical utility of their results; and the linguistic detection of deception is subject to an epistemological problem which theoretically limits the ability of any linguistic method of detection to establish with certainty the status of any particular questioned statement.

Originality/value

This is the first published paper to identify and discuss a possibly irresolvable epistemological issue in the detection of deception by linguistic means, as well as unresolved issues of methodology and of utility to law enforcement analysts that characterize the research and the detection models in this field. It is also the first published paper to deconstruct the simple hit rate (and its variants) in order to demonstrate its deficiencies.

Details

Policing: An International Journal of Police Strategies & Management, vol. 35 no. 2
Type: Research Article
ISSN: 1363-951X


Article
Publication date: 29 January 2020

Dianchen Zhu, Huiying Wen and Yichuan Deng


Abstract

Purpose

To improve on insufficient manual management, especially for traffic accidents that occur at crossroads, the purpose of this paper is to develop a pro-active warning system for crossroads at construction sites. Although prior studies have made efforts to develop warning systems for construction sites, most of them paid attention to the construction process, while accidents occurring at crossroads were largely overlooked.

Design/methodology/approach

By summarizing the main reasons for accidents occurring at crossroads, a pro-active warning system providing six countermeasure functions was designed. Several computer vision approaches and a prediction algorithm were applied and proposed to realize the designed functions.

Findings

A 12-hour video filming a crossroad at a construction site was selected as the original data. The test results show that all designed functions could operate normally, several predicted dangerous situations could be detected and proper warnings could be given. To validate the applicability of the system, another 36 hours of video data were chosen for a performance test, and the findings indicate that all applied algorithms fit the data well.

Originality/value

Computer vision algorithms have been widely used in previous studies to process video data or monitoring information; however, few of them have demonstrated high applicability to identifying and classifying the different participants at construction sites. In addition, none of these studies attempted to use a dynamic prediction algorithm to predict risky events, which could provide significant information for relevant active warnings.

Details

Engineering, Construction and Architectural Management, vol. 27 no. 5
Type: Research Article
ISSN: 0969-9988


Article
Publication date: 21 December 2023

Majid Rahi, Ali Ebrahimnejad and Homayun Motameni


Abstract

Purpose

Taking into consideration the current human need for agricultural produce such as rice, which requires water for growth, the optimal consumption of this valuable liquid is important. Unfortunately, the traditional human use of water for agricultural purposes contradicts the concept of optimal consumption. Therefore, designing and implementing a mechanized irrigation system is of the highest importance. Such a system includes hardware equipment such as liquid altimeter sensors, valves and pumps, for which failure is an integral phenomenon that causes faults in the system. Naturally, these faults occur at probable time intervals, and a probability function with an exponential distribution is used to simulate these intervals. Thus, before implementing such a high-cost system, evaluating it during the design phase is essential.

Design/methodology/approach

The proposed approach included two main steps: offline and online. The offline phase included the simulation of the studied system (i.e. the irrigation system of paddy fields) and the acquisition of a data set for training machine learning algorithms such as decision trees to detect, locate (classification) and evaluate faults. In the online phase, C5.0 decision trees trained in the offline phase were used on a stream of data generated by the system.
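The offline/online split can be sketched as follows. C5.0 has no standard Python implementation, so scikit-learn's CART decision tree stands in here, and the simulated sensor features and fault rule are invented:

```python
# Sketch of the two-phase design with a toy fault dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# --- Offline phase: simulate (water level, valve pressure) readings ---
X_train = rng.uniform(0.0, 1.0, size=(300, 2))
# Invented fault rule for the simulation: a fault occurs when the water
# level is very low while valve pressure is high.
y_train = ((X_train[:, 0] < 0.2) & (X_train[:, 1] > 0.7)).astype(int)
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# --- Online phase: apply the trained tree to a stream of readings ---
stream = np.array([[0.9, 0.5], [0.1, 0.9], [0.5, 0.2]])
flags = tree.predict(stream)
print(flags.tolist())  # the second reading lies in the fault region
```

The offline tree, trained on simulated data, becomes a fixed component that classifies each incoming reading in the online stream, mirroring the paper's detect-and-locate step.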

Findings

The proposed approach is a comprehensive online component-oriented method, which is a combination of supervised machine learning methods to investigate system faults. Each of these methods is considered a component determined by the dimensions and complexity of the case study (to discover, classify and evaluate fault tolerance). These components are placed together in the form of a process framework so that the appropriate method for each component is obtained based on comparison with other machine learning methods. As a result, depending on the conditions under study, the most efficient method is selected in the components. Before the system implementation phase, its reliability is checked by evaluating the predicted faults (in the system design phase). Therefore, this approach avoids the construction of a high-risk system. Compared to existing methods, the proposed approach is more comprehensive and has greater flexibility.

Research limitations/implications

By expanding the dimensions of the problem, the model verification space grows exponentially using automata.

Originality/value

Unlike the existing methods that only examine one or two aspects of fault analysis such as fault detection, classification and fault-tolerance evaluation, this paper proposes a comprehensive process-oriented approach that investigates all three aspects of fault analysis concurrently.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

