Search results

1 – 10 of over 1000
Article
Publication date: 11 December 2020

Hui Liu, Tinglong Tang, Jake Luo, Meng Zhao, Baole Zheng and Yirong Wu

This study aims to address the challenge of training a detection model for the robot to detect the abnormal samples in the industrial environment, while abnormal patterns are very…

Abstract

Purpose

This study aims to address the challenge of training a detection model that enables a robot to detect abnormal samples in an industrial environment, where abnormal patterns are very rare.

Design/methodology/approach

The authors propose a new model with double encoder–decoder (DED) generative adversarial networks to detect anomalies when the model is trained without any abnormal patterns. The DED approach is used to map high-dimensional input images to a low-dimensional space, through which the latent variables are obtained. Minimizing the change in the latent variables during the training process helps the model learn the data distribution. Anomaly detection is achieved by calculating the distance between two low-dimensional vectors obtained from two encoders.
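
As a rough illustration of the scoring step described above, the following sketch computes an anomaly score as the distance between the latent vectors produced by two encoders placed around a shared decoder. The layer sizes, module names and threshold are assumptions for illustration only; the paper's actual DED architecture, training losses and weight function are not reproduced here.

```python
# Minimal sketch (assumed shapes and names) of DED-style anomaly scoring:
# x -> encoder_1 -> z1 -> decoder -> x_hat -> encoder_2 -> z2, score = ||z1 - z2||.
import torch
import torch.nn as nn

latent_dim = 64
img_dim = 28 * 28  # flattened grayscale image (assumed size)

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

encoder_1 = mlp(img_dim, latent_dim)   # maps the input image to latent z1
decoder = mlp(latent_dim, img_dim)     # reconstructs the image from z1
encoder_2 = mlp(img_dim, latent_dim)   # maps the reconstruction to latent z2

def anomaly_score(x):
    """Distance between the two low-dimensional codes; larger means more anomalous."""
    z1 = encoder_1(x)
    x_hat = decoder(z1)
    z2 = encoder_2(x_hat)
    return torch.norm(z1 - z2, dim=1)

# Usage: score a batch of flattened images; the 1.0 threshold is purely
# illustrative and would be calibrated on normal-only training data.
with torch.no_grad():
    scores = anomaly_score(torch.rand(8, img_dim))
is_anomalous = scores > 1.0
```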

Findings

The proposed method achieves higher accuracy and a higher F1 score than traditional anomaly detection models.

Originality/value

A new architecture with a DED pipeline is designed to capture the distribution of images in the training process so that anomalous samples are accurately identified. A new weight function is introduced to control the proportion of losses in the encoding reconstruction and adversarial phases to achieve better results. An anomaly detection model is proposed to achieve superior performance against prior state-of-the-art approaches.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Open Access
Article
Publication date: 4 November 2022

Bianca Caiazzo, Teresa Murino, Alberto Petrillo, Gianluca Piccirillo and Stefania Santini

This work aims at proposing a novel Internet of Things (IoT)-based and cloud-assisted monitoring architecture for smart manufacturing systems able to evaluate their overall status…


Abstract

Purpose

This work proposes a novel Internet of Things (IoT)-based, cloud-assisted monitoring architecture for smart manufacturing systems that evaluates their overall status and detects any anomalies occurring in production. A novel artificial intelligence (AI)-based technique, able to identify the specific anomalous event and its risk classification for possible intervention, is also proposed.

Design/methodology/approach

The proposed solution is a scalable and modular five-layer platform conceived from an Industry 5.0 perspective, whose crucial layer is the Cloud Cyber layer. This layer embeds a novel anomaly detection solution designed by combining control charts, long short-term memory autoencoders (LSTM-AE) and a fuzzy inference system (FIS). The proper combination of these methods allows not only detecting product defects but also recognizing their causes.
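
To make the anomaly detection idea concrete, here is a minimal sketch of an LSTM autoencoder whose reconstruction error is compared against a control-chart style limit; the network sizes, window shape and the 3-sigma limit are assumptions, and the fuzzy inference step used in the paper for risk classification appears only as a comment.

```python
# Minimal sketch (assumed sizes): LSTM autoencoder reconstruction error with a
# control-chart style upper limit (mean + 3*std on normal production data).
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_features, batch_first=True)

    def forward(self, x):                # x: (batch, time, features)
        latent, _ = self.encoder(x)      # encode the sensor window
        recon, _ = self.decoder(latent)  # decode back to the feature space
        return recon

model = LSTMAutoencoder()

def reconstruction_error(windows):
    with torch.no_grad():
        return ((model(windows) - windows) ** 2).mean(dim=(1, 2))  # one error per window

# Control-chart style limit estimated on (placeholder) normal production windows.
normal_windows = torch.rand(100, 20, 4)
errors = reconstruction_error(normal_windows)
upper_control_limit = errors.mean() + 3 * errors.std()

# Windows whose error exceeds the limit raise an alarm; a fuzzy inference
# system (not shown) could then map the error magnitude to a risk level.
alarms = reconstruction_error(torch.rand(5, 20, 4)) > upper_control_limit
```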

Findings

The proposed architecture, experimentally validated on a manufacturing system producing a solar thermal high-vacuum flat panel, provides human operators with information about anomalous events, where they occur and their risk levels.

Practical implications

Thanks to the abnormal risk panel, human operators and business managers can not only remotely visualize the real-time status of each production parameter, but also respond appropriately to anomalous events, and only when necessary. This is especially relevant in emergency situations such as the COVID-19 pandemic.

Originality/value

The monitoring platform is one of the first attempts to lead modern manufacturing systems toward the Industry 5.0 concept. Indeed, it combines human strengths, IoT technology on machines, cloud-based solutions with AI and zero-defect manufacturing strategies in a unified framework, so as to detect causal relationships in complex dynamic systems and enable the avoidance of product waste.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 4
Type: Research Article
ISSN: 1741-038X

Keywords

Article
Publication date: 3 May 2022

Carlos Alberto Escobar, Daniela Macias, Megan McGovern, Marcela Hernandez-de-Menendez and Ruben Morales-Menendez

Manufacturing companies can competitively be recognized among the most advanced and influential companies in the world by successfully implementing Quality 4.0. However, its…


Abstract

Purpose

Manufacturing companies can be competitively recognized among the most advanced and influential companies in the world by successfully implementing Quality 4.0. However, successful implementation poses one of the most relevant challenges of Industry 4.0. According to recent surveys, 80%–87% of data science projects never make it to production. Despite the low deployment success rate, more than 75% of investors are maintaining or increasing their investments in artificial intelligence (AI). To help quality decision-makers improve the current situation, this paper aims to review Process Monitoring for Quality (PMQ), a Quality 4.0 initiative, along with its practical and managerial implications. Furthermore, a real case study is presented to demonstrate its application.

Design/methodology/approach

The proposed Quality 4.0 initiative improves conventional quality control methods by monitoring a process and detecting defective items in real time. Defect detection is formulated as a binary classification problem. Following the same path as the Six Sigma define, measure, analyze, improve, control (DMAIC) cycle, Quality 4.0-based innovation is guided by Identify, Acsensorize, Discover, Learn, Predict, Redesign and Relearn (IADLPR2), an ad hoc seven-step problem-solving approach.
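
For readers unfamiliar with the binary-classification framing mentioned above, the sketch below trains a generic classifier on synthetic in-process features and reports its detection performance; the data, features and model choice are illustrative assumptions, not the PMQ implementation.

```python
# Minimal sketch (synthetic data): defect detection framed as binary classification.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))  # assumed in-process sensor features
# Label 1 = defective item; the labelling rule below is purely synthetic.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)

# In production the fitted model would score each item in real time;
# here we simply report offline performance on the held-out split.
print(classification_report(y_test, clf.predict(X_test)))
```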

Findings

The IADLPR2 approach is able to identify and solve intractable engineering problems using AI. This is especially intriguing because numerous quality-driven manufacturing decision-makers consistently cite difficulties in developing a business vision for this technology.

Practical implications

From the proposed method, quality-driven decision-makers will learn how to launch a Quality 4.0 initiative, while quality-driven engineers will learn how to systematically solve intractable problems through AI.

Originality/value

An anthology of the authors' own projects enables the presentation of a comprehensive Quality 4.0 initiative and reports the first case study of the IADLPR2 approach. Each of the steps is applied to a real General Motors case study.

Details

International Journal of Lean Six Sigma, vol. 13 no. 6
Type: Research Article
ISSN: 2040-4166

Keywords

Article
Publication date: 21 February 2020

Alison Leary, Robert Cook, Sarahjane Jones, Mark Radford, Judith Smith, Malcolm Gough and Geoffrey Punshon

Incident reporting systems are commonly deployed in healthcare but resulting datasets are largely warehoused. This study explores if intelligence from such datasets could be used…

Abstract

Purpose

Incident reporting systems are commonly deployed in healthcare, but the resulting datasets are largely warehoused. This study explores whether intelligence from such datasets could be used to improve quality, efficiency and safety.

Design/methodology/approach

Incident reporting data recorded in one NHS acute Trust was mined for insight (n = 133,893 incidents, April 2005–July 2016, across 201 fields; 26,912,493 data items). An a priori dataset was overlaid, consisting of staffing, vital signs and national safety indicators such as falls. Analysis primarily used nonlinear statistical approaches in Mathematica V11.

Findings

The organization developed a deeper understanding of the use of incident reporting systems, both in terms of usability and as a possible reflection of culture. Signals emerged that highlighted areas of improvement or risk; one example is a deeper understanding of the timing and staffing levels associated with falls. Insight into the nature and grading of reporting was also gained.

Practical implications

Healthcare incident reporting data is underused; with a small amount of analysis, it can provide real insight and practical application to patient safety.

Originality/value

This study shows that insight can be gained by mining incident reporting datasets, particularly when integrated with other routinely collected data.

Details

International Journal of Health Care Quality Assurance, vol. 33 no. 2
Type: Research Article
ISSN: 0952-6862

Keywords

Book part
Publication date: 6 September 2019

Son Nguyen, Gao Niu, John Quinn, Alan Olinsky, Jonathan Ormsbee, Richard M. Smith and James Bishop

In recent years, the problem of classification with imbalanced data has been growing in popularity in the data-mining and machine-learning communities due to the emergence of an…

Abstract

In recent years, the problem of classification with imbalanced data has been growing in popularity in the data-mining and machine-learning communities due to the emergence of an abundance of imbalanced data in many fields. In this chapter, we compare the performance of six classification methods on an imbalanced dataset under the influence of four resampling techniques. These classification methods are the random forest, the support vector machine, logistic regression, k-nearest neighbor (KNN), the decision tree, and AdaBoost. Our study has shown that all of the classification methods have difficulty when working with the imbalanced data, with the KNN performing the worst, detecting only 27.4% of the minority class. However, with the help of resampling techniques, all of the classification methods experience improvement on overall performances. In particular, the Random Forest, in combination with the random over-sampling technique, performs the best, achieving 82.8% balanced accuracy (the average of the true-positive rate and true-negative rate).

We then propose a new procedure to resample the data. Our method is based on the idea of eliminating “easy” majority observations before under-sampling them. It has further improved the balanced accuracy of the Random Forest to 83.7%, making it the best approach for the imbalanced data.
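
As a point of reference for the best-performing combination highlighted above, the sketch below applies plain random over-sampling before fitting a random forest and evaluates it with balanced accuracy (the average of the true-positive and true-negative rates); the synthetic data are an assumption, and the chapter's proposed elimination of "easy" majority observations is not reproduced.

```python
# Minimal sketch (synthetic imbalanced data): random over-sampling + random forest,
# scored with balanced accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 2.5).astype(int)            # rare positive class (~4%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random over-sampling: draw minority training rows with replacement
# until both classes have the same size.
minority = np.flatnonzero(y_tr == 1)
majority = np.flatnonzero(y_tr == 0)
extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
idx = np.concatenate([majority, minority, extra])

clf = RandomForestClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
print(balanced_accuracy_score(y_te, clf.predict(X_te)))
```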

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78754-290-7

Keywords

Article
Publication date: 22 May 2020

Aryana Collins Jackson and Seán Lacey

The discrete Fourier transformation (DFT) has been proven to be a successful method for determining whether a discrete time series is seasonal and, if so, for detecting the…

Abstract

Purpose

The discrete Fourier transformation (DFT) has been proven to be a successful method for determining whether a discrete time series is seasonal and, if so, for detecting the period. This paper deals exclusively with rare data, in which instances occur periodically at a low frequency.

Design/methodology/approach

Data based on real-world situations is simulated for analysis.

Findings

Cycle number detection is done with spectral analysis, period detection is completed using DFT coefficients and signal shifts in the time domain are found using the convolution theorem. Additionally, a new method for detecting anomalies in binary, rare data is presented: the sum of distances. Using this method, expected events that have not occurred and unexpected events that have occurred at various sampling frequencies can be detected, allowing anomalies that are not considered outliers to be found.
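
To illustrate the period-detection step described above, the following sketch recovers the period of a rare, binary, seasonal series from its DFT; the series is a synthetic placeholder, and the paper's sum-of-distances anomaly method is not reproduced here.

```python
# Minimal sketch (synthetic series): period detection for rare binary data via the DFT.
import numpy as np

n, true_period = 600, 50
t = np.arange(n)
signal = (t % true_period == 0).astype(float)        # one rare event every 50 samples

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
# An impulse-like series spreads comparable energy over every harmonic, so take
# the lowest-frequency bin whose magnitude is near the maximum as the fundamental.
fundamental = np.flatnonzero(spectrum >= 0.9 * spectrum.max())[0]
print(n / fundamental)                               # ~50.0, the recovered period
```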

Research limitations/implications

Aliasing can contribute to extra frequencies which point to extra periods in the time domain. This can be reduced or removed with techniques such as windowing. In future work, this will be explored.

Practical implications

Applications include determining seasonality and thus investigating the underlying causes of hard drive failure, power outages and other undesired events. This work will also lend itself well to finding patterns among missing desired events, such as a scheduled hard drive backup or an employee's regular login to a server.

Originality/value

This paper has shown how seasonality and anomalies are successfully detected in seasonal, discrete, rare and binary data. Previously, the DFT has only been used for non-rare data.

Details

Data Technologies and Applications, vol. 54 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Book part
Publication date: 1 December 2016

Raffaella Calabrese and Johan A. Elkink

The most used spatial regression models for binary-dependent variable consider a symmetric link function, such as the logistic or the probit models. When the dependent variable…

Abstract

The most commonly used spatial regression models for a binary dependent variable assume a symmetric link function, such as the logistic or probit models. When the dependent variable represents a rare event, a symmetric link function can underestimate the probability that the rare event occurs. Following Calabrese and Osmetti (2013), we suggest the quantile function of the generalized extreme value (GEV) distribution as the link function in a spatial generalized linear model, and we call this model the spatial GEV (SGEV) regression model. To estimate the parameters of this model, a modified version of the Gibbs sampling method of Wang and Dey (2010) is proposed. We analyze the performance of our model by Monte Carlo simulations and evaluate its prediction accuracy on empirical data on state failure.
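
For intuition about the asymmetric link mentioned above, the sketch below evaluates the GEV distribution function as the inverse link for a rare binary outcome; the shape parameter and predictor values are assumptions, and the spatial lag and Gibbs-sampling estimation of the SGEV model are not reproduced.

```python
# Minimal sketch (illustrative parameters): GEV inverse link for a rare binary outcome.
import numpy as np

def gev_response(eta, xi=-0.3):
    """P(y = 1 | eta) using the GEV distribution function as the inverse link.
    xi is an assumed shape parameter; the expression is valid where 1 + xi*eta > 0."""
    return np.exp(-(1 + xi * eta) ** (-1 / xi))

eta = np.linspace(-3, 3, 7)            # illustrative linear-predictor values
print(np.round(gev_response(eta), 3))  # rises asymmetrically, unlike logit or probit
```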

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Keywords

Article
Publication date: 22 August 2023

Xunfa Lu, Jingjing Sun, Guo Wei and Ching-Ter Chang

The purpose of this paper is to investigate dynamics of causal interactions and financial risk contagion among BRICS stock markets under rare events.

Abstract

Purpose

The purpose of this paper is to investigate dynamics of causal interactions and financial risk contagion among BRICS stock markets under rare events.

Design/methodology/approach

Two methods are adopted: the Liang causality analysis, a new causal inference technique based on information flow theory, and the dynamic causal index (DCI), which is used to measure financial risk contagion.
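
For readers unfamiliar with the technique named above, the sketch below implements a bivariate covariance-based estimator of Liang's information flow on synthetic series; the variable names and data are assumptions, and the rolling-window causal networks and DCI computed in the paper are not reproduced.

```python
# Minimal sketch (synthetic series): bivariate Liang information flow T(2 -> 1),
# estimated from sample covariances of the series and the forward difference of x1.
import numpy as np

def liang_causality(x1, x2, dt=1.0):
    """Estimated rate of information flow from series x2 to series x1."""
    dx1 = (x1[1:] - x1[:-1]) / dt                     # Euler-forward difference of x1
    C = np.cov(np.vstack([x1[:-1], x2[:-1], dx1]))
    c11, c22, c12 = C[0, 0], C[1, 1], C[0, 1]
    c1d1, c2d1 = C[0, 2], C[1, 2]
    return (c11 * c12 * c2d1 - c12 ** 2 * c1d1) / (c11 ** 2 * c22 - c11 * c12 ** 2)

# Synthetic pair in which x2 drives x1 with a one-step lag.
rng = np.random.default_rng(0)
n = 5000
x1, x2 = np.zeros(n), np.zeros(n)
e1, e2 = rng.normal(size=n), rng.normal(size=n)
for t in range(1, n):
    x2[t] = 0.8 * x2[t - 1] + e2[t]                    # autonomous driver
    x1[t] = 0.6 * x1[t - 1] + 0.5 * x2[t - 1] + e1[t]  # driven by lagged x2
print(liang_causality(x1, x2), liang_causality(x2, x1))  # flow from x2 to x1 dominates
```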

Findings

The causal relationships among the BRICS stock markets estimated by the Liang causality analysis are significantly stronger in the mid-periods of rare events than in the pre- and post-periods. Moreover, different rare events have heterogeneous effects on the causal relationships. Notably, under rare events, there is almost no significant Liang causality between the Chinese market and the other four stock markets, except for a few moments, indicating that the former can provide a relatively safe haven within the BRICS. According to the DCIs, the causal linkages increase significantly during rare events, implying that connectivity among the markets becomes stronger under extreme conditions.

Practical implications

The results not only provide important implications for investors seeking to allocate regional financial assets reasonably, but also offer suggestions for policymakers and financial regulators on effective supervision, especially in extreme environments.

Originality/value

This paper uses the Liang causality analysis to construct the causal networks among BRICS stock indices and characterize their causal linkages. Furthermore, the DCI derived from the causal networks is applied to measure the financial risk contagion of the BRICS countries under three rare events.

Details

International Journal of Emerging Markets, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-8809

Keywords

Article
Publication date: 14 January 2020

Sanjeewa Wickramaratne, S. Chan Wirasinghe and Janaka Ruwanpura

Based on the existing provisions/operations of tsunami warning in the Indian Ocean, authors observed that detection as well as arrival time estimations of regional tsunami service…

Abstract

Purpose

Based on the existing provisions and operations of tsunami warning in the Indian Ocean, the authors observed that both detection and arrival time estimation by regional tsunami service providers (RTSPs) could be improved. In particular, detection mechanisms have been disproportionately focused on Sunda and Makran tsunamis, even though the Carlsberg Ridge and the Chagos archipelago could generate devastating tsunamis for which inadequate provisions exist for detection and arrival time/wave height estimation. RTSPs assess estimated arrival times and wave heights from a scenario-based, pre-simulated database, and these estimates for Sri Lanka have been found to be inconsistent. In addition, the current warning mechanism poorly manages non-seismic tsunamis. Thus, the purpose of this study is to investigate these drawbacks and propose a series of suggestions to address them.

Design/methodology/approach

The work began with data retrieved from global earthquake and tsunami databases, followed by an estimation of the probabilities of tsunamis in the Indian Ocean, with particular emphasis on Carlsberg and Chagos tsunamis. Second, the probability of tsunami detection in each sub-region was estimated using available tide gauge and tsunami buoy data. Third, the difficulties of tsunami detection in the Indian Ocean are critically assessed with case studies, followed by recommendations to improve detection and warning.

Findings

Probabilistic estimates show that, given the occurrence of a significant earthquake, both the Makran and Carlsberg/Chagos regions are more likely to harbour a tsunami than the Sunda subduction zone. Meanwhile, reliability figures for tsunami buoys have declined from 79-92 to 68-91 per cent over the past eight years. In addition, a Chagos tsunami would be detected by only one tide gauge before reaching the Sri Lankan coast.

Research limitations/implications

The study uses an average tsunami speed of 882 km/h based on the 2004 Asian tsunami. Using exact bathymetric data, tsunamis could be simulated to derive speeds and arrival times more accurately; however, such refinements would not change the main derivations and conclusions of this study.
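
As a toy illustration of how the averaged speed above translates into an arrival-time estimate, the calculation below uses a purely hypothetical source-to-coast distance; it is not a figure from the study.

```python
# Toy arrival-time estimate: travel time = distance / speed.
avg_speed_kmh = 882        # averaged tsunami speed used in the study
distance_km = 1600         # hypothetical source-to-coast distance (not from the study)
print(f"Estimated travel time: {distance_km / avg_speed_kmh:.1f} h")  # ~1.8 h
```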

Practical implications

Tsunami detection and warning in the Indian Ocean region have room for improvement, given the inadequate detection levels for Carlsberg and Chagos tsunamis and the inconsistent warnings of regional tsunami service providers. The authors attempt to remedy these drawbacks with a series of suggestions, including the deployment of a new tsunami buoy south of the Maldives, the revival of offline buoys, real-time tsunami simulations and a strategy to deal with landslide tsunamis.

Social implications

The Indian Ocean is prone to mega tsunamis, as witnessed in 2004, and more than 50 per cent of people in Indian Ocean rim countries dwell near the coast, as underscored by the deaths of 227,898 people in 14 countries during the 2004 tsunami. Thus, it is of paramount importance that sufficient detection levels are maintained throughout the Indian Ocean without being overly biased towards Sunda tsunamis. For Sri Lanka, a Makran, Carlsberg or Chagos tsunami could directly hit the most populated west coast and bring about far worse repercussions than a Sunda tsunami.

Originality/value

This is the first instance in which the threats from Carlsberg and Chagos tsunamis to Sri Lanka are discussed, the probabilities of tsunamis are quantified and their detection levels assessed. In addition, the reliability levels of tsunami buoys and tide gauges in the Indian Ocean are recomputed after eight years, revealing a drop in the reliability of buoy data. The work also proposes a unique approach to handling inconsistencies in the bulletins of regional tsunami service providers and to upholding and improving dwindling interest in tsunami buoys.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 11 no. 2
Type: Research Article
ISSN: 1759-5908

Keywords

Article
Publication date: 27 November 2020

Hoda Daou

Social media is characterized by its volume, its speed of generation and its easy and open access; all this making it an important source of information that provides valuable…

Abstract

Purpose

Social media is characterized by its volume, its speed of generation and its easy and open access, all of which make it an important source of information that provides valuable insights. Content characteristics such as valence and emotions play an important role in the diffusion of information; in fact, emotions can shape the virality of topics in social media. The purpose of this research is to fill the gap in event detection applied to online content by incorporating sentiment, more specifically strong sentiment, as the main attribute in identifying relevant content.

Design/methodology/approach

The study proposes a methodology based on strong sentiment classification using machine learning and an advanced scoring technique.

Findings

The results show the following key findings: the proposed methodology is able to automatically capture trending topics and achieve better classification compared to state-of-the-art topic detection algorithms. In addition, the methodology is not context specific; it is able to successfully identify important events from various datasets within the context of politics, rallies, various news and real tragedies.

Originality/value

This study fills the gap in topic detection applied to online content by building on the assumption that important events trigger strong sentiment in society. In addition, classic topic detection algorithms require tuning of the number of topics to search for. The proposed methodology scores the posts and thus does not require limiting the number of topics; it also allows ordering topics by relevance based on the score.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-12-2019-0373

Details

Online Information Review, vol. 45 no. 1
Type: Research Article
ISSN: 1468-4527

Keywords
