Search results

1 – 10 of 36
Open Access
Article
Publication date: 29 July 2020

T. Mahalingam and M. Subramoniam

Abstract

Surveillance is an emerging area of current technology, playing a vital role in monitoring activities in every corner of the world. Within surveillance, detecting and tracking moving objects by means of computer vision techniques is a major component, and moving object detection is the initial step in many video analysis applications. The main drawback of existing object tracking methods is that they become time-consuming when the video contains a high volume of information, and choosing the optimal tracking technique for such large volumes of data raises further issues. The situation becomes worse when the tracked object changes orientation over time, and predicting multiple objects at the same time is also difficult. To overcome these issues, we propose a robust video object detection and tracking technique. The proposed technique is divided into three phases, namely a detection phase, a tracking phase and an evaluation phase. The detection phase comprises foreground segmentation and noise reduction: a Mixture of Adaptive Gaussians (MoAG) model is proposed to achieve efficient foreground segmentation, and a fuzzy morphological filter is implemented to remove the noise present in the foreground-segmented frames. Moving object tracking is achieved by blob detection, which constitutes the tracking phase. Finally, the evaluation phase covers feature extraction and classification: texture-based and quality-based features are extracted from the processed frames and passed to a classifier, for which we use J48, a decision-tree-based classifier. The performance of the proposed technique is compared with the existing k-NN and MLP techniques in terms of precision, recall, F-measure and ROC.
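
The detection and tracking phases described above can be sketched in outline. The snippet below is an illustrative stand-in, not the authors' implementation: it models each pixel with a single adaptive Gaussian (the paper's MoAG uses a mixture per pixel) and finds moving-object blobs via 4-connected component labelling; all function names and thresholds are assumptions.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05):
    """Per-pixel adaptive Gaussian background update (simplified stand-in
    for a mixture-of-Gaussians model)."""
    diff = frame - mean
    mean = mean + alpha * diff
    var = (1 - alpha) * var + alpha * diff ** 2
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """Pixels deviating more than k standard deviations from the
    background model are flagged as foreground."""
    return np.abs(frame - mean) > k * np.sqrt(var + 1e-6)

def blobs(mask):
    """4-connected component labelling: returns a list of components,
    each a list of (row, col) pixel coordinates."""
    seen = np.zeros_like(mask, dtype=bool)
    out = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, comp = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                out.append(comp)
    return out
```

In practice the background model would be updated frame by frame and the blob centroids linked across frames to form object tracks.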

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 22 September 2023

Nengsheng Bao, Yuchen Fan, Chaoping Li and Alessandro Simeone

Abstract

Purpose

Lubricating oil leakage is a common issue at thermal power plant operation sites, requiring prompt equipment maintenance. Real-time detection of leakage occurrences could avoid the disruptive consequences caused by a lack of timely maintenance. Currently, inspection operations are mostly carried out manually, resulting in time-consuming processes prone to health and safety hazards. To overcome such issues, this paper proposes a machine vision-based inspection system aimed at automating oil leakage detection to improve maintenance procedures.

Design/methodology/approach

The approach develops a novel modular-structured automatic inspection system. The image acquisition module collects digital images along a predefined inspection path using a dual-light (i.e. ultraviolet and blue light) illumination system, exploiting the fluorescence of the lubricating oil while suppressing unwanted background noise. The image processing module is designed to detect oil leakage within the digital images while minimizing detection errors. A case study is reported to validate the industrial suitability of the proposed inspection system.
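
As a rough illustration of the dual-light idea, the sketch below flags pixels that are bright under UV excitation but not under the reference illumination; the function names, threshold and decision rule are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def leak_mask(uv_img, ref_img, thresh=60):
    """Flag pixels that are much brighter under UV excitation than under
    the reference illumination -- a crude proxy for oil fluorescence.
    Both inputs are 2-D grayscale arrays (0-255)."""
    return (uv_img.astype(int) - ref_img.astype(int)) > thresh

def leak_present(mask, min_pixels=20):
    """Simple decision rule: report a leak if enough pixels fluoresce."""
    return int(mask.sum()) >= min_pixels
```

A real pipeline would add noise suppression and region filtering before the decision rule, but the differential-illumination principle is the same.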

Findings

On-site experimental results demonstrate the system's capability to complete the automatic inspection procedure on the tested industrial equipment, achieving an oil leakage detection accuracy of up to 99.13%.

Practical implications

The proposed inspection system can be adopted in industrial contexts to detect lubricant leakage, ensuring both equipment and operator safety.

Originality/value

The proposed inspection system adopts a computer vision approach that combines two separate light sources to boost detection capability, enabling application in a variety of particularly hard-to-inspect industrial contexts.

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 5
Type: Research Article
ISSN: 1355-2511

Open Access
Article
Publication date: 16 July 2020

Loris Nanni, Stefano Ghidoni and Sheryl Brahnam

Abstract

This work presents a system based on an ensemble of Convolutional Neural Networks (CNNs) and descriptors for bioimage classification that has been validated on different datasets of color images. The proposed system represents a very simple yet effective way of boosting the performance of trained CNNs by composing multiple CNNs into an ensemble and combining scores by sum rule. Several types of ensembles are considered, with different CNN topologies along with different learning parameter sets. The proposed system not only exhibits strong discriminative power but also generalizes well over multiple datasets thanks to the combination of multiple descriptors based on different feature types, both learned and handcrafted. Separate classifiers are trained for each descriptor, and the entire set of classifiers is combined by sum rule. Results show that the proposed system obtains state-of-the-art performance across four different bioimage and medical datasets. The MATLAB code of the descriptors will be available at https://github.com/LorisNanni.
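
The score-level fusion the authors describe (combining scores by sum rule) reduces to a few lines; this sketch assumes each model already yields an (n_samples x n_classes) probability matrix, which is an assumption about the interface rather than the paper's code.

```python
import numpy as np

def sum_rule_predict(score_mats):
    """Sum-rule fusion: add the class-probability matrices produced by the
    individual CNNs/classifiers and take the argmax of the summed scores."""
    total = np.sum(score_mats, axis=0)   # element-wise sum over models
    return total.argmax(axis=1)
```

With two models whose softmax outputs disagree on a sample, the fused prediction simply follows the larger combined evidence.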

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 24 June 2021

Paolo Canonico, Ernesto De Nito, Vincenza Esposito, Gerarda Fattoruso, Mario Pezzillo Iacono and Gianluigi Mangia

Abstract

Purpose

The paper focuses on how knowledge visualization supports the development of a particular multiobjective decision-making problem as a portfolio optimization problem in the context of interorganizational collaboration between universities and a large automotive company. This paper fits with the emergent knowledge visualization literature because it helps to explain decision-making related to the development of a multiobjective optimization model in Lean Product Development settings. We investigate how using ad hoc visual tools supports knowledge translation and knowledge sharing, enhancing managerial judgment and decision-making.

Design/methodology/approach

The empirical case in this study concerns the setting up of a multiobjective decision-making model as a portfolio optimization problem to analyze and select alternatives for upgrading the lean production process quality at an FCA plant.
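
A portfolio optimization problem of this kind can be scalarized by a weighted sum of objectives under a budget constraint. The sketch below is a deliberately simplified, hypothetical illustration (greedy selection; the names, weights and data are invented), not the model used in the study.

```python
def select_portfolio(alternatives, weights, budget):
    """Score each upgrade alternative by a weighted sum of its objective
    values, then greedily pick the best-scoring ones that fit the budget."""
    scored = sorted(alternatives,
                    key=lambda a: -sum(w * a[k] for k, w in weights.items()))
    chosen, spent = [], 0.0
    for a in scored:
        if spent + a["cost"] <= budget:
            chosen.append(a["name"])
            spent += a["cost"]
    return chosen
```

A genuine multiobjective model would expose the full Pareto front rather than a single scalarized solution; the weighted sum is only the simplest entry point.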

Findings

The study shows how knowledge visualization and the associated tools work to enable knowledge translation and knowledge sharing, supporting decision-making. The empirical findings show why and how knowledge visualization can be used to foster knowledge translation and sharing among individuals and from individuals to groups. Knowledge visualization is understood as both a collective and interactional process and a systematic approach where different players translate their expertise, share a framework and develop common ground to support decision-making.

Originality/value

From a theoretical perspective, the paper expands the understanding of knowledge visualization as a system of practices that support the development of a multiobjective decision-making method. From an empirical point of view, our results may be useful to other firms in the automotive industry and for academics wishing to develop applied research on portfolio optimization.

Details

Management Decision, vol. 60 no. 4
Type: Research Article
ISSN: 0025-1747

Open Access
Article
Publication date: 4 August 2020

Alaa Tharwat

Abstract

Independent component analysis (ICA) is a widely used blind source separation technique that has been applied in many fields. ICA is usually utilized as a black box, without understanding its internal details. Therefore, in this paper, the basics of ICA are presented to show how it works, serving as a comprehensive source for researchers interested in this field. The paper starts by introducing the definition and underlying principles of ICA. Different numerical examples are then demonstrated in a step-by-step approach to explain the preprocessing steps of ICA and the mixing and unmixing processes. Moreover, different ICA algorithms, challenges and applications are presented.
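
The preprocessing steps such a tutorial walks through (centering and whitening) can be reproduced in a few lines of NumPy; this is a generic sketch of standard ICA preprocessing, not code from the paper.

```python
import numpy as np

def center(X):
    """Remove the mean of each component (row) of the data matrix."""
    return X - X.mean(axis=1, keepdims=True)

def whiten(X):
    """Whiten centered data via eigendecomposition of the covariance
    matrix, so the transformed data has identity covariance -- the
    standard ICA preprocessing step before estimating the unmixing
    matrix."""
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    return (E @ np.diag(d ** -0.5) @ E.T) @ X
```

After whitening, the remaining unmixing matrix an ICA algorithm must estimate is orthogonal, which is what makes the preprocessing worthwhile.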

Details

Applied Computing and Informatics, vol. 17 no. 2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 24 October 2018

Samuel Evans, Eric Jones, Peter Fox and Chris Sutcliffe

Abstract

Purpose

This paper aims to introduce a novel method for the analysis of open cell porous components fabricated by laser-based powder bed metal additive manufacturing (AM) for the purpose of quality control. This method uses photogrammetric analysis: the extraction of geometric information from an image through the use of algorithms. By applying this technique to porous AM components, a rapid, low-cost inspection of geometric properties such as material thickness and pore size is achieved. Such measurements take on greater importance as production of porous additive manufactured orthopaedic devices increases, causing slower and more expensive methods of analysis to become impractical.

Design/methodology/approach

Here the development of the photogrammetric method is discussed and compared to standard techniques including scanning electron microscopy, micro computed tomography scanning and the recently developed focus variation (FV) imaging. The system is also validated against test graticules and simple wire geometries of known size, prior to the more complex orthopaedic structures.
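
The kind of pixel-counting measurement photogrammetric analysis relies on can be illustrated with a toy example: count foreground pixels per image column across a strut and convert through a scale calibrated against a test graticule. Names and numbers here are hypothetical.

```python
import numpy as np

def strut_thickness_mm(binary_img, mm_per_pixel):
    """Estimate the mean thickness of a horizontal strut in a binary image
    by counting foreground pixels per column and converting via the
    calibrated image scale (e.g. from a test graticule)."""
    cols = binary_img.sum(axis=0)
    cols = cols[cols > 0]            # ignore columns with no material
    return float(cols.mean()) * mm_per_pixel
```

Repeating such a measurement over many images gives the repeatability (deviation) figures the comparison against SEM, micro-CT and FV imaging rests on.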

Findings

The photogrammetric method shows an ability to analyse the variability in build fidelity of AM porous structures for use in inspection purposes to compare component properties. While measured values for material thickness and pore size differed from those of other techniques, the new photogrammetric technique demonstrated a low deviation when repeating measurements, and was able to analyse components at a much faster rate and lower cost than the competing systems, with less requirement for specific expertise or training.

Originality/value

The advantages demonstrated by the image-based technique described indicate the system to be suitable for implementation as a means of in-line process control for quality and inspection applications, particularly for high-volume production where existing methods would be impractical.

Details

Rapid Prototyping Journal, vol. 24 no. 8
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many application areas such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a single model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model intended to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark image segmentation data sets are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkley Segmentation Benchmark Data set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmarking data sets.

Design/methodology/approach

The proposed PPSM combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region.

Findings

On the basis of the achieved results, it can be observed that the proposed PPSM-minimum cross-entropy thresholding (PPSM-MCET)-based segmentation model is a robust, accurate and highly consistent method with high performance.

Originality/value

A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate, high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computation. The proposed model might be used as a valuable tool in many application areas such as health-care systems, pattern recognition, traffic control and surveillance systems.
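
The MCET component can be sketched as a brute-force search over grey levels for the threshold minimizing Li's cross-entropy criterion. This illustrates the thresholding step only (no Gaussian/lognormal/gamma combination and no parallel boosting), and all names are assumptions.

```python
import numpy as np

def mcet_threshold(img, levels=256):
    """Brute-force minimum cross-entropy threshold (Li's criterion):
    pick t minimising -m1*log(mu1) - m2*log(mu2), where m1, m2 are the
    first moments and mu1, mu2 the mean grey levels of the two classes."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    g = np.arange(levels)
    best_t, best_eta = 1, np.inf
    for t in range(1, levels):
        h1, h2 = hist[:t], hist[t:]
        if h1.sum() == 0 or h2.sum() == 0:
            continue                     # both classes must be non-empty
        m1, m2 = (g[:t] * h1).sum(), (g[t:] * h2).sum()
        mu1, mu2 = m1 / h1.sum(), m2 / h2.sum()
        eta = 0.0
        if m1 > 0:
            eta -= m1 * np.log(mu1)
        if m2 > 0:
            eta -= m2 * np.log(mu2)
        if eta < best_eta:
            best_eta, best_t = eta, t
    return best_t
```

The parallelization the paper proposes would distribute this per-threshold evaluation, which is embarrassingly parallel.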

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 17 July 2020

Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier

Abstract

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20-second segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by sum rule. These results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
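
The Bag-of-Features step the abstract mentions can be illustrated as nearest-codeword assignment followed by histogram normalization; the codebook here is given rather than learned (k-means in practice), and all names are assumptions rather than the authors' code.

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword (Euclidean
    distance) and return the normalised occurrence histogram -- the
    fixed-length vector a classifier such as an SVM can consume."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    hist = np.bincount(nearest, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Each video segment thus becomes one histogram per descriptor type, and the per-descriptor SVM scores are then fused by sum rule.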

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 3 August 2020

Jihad Maulana Akbar and De Rosal Ignatius Moses Setiadi

Abstract

Current technology makes it easy for humans to capture an image and convert it to digital content, but sometimes additional noise makes the image look damaged. Damage that often occurs, such as blurring and excessive noise in digital images, can affect both the meaning and the quality of the image. Image restoration is the process used to return an image to its original state before the damage occurred. In this research, we propose an image restoration method combining the Wavelet transformation and the Akamatsu transformation. Based on previous research, Akamatsu's transformation works well only on blurred images. So as not to focus solely on blurry images, Akamatsu's transformation is applied to the high-low (HL), low-high (LH) and high-high (HH) subbands of the Wavelet transformation. The results of the proposed method are compared with previous methods, using PSNR as the measure of restoration quality. The results show that the proposed method improves restoration quality on noisy images, such as those with Gaussian and salt-and-pepper noise, and also works well on blurred images. The average improvement is around 2 dB based on the PSNR calculation.
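
A single-level 2-D Haar decomposition shows where the LH, HL and HH detail subbands come from; this is a generic Haar sketch (subband naming conventions vary between libraries), not the authors' code.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar wavelet decomposition: returns the LL
    approximation and the LH, HL, HH detail subbands that the
    restoration step would process. Image dimensions must be even."""
    a = (img[0::2] + img[1::2]) / 2.0   # averages along rows
    d = (img[0::2] - img[1::2]) / 2.0   # details along rows
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH
```

The proposed method would modify the three detail subbands (via Akamatsu's transformation) and then invert the decomposition to reconstruct the restored image.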

Details

Applied Computing and Informatics, vol. 19 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 14 August 2020

F.J. Farsana, V.R. Devi and K. Gopakumar

Abstract

This paper introduces an audio encryption algorithm based on permutation of audio samples using a discrete modified Henon map, followed by a substitution operation with a keystream generated from a modified Lorenz hyperchaotic system. In this work, the audio file is initially compressed by the Fast Walsh Hadamard Transform (FWHT) to remove residual intelligibility in the transform domain. The resulting file is then encrypted in two phases. In the first phase, a permutation operation is carried out using the modified discrete Henon map to weaken the correlation between adjacent samples. In the second phase, the modified Lorenz hyperchaotic system is used for the substitution operation, filling the silent periods within the speech conversation. A dynamic keystream generation mechanism is also introduced to enhance the correlation between plaintext and encrypted text. Various quality metrics, such as correlation, signal-to-noise ratio (SNR), differential attacks, spectral entropy, histogram analysis, keyspace and key sensitivity, are analyzed to evaluate the quality of the proposed algorithm. The simulation results and numerical analyses demonstrate that the proposed algorithm offers excellent security performance and is robust against various cryptographic attacks.
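
The permutation phase can be sketched with the classic Henon map (the paper uses a modified discrete variant): iterate the map, argsort the resulting chaotic sequence to obtain a permutation, and invert the permutation on decryption. Parameters and initial conditions below are illustrative, not the paper's key schedule.

```python
import numpy as np

def henon_permutation(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Generate a sample permutation by iterating the Henon map and
    argsorting the chaotic sequence (classic chaos-based permutation)."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        xs[i] = x
    return np.argsort(xs)

def permute(samples, perm):
    """Scramble the sample order (encryption-side permutation)."""
    return samples[perm]

def unpermute(scrambled, perm):
    """Invert the permutation (decryption side)."""
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out
```

The same initial conditions regenerate the same permutation, so only the key (initial conditions and map parameters) needs to be shared.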

Details

Applied Computing and Informatics, vol. 19 no. 3/4
Type: Research Article
ISSN: 2634-1964
