Search results

1 – 10 of 80
Open Access
Article
Publication date: 4 May 2021

Loris Nanni and Sheryl Brahnam

Abstract

Purpose

Automatic DNA-binding protein (DNA-BP) classification is now an essential proteomic technology. Unfortunately, many systems reported in the literature are tested on only one or two datasets/tasks. The purpose of this study is to create an optimal and universal system for DNA-BP classification, one that performs competitively across several DNA-BP classification tasks.

Design/methodology/approach

Efficient DNA-BP classifier systems require the discovery of powerful protein representations and feature extraction methods. Experiments were performed that combined and compared descriptors extracted from state-of-the-art matrix/image protein representations. These descriptors were trained on separate support vector machines (SVMs) and evaluated. Convolutional neural networks with different parameter settings were fine-tuned on two matrix representations of proteins. Decisions were fused with the SVMs using the weighted sum rule and evaluated to experimentally derive the most powerful general-purpose DNA-BP classifier system.
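As a rough illustration of the decision-level fusion described above, the following Python sketch applies the weighted sum rule to per-class scores produced by two hypothetical classifiers (an SVM and a CNN). The weights, class counts and score values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_sum_fusion(score_list, weights):
    """Fuse per-class score matrices (n_samples x n_classes) by the weighted sum rule."""
    assert len(score_list) == len(weights)
    fused = np.zeros_like(score_list[0], dtype=float)
    for scores, w in zip(score_list, weights):
        # Normalize each classifier's scores to [0, 1] before weighting
        s = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
        fused += w * s
    return fused.argmax(axis=1)  # predicted class index per sample

# Illustrative scores for 3 samples, 2 classes (DNA-binding vs. non-binding)
svm_scores = np.array([[0.2, 0.8], [0.6, 0.4], [0.1, 0.9]])
cnn_scores = np.array([[0.3, 0.7], [0.7, 0.3], [0.4, 0.6]])
print(weighted_sum_fusion([svm_scores, cnn_scores], weights=[1.0, 1.0]))
```

Equal weights reduce the scheme to the plain sum rule; in practice the weights would be tuned on validation data.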

Findings

The best ensemble proposed here produced comparable, if not superior, classification results on a broad and fair comparison with the literature across four different datasets representing a variety of DNA-BP classification tasks, thereby demonstrating both the power and generalizability of the proposed system.

Originality/value

Most DNA-BP methods proposed in the literature are only validated on one (rarely two) datasets/tasks. In this work, the authors report the performance of their general-purpose DNA-BP system on four datasets representing different DNA-BP classification tasks. The excellent results of the best classifier system demonstrate the power of the proposed approach. These results can now be used for baseline comparisons by other researchers in the field.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 29 July 2020

T. Mahalingam and M. Subramoniam

Abstract

Surveillance is an emerging field in current technology, as it plays a vital role in monitoring activities in every corner of the world. Within surveillance, identifying and tracking moving objects by means of computer vision techniques is a major task, and moving object detection is the initial step in many video analysis applications. The main drawback of existing object tracking methods is that they become time-consuming when the video contains a high volume of information, and choosing the optimum tracking technique for such a large volume of data raises further issues. The situation becomes worse when the tracked object changes orientation over time, and it is also difficult to track multiple objects at the same time. To overcome these issues, we propose a robust video object detection and tracking technique. The proposed technique is divided into three phases, namely a detection phase, a tracking phase and an evaluation phase, in which the detection phase comprises foreground segmentation and noise reduction. A Mixture of Adaptive Gaussians (MoAG) model is proposed to achieve efficient foreground segmentation, and a fuzzy morphological filter is implemented to remove the noise present in the foreground-segmented frames. Moving object tracking is achieved by blob detection, which falls under the tracking phase. Finally, the evaluation phase covers feature extraction and classification: texture-based and quality-based features are extracted from the processed frames and passed to a J48 (decision tree-based) classifier. The performance of the proposed technique is compared with the existing techniques k-NN and MLP in terms of precision, recall, F-measure and ROC.
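The detection and tracking phases described above can be sketched with standard building blocks. In the hypothetical Python example below, OpenCV's MOG2 Gaussian-mixture background subtractor stands in for the paper's MoAG model, a plain morphological opening stands in for the fuzzy morphological filter, and contour extraction provides the blob detection; the evaluation phase (feature extraction and J48 classification) is omitted.

```python
import cv2

def detect_moving_objects(video_path):
    """Sketch: Gaussian-mixture foreground segmentation, noise removal, blob detection."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                         # foreground segmentation
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # noise reduction
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
        # `blobs` holds (x, y, w, h) boxes of detected moving objects in this frame
    cap.release()
```

The area threshold of 500 pixels is an arbitrary value used here only to discard small noise blobs.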

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 16 July 2020

Loris Nanni, Stefano Ghidoni and Sheryl Brahnam

Abstract

This work presents a system based on an ensemble of Convolutional Neural Networks (CNNs) and descriptors for bioimage classification that has been validated on different datasets of color images. The proposed system represents a very simple yet effective way of boosting the performance of trained CNNs by composing multiple CNNs into an ensemble and combining scores by sum rule. Several types of ensembles are considered, with different CNN topologies along with different learning parameter sets. The proposed system not only exhibits strong discriminative power but also generalizes well over multiple datasets thanks to the combination of multiple descriptors based on different feature types, both learned and handcrafted. Separate classifiers are trained for each descriptor, and the entire set of classifiers is combined by sum rule. Results show that the proposed system obtains state-of-the-art performance across four different bioimage and medical datasets. The MATLAB code of the descriptors will be available at https://github.com/LorisNanni.
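A minimal sketch of the sum-rule CNN ensemble idea, written in Python with PyTorch/torchvision rather than the authors' MATLAB code: each member's softmax scores are summed and the largest fused score gives the prediction. The choice of ResNet50 and DenseNet121 as topologies is illustrative only.

```python
import torch
import torchvision.models as models

# Hypothetical two-member ensemble; weights=None keeps the sketch self-contained,
# whereas the paper's members would be pre-trained and fine-tuned on the bioimage data.
members = [models.resnet50(weights=None), models.densenet121(weights=None)]

def sum_rule_predict(batch):
    """Sum the softmax scores of all ensemble members and take the argmax per sample."""
    with torch.no_grad():
        scores = [torch.softmax(member.eval()(batch), dim=1) for member in members]
    return torch.stack(scores).sum(dim=0).argmax(dim=1)

# Usage: sum_rule_predict(torch.randn(4, 3, 224, 224)) returns four predicted class indices.
```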

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 17 July 2020

Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier

Abstract

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos 20 s segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by sum rule. The resulting predictions are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
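As a sketch of the Bag-of-Features step mentioned above (not of the GOLD descriptors themselves), the following Python example, assuming scikit-learn, builds a k-means codebook over local descriptors, encodes each video segment as a codeword histogram and trains an SVM on the histograms; the descriptor dimensionality, codebook size and labels are made-up values.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bag_of_features(descriptor_sets, codebook):
    """Encode each video segment's local descriptors as a codeword histogram."""
    hists = []
    for descs in descriptor_sets:
        words = codebook.predict(descs)
        h, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1), density=True)
        hists.append(h)
    return np.vstack(hists)

# Illustrative data: 6 segments, each with 50 random 32-D local descriptors
rng = np.random.default_rng(0)
segments = [rng.normal(size=(50, 32)) for _ in range(6)]
labels = np.array([0, 1, 0, 1, 0, 1])             # 0 = nopain, 1 = pain (toy labels)
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(segments))
X = bag_of_features(segments, codebook)
clf = SVC(probability=True).fit(X, labels)
```

In the paper, one such SVM is trained per descriptor and the ensemble decisions are fused by sum rule.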

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 1 February 2018

Xuhui Ye, Gongping Wu, Fei Fan, XiangYang Peng and Ke Wang

Abstract

Purpose

Accurate detection of the overhead ground wire in open surroundings with varying illumination is the premise of reliable line grasping with the off-line arm when the inspection robot crosses obstacles automatically. This paper aims to propose an improved approach, called adaptive homomorphic filtering and supervised learning (AHSL), for overhead ground wire detection.

Design/methodology/approach

First, to decrease the influence of the varying illumination caused by the open work environment of the inspection robot, an adaptive homomorphic filter is introduced to compensate for the changing illumination. Second, to represent the ground wire more effectively and to extract more powerful and discriminative information for building a binary classifier, a global and local feature fusion method followed by a supervised learning method, the support vector machine, is proposed.
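The homomorphic filtering step can be sketched as follows. This is a fixed-parameter Gaussian high-emphasis version written with NumPy; the paper's adaptive parameter selection is not reproduced, and gamma_l, gamma_h, c and d0 are illustrative defaults.

```python
import numpy as np

def homomorphic_filter(gray, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    """Suppress slowly varying illumination and boost reflectance detail."""
    rows, cols = gray.shape
    log_img = np.log1p(gray.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    # Gaussian high-emphasis transfer function: attenuate low, boost high frequencies
    h = (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / (d0 ** 2))) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(h * spectrum)).real
    return np.expm1(filtered)

# Usage (illustrative): enhanced = homomorphic_filter(gray_image_as_2d_array)
```

Working in the log domain turns the multiplicative illumination-reflectance model into an additive one, so attenuating the low frequencies suppresses the slowly varying illumination component.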

Findings

Experimental results on two self-built testing data sets, A and B, which contain relatively older and relatively newer ground wires, respectively, as well as on field ground wires, show that the use of the adaptive homomorphic filter and the global and local feature fusion method improves the detection accuracy of the ground wire effectively. The results of the proposed method lay a solid foundation for the inspection robot grasping the ground wire by visual servoing.

Originality/value

The AHSL method achieves 80.8 per cent detection accuracy on data set A, which contains relatively older ground wires, and 85.3 per cent detection accuracy on data set B, which contains relatively newer ground wires, and the field experiment shows that the robot can detect the ground wire accurately. The performance achieved by the proposed method is state of the art in open environments with varying illumination.

Open Access
Article
Publication date: 4 August 2020

Alessandra Lumini, Loris Nanni and Gianluca Maguolo

Abstract

In this paper, we present a study about an automated system for monitoring underwater ecosystems. The system proposed here is based on the fusion of different deep learning methods. We study how to create an ensemble of different Convolutional Neural Network (CNN) models, fine-tuned on several datasets with the aim of exploiting their diversity. The aim of our study is to explore the possibility of fine-tuning CNNs for underwater imagery analysis, the opportunity of using different datasets for pre-training models, and the possibility of designing an ensemble using the same architecture with small variations in the training procedure.

Our experiments, performed on 5 well-known datasets (3 plankton and 2 coral datasets), show that the combination of such different CNN models in a heterogeneous ensemble grants a substantial performance improvement with respect to other state-of-the-art approaches in all the tested problems. One of the main contributions of this work is a wide experimental evaluation of well-known CNN architectures, reporting the performance of both single CNNs and ensembles of CNNs on different problems. Moreover, we show how to create an ensemble which improves the performance of the best single model. The MATLAB source code is freely available at the link provided on the title page.
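One way to realize the "same architecture, small training variations" idea is sketched below in PyTorch; the architecture (ResNet18), learning rates and seeds are assumptions for illustration, not the configurations used in the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def make_members(num_classes, learning_rates=(1e-3, 1e-4), seeds=(0, 1)):
    """Build ensemble members: same architecture, different seeds and learning rates."""
    members = []
    for lr in learning_rates:
        for seed in seeds:
            torch.manual_seed(seed)                      # small variation in initialization
            net = models.resnet18(weights=None)          # weights=None keeps the sketch offline
            net.fc = nn.Linear(net.fc.in_features, num_classes)
            opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
            members.append((net, opt))                   # each pair is fine-tuned separately
    return members

members = make_members(num_classes=5)   # e.g. 5 plankton classes (illustrative)
```

After separate fine-tuning, the members' softmax scores would be fused, e.g. by the sum rule.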

Details

Applied Computing and Informatics, vol. 19 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 24 July 2020

Falah Alsaqre and Osama Almathkour

Abstract

Classifying moving objects in video sequences has been extensively studied, yet it is still an ongoing problem. In this paper, we propose to solve the moving objects classification problem via an extended version of two-dimensional principal component analysis (2DPCA), named category-wise 2DPCA (CW2DPCA). A key component of the CW2DPCA is to independently construct optimal projection matrices from object-specific training datasets and produce category-wise feature spaces, wherein each feature space uniquely captures the invariant characteristics of the underlying intra-category samples. Consequently, on one hand, CW2DPCA enables early separation among the different object categories and, on the other hand, extracts effective discriminative features for representing both training datasets and test object samples in the classification model, which is a nearest neighbor classifier. For ease of exposition, we consider human/vehicle classification, although the proposed CW2DPCA-based classification framework can be easily generalized to handle multiple object categories. The experimental results prove the effectiveness of CW2DPCA features in discriminating between humans and vehicles in two publicly available video datasets.
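A minimal NumPy sketch of the category-wise 2DPCA idea, under the assumption that each category's projection matrix comes from the top eigenvectors of that category's image scatter matrix and that a nearest-neighbor rule in the projected space does the classification; the function names and number of components are illustrative.

```python
import numpy as np

def train_2dpca(images, n_components=5):
    """Per-category 2DPCA: projection matrix from the image scatter matrix."""
    mean_img = np.mean(images, axis=0)
    scatter = sum((a - mean_img).T @ (a - mean_img) for a in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(scatter)
    return eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]  # top eigenvectors

def cw2dpca_classify(test_img, category_data):
    """Project the test image into each category-wise feature space; nearest neighbor wins."""
    best = None
    for label, (proj, train_imgs) in category_data.items():
        feats = test_img @ proj
        dist = min(np.linalg.norm(feats - (a @ proj)) for a in train_imgs)
        if best is None or dist < best[1]:
            best = (label, dist)
    return best[0]

# category_data example: {"human": (train_2dpca(h_imgs), h_imgs),
#                         "vehicle": (train_2dpca(v_imgs), v_imgs)}
```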

Details

Applied Computing and Informatics, vol. 18 no. 1/2
Type: Research Article
ISSN: 2210-8327

Open Access
Article
Publication date: 21 December 2023

Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye

Abstract

Purpose

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Compared with traditional methods, deep learning approaches have gained popularity in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach that dynamically refines and amplifies model features to further elevate diagnostic capabilities. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.

Design/methodology/approach

To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
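For reference, a compact PyTorch sketch of a CBAM block in its standard formulation (channel attention followed by spatial attention) is given below; how and where such blocks are inserted into ResNet50 in this particular study is not specified here, so the placement noted in the final comment is an assumption.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM sketch: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over global average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# In a ResNet50-CBAM, such a block would typically follow each residual stage (assumption).
```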

Findings

The ResNet50-CBAM outperformed existing deep learning classification methods such as the convolutional neural network (CNN), achieving a superior performance of 99.43%, 99.01%, 98.7% and 99.25% in accuracy, recall, precision and AUC, respectively, when compared to the existing classification methods on the same dataset.

Practical implications

Since the ResNet-CBAM fusion can capture spatial context while enhancing feature representation, it can be integrated into brain classification software platforms for physicians, supporting enhanced clinical decision-making and improved brain tumor classification.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 2 August 2019

Yazhou Mao, Yang Jianxi, Xu Wenjing and Liu Yonggang

Abstract

Purpose

The purpose of this paper is to investigate the effect of round pit arrangement patterns on the tribological properties of journal bearings. In this paper, the tribological behaviors of journal bearings with different arrangement patterns under lubricated conditions were studied using an M-2000 friction and wear tester.

Design/methodology/approach

The friction and wear of the journal bearing contact surface were simulated in ANSYS. The wear mechanism of the bearing contact surfaces was investigated by examining the surface morphology and the friction and wear status of the journal bearing specimens with Scanning Electron Microscopy (SEM) and an Energy Dispersive Spectrometer (EDS). Besides, the wear capacity of the textured bearing was predicted using the GM(1,1) and Grey–Markov models.
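A minimal NumPy sketch of the GM(1,1) grey model used for the wear prediction is shown below; the Markov correction of the residuals (the "Grey–Markov" part) is not reproduced, and the sample wear readings are invented for illustration.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model: fit on the series x0 and forecast `steps` future values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # develop coefficient a, grey input b
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)                # restore by first-order differencing
    return x0_hat[-steps:]

# Illustrative wear readings (arbitrary units), not data from the paper
print(gm11_forecast([2.1, 2.3, 2.6, 3.0, 3.5], steps=2))
```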

Findings

As the load increases, the friction coefficient of the journal bearing specimens decreases first and then increases slowly. The higher the rotation speed, the lower the friction coefficient and the faster the temperature build-up. The main friction mode of the bearing sample is three-body friction. The existence of texture can effectively reduce friction and wear. Among the arrangement patterns, the best is the 4# bearing with a cross-arranged round pit pattern. Its texturing diameters are 60 µm and 125 µm, and the spacing and depth are 200 µm and 25 µm, respectively. In addition, the Grey–Markov model prediction result is more accurate and fits the experimental values better.

Originality/value

The identified friction and wear mechanism helps researchers and engineers understand the tribological behavior and engineering applications of textured bearings. The wear capacity of the textured bearing is predicted using the Grey–Markov model, which provides technical help and theoretical guidance for the service life and reliability of textured bearings.

Open Access
Article
Publication date: 21 July 2020

Prajowal Manandhar, Prashanth Reddy Marpu and Zeyar Aung

Abstract

We make use of Volunteered Geographic Information (VGI) data to extract the total extent of roads using remote sensing images. VGI data is often provided only as vector data represented by lines and not as full extent. Also, high geolocation accuracy is not guaranteed, and it is common to observe misalignment with the target road segments by several pixels on the images. In this work, we use the prior information provided by the VGI and extract the full road extent even if there is significant mis-registration between the VGI and the image. The method consists of image segmentation and traversal of multiple agents along the available VGI information. First, we perform image segmentation, and then we traverse through the fragmented road segments using autonomous agents to obtain a complete road map in a semi-automatic way once the seed points are defined. The road center-line in the VGI guides the process and allows us to discover and extract the full extent of the road network based on the image data. The results demonstrate the validity and good performance of the proposed road extraction method, which reflects the actual road width despite the presence of disturbances such as shadows, cars and trees, and shows the effectiveness of fusing VGI and satellite images.
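As an illustrative stand-in for the multi-agent traversal described above, the following Python sketch grows the full road extent from VGI-derived seed pixels over a binary road mask by simple 4-connected region growing; the data structures and names are assumptions, not the authors' implementation.

```python
import numpy as np
from collections import deque

def grow_road(mask, seeds):
    """Region-grow the full road extent from VGI seed pixels on a binary road mask."""
    visited = np.zeros_like(mask, dtype=bool)
    queue = deque(seeds)                       # seeds: (row, col) pixels near the VGI center-line
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
            continue
        if visited[r, c] or not mask[r, c]:
            continue
        visited[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return visited                             # full-extent road map recovered from the segmentation

# `mask` would come from image segmentation; `seeds` from the (possibly misaligned) VGI polyline.
```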

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964
