Search results

1 – 10 of 13
Article
Publication date: 28 April 2014

Seth Dillard, James Buchholz, Sarah Vigmostad, Hyunggun Kim and H.S. Udaykumar

The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian…

Abstract

Purpose

The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted.

Design/methodology/approach

Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures.
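To make the mask-to-level-set conversion concrete, the following is a minimal sketch (not the authors' code) of turning a binary segmentation mask into a signed-distance level set and lightly smoothing it before supplying it to an Eulerian/immersed boundary solver; the NumPy/SciPy calls and the smoothing width are illustrative assumptions.

```python
# Sketch only: binary mask -> signed-distance level set, then mild smoothing.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def mask_to_level_set(mask: np.ndarray, smooth_sigma: float = 1.0) -> np.ndarray:
    """Signed distance: negative inside the object, positive outside."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)        # distance to the background
    outside = distance_transform_edt(~mask)      # distance to the object
    phi = outside - inside                       # phi < 0 inside, > 0 outside
    return gaussian_filter(phi, sigma=smooth_sigma)  # smooth the interface slightly

# Usage: phi = mask_to_level_set(segmented_mask); the zero contour of phi
# approximates the segmented boundary handed to the fluid/solid solver.
```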

Findings

While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics.

Originality/value

It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight into the algorithm selection process and outlines a practical framework for generating useful geometric surfaces in an Eulerian setting.

Details

Engineering Computations, vol. 31 no. 3
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 15 October 2021

Rangayya, Virupakshappa and Nagabhushan Patil

One of the challenging issues in computer vision and pattern recognition is face image recognition. Several studies based on face recognition were introduced in the past decades…

Abstract

Purpose

One of the challenging issues in computer vision and pattern recognition is face image recognition. Several face recognition approaches have been introduced over the past decades, but they still suffer from classification issues and poor performance. Hence, the authors propose a novel model for face recognition.

Design/methodology/approach

The proposed method consists of four major stages: data acquisition, segmentation, feature extraction and recognition. Initially, the images are converted to grayscale, and pose issues are mitigated by resizing the input images. Contrast limited adaptive histogram equalization (CLAHE) is applied in the preprocessing step to eliminate unwanted noise and improve the image contrast level. Second, active contour and level set-based segmentation combined with a neural network (ALS with NN) is used for facial image segmentation. Next, four kinds of feature descriptors, capturing color and texture features, are extracted: dominant color structure descriptors, scale-invariant feature transform descriptors, improved center-symmetric local binary patterns (ICSLBP) and histograms of oriented gradients (HOG). Finally, a support vector machine (SVM) with a modified random forest (MRF) model is used for facial image recognition.
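As an illustration of the preprocessing and one of the descriptor families described above, here is a hedged sketch assuming OpenCV and scikit-image are available; the parameter values and function names are illustrative, not the authors' implementation.

```python
# Sketch: grayscale + resize + CLAHE preprocessing, then HOG features.
import cv2
from skimage.feature import hog

def preprocess_face(path: str, size=(128, 128)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscale conversion
    img = cv2.resize(img, size)                    # normalise the input size
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)                        # contrast-limited equalisation

def hog_descriptor(img):
    # One of the four descriptor families mentioned in the abstract.
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
```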

Findings

Experimentally, the performance of the proposed method is evaluated using several criteria, such as accuracy, similarity index, dice similarity coefficient, precision, recall and F-score. The proposed method offers superior recognition performance compared with other state-of-the-art methods. Face recognition was further analyzed in terms of accuracy, precision, recall and F-score, attaining 99.2%, 96%, 98% and 96%, respectively.

Originality/value

An effective facial recognition method is proposed in this work to address threats to privacy and violations of rights and to provide better data security.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 26 July 2019

Ayalapogu Ratna Raju, Suresh Pabboju and Ramisetty Rajeswara Rao

Brain tumor segmentation and classification is the interesting area for differentiating the tumorous and the non-tumorous cells in the brain and classifies the tumorous cells for…

Abstract

Purpose

Brain tumor segmentation and classification aims to differentiate tumorous from non-tumorous cells in the brain and to classify the tumorous cells in order to identify the tumor grade. The methods developed so far lack automatic classification and consume considerable time. In this work, a novel brain tumor classification approach, namely, the harmony cuckoo search-based deep belief network (HCS-DBN), is proposed. Here, the images in the database are segmented using a newly developed hybrid active contour (HAC) segmentation model, which integrates Bayesian fuzzy clustering (BFC) and the active contour model. The proposed HCS-DBN algorithm is trained with the features obtained from the segmented images. Finally, the classifier provides information about the tumor class in each slice in the database. The proposed HAC and HCS-DBN algorithms are evaluated on MRI images from the BRATS database. The simulation results show that the proposed HAC and HCS-DBN algorithms achieve overall better performance, with values of 0.945, 0.9695 and 0.99348 for accuracy, sensitivity and specificity, respectively.

Design/methodology/approach

The proposed HAC segmentation approach integrates the properties of the AC model and BFC. Initially, the brain image in its different modalities is segmented with both the BFC and AC models. Then, Laplacian correction is applied to fuse the segmented outputs of the two models, so that the proposed HAC segmentation yields error-free segments of the brain tumor regions present in the MRI image. The next step is to extract useful features, based on the scattering transform, wavelet transform and local Gabor binary pattern, from the segmented brain image. Finally, the extracted features of each segment are provided to the DBN, and the HCS algorithm chooses the optimal weights for DBN training.
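The Laplacian-correction fusion itself is specific to the paper; the sketch below only illustrates the general idea of fusing the two segmenters' per-pixel probabilities with a Laplace-style smoothing term, under assumed NumPy inputs, and is not the authors' method.

```python
# Generic illustration: fuse two per-pixel tumour probability maps.
import numpy as np

def fuse_segmentations(p_bfc: np.ndarray, p_ac: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Combine BFC and active-contour probabilities into one fused binary mask."""
    # Laplace-style correction keeps probabilities away from exact 0/1.
    p_bfc = np.clip(p_bfc, eps, 1.0 - eps)
    p_ac = np.clip(p_ac, eps, 1.0 - eps)
    # Log-odds averaging treats the two segmenters as independent experts.
    logit = 0.5 * (np.log(p_bfc / (1 - p_bfc)) + np.log(p_ac / (1 - p_ac)))
    fused = 1.0 / (1.0 + np.exp(-logit))
    return fused > 0.5          # binary tumour mask
```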

Findings

The proposed HAC with the HCS-DBN algorithm is evaluated on the standard BRATS database, and its performance is measured with metrics such as accuracy, sensitivity and specificity. The simulation results are compared against existing works such as k-NN, NN, multi-SVM and multi-SVNN. The proposed HAC with the HCS-DBN algorithm outperforms these existing works, with values of 0.945, 0.9695 and 0.99348 for accuracy, sensitivity and specificity, respectively.

Originality/value

This work presents a brain tumor segmentation and classification scheme built around the HAC segmentation model. The proposed HAC model combines BFC and the active contour model through a fusion process, using the Laplacian correction probability to segment the slices in the database.

Details

Sensor Review, vol. 39 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 30 August 2013

Vanessa El‐Khoury, Martin Jergler, Getnet Abebe Bayou, David Coquil and Harald Kosch

A fine‐grained video content indexing, retrieval, and adaptation requires accurate metadata describing the video structure and semantics to the lowest granularity, i.e. to the…

Abstract

Purpose

Fine-grained video content indexing, retrieval and adaptation require accurate metadata describing the video structure and semantics down to the lowest granularity, i.e. the object level. The authors address these requirements by proposing the semantic video content annotation tool (SVCAT) for structural and high-level semantic video annotation. SVCAT is a semi-automatic, MPEG-7 standard-compliant annotation tool, which produces metadata according to a new object-based video content model introduced in this work. Videos are temporally segmented into shots, and shot-level concepts are detected automatically using ImageNet as background knowledge. These concepts are used as a guide to easily locate and select objects of interest, which are then tracked automatically to generate object-level metadata. The integration of shot-based concept detection with object localization and tracking drastically alleviates the task of an annotator. The paper aims to discuss these issues.

Design/methodology/approach

A systematic classification of keyframes into ImageNet categories is used as the basis for automatic concept detection in temporal units. This is then followed by an object-tracking algorithm to obtain exact spatial information about the objects.
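A minimal sketch of the keyframe-to-ImageNet-concept step is shown below, using a modern pretrained torchvision classifier purely as a stand-in; the model choice, names and parameters are assumptions, not the tool's actual implementation.

```python
# Illustration: classify a shot's keyframe against ImageNet categories.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def keyframe_concepts(path: str, top_k: int = 5):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(top_k)
    # Return the top-k ImageNet category names with their probabilities.
    return [(weights.meta["categories"][int(i)], float(p))
            for p, i in zip(top.values, top.indices)]
```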

Findings

Experimental results showed that SVCAT is able to provide accurate object level video metadata.

Originality/value

The new contribution of this paper is an approach that uses ImageNet to obtain shot-level annotations automatically. This approach assists video annotators significantly by minimizing the effort required to locate salient objects in the video.

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 7 February 2021

Sengathir Janakiraman, Deva Priya M., Christy Jeba Malar A., Karthick S. and Anitha Rajakumari P.

The purpose of this paper is to design an Internet-of-Things (IoT) architecture-based Diabetic Retinopathy Detection Scheme (DRDS) proposed for identifying Type-I or Type-II…

Abstract

Purpose

The purpose of this paper is to design an Internet-of-Things (IoT) architecture-based Diabetic Retinopathy Detection Scheme (DRDS) for identifying Type-I or Type-II diabetes and, specifically, to advise Type-II diabetic patients about the possibility of vision loss.

Design/methodology/approach

The proposed DRDS includes automatic calculation of the clip-limit parameters and sub-window size, making the detection process completely adaptive. It uses an extended 5 × 5 Sobel operator to estimate the strongest edges, convolving the 24 neighbourhood pixels with eight templates to obtain 24 outputs per pixel, from which the maximum magnitude is selected. This enhances the probability of connecting pixels in the vascular map with closely located neighbourhood points in the fundus images. Then, the spatial information and kernel of the neighbourhood pixels are integrated through the Robust Semi-supervised Kernelized Fuzzy Local information C-Means Clustering (RSKFL-CMC) method to attain a significant clustering process.
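The snippet below sketches only the enhancement and extended Sobel edge step under stated assumptions (OpenCV, a fixed clip limit rather than the paper's adaptive calculation, a 5 × 5 kernel); it is not the authors' implementation.

```python
# Sketch: CLAHE enhancement of the green channel, then 5x5 Sobel edge magnitude.
import cv2
import numpy as np

def vessel_edge_map(path: str) -> np.ndarray:
    green = cv2.imread(path)[:, :, 1]                 # vessels contrast best in green
    clahe = cv2.createCLAHE(clipLimit=2.5, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    gx = cv2.Sobel(enhanced, cv2.CV_64F, 1, 0, ksize=5)   # 5x5 horizontal kernel
    gy = cv2.Sobel(enhanced, cv2.CV_64F, 0, 1, ksize=5)   # 5x5 vertical kernel
    return np.hypot(gx, gy)                               # per-pixel edge magnitude
```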

Findings

The results of the proposed DRDS architecture confirm its superiority in terms of accuracy, specificity and sensitivity. The proposed DRDS technique achieves superior performance, with an average of 99.64% accuracy, 76.84% sensitivity and 99.93% specificity.

Research limitations/implications

DRDS is proposed as a comfortable, pain-free and harmless diagnosis system that uses Dexcom G4 Platinum sensors for estimating blood glucose levels in diabetic patients. It uses the RSKFL-CMC method to integrate the spatial information and kernel of the neighbourhood pixels and thereby attain a significant clustering process.

Practical implications

The IoT architecture comprises an application layer that hosts the DR application's graphical user interface (GUI), with fundus image processing implemented in MATLAB. This layer also allows patients to store the captured fundus images in a database for future diagnosis.

Social implications

The proposed DRDS method plays a vital role in the detection of DR and in its categorization into severe, moderate and mild grades based on the intensity of the disease. The proposed DRDS helps prevent vision loss in Type-II diabetic patients through accurate detection achieved via the IoT architecture.

Originality/value

The proposed scheme and the benchmark approaches from the literature are implemented in MATLAB R2010a. The complete evaluations of the proposed scheme are conducted on the HRF, REVIEW, STARE and DRIVE data sets, with subjective quantification provided by experts for the purpose of retinal blood vessel segmentation.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 2
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 28 September 2012

Cristobal Arrieta, Sergio Uribe, Jorge Ramos‐Grez, Alex Vargas, Pablo Irarrazaval, Vicente Parot and Cristian Tejos

In medical applications, it is crucial to evaluate the geometric accuracy of rapid prototyping (RP) models. Current research on evaluating geometric accuracy has focused on…

Abstract

Purpose

In medical applications, it is crucial to evaluate the geometric accuracy of rapid prototyping (RP) models. Current research on evaluating geometric accuracy has focused on identifying two or more specific anatomical landmarks on the original structure and the RP model, and comparing their corresponding linear distances. Such accuracy metrics are ambiguous and may misrepresent the actual errors. The purpose of this paper is to propose an alternative method and metrics to measure the accuracy of RP models.

Design/methodology/approach

The authors propose an accuracy assessment composed of two complementary measures. First, a global accuracy evaluation uses volumetric intersection indexes calculated over segmented computed tomography scans of the original object and the RP model. Second, a local error metric is computed from the surfaces of the original object and the RP model. This local error is rendered on a 3D surface using a color code, which allows differentiating regions where the model is overestimated, underestimated or correctly estimated. Both global and local error measurements are performed after rigid-body registration, segmentation and triangulation.
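For concreteness, a minimal sketch of a global intersection index and a simple signed surface error is given below, assuming the original object and the RP model are available as co-registered binary voxel volumes; it is an illustration, not the authors' pipeline.

```python
# Sketch: global overlap indexes and a per-voxel signed surface error.
import numpy as np
from scipy.ndimage import distance_transform_edt

def volumetric_indexes(original: np.ndarray, rp_model: np.ndarray):
    a, b = original.astype(bool), rp_model.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())          # global overlap index
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

def signed_surface_error(original: np.ndarray, rp_model: np.ndarray):
    # Positive where the RP model over-estimates the original, negative where
    # it under-estimates; suitable for colour-coding on the RP surface.
    orig = original.astype(bool)
    phi = distance_transform_edt(~orig) - distance_transform_edt(orig)
    return np.where(rp_model.astype(bool), phi, np.nan)
```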

Findings

The results show that the method can be applied to different objects without any modification and provides simple, meaningful and precise quantitative indexes to measure the geometric accuracy of RP models.

Originality/value

The paper presents a new approach to characterize the geometric errors in RP models using global indexes and a local surface distribution of the errors. It requires minimum human intervention and it can be applied without any modification to any kind of object.

Details

Rapid Prototyping Journal, vol. 18 no. 6
Type: Research Article
ISSN: 1355-2546

Keywords

Article
Publication date: 22 June 2010

Linlin Zhu, Baojie Fan and Yandong Tang

Active contour can describe target's silhouette accurately and has been widely used in image segmentation and target tracking. Its main drawback is huge computation that is still…

Abstract

Purpose

An active contour can describe a target's silhouette accurately and has been widely used in image segmentation and target tracking. Its main drawback is its heavy computational cost, which is still not well resolved. The purpose of this paper is to optimize the evolving path of the active contour, reducing the computational cost and making the evolution more effective.

Design/methodology/approach

The contour-evolution process is separated into two steps: global translation and local deformation. The global translation and local deformation of the contour are realized by the average flow and the normal gradient flow of the evolving contour curve, respectively.
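A minimal sketch of this two-step split is given below, assuming the contour is sampled as N points with a per-point velocity (flow) already computed; the function and variable names are hypothetical, not the authors' code.

```python
# Sketch: split a per-point contour flow into translation + deformation.
import numpy as np

def split_flow(contour: np.ndarray, flow: np.ndarray, dt: float = 1.0):
    """contour, flow: arrays of shape (N, 2)."""
    translation = flow.mean(axis=0)        # average flow moves the whole contour
    deformation = flow - translation       # residual reshapes it locally
    return contour + dt * translation, deformation

# Far from the target, apply only the translation step; once the contour lies
# on the object, apply the deformation residual so it fits the silhouette.
```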

Findings

When a contour is far away from the object to be segmented or tracked, the most effective evolution is for the contour first to move toward the object without deforming and then, once it lies on the object, to deform into the object's shape.

Originality/value

The method presented in this paper can optimize the curve's evolution path effectively without complicated calculations, such as rebuilding a new inner product, and its computational cost is greatly reduced.

Details

Industrial Robot: An International Journal, vol. 37 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 17 August 2021

Andrés Regal Ludowieg, Claudio Ortega, Andrés Bronfman, Michelle Rodriguez Serra and Mario Chong

The purpose of this paper is to present a spatial decision support system (SDSS) to be used by the local authorities of a city in the planning and response phase of a disaster…

Abstract

Purpose

The purpose of this paper is to present a spatial decision support system (SDSS) to be used by the local authorities of a city in the planning and response phases of a disaster. The SDSS focuses on the management of public spaces as a resource to increase a vulnerable population's accessibility to essential goods and services. Using a web-based platform, the SDSS would support data-driven decisions, especially for cases such as the COVID-19 pandemic, which requires special care in quarantine situations (implying access on foot rather than by other means of transport).

Design/methodology/approach

This paper proposes a methodology to create a web-based SDSS for managing public spaces in the planning and response phases of a disaster, in order to increase access to essential goods and services. Using a regular polygon grid, a city is partitioned into spatial units that aggregate spatial data from open and proprietary sources. The polygon grid is then used to compute accessibility, vulnerability and population density indicators using spatial analysis. Finally, a facility location problem is formulated and solved to provide decision-makers with an adaptive selection of public spaces given their indicators of choice.
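The facility-location step could look roughly like the following maximal-covering sketch using the PuLP modelling library; the data structures, coverage sets and the budget k are hypothetical, and the paper's exact formulation may differ.

```python
# Sketch: choose at most k public spaces to maximise covered population.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

def select_public_spaces(demand_pop, cover, k):
    """demand_pop: {cell: population}; cover: {cell: [candidate sites covering it]}."""
    sites = {s for cells in cover.values() for s in cells}
    open_site = LpVariable.dicts("open", sites, cat=LpBinary)
    covered = LpVariable.dicts("covered", demand_pop, cat=LpBinary)

    prob = LpProblem("public_space_selection", LpMaximize)
    prob += lpSum(demand_pop[c] * covered[c] for c in demand_pop)      # people served
    prob += lpSum(open_site[s] for s in sites) <= k                    # budget on sites
    for c in demand_pop:                                               # coverage linking
        prob += covered[c] <= lpSum(open_site[s] for s in cover[c])
    prob.solve()
    return [s for s in sites if open_site[s].value() == 1]
```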

Findings

The design and implementation of the methodology resulted in a granular representation of the city of Lima, Peru, in terms of population density, accessibility and vulnerability. Using these indicators, the SDSS was deployed as a web application that allowed decision-makers to explore different solutions to a facility location model within their districts, as well as to visualize the indicators computed for the hexagons covering each district's area. Tests with different local authorities suggested improvements to support a more general set of decisions and identified the key indicators to use in the SDSS.

Originality/value

Addressing a gap in the literature, this paper is the first of its kind to present an SDSS focused on increasing access to essential goods and services using public spaces; it has received a positive response from local authorities with different backgrounds regarding its integration into their decision-making processes.

Details

Journal of Humanitarian Logistics and Supply Chain Management, vol. 12 no. 2
Type: Research Article
ISSN: 2042-6747

Keywords

Article
Publication date: 18 April 2016

Satish Kumar Reddy and Prabir K. Pal

The purpose of this paper is to detect traversable regions surrounding a mobile robot by computing terrain unevenness using the range data obtained from a single 3D scan.


Abstract

Purpose

The purpose of this paper is to detect traversable regions surrounding a mobile robot by computing terrain unevenness using the range data obtained from a single 3D scan.

Design/methodology/approach

The geometry of acquiring range data from a 3D scan is exploited to probe the terrain and extract traversable regions. The nature of the terrain under each scan point is quantified by an unevenness value, computed from the difference in range between the scan point and its neighbours. Both radial and transverse unevenness values are computed and compared with threshold values at every point to determine whether the point belongs to a traversable region or an obstacle. A region-growing algorithm then spreads like a wavefront to join all traversable points into a traversable region.
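A rough sketch of the unevenness computation is shown below, assuming the scan is stored as a range image indexed by elevation (rows) and azimuth (columns); the thresholds and layout are assumptions, not the paper's values.

```python
# Sketch: radial/transverse unevenness from a single-scan range image.
import numpy as np

def traversable_points(ranges: np.ndarray, t_radial: float = 0.3, t_transverse: float = 0.3):
    # Radial unevenness: range change between consecutive elevation rings.
    radial = np.abs(np.diff(ranges, axis=0, prepend=ranges[:1]))
    # Transverse unevenness: range change between neighbouring azimuth columns.
    transverse = np.abs(np.diff(ranges, axis=1, prepend=ranges[:, :1]))
    # Traversable where both unevenness values stay below their thresholds.
    return (radial < t_radial) & (transverse < t_transverse)

# A region-growing pass (e.g. a flood fill starting in front of the robot)
# would then join the flagged points into connected traversable regions.
```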

Findings

This simple method clearly distinguishes ground points from obstacle points. It works well even in the presence of terrain slopes or when the robot experiences pitch and roll.

Research limitations/implications

The method applies to single 3D scans and not, in general, to aggregated point clouds.

Practical implications

The method has been tested on a mobile robot in an outdoor environment at our research centre.

Social implications

This method, along with advanced navigation schemes, can reduce human intervention in many mobile robot applications including unmanned ground vehicles.

Originality/value

Range differences between scan points have been used earlier for obstacle detection, but no complete methodology has been developed around this concept. The authors propose a concrete method based on the computation of radial and transverse unevenness at every point and the detection of obstacle edges using range-dependent threshold values.

Details

International Journal of Intelligent Unmanned Systems, vol. 4 no. 2
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 1 February 1997

Siang Kok Sim and Ming Yeong Teo

Describes work based on the hypothesis that the use of artificial neural networks can imbue vision‐based robots with the ability to learn about their environment and hence enhance…


Abstract

Describes work based on the hypothesis that the use of artificial neural networks can imbue vision-based robots with the ability to learn about their environment and hence enhance their competence and flexibility. The Neocognitron neural network provides the vision-based robot with the capability of learning about its environment through training to recognize certain objects. The Neocognitron is selected because of its tolerance to translation, rotation and scaling in the input object patterns. Presents results which support the use of the Neocognitron in enhancing the flexibility of vision-based robots.

Details

Integrated Manufacturing Systems, vol. 8 no. 1
Type: Research Article
ISSN: 0957-6061

Keywords
