Search results

1 – 10 of 34
Article
Publication date: 4 September 2019

Li Na, Xiong Zhiyong, Deng Tianqi and Ren Kai

The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred…

Abstract

Purpose

The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred boundaries and edema around the brain tumor region, the brain tumor image has indistinct features in the tumor region, which pose a problem for diagnostics. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, the authors propose an original solution for segmentation using Tamura texture and an ensemble Support Vector Machine (SVM) structure. In the proposed technique, 124 features of each voxel are extracted, including Tamura texture features and grayscale features. These features are then ranked using the SVM-Recursive Feature Elimination method, which is also adopted to optimize the parameters of the Radial Basis Function kernel of the SVMs. Finally, the bagging random sampling method is utilized to construct an ensemble SVM classifier based on a weighted voting mechanism to classify the type of each voxel.
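
A minimal sketch of the ensemble idea, in Python with scikit-learn: several RBF-kernel SVMs are bagged over per-voxel feature vectors and combined by a weighted vote. The 124-dimensional synthetic features, labels and accuracy-based vote weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 124))                     # 124 features per voxel (Tamura texture + grayscale)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # synthetic tumor / non-tumor labels

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models, weights = [], []
for _ in range(10):                                  # bagging: 10 bootstrap samples
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr[idx], y_tr[idx])
    models.append(clf)
    weights.append(clf.score(X_val, y_val))          # vote weight = validation accuracy (an assumption)

def ensemble_predict(X_new):
    votes = np.array([m.predict(X_new) for m in models], dtype=float)
    w = np.array(weights)[:, None]
    return (np.sum(w * votes, axis=0) / w.sum() > 0.5).astype(int)   # weighted majority vote

print("ensemble accuracy:", np.mean(ensemble_predict(X_val) == y_val))
```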

Findings

The experiments are conducted on the BraTS2015 data set. They demonstrate that Tamura texture is very useful in the segmentation of brain tumors, especially the line-likeness feature. The superior performance of the proposed ensemble SVM classifier is demonstrated by comparison with single SVM classifiers as well as other methods.

Originality/value

The authors propose an original solution for segmentation using Tamura Texture and ensemble SVM structure.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 28 October 2021

Wenda Wei, Chengxia Liu and Jianing Wang

Nowadays, most methods of illusion garment evaluation are based on the subjective evaluation of experienced practitioners, which is time-consuming and too subjective…

Abstract

Purpose

Nowadays, most methods of illusion garment evaluation are based on the subjective evaluation of experienced practitioners, which is time-consuming and yields results too subjective to be sufficiently accurate. It is necessary to explore a method that can quantify professional experience into objective indicators to evaluate the sensory comfort of an optical illusion skirt quickly and accurately. The purpose of this paper is to propose a method to objectively evaluate the sensory comfort of optical illusion skirt patterns by combining texture feature extraction and prediction model construction.

Design/methodology/approach

First, 10 optical illusion sample skirts are produced, and 10 experimental images are collected for each sample skirt. A Likert five-level evaluation scale is then designed to obtain the sensory comfort level of each skirt through a questionnaire survey. In parallel, the coarseness, contrast, directionality, line-likeness, regularity and roughness of each sample image are calculated with the Tamura texture feature algorithm, and the mean, contrast and entropy are extracted from the Gabor-wavelet-transformed image. Both sets are used as objective parameters. Two final indicators, T1 and T2, are refined from these objective parameters to construct predictive models of the subjective comfort of the visual illusion skirt: a linear regression model and an MLP neural network model.
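
A minimal sketch of the Gabor part of the feature pipeline and the two predictors, assuming synthetic skirt images, a single Gabor frequency and made-up comfort scores; the Tamura features are omitted for brevity and the code is not the authors' implementation.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def gabor_features(img, frequency=0.2):
    real, imag = gabor(img, frequency=frequency)       # Gabor wavelet response
    mag = np.hypot(real, imag)
    hist, _ = np.histogram(mag, bins=32, density=True)
    p = hist[hist > 0] / hist[hist > 0].sum()
    entropy = -np.sum(p * np.log2(p))                  # texture entropy of the response
    return np.array([mag.mean(), mag.std(), entropy])  # mean, contrast, entropy

rng = np.random.default_rng(1)
images = [rng.random((64, 64)) for _ in range(10)]     # stand-ins for the 10 skirt images
X = np.vstack([gabor_features(im) for im in images])
y = rng.integers(1, 6, size=10).astype(float)          # illustrative Likert 1-5 comfort scores

lin = LinearRegression().fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("linear:", lin.predict(X[:2]), "MLP:", mlp.predict(X[:2]))
```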

Findings

Results show that the accuracy of the linear regression model is 92% and the prediction accuracy of the MLP neural network model is 97.9%. It is feasible to use Tamura texture features, the Gabor wavelet transform and MLP neural network methods to objectively predict the sensory comfort of visual illusion skirt images.

Originality/value

Compared with the existing uncertain and non-reproducible subjective evaluation of optical illusion clothing by experienced experts, the main advantage of the authors' method is that it obtains evaluation parameters objectively and produces evaluation grades quickly and accurately without repeated expert assessment. It is a method of objectively quantifying the experience of experts.

Details

International Journal of Clothing Science and Technology, vol. 33 no. 5
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 27 July 2021

Papangkorn Pidchayathanakorn and Siriporn Supratid

A major success factor in proficient Bayes threshold denoising is noise variance estimation. This paper focuses on assessing different noise variance estimations…

Abstract

Purpose

A major success factor in proficient Bayes threshold denoising is noise variance estimation. This paper focuses on assessing different noise variance estimations in three Bayes threshold models on two brain lesion/tumor magnetic resonance images (MRIs) with different characteristics.

Design/methodology/approach

Three Bayes threshold denoising models based on different noise variance estimations in the stationary wavelet transform (SWT) domain are assessed and compared with state-of-the-art non-local means (NLM). The three models, D1, GB and DR, estimate the noise variance respectively from the finest detail wavelet subband at the first resolution level, from all detail subbands globally, and from the detail subband in each direction and resolution level. Explicit and implicit denoising performance are assessed in turn through threshold denoising results and segmentation identification results.
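
A minimal sketch of BayesShrink-style soft thresholding in the SWT domain, with the noise variance estimated from the finest diagonal detail subband in the spirit of the D1 model; the test image, wavelet choice and noise level are illustrative assumptions, not the paper's exact models.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))       # 64x64 piecewise-constant "image"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

coeffs = pywt.swt2(noisy, "db2", level=1)                   # [(cA, (cH, cV, cD))]
cA, (cH, cV, cD) = coeffs[0]

sigma_n = np.median(np.abs(cD)) / 0.6745                    # robust noise std from the finest diagonal subband

def bayes_shrink(band, sigma_n):
    sigma_y2 = np.mean(band ** 2)                           # observed subband variance
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))  # signal std estimate
    return pywt.threshold(band, sigma_n ** 2 / sigma_x, mode="soft")

den = [(cA, tuple(bayes_shrink(b, sigma_n) for b in (cH, cV, cD)))]
denoised = pywt.iswt2(den, "db2")
print("noisy MSE:", np.mean((noisy - clean) ** 2), "denoised MSE:", np.mean((denoised - clean) ** 2))
```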

Findings

On the brain lesion MRI, the implicit performance assessment shows the first and second best accuracies, Dice similarity coefficients (Dice) of 0.9181 and 0.9048, yielded by GB and DR respectively; reliability is indicated by a 45.66% Dice drop for DR when the noise level increases from 0.2 to 0.9, compared with drops of 53.38%, 61.03% and 35.48% for D1, GB and NLM. On the brain tumor MRI under a 0.2 noise level, DR yields the best accuracy of 0.9592 Dice; however, its 8.09% Dice drop compares with 6.72%, 8.85% and 39.36% for D1, GB and NLM. NLM clearly shows the lowest explicit and implicit denoising performance.

Research limitations/implications

A future improvement of denoising performance could involve a semi-supervised denoising conjunction model. Such a model would use the MRIs denoised by the DR and D1 thresholding models as the uncorrupted image versions, together with the noisy MRIs as the corrupted versions, during the autoencoder training phase to reconstruct the original clean image.

Practical implications

This paper should be of interest to readers in computing and information science, including data science and applications and computational health informatics, especially where denoising is applied as a decision support tool for medical image processing.

Originality/value

In most cases, DR and D1 provide the first–second best implicit performances in terms of accuracy and reliability on both simulated, low-detail small-size region-of-interest (ROI) brain lesions and realistic, high-detail large-size ROI brain tumor MRIs.

Article
Publication date: 1 June 2021

Na Li and Kai Ren

Automatic segmentation of brain tumors from medical images is a challenging task because of their uneven and irregular shapes. In this paper, the authors propose an…

Abstract

Purpose

Automatic segmentation of brain tumors from medical images is a challenging task because of their uneven and irregular shapes. In this paper, the authors propose an attention-based nested segmentation network named DAU-Net. Two types of attention mechanisms are introduced to make the U-Net network focus on the key feature regions. The proposed network has a deeply supervised encoder–decoder architecture and a redesigned dense skip connection. DAU-Net introduces an attention mechanism between convolutional blocks so that the features extracted at different levels can be merged with a task-related selection.

Design/methodology/approach

In the coding layer, the authors designed a channel attention module that marks the importance of each feature map in the segmentation task. In the decoding layer, they designed a spatial attention module that marks the importance of different regional features. By fusing features at different scales in the same coding layer, the network can fully extract the detailed information of the original image and learn more tumor boundary information.
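
A minimal sketch of the two attention mechanisms described above, assuming a squeeze-and-excitation style channel module and a simple convolutional spatial module in PyTorch; these are generic stand-ins, not the authors' exact DAU-Net blocks.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Weights each feature map by its importance (encoder side)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x).view(x.size(0), x.size(1), 1, 1)   # per-channel weights in (0, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Weights each spatial location by its importance (decoder side)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                 # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)                # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

feats = torch.randn(2, 32, 64, 64)                        # a batch of feature maps
print(ChannelAttention(32)(feats).shape, SpatialAttention()(feats).shape)
```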

Findings

To verify the effectiveness of DAU-Net, experiments were carried out on the BRATS 2018 brain tumor magnetic resonance imaging (MRI) database. The segmentation results show that the proposed method has a high accuracy, with a Dice similarity coefficient (DSC) of 89% for the complete tumor, an improvement of 8.04% and 4.02% over the fully convolutional network (FCN) and U-Net, respectively.
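
For reference, a minimal sketch of the Dice similarity coefficient used to report these scores, computed on two illustrative binary masks.

```python
import numpy as np

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1        # predicted 4x4 tumor mask
gt = np.zeros((8, 8)); gt[3:7, 3:7] = 1            # ground-truth 4x4 tumor mask
print(dice(pred, gt))                               # 2*9 / (16+16) = 0.5625
```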

Originality/value

The experimental results show that the proposed method has good performance in the segmentation of brain tumors. The proposed method has potential clinical applicability.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 5 June 2017

Zhoufeng Liu, Lei Yan, Chunlei Li, Yan Dong and Guangshuai Gao

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP…

Abstract

Purpose

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP) extracted from the original fabric texture.

Design/methodology/approach

In the proposed algorithm, local binary pattern (LBP) features are first extracted from the fabric texture to be inspected, and the MLBPs are selected by occurrence probability. Second, a dictionary is established from MLBP atoms that can sparsely represent all of the LBPs. Then, the gray-scale difference between the neighborhood pixels and the central pixel is calculated, along with the mean of the differences sharing the same MLBP feature. The defect-containing image is then reconstructed as a normal texture image. Finally, the residual between the reconstructed and original images is calculated, a simple threshold segmentation method divides the residual image, and the defective region is detected.
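
A minimal sketch of the pipeline idea, assuming a synthetic periodic fabric with an injected defect and a simplified per-pattern mean-gray reconstruction rather than the authors' dictionary-based sparse representation.

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
fabric = np.tile(rng.random((8, 8)), (8, 8))              # periodic 64x64 "fabric" texture
fabric[28:36, 28:36] += 0.8                               # injected defect patch

img8 = np.clip(fabric * 255, 0, 255).astype(np.uint8)     # LBP is computed on an integer image
codes = local_binary_pattern(img8, P=8, R=1.0, method="uniform")
hist = np.bincount(codes.astype(int).ravel())
main_codes = set(np.argsort(hist)[::-1][:5])              # "main" LBP: the most frequent patterns

recon = np.empty_like(fabric)
for c in np.unique(codes):
    mask = codes == c
    # main-pattern pixels are replaced by that pattern's mean gray level;
    # rare (likely defective) patterns fall back to the global mean
    recon[mask] = fabric[mask].mean() if int(c) in main_codes else fabric.mean()

residual = np.abs(fabric - recon)                                # reconstruction residual
defect_mask = residual > residual.mean() + 2 * residual.std()    # simple threshold segmentation
print("defect pixels flagged:", int(defect_mask.sum()))
```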

Findings

The experimental results show that the fabric texture can be reconstructed more efficiently and that the proposed method achieves better defect detection performance. Moreover, it offers empirical insights into how to exploit the sparsity of a given feature, e.g. LBP.

Research limitations/implications

Because of the selected research approach, the results may lack generalizability to chambray fabrics. Therefore, researchers are encouraged to test the proposed propositions further.

Originality/value

In this paper, a novel fabric defect detection method which extracts the sparsity of MLBP features is proposed.

Details

International Journal of Clothing Science and Technology, vol. 29 no. 3
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 27 November 2009

A. Vadivel, Shamik Sural and A.K. Majumdar

The main obstacle in realising semantic‐based image retrieval from the web is that it is difficult to capture semantic description of an image in low‐level features. Text‐based…

Abstract

Purpose

The main obstacle in realising semantic‐based image retrieval from the web is that it is difficult to capture semantic description of an image in low‐level features. Text‐based keywords can be generated from web documents to capture semantic information for narrowing down the search space. The combination of keywords and various low‐level features effectively increases the retrieval precision. The purpose of this paper is to propose a dynamic approach for integrating keywords and low‐level features to take advantage of their complementary strengths.

Design/methodology/approach

Image semantics are described using both low‐level features and keywords. The keywords are constructed from the text located in the vicinity of images embedded in HTML documents. Various low‐level features such as colour histograms, texture and composite colour‐texture features are extracted for supplementing keywords.
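
A minimal sketch of fusing the two kinds of evidence, assuming a toy keyword-overlap score, a colour-histogram intersection and an illustrative fixed weighting; the paper's actual integration scheme is dynamic and more elaborate.

```python
import numpy as np

def hist_similarity(h1, h2):
    return np.minimum(h1, h2).sum()                 # histogram intersection in [0, 1]

def keyword_score(query_terms, image_terms):
    q, t = set(query_terms), set(image_terms)
    return len(q & t) / len(q) if q else 0.0        # fraction of query terms found near the image

images = {
    "beach.jpg":  (np.array([0.7, 0.2, 0.1]), ["beach", "sea", "sand"]),
    "forest.jpg": (np.array([0.1, 0.8, 0.1]), ["forest", "trees"]),
}
query_hist, query_terms, alpha = np.array([0.6, 0.3, 0.1]), ["sea", "sunset"], 0.5

for name, (hist, terms) in images.items():
    score = alpha * keyword_score(query_terms, terms) + (1 - alpha) * hist_similarity(query_hist, hist)
    print(name, round(score, 3))                    # rank images by the combined score
```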

Findings

The retrieval performance is better than that of various recently proposed techniques. The experimental results show that the integrated approach has better retrieval performance than both the text‐based and the content‐based techniques.

Research limitations/implications

The features of images used for capturing the semantics may not always describe the content.

Practical implications

The indexing mechanism for dynamically growing features is challenging to implement in practice.

Originality/value

A survey of image retrieval systems for searching images available on the internet found that no internet search engine can handle both low-level features and keywords as queries for retrieving images from the WWW, so this is the first of its kind.

Details

Online Information Review, vol. 33 no. 6
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 21 November 2008

Chun‐Nan Lin, Chih‐Fong Tsai and Jinsheng Roan

Because of the popularity of digital cameras, the number of personal photographs is increasing rapidly. In general, people manage their photos by date, subject, participants, etc…

Abstract

Purpose

Because of the popularity of digital cameras, the number of personal photographs is increasing rapidly. In general, people manage their photos by date, subject, participants, etc. for future browsing and searching. However, it is difficult and/or time-consuming to retrieve desired photos from a large collection using this general personal photo management strategy. In this paper the authors aim to propose a systematic solution for effectively organising and browsing personal photos.

Design/methodology/approach

In their system the authors apply the concept of content‐based image retrieval (CBIR) to automatically extract visual image features of personal photos. Then three well‐known clustering techniques – k‐means, self‐organising maps and fuzzy c‐means – are used to group personal photos. Finally, the clustering results are evaluated by human subjects in terms of retrieval effectiveness and efficiency.
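
A minimal sketch of the clustering step, assuming synthetic photos, a simple colour-histogram feature and k-means (the technique the study found most preferred); the authors' full CBIR feature set is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def colour_histogram(img, bins=8):
    # per-channel histogram, concatenated into one feature vector
    return np.concatenate([np.histogram(img[..., c], bins=bins, range=(0, 1), density=True)[0]
                           for c in range(3)])

rng = np.random.default_rng(0)
photos = [np.clip(rng.normal(loc=rng.random(3), scale=0.1, size=(32, 32, 3)), 0, 1)
          for _ in range(50)]                       # 50 stand-in photos with different dominant colours
features = np.vstack([colour_histogram(p) for p in photos])

km = KMeans(n_clusters=15, n_init=10, random_state=0).fit(features)   # e.g. 15 clusters for browsing
print(np.bincount(km.labels_))                      # photos per cluster
```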

Findings

Experimental results based on the dataset of 1,000 personal photos show that the k‐means clustering method outperforms self‐organising maps and fuzzy c‐means. That is, 12 subjects out of 30 preferred the clustering results of k‐means. In particular, most subjects agreed that larger numbers of clusters (e.g. 15 to 20) enabled more effective browsing of personal photos. For the efficiency evaluation, the clustering results using k‐means allowed subjects to search for relevant images in the least amount of time.

Originality/value

CBIR is applied in many areas, but very few related works focus on personal photo browsing and retrieval. This paper examines the applicability of using CBIR and clustering techniques for browsing personal photos. In addition, the evaluation based on the effectiveness and efficiency strategies ensures the reliability of our findings.

Details

Online Information Review, vol. 32 no. 6
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 1 October 2005

Juan Manuel García Chamizo, Andrés Fuster Guilló and Jorge Azorín López

Motivated by the problems of visual perception, we propose a model for the processing of vision in adverse situations of illumination, scale, etc. In this paper, a model for image…

Abstract

Purpose

Motivated by the problems of visual perception, we propose a model for the processing of vision in adverse situations of illumination, scale, etc. In this paper, a model is proposed for segmenting and labelling images obtained in real conditions at different scales.

Design/methodology/approach

The model is based on identifying the textures of the scene's objects by comparing them with a database that stores series of each texture perceived at successive optic parameter values. As a basis for the model, self-organising maps have been used in several phases of the labelling process.
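
A minimal sketch of the self-organising-map idea, assuming a toy one-dimensional SOM in plain NumPy and synthetic texture histograms standing in for the stored texture series; it is not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
textures = rng.dirichlet(np.ones(16), size=100)             # stored texture histograms (the "database")

n_units, dim, lr, sigma = 10, 16, 0.5, 2.0
weights = rng.random((n_units, dim))                        # SOM prototype vectors

for epoch in range(50):
    for x in textures:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))      # best matching unit
        dist = np.abs(np.arange(n_units) - bmu)
        h = np.exp(-dist ** 2 / (2 * sigma ** 2))[:, None]        # neighbourhood function
        weights += lr * h * (x - weights)                         # pull neighbours towards the sample
    lr *= 0.95; sigma *= 0.95                                     # decay learning rate and radius

query = textures[0] + 0.01 * rng.standard_normal(16)             # a texture perceived under new conditions
print("label (BMU index):", np.argmin(np.linalg.norm(weights - query, axis=1)))
```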

Findings

The model has been conceived to deal systematically with the different causes that make vision difficult, which allows it to be applied in a wide range of real situations. The results show high success rates in labelling scenes captured under different scale conditions, using very simple descriptors such as texture histograms.

Research limitations/implications

Our interest is directed towards systematising the proposal and experimenting on the influence of the other variables of vision. We will also tackle the implementation of the classifier module so that the different causes can be dealt with by reconfiguring the same hardware (using reconfigurable hardware).

Originality/value

This research approaches a very advanced aspect of vision problems: visual perception under adverse conditions. In order to deal with this problem, a model formulated with a general purpose is proposed. Our objective is to present an approach to conceiving universal architectures (in the sense of being valid independently of the magnitudes involved).

Details

Kybernetes, vol. 34 no. 9/10
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 20 July 2012

Ola Pilerot

The purpose of this article is to investigate and critically examine conceptualisations of information sharing activities in a selection of library and information science (LIS…


Abstract

Purpose

The purpose of this article is to investigate and critically examine conceptualisations of information sharing activities in a selection of library and information science (LIS) literature.

Design/methodology/approach

In order to explore how LIS researchers define the concept of information sharing, and how the concept is connected with theory, empirical material and other supporting concepts, a literature review and a conceptual meta-analysis were carried out on 35 papers and one monograph. The analysis was based on Waismann's concept of open texture, Wittgenstein's notion of language games and the concept of meaning holism.

Findings

Six theoretical frameworks were identified. These are not found to be incommensurable, but can be used as building blocks for an integrative framework. Ambiguous conceptualisations are frequent. Different conceptualisations tend to emphasize different aspects of information sharing activities: that which is shared; those who are sharing; and the location in which the sharing activities take place. The commonalities of the people involved in information sharing activities are often seen as a ground for the development of information sharing practices.

Practical implications

The findings provide a guide for future research which intends to explore activities of information sharing.

Originality/value

The article offers a systematic review of recent LIS literature on information sharing, and extends the theoretical base for information sharing research.

Details

Journal of Documentation, vol. 68 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 6 March 2019

Xueqing Zhao, Xin Shi, Kaixuan Liu and Yongmei Deng

The quality of produced textile fibers plays a very important role in the textile industry, and detection and assessment schemes are the key problems. Therefore, the purpose of…

Abstract

Purpose

The quality of produced textile fibers plays a very important role in the textile industry, and detection and assessment schemes are the key problems. Therefore, the purpose of this paper is to propose a relatively simple and effective technique to detect and assess the quality of produced textile fibers.

Design/methodology/approach

In order to achieve automatic visual inspection of fabric defects, images of the textile fabric are first pre-processed using Block-Matching and 3D (BM3D) filtering. Then, features of the textile fiber image are extracted, including color, texture and frequency spectrum features. The color features are extracted using the hue–saturation–intensity model, which is more consistent with human visual perception; the texture features are extracted using the scale-invariant feature transform (SIFT) scheme, a method well suited to detecting and describing local image features whose descriptors are robust to local geometric distortion; and the frequency spectrum features of textiles are less sensitive to noise and intensity variations than spatial features. Finally, to evaluate the quality of the fabric in real time, two quantitative metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are used to objectively assess the quality of the textile fabric image.
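
A minimal sketch of the objective assessment step, computing PSNR and SSIM between a synthetic fabric image and a filtered version; a Gaussian filter stands in for BM3D here, and both the image and the filter choice are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
fabric = np.tile(rng.random((16, 16)), (8, 8))                       # synthetic 128x128 fabric texture
noisy = np.clip(fabric + 0.05 * rng.standard_normal(fabric.shape), 0, 1)

filtered = gaussian_filter(noisy, sigma=1.0)                          # stand-in for BM3D pre-processing

print("PSNR:", peak_signal_noise_ratio(fabric, filtered, data_range=1.0))
print("SSIM:", structural_similarity(fabric, filtered, data_range=1.0))
```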

Findings

Comparing the quality of textile fiber images before and after pre-processing shows that the BM3D filtering method is a very efficient technique for improving image quality. Across the different features of textile fibers, such as color, texture and frequency spectrum, the proposed detection and assessment method based on textile fabric image features can easily detect and assess the quality of textiles. Moreover, the objective metrics further improve the intelligence and performance of the detection and assessment scheme, making it simple to detect and assess the quality of textiles in the textile industry.

Originality/value

An intelligent detection and assessment method based on textile fabric image features is proposed, which can efficiently detect and assess the quality of textiles, thereby improving the efficiency of textile production lines.

Details

International Journal of Clothing Science and Technology, vol. 31 no. 3
Type: Research Article
ISSN: 0955-6222

Keywords

1 – 10 of 34