Search results

1 – 10 of 310
Article
Publication date: 21 March 2016

Tao Liu, Zhixiang Fang, Qingzhou Mao, Qingquan Li and Xing Zhang

Spatial features are important for scene saliency detection, yet image-based visual saliency detection methods fail to incorporate the spatial aspects of 3D scenes. This paper aims to…

Abstract

Purpose

Spatial features are important for scene saliency detection, yet image-based visual saliency detection methods fail to incorporate the spatial aspects of 3D scenes. This paper aims to propose a cube-based method that improves saliency detection by integrating visual and spatial features in 3D scenes.

Design/methodology/approach

In the presented approach, a multiscale cube pyramid is used to organize the 3D image scene and mesh model. Each 3D cube in this pyramid represents a space unit, analogous to a pixel in the multiscale image pyramid of image saliency models. In each 3D cube, color, intensity and orientation features are extracted from the image, and a quantitative concave–convex descriptor is extracted from the 3D space. A Gaussian filter is then applied to this pyramid of cubes, with an extended center-surround difference introduced to compute the cube-based 3D scene saliency.
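
The center-surround idea behind this pipeline can be sketched minimally as below, flattened to a 2D feature map for brevity (the paper operates on 3D cubes; the box-filter pyramid and all function names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def downsample(a):
    """Halve resolution with a 2x2 box average (simple pyramid stand-in)."""
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
    a = a[:h, :w]
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def center_surround_saliency(feature, levels=3):
    """Average |center - surround| over pyramid levels, each coarse level
    upsampled (nearest-neighbor) back to full resolution."""
    pyramid = [feature]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    saliency = np.zeros_like(feature)
    for coarse in pyramid[1:]:
        up = coarse
        while up.shape[0] < feature.shape[0]:
            up = np.repeat(np.repeat(up, 2, axis=0), 2, axis=1)
        saliency += np.abs(feature - up[:feature.shape[0], :feature.shape[1]])
    return saliency / levels
```

A lone bright pixel then scores high because it differs from its blurred surround at every scale, which is the behavior the cube-based variant generalizes to 3D space units.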

Findings

The precision-recall rate and the receiver operating characteristic curve are used to evaluate the proposed method against other state-of-the-art methods. The results show that the proposed method outperforms traditional image-based methods, especially for 3D scenes.
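
For reference, precision and recall of a binarized saliency map against a ground-truth mask can be computed as below; sweeping the threshold traces the precision-recall curve reported in such evaluations (a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def precision_recall(saliency, ground_truth, thresh):
    """Binarize a saliency map at thresh and score it against a boolean
    ground-truth mask; returns (precision, recall)."""
    pred = saliency >= thresh
    tp = np.logical_and(pred, ground_truth).sum()   # true positives
    precision = tp / max(pred.sum(), 1)             # guard empty prediction
    recall = tp / max(ground_truth.sum(), 1)        # guard empty ground truth
    return precision, recall
```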

Originality/value

This paper presents a method that improves the image-based visual saliency model.

Details

Sensor Review, vol. 36 no. 2
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 18 January 2024

Huazhou He, Pinghua Xu, Jing Jia, Xiaowan Sun and Jingwen Cao

Fashion merchandising holds a paramount position within the realm of retail marketing. Currently, the assessment of display effectiveness…

Abstract

Purpose

Fashion merchandising holds a paramount position within the realm of retail marketing. Currently, the assessment of display effectiveness predominantly relies on the subjective judgment of merchandisers due to the absence of an effective evaluation method, which is the gap this article aims to address. Although eye-tracking devices have found extensive use in tracking the gaze trajectories of subjects, they exhibit limitations in terms of stability when applied to the evaluation of varied scenes. This underscores the need for a dependable, user-friendly and objective assessment method.

Design/methodology/approach

To develop a cost-effective and convenient evaluation method, the authors introduce an image processing framework for assessing variations in the impact of store furnishings. An optimized visual saliency methodology, which leverages a multiscale pyramid model incorporating color, brightness and orientation features, is used to construct a visual saliency heatmap. Additionally, the authors establish two pivotal evaluation indices aimed at quantifying attention coverage and dispersion. Specifically, low-level features are extracted from nine images at distinct scales, down-sampled from merchandising photographs. Subsequently, these extracted features are amalgamated to form a heatmap, which serves as the focal point of the evaluation process. The proposed evaluation indices, dedicated to measuring visual focus and dispersion, facilitate a precise quantification of attention distribution within the observed scenes.
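
Coverage- and dispersion-style indices over a saliency heatmap can be approximated with a generic sketch like the following (the exact definitions are the authors'; the threshold-based coverage and centroid-distance dispersion used here are assumptions):

```python
import numpy as np

def attention_indices(heatmap, frac=0.5):
    """Coverage: fraction of pixels whose saliency exceeds frac * max.
    Dispersion: mean distance (in pixels) of those pixels from their centroid."""
    mask = heatmap >= frac * heatmap.max()
    coverage = mask.mean()
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    dispersion = np.hypot(ys - cy, xs - cx).mean()
    return coverage, dispersion
```

A display that concentrates attention on one focal point yields low coverage and near-zero dispersion; a cluttered display spreads the above-threshold region and raises both numbers.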

Findings

In comparison to conventional saliency algorithms, the optimized method yields more intuitive feedback regarding scene contrast. Moreover, the optimized approach results in a more concentrated focus within the central region of the visual field, a pattern in alignment with physiological research findings. The results affirm that the two defined indicators prove highly effective in discerning variations in visual attention across diverse brand store displays.

Originality/value

The study introduces an intelligent and cost-effective objective evaluation method founded upon visual saliency. This pioneering approach not only effectively discerns the efficacy of merchandising efforts but also holds the potential for extension to the assessment of fashion advertisements, home design and website aesthetics.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 1
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 9 April 2021

Qiang Yang, Yuanjian Zhou, Yushi Jiang and Jiale Huo

This study aims to explore whether creativity can overcome banner blindness in the viewing of web pages and demonstrate how visual saliency and banner-page congruity constitute…

Abstract

Purpose

This study aims to explore whether creativity can overcome banner blindness in the viewing of web pages and demonstrate how visual saliency and banner-page congruity constitute the boundary conditions for creativity to improve memory for banner ads.

Design/methodology/approach

Three studies were conducted to understand the influence of advertising creativity and banner blindness on recognition of banner ads, which were assessed using questionnaires and bias adjustment. The roles of online user tasks (goal-directed vs free-viewing), visual saliency (high vs low) and banner-page congruity (congruent vs incongruent) were considered.

Findings

The findings suggest that creativity alone is not sufficient to overcome the banner blindness phenomenon. Specifically, in goal-directed tasks, the effect of creativity on recognition of banner ads depends on the ads' visual saliency and banner-page congruity: creative banners that are high on visual saliency and banner-page congruity yield higher recognition rates.

Practical implications

Creativity matters for attracting consumer attention, and in a web page context, where banner blindness prevails, the design of banners becomes even more important in this respect. Given the prominence of banners in online marketing, it is also necessary to tap the creative potential of banner ads.

Originality/value

First, focusing on how creativity influences memory for banner ads across distinct online user tasks not only provides promising theoretical insight into tackling banner blindness but also enriches research on advertising creativity. Second, contrary to the popular belief in the extant literature, the findings suggest that, in a web page context, improvement in memory for banner ads via creativity is subject to certain boundary conditions. Third, a computational neuroscience software program was used in this study to assess the visual saliency of banner ads, whereas signal detection theory was used to adjust recognition scores. This interdisciplinary examination combining the two perspectives sheds new light on online advertising research.
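
The signal-detection adjustment of recognition scores is conventionally the sensitivity statistic d′; a minimal stdlib sketch (the paper's exact adjustment procedure is not specified here, so this is the textbook form):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(H) - z(FA). Separates true
    recognition memory from response bias in yes/no recognition data."""
    z = NormalDist().inv_cdf   # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)
```

A participant who says "seen" to everything gets H = FA and d′ = 0, which is why raw recognition rates alone overstate memory for frequently endorsed banners.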

Details

Journal of Research in Interactive Marketing, vol. 15 no. 2
Type: Research Article
ISSN: 2040-7122

Keywords

Article
Publication date: 1 August 2016

Chunlei Li, Ruimin Yang, Zhoufeng Liu, Guangshuai Gao and Qiuli Liu

Fabric defect detection plays an important role in textile quality control. The purpose of this paper is to propose a fabric defect detection algorithm using learned…

Abstract

Purpose

Fabric defect detection plays an important role in textile quality control. The purpose of this paper is to propose a fabric defect detection algorithm using learned dictionary-based visual saliency.

Design/methodology/approach

First, the test fabric image is split into image blocks, and a learned dictionary of normal and defective samples is constructed by selecting the local binary pattern features of the image blocks with the highest or lowest similarity to the average feature vector. Second, the L largest correlation coefficients between each test image block and the dictionary are retained, and all other correlation coefficients are set to zero. Third, the sum of the nonzero coefficients corresponding to defective samples is used to generate a saliency map. Finally, an improved valley-emphasis method efficiently segments the defect region.
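
The top-L correlation step can be sketched generically as follows (the dictionary layout, `defect_mask` and all names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def block_saliency(feature, dictionary, defect_mask, L=2):
    """Correlate a block feature with each (unit-norm) dictionary atom, keep
    only the L largest coefficients, and sum those on defective atoms."""
    corr = dictionary.T @ feature            # correlation with every atom
    keep = np.argsort(corr)[-L:]             # indices of the L largest
    coef = np.zeros_like(corr)
    coef[keep] = corr[keep]                  # all other coefficients -> zero
    return coef[defect_mask].sum()           # saliency: mass on defect atoms
```

A block resembling the defective atoms accumulates its retained correlation mass there and scores high; a normal block's retained mass falls on normal atoms and its saliency stays near zero.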

Findings

Experimental results demonstrate that the saliency map generated by the proposed method highlights the defect region more effectively than state-of-the-art methods, and the segmentation results precisely localize the defect region.

Originality/value

In this paper, a novel fabric defect detection scheme is proposed via learned dictionary-based visual saliency.

Details

International Journal of Clothing Science and Technology, vol. 28 no. 4
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 14 August 2017

Ning Xian

The purpose of this paper is to propose a new algorithm, chaotic pigeon-inspired optimization (CPIO), which can effectively improve the computing efficiency of the basic Itti's…

Abstract

Purpose

The purpose of this paper is to propose a new algorithm, chaotic pigeon-inspired optimization (CPIO), which can effectively improve the computing efficiency of the basic Itti's model for saliency-based detection. The CPIO algorithm and relevant applications are aimed at target detection for air surveillance.

Design/methodology/approach

To compare the performance improvements on Itti's model, three bio-inspired algorithms, namely particle swarm optimization (PSO), brain storm optimization (BSO) and CPIO, are applied to optimize the weight coefficients of each feature map in the saliency computation.
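
The underlying task, choosing weight coefficients for the feature maps, can be sketched with a plain random search standing in for CPIO/PSO/BSO (the bio-inspired update rules themselves are omitted; the fitness function and all names are hypothetical):

```python
import numpy as np

def fuse(maps, w):
    """Weighted combination of feature maps (color, intensity, orientation...)."""
    return sum(wi * m for wi, m in zip(w, maps))

def optimize_weights(maps, target, iters=200, seed=0):
    """Random-search stand-in for a bio-inspired optimizer: score each
    candidate weight vector by squared error between fused map and target."""
    rng = np.random.default_rng(seed)
    best_w, best_err = None, np.inf
    for _ in range(iters):
        w = rng.random(len(maps))
        w /= w.sum()                      # normalized weight coefficients
        err = ((fuse(maps, w) - target) ** 2).sum()
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err
```

CPIO, PSO and BSO differ from this sketch only in how candidate weight vectors are generated and updated; the fitness evaluation per candidate is the same, which is why computing efficiency hinges on how few evaluations the search needs.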

Findings

According to the experimental results in optimized Itti’s model, CPIO outperforms PSO in terms of computing efficiency and is superior to BSO in terms of searching ability. Therefore, CPIO provides the best overall properties among the three algorithms.

Practical implications

The algorithm proposed in this paper can be extensively applied for fast, accurate and multi-target detection in aerial images.

Originality/value

The CPIO algorithm is originally proposed in this paper and is very promising for solving complicated optimization problems.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Open Access
Article
Publication date: 1 June 2022

Hua Zhai and Zheng Ma

An effective rail surface defect detection method is the basic guarantee of manufacturing high-quality rail. However, existing visual inspection methods have disadvantages such as…

Abstract

Purpose

An effective rail surface defect detection method is the basic guarantee of manufacturing high-quality rail. However, existing visual inspection methods have disadvantages such as a poor ability to locate the rail surface region and high sensitivity to uneven reflection. This study aims to propose a bionic rail surface defect detection method that achieves high detection accuracy under uneven reflection environments.

Design/methodology/approach

In this bionic rail surface defect detection algorithm, the rail surface region is located and corrected using maximum run-length smearing (MRLS) and background difference. A saliency image is then generated to simulate the human visual system using features including local grayscale, local contrast and the edge-corner effect. Finally, the mean shift algorithm and adaptive thresholding are used to cluster and segment the saliency image.
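
The run-length smearing step can be illustrated with the classic horizontal variant (a generic RLSA sketch for intuition, not the authors' MRLS implementation):

```python
import numpy as np

def smear_rows(binary, max_gap):
    """Run-length smearing: per row, fill gaps of background zeros no longer
    than max_gap that lie between foreground pixels, merging nearby runs."""
    out = binary.copy()
    for row in out:
        run_start = None                    # index just past last foreground
        for j, v in enumerate(row):
            if v:
                if run_start is not None and j - run_start <= max_gap:
                    row[run_start:j] = 1    # close the short gap
                run_start = j + 1
    return out
```

Merging foreground runs this way turns a fragmented rail-edge response into one connected band, which makes the surface region easy to localize.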

Findings

On the constructed rail defect data set, the bionic rail surface defect detection algorithm shows good recognition ability on rail surface defects. Pixel- and defect-level indices in the experimental results demonstrate that the detection algorithm outperforms three advanced rail defect detection algorithms and five saliency models.

Originality/value

The bionic rail surface defect detection algorithm in the production process is proposed. Particularly, a method based on MRLS is introduced to extract the rail surface region and a multifeature saliency fusion model is presented to identify rail surface defects.

Details

Sensor Review, vol. 42 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 15 April 2020

Xiaoliang Qian, Jing Li, Jianwei Zhang, Wenhao Zhang, Weichao Yue, Qing-E Wu, Huanlong Zhang, Yuanyuan Wu and Wei Wang

An effective machine vision-based method for micro-crack detection of solar cells can economically improve their qualified rate. However, how to extract features which…

Abstract

Purpose

An effective machine vision-based method for micro-crack detection of solar cells can economically improve their qualified rate. However, how to extract features that have both strong generalization and strong data representation ability is still an open problem for machine vision-based methods.

Design/methodology/approach

A micro-crack detection method based on adaptive deep features and visual saliency is proposed in this paper. The proposed method can adaptively extract deep features from the input image without any supervised training. Furthermore, considering the fact that micro-cracks can obviously attract visual attention when people look at the solar cell’s surface, the visual saliency is also introduced for the micro-crack detection.

Findings

Comprehensive evaluations are implemented on two existing data sets. The subjective experimental results show that most of the micro-cracks can be detected, and the objective experimental results show that the proposed method achieves better detection precision.

Originality/value

First, an adaptive deep features extraction scheme without any supervised training is proposed for micro-crack detection. Second, the visual saliency is introduced for micro-crack detection.

Details

Sensor Review, vol. 40 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 14 October 2013

Dong Liu, Ming Cong, Yu Du and Clarence W. de Silva

Indoor robotic tasks frequently specify target objects. For these applications, this paper aims to propose an object-based attention method using task-relevant features for target…

Abstract

Purpose

Indoor robotic tasks frequently specify target objects. For these applications, this paper aims to propose an object-based attention method that uses task-relevant features for target selection. The task-relevant features are deduced from the learned object representation in semantic memory (SM), and low-dimensional bias feature templates are obtained using a Gaussian mixture model (GMM) to achieve an efficient attention process. This method can be used to select a target in a scene, forming a task-specific representation of the environment, and improves scene understanding by driving the robot to a position from which the objects of interest can be detected with a smaller error probability.

Design/methodology/approach

Task definition and object representation in SM are proposed, and bias feature templates are obtained using GMM deduction to reduce features from high to low dimension. The mean shift method is used to segment the visual scene into discrete proto-objects. Given a task-specific object, top-down biased attention uses the learned statistical knowledge of the visual features of the desired target to weight the proto-objects, and generates the saliency map by combining this bias with bottom-up saliency-based attention so as to maximize target detection speed.
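
The top-down biasing idea can be sketched with a single-Gaussian stand-in for the GMM bias templates (a deliberate simplification; the blending rule and all names are assumptions):

```python
import numpy as np

def gaussian_bias(feature_map, mean, var):
    """Top-down bias: likelihood of each pixel's feature value under the
    learned target distribution (one Gaussian; the paper's GMM would mix
    several such components)."""
    return np.exp(-0.5 * (feature_map - mean) ** 2 / var)

def combined_saliency(bottom_up, feature_map, target_mean,
                      target_var=0.01, alpha=0.5):
    """Blend bottom-up saliency with the top-down bias map."""
    bias = gaussian_bias(feature_map, target_mean, target_var)
    return alpha * bottom_up + (1 - alpha) * bias
```

Regions whose features match the learned target statistics are boosted, so the combined map peaks on the task-specific object even when bottom-up saliency alone is flat.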

Findings

Experimental results show that the proposed GMM-based attention model provides an effective and efficient method for task-specific target selection under different conditions. The promising results show that the method may provide good approximation to how humans combine target cues to optimize target selection.

Practical implications

The present method has been successfully applied to numerous natural scenes in indoor robotic tasks. The proposed method has a wide range of applications and is being used in an intelligent homecare robot cognitive control project. Due to its computational cost, the current implementation has some limitations in real-time applications.

Originality/value

A novel attention model that uses a GMM to obtain the bias feature templates is proposed for attention competition. It provides a solution for object-based attention, and it effectively and efficiently improves search speed through autonomous feature deduction. The proposed model is adaptive, requiring no predefined distinct feature types for task-specific objects.

Details

Industrial Robot: An International Journal, vol. 40 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 26 August 2014

Xing Wang, Zhenfeng Shao, Xiran Zhou and Jun Liu

This paper aims to present a novel feature design that is able to precisely describe salient objects in images. With the development of space survey, sensor and information…

Abstract

Purpose

This paper aims to present a novel feature design that is able to precisely describe salient objects in images. With the development of space survey, sensor and information acquisition technologies, more complex objects appear in high-resolution remote sensing images. Traditional visual features are no longer precise enough to describe the images.

Design/methodology/approach

A novel remote sensing image retrieval method based on VSP (visual salient point) features is proposed in this paper. A key point detector and descriptor are used to extract the critical features and their descriptors in remote sensing images. A visual attention model is adopted to calculate the saliency map of the images, separating the salient regions from the background in the images. The key points in the salient regions are then extracted and defined as VSPs. The VSP features can then be constructed. The similarity between images is measured using the VSP features.
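
Selecting VSPs, key points retained only where the visual attention model marks the scene as salient, reduces to a mask test; a minimal sketch (the threshold and all names are assumptions, and the key point detector itself is elided):

```python
import numpy as np

def visual_salient_points(keypoints, saliency, thresh=0.5):
    """Keep only key points that fall inside salient regions: a key point at
    (row, col) survives if the saliency map exceeds thresh there."""
    return [(r, c) for r, c in keypoints if saliency[r, c] > thresh]
```

Filtering this way discards background key points before descriptor matching, which is what makes the VSP features more stable than raw key points for retrieval.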

Findings

According to the experimental results, compared with traditional visual features, VSP features are more precise and stable in representing diverse remote sensing images. The proposed method achieves better image retrieval precision than traditional methods.

Originality/value

This paper presents a novel remote sensing image retrieval method based on VSP features.

Details

Sensor Review, vol. 34 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 14 December 2021

Zhoufeng Liu, Menghan Wang, Chunlei Li, Shumin Ding and Bicao Li

The purpose of this paper is to focus on the design of a dual-branch balance saliency model based on a fully convolutional network (FCN) for automatic fabric defect detection, and…

Abstract

Purpose

The purpose of this paper is to focus on the design of a dual-branch balance saliency model based on a fully convolutional network (FCN) for automatic fabric defect detection, and to improve quality control in textile manufacturing.

Design/methodology/approach

This paper proposes a dual-branch balance saliency model based on discriminative features for fabric defect detection. A saliency branch is first designed to address the problems of scale variation and contextual information integration, realized through the cooperation of a multi-scale discriminative feature extraction module (MDFEM) and a bidirectional stage-wise integration module (BSIM). These modules are adopted, respectively, to extract multi-scale discriminative context information and to enrich the contextual information of features at each stage. In addition, another branch is proposed to balance the network, in which a bootstrap refinement module (BRM) is trained to guide the restoration of feature details.

Findings

To evaluate the performance of the proposed network, we conduct extensive experiments, and the results demonstrate that the proposed method outperforms state-of-the-art (SOTA) approaches on seven evaluation metrics. We also conduct thorough ablation analyses that provide a full understanding of the design principles of the proposed method.

Originality/value

The dual-branch balance saliency model was proposed and applied to fabric defect detection. The qualitative and quantitative experimental results show the effectiveness of the detection method. Therefore, the proposed method can be used for accurate fabric defect detection and even for surface defect detection of other industrial products.

Details

International Journal of Clothing Science and Technology, vol. 34 no. 3
Type: Research Article
ISSN: 0955-6222

Keywords
