Search results

1–10 of over 32,000
Article
Publication date: 1 April 2002

Uri Fidelman

Abstract

It is suggested that the left hemispheric neurons and the magnocellular visual system are specialized in tasks requiring a relatively small number of large neurons having a fast reaction time, due to a high firing rate or to many dendritic synapses of the same neuron being activated simultaneously. On the other hand, the right hemispheric neurons and the neurons of the parvocellular visual system are specialized in tasks requiring a relatively larger number of short term memory (STM) Hebbian engrams (neural networks). This larger number of engrams is achieved by a combination of two strategies. The first is evolving a larger number of neurons, which may be smaller and have a lower firing rate. The second is evolving longer and more branching axons, thus producing more engrams, including engrams comprising neurons located at cortical areas distant from each other. This model explains why the verbal functions of the brain are related to the left hemisphere, as well as the division of semantic tasks between the left and right hemispheres. The explanation is extended to other cognitive functions such as visual search, ontological cognition, the detection of temporal order, and the dual cognitive interpretation of perceived physical phenomena.

Details

Kybernetes, vol. 31 no. 3/4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 20 December 2021

Xingyu Wen, Jing Zhang, Mincheol Whang and Kaixuan Liu

Abstract

Purpose

The purpose of this paper is to discuss the relationship between a bra's visual impression and its parts, and then to explore the application of visual impression in bra design.

Design/methodology/approach

First, 82 female undergraduates were asked to answer an online questionnaire about the importance of different parts in bra design. For the data analysis, principal component analysis (PCA) was used to identify the relationships between bra parts and to reduce the dimensionality of the factors that influence bra design. The resulting groups of features were then discussed further from the perspective of visual design. Finally, an application was designed based on the conclusions.
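
The dimensionality-reduction step could look something like the following minimal Python sketch, assuming the responses are stored as a respondents-by-parts rating matrix; the variable names, the number of rated parts and the number of retained components are all illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 82 respondents rating the importance of 12 bra
# parts on a 1-5 scale (random placeholders, not the study's data).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(82, 12)).astype(float)

# Reduce the part-importance ratings to a few underlying factors.
pca = PCA(n_components=4)
scores = pca.fit_transform(responses)

# The loadings show how strongly each original part relates to each
# factor, which is the basis for grouping parts into design features.
loadings = pca.components_
print(pca.explained_variance_ratio_)
```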

Findings

To identify the features that influence bra appearance and to improve the visual design effect, this paper matches bra parts with visual features (color, texture, shape and space) and presents four main features of bra design: “color,” “visual texture,” “design shape” and “spatial expression,” together with the corresponding bra parts and techniques of expression. Moreover, a user interface for cloud-based bra customization is designed.

Practical implications

The conclusion, which establishes the correspondence between a bra's visual effect and its basic parts, plays an important role in bra visual design. First, it can inform design ideas using different techniques of expression, which may supply a theoretical basis for design. Second, the combination of bra parts and visual features can be used to evaluate a bra's appearance.

Originality/value

Discussing bra visual impression in terms of the bra's basic parts and visual features provides a theoretical method for bra design and its appearance evaluation.

Details

International Journal of Clothing Science and Technology, vol. 34 no. 3
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 3 August 2012

Chih‐Fong Tsai and Wei‐Chao Lin

Abstract

Purpose

Content‐based image retrieval suffers from the semantic gap problem: that images are represented by low‐level visual features, which are difficult to directly match to high‐level concepts in the user's mind during retrieval. To date, visual feature representation is still limited in its ability to represent semantic image content accurately. This paper seeks to address these issues.

Design/methodology/approach

In this paper the authors propose a novel meta‐feature representation method for scenery image retrieval. In particular, some class‐specific distances (namely meta‐features) between low‐level image features are measured: for example, the distance between an image and its class centre, and the distances between the image and its nearest and farthest images in the same class.
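
As a rough illustration, the three example distances named above could be computed as follows. This is a hedged sketch under the assumption that every image is already represented by a fixed-length low-level feature vector; all names are illustrative, and the paper defines further meta-features beyond these.

```python
import numpy as np

def meta_features(x, class_vectors):
    """Example class-specific distances for one image, given the
    low-level feature vectors of all images in its class (the paper
    defines further meta-features beyond these three)."""
    centre = class_vectors.mean(axis=0)
    dists = np.linalg.norm(class_vectors - x, axis=1)
    dists = dists[dists > 0]  # drop x itself (assumes no exact duplicates)
    return np.array([
        np.linalg.norm(x - centre),  # distance to the class centre
        dists.min(),                 # distance to the nearest same-class image
        dists.max(),                 # distance to the farthest same-class image
    ])
```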

Findings

Three experiments based on 190 concrete, 130 abstract, and 610 categories in the Corel dataset show that the meta‐features extracted from both global and local visual features significantly outperform the original visual features in terms of mean average precision.

Originality/value

Compared with traditional local and global low‐level features, the proposed meta‐features have higher discriminative power for distinguishing a large number of conceptual categories in scenery image retrieval. In addition, the meta‐features can be directly applied to other image descriptors, such as bag‐of‐words and contextual features.

Article
Publication date: 26 January 2022

Ziming Zeng, Shouqiang Sun, Tingting Li, Jie Yin and Yueyan Shen

Abstract

Purpose

The purpose of this paper is to build a mobile visual search service system for the protection of Dunhuang cultural heritage in the smart library. A novel mobile visual search model for Dunhuang murals is proposed to help users acquire rich knowledge and services conveniently.

Design/methodology/approach

First, local and global features of the images are extracted, and a visual dictionary is generated by k-means clustering. Second, the mobile visual search model based on the bag-of-words (BOW) model and multiple semantic associations is constructed. Third, the mobile visual search service system of the smart library is designed in the cloud environment. Furthermore, Dunhuang mural images are collected to verify the model.
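
Under the assumption that the local features are SIFT descriptors and the global feature is an HSV colour histogram (as the findings below indicate), the dictionary-building and BOW steps might be sketched as follows; the vocabulary size and function names are hypothetical, not the authors' code.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def collect_descriptors(image_paths):
    # Gather local SIFT descriptors from all training images.
    descs = []
    for p in image_paths:
        gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        _, d = sift.detectAndCompute(gray, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs)

def build_dictionary(descriptors, k=200):
    # The k-means cluster centres act as the visual words.
    return KMeans(n_clusters=k, n_init=10).fit(descriptors)

def bow_vector(image_path, kmeans):
    # Local feature: histogram of visual-word occurrences (BOW).
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, d = sift.detectAndCompute(gray, None)
    words = kmeans.predict(d)
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / hist.sum()

def hsv_histogram(image_path, bins=8):
    # Global feature: colour histogram over the HSV hue channel.
    hsv = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, hist).flatten()
```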

Findings

The findings reveal that the BOW_SIFT_HSV_MSA model has better search performance for Dunhuang mural images when the scale-invariant feature transform (SIFT) and the hue, saturation and value (HSV) are used to extract the local and global features of the images. Compared with alternative methods, this model is the most effective at searching images with semantic associations in the topic, time and space dimensions.

Research limitations/implications

The Dunhuang mural image set is only a part of the vast resources stored in the smart library, and fine-grained semantic labels could be applied to meet more diverse search needs.

Originality/value

The mobile visual search service system is constructed to provide users with Dunhuang cultural services in the smart library. A novel mobile visual search model based on BOW and multiple semantic associations is proposed. This study can also provide references for the protection and utilization of other cultural heritages.

Details

Library Hi Tech, vol. 40 no. 6
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 7 April 2023

Sixing Liu, Yan Chai, Rui Yuan and Hong Miao

Abstract

Purpose

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, lidar has a small sensing range and yields few data features, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a tightly coupled three-dimensional laser map building method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.

Design/methodology/approach

The visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using a bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain depth information for these points, and the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused with the laser point cloud information, and a threshold is established to construct a loop-closure framework that further reduces the cumulative drift error of the system over time.
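
The front-end outlier rejection could be approximated as below: a mutual (bidirectional) best-match check followed by a RANSAC fundamental-matrix fit, here via OpenCV. This is a sketch of the general technique, not the authors' implementation, and the thresholds are illustrative.

```python
import cv2
import numpy as np

def bidirectional_ransac_matches(kp1, desc1, kp2, desc2):
    """Keep matches that are mutual best matches in both directions,
    then reject remaining outliers with a RANSAC fundamental-matrix fit."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    fwd = {m.queryIdx: m.trainIdx for m in bf.match(desc1, desc2)}
    bwd = {m.queryIdx: m.trainIdx for m in bf.match(desc2, desc1)}
    mutual = [(q, t) for q, t in fwd.items() if bwd.get(t) == q]

    pts1 = np.float32([kp1[q].pt for q, _ in mutual])
    pts2 = np.float32([kp2[t].pt for _, t in mutual])
    # 1.0 px reprojection threshold and 0.99 confidence are illustrative.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    return [pair for pair, keep in zip(mutual, mask.ravel()) if keep]
```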

Findings

Experiments on publicly available data sets show that the trajectory estimated by the proposed method matches the real trajectory well. For various scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method was also verified in a real environment using an autonomous walking acquisition platform; a system loaded with the method can run well for a long time and adapts to the environmental conditions of multiple scenes.

Originality/value

A multi-sensor tight coupling method is proposed to fuse laser and vision information for an optimal solution of the position and attitude. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop-closure framework to reduce error accumulation. According to the experimental validation results, the accuracy and robustness of single-sensor SLAM algorithms can be improved.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 31 May 2022

Osamah M. Al-Qershi, Junbum Kwon, Shuning Zhao and Zhaokun Li

Abstract

Purpose

For the case of many content features, this paper aims to investigate which content features in video and text ads contribute more to accurately predicting the success of crowdfunding, by comparing prediction models.

Design/methodology/approach

With 1,368 features extracted from 15,195 Kickstarter campaigns in the USA, the authors compare base models such as logistic regression (LR) with tree-based homogeneous ensembles such as eXtreme gradient boosting (XGBoost) and heterogeneous ensembles such as XGBoost + LR.
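
A minimal sketch of such a base-versus-ensemble comparison is given below, using scikit-learn and the xgboost package; the data here are random placeholders standing in for the extracted campaign features, and the hyperparameters are illustrative rather than those of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Random placeholders standing in for the extracted campaign features
# (the study used 1,368 features from 15,195 Kickstarter campaigns).
rng = np.random.default_rng(0)
X = rng.random((2000, 100))
y = rng.integers(0, 2, 2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ensemble = XGBClassifier(n_estimators=300, max_depth=4).fit(X_tr, y_tr)

print("LR accuracy:     ", accuracy_score(y_te, base.predict(X_te)))
print("XGBoost accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```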

Findings

XGBoost shows higher prediction accuracy than LR (82% vs 69%), in contrast to the findings of a previous relevant study. Regarding important content features, humans (e.g. founders) are more important than visual objects (e.g. products). In both spoken and written language, words related to experience (e.g. eat) or perception (e.g. hear) are more important than cognitive words (e.g. causation). In addition, a focus on the future is more important than a present or past time orientation. Speech aids ("see" and "compare") that complement visual content are also effective, and a positive tone matters in speech.

Research limitations/implications

This research makes theoretical contributions by identifying the more important visual (human) and language (experience, perception and future time orientation) features. Also, in a multimodal context, complementary cues (e.g. speech aids) across different modalities help. Furthermore, noncontent aspects of speech, such as a positive tone or the pace of speech, are important.

Practical implications

Founders are encouraged to assess and revise the content of their video or text ads as well as their basic campaign features (e.g. goal, duration and reward) before they launch their campaigns. Next, overly complex ensembles may suffer from overfitting problems. In practice, model validation using unseen data is recommended.

Originality/value

Rather than reducing the number of content feature dimensions (Kaminski and Hopp, 2020), enabling advanced prediction models to accommodate many content features raises prediction accuracy substantially.

Article
Publication date: 14 June 2013

Edgardo Molina, Alpha Diallo and Zhigang Zhu

Abstract

Purpose

The purpose of this paper is to propose a local orientation and navigation framework based on visual features that provide location recognition, context augmentation, and viewer localization information to a blind or low‐vision user.

Design/methodology/approach

The authors consider three types of “visual noun” features: signage, visual‐text, and visual‐icons, which are proposed as a low‐cost method for augmenting environments. These are used in combination with an RGB‐D sensor and a simplified SLAM algorithm to develop a navigation assistance framework suitable for blind and low‐vision users.

Findings

It was found that signage detection can not only help a blind user to find a location, but can also be used to give accurate orientation and location information to guide the user through a complex environment. The combination of visual nouns for orientation and RGB‐D sensing for traversable path finding can be a cost‐effective solution for navigation assistance for blind and low‐vision users.
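
As a very rough sketch of the traversable-path side of that combination, one naive approach is to check a ground band of the depth image for sufficient clearance. Everything here (band size, clearance threshold, the assumption that the bottom rows of the frame see the floor ahead) is an illustrative simplification, not the paper's algorithm.

```python
import numpy as np

def traversable_columns(depth_m, min_clearance_m=1.0, ground_rows=60):
    """Naive per-column clearance check on one depth frame (metres).
    Assumes the bottom rows of the image see the floor ahead; a zero
    depth reading means 'no data' and is treated as blocked."""
    band = depth_m[-ground_rows:, :]
    clear = np.where(band > 0, band > min_clearance_m, False)
    return clear.all(axis=0)  # True where the column looks open
```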

Research limitations/implications

This is the first step for a new approach to the self‐localization and local navigation of a blind user using both signs and 3D data. The approach is meant to be cost‐effective, but it only works in man‐made scenes where many signs exist or can be placed and where those signs are relatively permanent in appearance and location.

Social implications

According to 2012 World Health Organization figures, 285 million people are visually impaired, of whom 39 million are blind. This project will have a direct impact on this community.

Originality/value

Signage detection has been widely studied for assisting visually impaired people in finding locations, but this paper provides the first attempt to use visual nouns as visual features to accurately locate and orient a blind user. The combination of visual nouns with 3D data from an RGB‐D sensor is also new.

Details

Journal of Assistive Technologies, vol. 7 no. 2
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 30 June 2021

Grazia Murtarelli, Stefania Romenti and Chiara Valentini

Abstract

Purpose

Online images can convey sensory-based elements affecting digital users' emotions and digital engagement. The purpose of this study is to investigate which image-based features are more effective in conveying and stimulating particular emotions and engagement towards organizations operating in the food industry.

Design/methodology/approach

An online experimental survey was implemented. Two image-based features, narrativity and dynamism, were chosen. The stimuli comprised four images, one with a high and one with a low level of narrativity, and one with high and one with low dynamism, each published by a food company on its official Instagram account. Food identity, emotional appeals and digital visual engagement behaviours were measured. A total of 141 students between 19 and 25 years old at a European university completed the questionnaire. Data were analysed with SPSS software using t-tests.
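
The core comparison behind such a design is an independent-samples t-test between the two conditions of each feature; a minimal sketch using SciPy follows, with placeholder ratings rather than the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

# Placeholder emotion ratings for the high- vs low-narrativity
# image conditions (not the study's data).
high_narrativity = np.array([5.1, 4.8, 5.5, 4.9, 5.3, 5.0])
low_narrativity = np.array([3.9, 4.2, 3.7, 4.0, 4.4, 3.8])

t_stat, p_value = ttest_ind(high_narrativity, low_narrativity)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```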

Findings

Results show that both narrativity and dynamism affect digital users' emotions and were also found to affect digital visual engagement attitudes. Food involvement, measured in terms of food identity, influenced the effects of the specific image-based features on emotions and visual engagement.

Research limitations/implications

The study focuses on only two visual social semiotics features – narrativity and dynamism – and therefore, only partially captures the potentialities of images in digital communications.

Practical implications

This study provides professionals with empirical evidence and insights for effectively planning a visual social media strategy.

Originality/value

This paper contributes to the stream of research in social media communications by investigating the visual social semiotic features of images published online by a food company.

Details

British Food Journal, vol. 124 no. 1
Type: Research Article
ISSN: 0007-070X

Article
Publication date: 31 May 2013

Qijin Chen, Jituo Li, Zheng Liu, Guodong Lu, Xinyu Bi and Bei Wang

Abstract

Purpose

Clothing retrieval is very useful in helping clients efficiently search for the apparel they want. Currently, the mainstream clothing retrieval methods are based on attribute semantics, which is inconvenient for ordinary clients. The purpose of this paper is to provide an easy‐to‐operate apparel retrieval mode with the authors' novel approach to clothing image similarity measurement.

Design/methodology/approach

The authors measure the similarity between two clothing images by computing the weighted similarities between their bundled features. Each bundled feature consists of point features (SIFT) that are further quantified into local visual words within a maximally stable extremal region (MSER). The authors weight the importance of bundled features by the precision of the SIFT quantification and by the local word frequency, which reflects how often the common visual words appear in two bundled features. Bundled‐feature similarity is computed from two aspects: the local word frequency; and a SIFT distance matrix that records the distances between every two SIFTs in a bundled feature.
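
The grouping of SIFT point features by MSER regions could be sketched as follows with OpenCV; this is a generic illustration of feature bundling, not the authors' code, and the containment test by bounding box is a simplification.

```python
import cv2

def bundled_features(gray):
    """Group SIFT descriptors by the MSER region whose bounding box
    contains them; each group stands in for one 'bundled feature'."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(gray)  # bounding boxes (x, y, w, h)

    bundles = []
    for (x, y, w, h) in boxes:
        inside = [i for i, kp in enumerate(keypoints)
                  if x <= kp.pt[0] <= x + w and y <= kp.pt[1] <= y + h]
        if inside:
            bundles.append(descriptors[inside])
    return bundles
```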

Findings

Local word frequency improves the discrimination between two bundled features that share the same common visual words but differ in local word frequency. The SIFT distance matrix has the merits of scale invariance and rotation invariance. Experimental results show that this approach works well in situations with large clothing deformation, background changes, partial occlusion, etc., and that the Weight+Bundled+LWF+SDM similarity measurement performs best.

Originality/value

This paper presents an apparel retrieval mode based on local visual features, together with a new algorithm for bundled feature matching and apparel similarity measurement.

Details

International Journal of Clothing Science and Technology, vol. 25 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 26 August 2014

Xing Wang, Zhenfeng Shao, Xiran Zhou and Jun Liu

Abstract

Purpose

This paper aims to present a novel feature design that is able to precisely describe salient objects in images. With the development of space survey, sensor and information acquisition technologies, ever more complex objects appear in high-resolution remote sensing images, and traditional visual features are no longer precise enough to describe the images.

Design/methodology/approach

A novel remote sensing image retrieval method based on VSP (visual salient point) features is proposed in this paper. A key point detector and descriptor are used to extract the critical features and their descriptors in remote sensing images. A visual attention model is adopted to calculate the saliency map of the images, separating the salient regions from the background in the images. The key points in the salient regions are then extracted and defined as VSPs. The VSP features can then be constructed. The similarity between images is measured using the VSP features.
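
The saliency filtering step might be sketched as follows, assuming a spectral-residual saliency model (available in the opencv-contrib build) and SIFT as the key point detector; the paper's specific detector and attention model may differ, and the threshold is illustrative.

```python
import cv2

def visual_salient_points(image_bgr, threshold=0.5):
    """Keep only key points falling in salient regions; spectral-residual
    saliency needs the opencv-contrib build, and the threshold is
    illustrative rather than taken from the paper."""
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, saliency_map = sal.computeSaliency(image_bgr)  # values in [0, 1]

    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    keep = [i for i, kp in enumerate(keypoints)
            if saliency_map[int(kp.pt[1]), int(kp.pt[0])] > threshold]
    return [keypoints[i] for i in keep], descriptors[keep]
```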

Findings

According to the experiment results, compared with traditional visual features, VSP features are more precise and stable in representing diverse remote sensing images. The proposed method performs better than the traditional methods in image retrieval precision.

Originality/value

This paper presents a novel remote sensing image retrieval method based on VSP features.

Details

Sensor Review, vol. 34 no. 4
Type: Research Article
ISSN: 0260-2288
