Search results

1 – 10 of over 2000
Article
Publication date: 1 November 2005

Mohamed Hammami, Youssef Chahir and Liming Chen


Abstract

Along with the ever-growing Web comes the proliferation of objectionable content, such as sex, violence and racism, creating a need for efficient tools to classify and filter undesirable web content. In this paper, we investigate this problem through WebGuard, our automatic, machine-learning-based pornographic website classification and filtering system. Because the Internet is increasingly visual and multimedia-rich, as exemplified by pornographic websites, we focus on combining skin-color-related visual content analysis with textual and structural content analysis to improve pornographic website filtering. While most commercial filtering products on the market rely mainly on textual content analysis, such as indicative keyword detection or manually collected blacklist checking, the originality of our work lies in adding structural and visual content analysis to the classical textual analysis, together with several major data mining techniques for learning and classification. Tested on a testbed of 400 websites, comprising 200 adult sites and 200 non-pornographic ones, WebGuard, our Web filtering engine, scored a 96.1 per cent classification accuracy rate when only textual and structural content analysis was used, and 97.4 per cent when skin-color-related visual content analysis was added. Further experiments on a blacklist of 12,311 adult websites manually collected and classified by the French Ministry of Education showed that WebGuard scored 87.82 per cent classification accuracy when using only textual and structural content analysis, and 95.62 per cent when visual content analysis was added. The basic framework of WebGuard can be applied to other website categorization problems that combine, as most websites do today, textual and visual content.
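The textual-plus-visual combination the abstract describes can be pictured as a toy scoring classifier. The keyword list, weights and threshold below are illustrative assumptions, not WebGuard's actual parameters or data mining models:

```python
# Toy sketch of combining textual and skin-colour visual cues for page
# classification, in the spirit of WebGuard (not the authors' code).
INDICATIVE_KEYWORDS = {"adult", "xxx", "explicit"}  # hypothetical list

def textual_score(page_text: str) -> float:
    """Fraction of indicative keywords present in the page text."""
    words = set(page_text.lower().split())
    return len(words & INDICATIVE_KEYWORDS) / len(INDICATIVE_KEYWORDS)

def classify(page_text: str, skin_pixel_ratio: float,
             w_text: float = 0.6, w_visual: float = 0.4,
             threshold: float = 0.5) -> bool:
    """Flag a page when the weighted text + visual score passes a threshold."""
    score = w_text * textual_score(page_text) + w_visual * skin_pixel_ratio
    return score >= threshold
```

In the paper itself the decision is learned by data mining techniques rather than a fixed weighted threshold; the sketch only shows how the two feature families feed one decision.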

Details

International Journal of Web Information Systems, vol. 1 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 25 January 2020

Shiqing Wu, Zhonghou Wang, Bin Shen, Jia-Hai Wang and Li Dongdong


Abstract

Purpose

The purpose of this study is to achieve multi-variety and small-batch assembly through direct cooperation between equipment and people and to improve assembly efficiency as well as flexibility.

Design/methodology/approach

First, the concept of the human–computer interaction is designed. Second, the machine vision technology is studied theoretically. A skin color filter based on the hue, saturation and value (HSV) color model is proposed to screen out image regions that match the skin color characteristics of the worker, and a multi-Gaussian weighted model is built to separate moving objects from the background. The two are combined to obtain the final images of the target objects. The key technology is then applied to the smart assembly workbench. Finally, experiments are conducted to evaluate the role of the human–computer interaction features in improving productivity on the smart assembly workbench.
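A minimal sketch of the HSV skin-colour screening step, assuming illustrative threshold values rather than the paper's calibration:

```python
import numpy as np

def skin_mask_hsv(hsv_image: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels falling in a skin-colour HSV range.

    hsv_image: (H, W, 3) array with hue in [0, 360), saturation and
    value in [0, 1]. The thresholds are illustrative assumptions.
    """
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    # Skin tones cluster at low hues with moderate saturation and brightness.
    return (h < 50) & (s > 0.1) & (s < 0.7) & (v > 0.2)
```

In the full pipeline this mask would be intersected with the foreground mask from the multi-Gaussian background model to isolate the worker's moving hands.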

Findings

The results show that multi-variety, small-batch assembly considerably increases assembly time, and that the developed human–computer interaction features, including prompting and introduction, effectively decrease it.

Originality/value

This study proves that the machine vision technology studied in this paper can effectively eliminate the interferences of the environment to obtain the target image. By adopting the human–computer interaction features, including prompting and introduction, the efficiency of manual operation is improved greatly, especially for multi-variety and small-batch assembly.

Details

Assembly Automation, vol. 40 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 9 December 2019

Wei-Yen Hsu


Abstract

Purpose

Virtual medical instrumentation plays a vital role in a telemedicine system, supplying the instrument data that doctors at a remote location require to diagnose a patient. In recent years, the analysis of skin quality by telemedicine systems has become an emerging trend. So that beauty products can better suit the skin and achieve improved results, the purpose of this study is to propose a system that can objectively evaluate the condition of facial skin and match it with appropriate beauty and cosmetic products.

Design/methodology/approach

A novel customer-oriented medical system is proposed for telemedicine applications in this study. Its aim is to improve the quality and rate of information transfer, further enhancing communication between medical staff and patients in telemedicine. More specifically, facial skin is recorded in digital images, and skin detection is performed using image processing technology to help doctors provide medical treatment to patients at the far end.

Findings

The roughness, freckles and acne indicators were evaluated after obtaining the skin images. These three indicators were used as input to the system, and skin scores were then calculated to evaluate skin conditions and provide better-matched skin care.
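The three-indicator scoring step could look like the following sketch; the weights and the 0–100 scale are assumptions for illustration, not values from the paper:

```python
def skin_score(roughness: float, freckles: float, acne: float,
               weights=(0.4, 0.3, 0.3)) -> float:
    """Combine three normalised indicators (0 = best, 1 = worst) into a
    0-100 skin score, where higher means better skin condition.

    The weighting is an illustrative assumption.
    """
    penalty = (weights[0] * roughness
               + weights[1] * freckles
               + weights[2] * acne)
    return round(100.0 * (1.0 - penalty), 1)
```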

Originality/value

The system can help address health problems that have occurred and also records the skin condition at each test. Experimental results suggest that it is suitable for telemedicine applications.

Details

The Electronic Library, vol. 37 no. 6
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 6 May 2014

Yong-Hwan Lee, Hyochang Ahn, Han-Jin Cho and June-Hwan Lee


Abstract

Purpose

The approach presented in this paper has the major advantage of enabling face recognition regardless of time and place. Its performance is also independent of the smartphone, because processing is carried out by a third-party computer rather than by the mobile device itself. In addition, it is desirable to minimize expensive operations on a mobile device with constrained computational power (i.e. battery consumption); thus, the authors exclude failed transmissions from the input device. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, the authors propose a new face detection and verification algorithm based on skin color detection, which extracts the face region from color images captured by the mobile phone. Facial features are then extracted as eigenfaces, and a support vector machine is applied to the detected face region to verify the identity of the user.
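The eigenface feature-extraction stage can be sketched as a small PCA, with the resulting projections serving as input features for an SVM; this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def eigenfaces(train_faces: np.ndarray, k: int):
    """Compute the top-k eigenfaces of flattened training face vectors.

    train_faces: (n_samples, n_pixels). Returns (mean, components),
    where components has shape (k, n_pixels).
    """
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    # Rows of vt are the principal directions (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face: np.ndarray, mean: np.ndarray,
            components: np.ndarray) -> np.ndarray:
    """Project one flattened face onto the eigenface basis."""
    return components @ (face - mean)
```

The k-dimensional projection of each detected face would then be fed to the SVM for identity verification.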

Findings

Experimental results on two datasets show that the proposed method achieves slightly higher detection and identity verification efficiency than other methods under varying lighting conditions with complex backgrounds, while being faster and more accurate than previous methods.

Originality/value

The proposed algorithm enables fast and accurate search, using a triangle-square transformation for detecting human faces in digital still color images obtained by the mobile device camera in unconstrained environments, together with an advanced skin color model and characteristic points in the detected face.

Details

Journal of Systems and Information Technology, vol. 16 no. 2
Type: Research Article
ISSN: 1328-7265

Keywords

Article
Publication date: 18 September 2017

Ali Sohaib, Laurence Broadbent, Abdul Rehman Farooq, Lyndon Neal Smith and Melvyn Lionel Smith


Abstract

Purpose

Significant research has been carried out in terms of development of new bidirectional reflectance distribution function (BRDF) instruments; however, there is still little research available regarding spectral BRDF measurements of human skin. This study aims to investigate the variation in human skin reflectance using a new fibre optic-based spectral-BRDF measurement device.

Design/methodology/approach

The design of the system mainly involves multiple fibre optics to illuminate a sample and detect the light reflected from it, with a 3D-printed hemispherical dome used to mount the fibres at various slant/tilt angles. To investigate spectral differences in the BRDF of human skin, three narrowband filters in the visible spectrum were used, and measurements were taken from the back of the hand for Caucasian and Asian skin types.

Findings

The experiments demonstrate that the BRDF of human skin varies with wavelength in the visible spectrum and also differs between Caucasian and Asian skin types. Both skin types exhibit off-specular reflection as the angle of incidence increases, and show less variation with respect to viewing angle when the incident light is normal to the surface.
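The quantity such a device measures can be illustrated by the textbook BRDF estimate, reflected radiance over projected incident irradiance; this is a generic approximation, not the authors' calibration procedure:

```python
import math

def brdf(reflected_radiance: float, incident_irradiance: float,
         incidence_angle_deg: float) -> float:
    """Textbook BRDF estimate: f_r = L_r / (E_i * cos(theta_i)).

    reflected_radiance: radiance measured at the detector fibre,
    incident_irradiance: irradiance of the source at normal incidence,
    incidence_angle_deg: angle between the source and the surface normal.
    """
    cos_theta = math.cos(math.radians(incidence_angle_deg))
    return reflected_radiance / (incident_irradiance * cos_theta)
```

For a perfect Lambertian reflector this estimate is constant at 1/pi across all view angles, which gives a quick sanity check for such a rig.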

Research implications

A database of spectral BRDF measurements of human skin will help not only in creating realistic skin renderings but also in development of novel skin reflectance models for biomedical and machine vision applications. The measurements would also provide means to validate the predictions from existing light transport/spectral simulation models for human skin and will ultimately help in the accurate diagnosis and simulation of various skin disorders.

Originality/value

The proposed system provides fast scatter measurements by utilising multiple fibres to detect light simultaneously at different angles while also allowing easy switching between incident light directions. Due to its flexible design and contact-based measurements, the device is independent of errors due to sample movements and does not require any image registration. Also, measurements taken from the device show that the BRDF of skin varies significantly in the visible spectrum and it is different for Caucasian and Asian skin types.

Details

Sensor Review, vol. 37 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 30 January 2007

Zhi Liu, Qingli Li, Jing‐qi Yan and Qun‐lin Tang


Abstract

Purpose

Tongue diagnosis is a standard expert technique of traditional Chinese medicine (TCM). Computerized tongue diagnosis promises to automate the process, yet the tongue image segmentation on which it depends is made difficult by the fact that the tongue is non‐rigid and varies greatly in size, shape, color and texture. This paper presents a novel medical sensor system for TCM tongue diagnosis, which makes use of hyperspectral imaging technology.

Design/methodology/approach

The tongue image capturing sensor device for Chinese medicine is based on the theory of the pushbroom hyperspectral imager. The paper illustrates its advantages by detecting the tongue contour in hyperspectral images.

Findings

Experiments on 1,522 clinical tongue images show the validity of the system.

Practical implications

In this paper, the authors propose to use hyperspectral technology for tongue diagnosis for the first time in the literature and obtain promising results.

Originality/value

The novel sensor for tongue image capture provides a new approach for tongue information collection.

Details

Sensor Review, vol. 27 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 5 August 2014

Hairong Jiang, Juan P. Wachs and Bradley S. Duerstock


Abstract

Purpose

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface system, incorporating object tracking and face recognition, was developed specifically for individuals with upper-level spinal cord injuries to function as an efficient, hands-free WMRM controller.

Design/methodology/approach

Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures and locate the operator's face for object positioning, and to send these as commands to control the WMRM. The other sensor was used to automatically recognize the different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected object. Automatic face detection was provided as a shortcut, enabling objects to be positioned close to the subject's face.

Findings

The gesture recognition interface incorporated hand detection, tracking and recognition algorithms, and yielded a recognition accuracy of 97.5 per cent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.

Originality/value

The integration of three computer vision modules was used to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 14 September 2012

Ong Chin Ann and Lau Bee Theng


Abstract

Purpose

The purpose of this paper is to investigate an idea of producing an assistive and augmentative communication (AAC) tool that uses natural human computer interfacing to accommodate the disabilities of children with cerebral palsy (CP) and assist them in their daily communication.

Design/methodology/approach

The authors developed a prototype that recognizes, in real time, the emotions displayed on the face and sends alerts to caretakers through the Short Message Service (SMS) or a loudspeaker.

Findings

The evaluation results show that the proposed prototype recognizes real-time facial expressions of children with CP with an average accuracy of 79.4 per cent and a maximum of 88.3 per cent (standard deviation of 7.4 per cent) for ten children with CP. Evaluations were also conducted to investigate the effectiveness of the prototype at delivering critical expression messages to caretakers. The results showed that 98.5 per cent of SMS messages were sent successfully to the caretakers (at a pre‐defined mobile phone number), with an average waiting time of 8.3 seconds.

Originality/value

The paper demonstrates the potential of the proposed prototype to assist children with CP to communicate with their caretakers in real time.

Article
Publication date: 15 June 2010

Xinrong Hu and Bugao Xu


Abstract

Purpose

The purpose of this paper is to develop a fast parameterized modeling approach to generate individualized dress forms for realistic human bodies.

Design/methodology/approach

An individualized dress form is created by deriving a new set of fitting functions from a number of key existing dressing parameters and pre‐defined templates. The fitting functions contain only simple shapes of circular and/or elliptical arcs, which can be modified computationally based on a few items of personal dressing data.
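The elliptical-arc idea can be pictured as a parametric cross-section generator; the parameter names and sampling scheme are assumptions for illustration, not the paper's fitting functions:

```python
import math

def elliptical_cross_section(width: float, depth: float, n: int = 36):
    """Sample n points on an elliptical body cross-section derived from
    two dressing parameters: overall width and depth of the torso slice.

    Returns a list of (x, y) points around the ellipse.
    """
    a, b = width / 2.0, depth / 2.0  # semi-axes from the girth parameters
    return [(a * math.cos(2.0 * math.pi * i / n),
             b * math.sin(2.0 * math.pi * i / n)) for i in range(n)]
```

Stacking such cross-sections at key body levels (bust, waist, hip) and lofting between them is one simple way to realise a personalized dress form from a handful of parameters.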

Findings

This paper reaffirms that individual body shape can be adequately described by a number of critical cross‐section silhouettes, and a personalized dress form can be constructed based on key dressing parameters and templates.

Originality/value

The fitting functions and relevant dressing data for specific cross‐sectional silhouettes are determined, permitting a user to create personalized dress forms only by inputting a simple set of dressing parameters.

Details

International Journal of Clothing Science and Technology, vol. 22 no. 2/3
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 30 August 2013

Rainhard Dieter Findling and Rene Mayrhofer


Abstract

Purpose

Personal mobile devices currently have access to a significant portion of their user's private sensitive data and are increasingly used for processing mobile payments. Consequently, securing access to these mobile devices is a requirement for securing access to the sensitive data and potentially costly services. The authors propose and evaluate a first version of a pan shot face unlock method: a mobile device unlock mechanism using all information available from a 180° pan shot of the device around the user's head – utilizing biometric face information as well as sensor data of built‐in sensors of the device. The paper aims to discuss these issues.

Design/methodology/approach

This approach uses grayscale 2D images, on which the authors perform frontal and profile face detection. For face recognition, the authors evaluate different support vector machines and neural networks. To reproducibly evaluate this pan shot face unlock toolchain, the authors assembled the 2013 Hagenberg stereo vision pan shot face database, which the authors describe in detail in this article.
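One way to picture how recognition results from the pan shot's multiple views could be combined is a weighted score fusion; the uniform weighting below is an illustrative assumption, not the paper's method:

```python
def fuse_pan_shot_scores(view_scores, view_weights=None):
    """Fuse per-view face recognition confidences from a 180-degree pan
    shot into a single unlock score.

    view_scores: dict mapping view name (e.g. "frontal", "left profile")
    to a confidence in [0, 1]. Defaults to uniform weights.
    """
    if view_weights is None:
        view_weights = {view: 1.0 for view in view_scores}
    total = sum(view_weights[view] for view in view_scores)
    return sum(view_scores[view] * view_weights[view]
               for view in view_scores) / total
```

The fused score would then be compared against an unlock threshold, so that a weak frontal match can be compensated by strong profile matches and vice versa.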

Findings

Current results indicate that the approach to face recognition is sufficient for further usage in this research. However, face detection is still error prone for the mobile use case, which consequently decreases the face recognition performance as well.

Originality/value

The contributions of this paper include: introducing pan shot face unlock as an approach to increase security and usability during mobile device authentication; introducing the 2013 Hagenberg stereo vision pan shot face database; evaluating this current pan shot face unlock toolchain using the newly created face database.

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371

Keywords
