Search results
1 – 10 of over 29,000 results

Hima Bindu and Manjunathachari K.
Abstract
Purpose
This paper aims to develop a hybrid feature descriptor and a probabilistic neuro-fuzzy system for attaining high accuracy in face recognition. Facial recognition (FR) systems play a vital part in several applications, such as surveillance, access control and image understanding. Various face recognition methods have accordingly been developed in the literature, but their applicability is restricted by unsatisfactory accuracy, so improving face recognition remains significant for the current trend.
Design/methodology/approach
This paper proposes a face recognition system based on feature extraction and classification. The proposed model extracts both the local and the global features of the image. The local features are extracted using the kernel-based scale-invariant feature transform (K-SIFT) model, and the global features are extracted using the proposed m-Co-HOG model, which builds on the co-occurrence histograms of oriented gradients (Co-HOG) algorithm. The feature vector database contains the combined local and global feature vectors derived using the K-SIFT model and the proposed m-Co-HOG algorithm. The paper proposes a probabilistic neuro-fuzzy classifier for finding the identity of a person from the extracted feature vector database.
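As an illustration of the local-plus-global fusion idea described in this abstract, the following is a minimal numpy sketch. The patch-mean "local" descriptor and the whole-image gradient-orientation histogram are crude stand-ins for K-SIFT and m-Co-HOG respectively, not the paper's algorithms; all names and sizes are illustrative.

```python
import numpy as np

def global_hog_descriptor(img, bins=9):
    """Histogram of gradient orientations over the whole image --
    a crude stand-in for the paper's m-Co-HOG global descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)                # normalise to unit mass

def local_patch_descriptor(img, size=8):
    """Mean intensity per patch -- a placeholder for K-SIFT local features."""
    h, w = img.shape
    patches = img[: h // size * size, : w // size * size]
    patches = patches.reshape(h // size, size, w // size, size)
    return patches.mean(axis=(1, 3)).ravel()

def combined_feature_vector(img):
    """Concatenate local and global descriptors, mirroring the combined
    feature vector database the abstract describes."""
    return np.concatenate([local_patch_descriptor(img), global_hog_descriptor(img)])

face = np.random.default_rng(0).random((64, 64))     # toy grayscale face image
vec = combined_feature_vector(face)                  # 64 patch means + 9 bins
```

The concatenated vector would then be what a classifier (here, the paper's neuro-fuzzy system) consumes.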
Findings
The face images required for the simulation of the proposed work are taken from the CVL database. The simulation considers a total of 114 persons from the CVL database. The results show that the proposed model outperforms the existing models, with an improved accuracy of 0.98. The false acceptance rate (FAR) and false rejection rate (FRR) of the proposed model are both low, at 0.01.
Originality/value
This paper proposes a face recognition system with proposed m-Co-HOG vector and the hybrid neuro-fuzzy classifier. Feature extraction was based on the proposed m-Co-HOG vector for extracting the global features and the existing K-SIFT model for extracting the local features from the face images. The proposed m-Co-HOG vector utilizes the existing Co-HOG model for feature extraction, along with a new color gradient decomposition method. The major advantage of the proposed m-Co-HOG vector is that it utilizes the color features of the image along with other features during the histogram operation.
Gergely Orbán and Gábor Horváth
Abstract
Purpose
The purpose of this paper is to show an efficient method for the detection of signs of early lung cancer. Various image processing algorithms are presented for different types of lesions, and a scheme is proposed for the combination of results.
Design/methodology/approach
A computer-aided detection (CAD) scheme was developed for the detection of lung cancer. It enables different lesion enhancer algorithms, each sensitive to a specific lesion subtype, to be used simultaneously. Three image processing algorithms are presented for the detection of small nodules, large nodules and infiltrated areas. The outputs are merged, and the false detection rate is reduced with four separate support vector machine (SVM) classifiers. The classifier input comes from a feature selection algorithm that selects from various textural and geometric features. A total of 761 images were used for testing, including the database of the Japanese Society of Radiological Technology (JSRT).
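The fusion step this abstract describes, in which candidates from several detectors are merged and false positives are filtered, can be sketched as below. The score threshold is only a placeholder for the paper's four SVM classifiers, and all coordinates and scores are invented for illustration.

```python
import math

def merge_candidates(detector_outputs, min_dist=10.0):
    """Merge candidate lesions from several detectors, dropping duplicates
    that fall within `min_dist` pixels of an already-kept, higher-scoring
    candidate. A sketch of the fusion idea, not the published scheme."""
    merged = [c for cands in detector_outputs for c in cands]
    merged.sort(key=lambda c: c[2], reverse=True)     # highest score first
    kept = []
    for x, y, s in merged:
        if all(math.hypot(x - kx, y - ky) >= min_dist for kx, ky, _ in kept):
            kept.append((x, y, s))
    return kept

def filter_false_positives(candidates, threshold=0.5):
    """Placeholder for the per-subtype SVM false-positive classifiers."""
    return [c for c in candidates if c[2] >= threshold]

small_nodules = [(12, 15, 0.9), (40, 40, 0.3)]        # (x, y, score) candidates
large_nodules = [(13, 16, 0.7), (80, 20, 0.6)]
infiltrated   = [(80, 22, 0.4)]
final = filter_false_positives(
    merge_candidates([small_nodules, large_nodules, infiltrated]))
```

Here the near-duplicate detections at (13, 16) and (80, 22) are absorbed by stronger neighbours, and the weak (40, 40) candidate is rejected by the filter.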
Findings
The fusion of algorithms reduced false positives on average by 0.6 per image, while the sensitivity remained at 80 per cent. On the JSRT database the system found 60.2 per cent of lesions at an average of 2.0 false positives per image. The effect of using different result evaluation criteria was tested, and a difference as high as 4 percentage points in sensitivity was measured. The system was compared with other published methods.
Originality/value
The study described in the paper proves the usefulness of lesion enhancement decomposition, while proposing a scheme for the fusion of algorithms. Furthermore, a new algorithm is introduced for the detection of infiltrated areas, possible signs of lung cancer, neglected by previous solutions.
Steffen Knak‐Nielson and Susanne Ornager
Abstract
Interactive video projects where a laser disc is linked to a microcomputer are a new trend in research on information transfer and library development. The high cost of these projects presents a drawback. The aim of this paper is to illustrate how libraries and information centres can increase utilisation of non‐book reference materials by using inexpensive microcomputer equipment for image storage. Collections of pictures, archival materials and maps can be stored by capturing the images on video and transferring the frames to a database on a microcomputer. The description and the image can be viewed together when searching the materials. The research project described here considers the quality of the pictures in the image database, as well as time calculations for image database production. The project aims at proposing a low‐cost solution to image information storage on microcomputers in libraries and information centres.
Abstract
This paper surveys theoretical and practical issues associated with a particular type of information retrieval problem, namely that where the information need is pictorial. The paper is contextualised by the notion of a visually stimulated society, in which the ease of record creation and transmission in the visual medium is contrasted with the difficulty of gaining effective subject access to the world's stores of such records. The technological developments which, in casting the visual image in electronic form, have contributed so significantly to its availability are reviewed briefly, as a prelude to the main thrust of the paper. Concentrating on still and moving pictorial forms of the visual image, the paper dwells on issues related to the subject indexing of pictorial material and discusses four models of pictorial information retrieval corresponding with permutations of the verbal and visual modes for the representation of picture content and of information need.
Abstract
Purpose
This research project focuses on developing techniques and technologies for automatically identifying human faces from images in situations where the face samples in the database, as well as the input query images, are taken "as is", i.e. no standard data collection environment is available. The developed method can also be used in other biometric applications.
Design/methodology/approach
The specific method presented in this paper is called scale independent identification (SII). SII allows direct "comparison" between two images in terms of whether the two objects (e.g. faces) in the two images are the same object (i.e. the same individual). SII is developed using matrix computation theory, in particular singular value decomposition (SVD).
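To illustrate the SVD foundation this abstract refers to, the numpy sketch below compares two images through their normalised singular-value spectra, which are insensitive to uniform intensity scaling. This is only an illustration of the underlying idea, not the published SII algorithm.

```python
import numpy as np

def sv_signature(img, k=8):
    """Leading singular values, normalised by the largest one -- a simple
    signature that is invariant to uniform intensity scaling."""
    s = np.linalg.svd(img.astype(float), compute_uv=False)[:k]
    return s / s[0]

def similarity(img_a, img_b, k=8):
    """Compare two images via their singular-value spectra; 1.0 means
    identical spectra, smaller values mean greater mismatch."""
    a, b = sv_signature(img_a, k), sv_signature(img_b, k)
    return 1.0 / (1.0 + np.linalg.norm(a - b))

rng = np.random.default_rng(1)
face = rng.random((32, 32))
same = similarity(face, 2.0 * face)                  # same face, brighter copy
diff = similarity(face, rng.random((32, 32)))        # a different "face"
```

Because svd(2A) yields exactly twice the singular values of A, the normalised spectrum of the brightened copy matches the original.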
Findings
It is found that almost all the existing methods in the literature or technologies in the market require that a normalization in scale be done before any identification processing. However, it is also found that normalization in scale not only adds additional processing complexity, but also may reduce the identification accuracy. In addition, it is difficult to anticipate an “optimal” scale in advance. The developed SII complements the existing methods in all these aspects.
Research limitations/implications
The only limitation, shared by many other biometric identification methods, is that each object (e.g. each individual in human face identification) must have a sufficient number of training samples collected before the method works well.
Practical implications
SII is particularly suitable in law enforcement and/or intelligence applications in which it is difficult or impossible to collect data in a standard, “clean” environment.
Originality/value
The SII method is new; the paper should interest researchers and engineers in this area, as well as companies developing biometrics-based identification technologies and government agencies.
Abstract
Purpose
The purpose of this research is to analyse the display of digital images found on clothing and textile collection websites.
Design/methodology/approach
Features noted included where on the website the images were found, such as in a display or as part of a database. Display features are documented, including enlargement abilities, the view of the artefact, the use of dress forms and mannequins, and the context in which the artefact was pictured. The text that describes the artefact is also documented. The instrument was a content analysis of clothing and textile collection websites. Data were collected in 2006 from 57 clothing and textile collection websites.
Findings
All 57 costume and textile museums had images of collection artefacts online, with the majority sharing a featured artefact. Almost half of the websites used images in databases and displays. Enlargement abilities were not common; most of the visuals showed the front view of the artefact. Enlargements were more common in displays. Mannequins and dress forms were used infrequently. Detailed text to explain the artefacts was available in the databases.
Research limitations/implications
The research was limited to observing 57 websites.
Originality/value
Common features used by costume and textile museums when displaying collection pieces online were identified. Suggestions as to what content to include in a website for clothing and textile collections are discussed in light of the data collected.
Abstract
As well as making products that come in little yellow boxes, Eastman Kodak is also a software developer. One of its software products is the Eastman Exchange, a database that lets film and television professionals remotely access the images and information they need to plan shoots. The Eastman Exchange presently consists of location images and data provided by US and international film commissions, but will soon be expanded to include images and information about props, costumes, talent, audio recordings, photo stills and video clips. The database engine and interface have been designed to be highly adaptable: they can be customised to support any application involving the cataloguing of images for remote access and display.
Sona Karentz Andrews and John Grozik
Abstract
The interactive cartography videodisc includes a double‐sided videodisc with one side containing full map images of over 600 maps and in excess of 30,000 tiles and close‐up pieces of these maps. The database contains a wide assortment of map information and will be approximately eight megabytes in size. Not only does the database provide information about each map but a user can search fields in the database to isolate those maps that fit the intended criteria. The interface (approximately two megabytes) provides the link between the images and the data. It is designed to allow a high level of interaction and access through display environments. A printed directory is also being prepared with three indexes organized alphabetically by map title, subject, and region.
Chanattra Ammatmanee and Lu Gan
Abstract
Purpose
Because of the fast-growing digital image collections on online platforms and the transfer learning ability of deep learning technology, image classification could be improved and implemented for the hostel domain, which has complex clusters of image contents. This paper aims to test the potential of 11 pretrained convolutional neural networks (CNNs) with transfer learning for hostel image classification on the first hostel image database, to advance knowledge and fill an academic gap, as well as to suggest an alternative solution for optimal image classification with less labour cost and fewer human errors to those who manage hostel image collections.
Design/methodology/approach
The hostel image database is first created, with data pre-processing steps of data selection and data augmentation. A systematic and comprehensive investigation is then divided into seven experiments to test 11 pretrained CNNs, to which transfer learning was applied and whose parameters were fine-tuned to match the newly created hostel image dataset. All experiments were conducted in the Google Colaboratory environment using PyTorch.
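The transfer-learning pattern this abstract applies with PyTorch and pretrained CNNs can be sketched in plain numpy: a frozen "backbone" produces fixed features, and only a lightweight classification head is trained on the new dataset. Everything below (the random-projection backbone, the toy three-class data) is an illustrative stand-in, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, well-separated "image" data: three classes with distinct means.
means = 3.0 * rng.standard_normal((3, 32))
y = rng.integers(0, 3, size=90)
X = means[y] + 0.1 * rng.standard_normal((90, 32))

def pretrained_features(images):
    """Stand-in for a frozen pretrained backbone: a fixed random
    projection followed by ReLU mimics frozen convolutional layers."""
    W = np.random.default_rng(42).standard_normal((images.shape[1], 16))
    return np.maximum(images @ W, 0.0)

def train_head(feats, labels, classes=3, lr=0.1, epochs=300):
    """Fine-tune only the classification head (softmax regression),
    leaving the backbone untouched -- the essence of transfer learning."""
    W = np.zeros((feats.shape[1], classes))
    onehot = np.eye(classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(labels)
    return W

F = pretrained_features(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)     # standardise features
W = train_head(F, y)
train_acc = ((F @ W).argmax(axis=1) == y).mean()
```

In the paper's setting the backbone would be a pretrained DenseNet and the head a new final layer sized to the seven hostel classes.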
Findings
A hostel image database of 7,350 images is created and labelled into seven classes. The experimental results highlight that DenseNet 121 and DenseNet 201 have the greatest potential for hostel image classification, as they outperform the other CNNs in terms of accuracy and training time.
Originality/value
The fact that no existing academic work is dedicated to testing pretrained CNNs with transfer learning for hostel image classification, and that no hostel image-only database previously existed, makes this paper a novel contribution.
Pei-Jarn Chen, Chia-Hong Yeng, Ma-Mi Lu and Sheng-Hsien Chen
Abstract
Purpose
The purpose of this paper is to establish an automated microscopic imaging database system using a set of Radio Frequency Identification (RFID) management functions to provide secure storage for histopathology images.
Design/methodology/approach
The automated microscopy imaging system is composed mainly of four parts: first, a tissue biopsy image acquisition system; second, an image processing system; third, an RFID system; and fourth, an SQL database system. The system has two modes of operation for storing and managing histopathology images. In the first, the histopathology slide undergoes fluorescence staining before images are acquired directly from an external CCD camera connected to the system. In the second, histopathological slides that have undergone fluorescence staining are imaged on another microscopic imaging system, and the contents are extracted into a digitised image archive and imported into the system. The system not only acquires images but also performs functions such as displacement correction, image superimposition and calculation of the total number of fluorescence points. The histopathology image files produced by both methods are tagged using an RFID string index to establish and manage the database system.
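The RFID-indexed storage step described above can be sketched with Python's built-in sqlite3. The SHA-256 digest below stands in for the paper's eigenvalue code, and the table and column names are illustrative only.

```python
import hashlib
import sqlite3

# Sketch of an RFID-indexed image archive with tamper detection.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE slides (rfid_tag TEXT PRIMARY KEY, image BLOB, digest TEXT)")

def store_slide(tag, image_bytes):
    """File the image under its RFID tag, recording an integrity digest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    db.execute("INSERT INTO slides VALUES (?, ?, ?)", (tag, image_bytes, digest))

def verify_slide(tag):
    """Detect tampering by re-deriving the digest and comparing it with
    the stored one, analogous to the paper's eigenvalue comparison."""
    image, digest = db.execute(
        "SELECT image, digest FROM slides WHERE rfid_tag = ?", (tag,)).fetchone()
    return hashlib.sha256(image).hexdigest() == digest

store_slide("RFID-0001", b"\x00\x01fluorescence-image-bytes")
ok = verify_slide("RFID-0001")                        # untouched record
db.execute("UPDATE slides SET image = ? WHERE rfid_tag = ?",
           (b"tampered", "RFID-0001"))
tampered_ok = verify_slide("RFID-0001")               # modified record
```

Any modification of the stored image changes its digest, so the comparison exposes the tampering.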
Findings
The results demonstrated that impurities were effectively eliminated from the red fluorescence staining after binarisation processing. However, the blue ones remained; to solve this problem, an adjustable threshold allows users to select an appropriate value. Adding an eigenvalue code to the RFID string provides a better encryption mechanism for the patient files, and any attempt to tamper with a file can easily be detected by comparing the eigenvalues.
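The adjustable-threshold binarisation and fluorescence-point counting described in these findings can be illustrated as follows; the threshold value and toy image are invented for the example, and the flood fill is a generic connected-component count, not the paper's implementation.

```python
import numpy as np

def count_fluorescence_points(channel, threshold):
    """Binarise one colour channel at a user-adjustable threshold, then
    count 4-connected bright regions (fluorescence points) by flood fill."""
    binary = channel >= threshold
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                count += 1                            # new fluorescence point
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < h and 0 <= b < w and binary[a, b] and not seen[a, b]:
                        seen[a, b] = True
                        stack.extend([(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)])
    return count

img = np.zeros((10, 10))
img[1:3, 1:3] = 200      # one bright fluorescence point (4 connected pixels)
img[7, 7] = 180          # a second, single-pixel point
img[5, 5] = 40           # dim impurity, removed by the threshold
n = count_fluorescence_points(img, threshold=100)
```

Raising or lowering `threshold` is exactly the user adjustment the findings describe for separating true fluorescence points from residual impurities.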
Originality/value
This paper proposes a novel method to implement a more comprehensive, safe, fast and automated management system for histopathological images using RFID management and image processing techniques. Additional security is provided by including eigenvalues as an encryption mechanism in the RFID tag string.