Chunlei Li, Chaodie Liu, Zhoufeng Liu, Ruimin Yang and Yun Huang
Abstract
Purpose
The purpose of this paper is to focus on the design of automated fabric defect detection based on cascaded low-rank decomposition and to maintain high quality control in textile manufacturing.
Design/methodology/approach
This paper proposes a fabric defect detection algorithm based on cascaded low-rank decomposition. First, the constructed Gabor feature matrix is decomposed into a low-rank matrix and a sparse matrix using a low-rank decomposition technique, and the sparse matrix serves as a prior matrix in which higher values indicate a higher probability of abnormality. Second, a second low-rank decomposition is performed on the constructed texton feature matrix under the guidance of this prior matrix. Finally, an improved adaptive threshold segmentation algorithm segments the saliency map generated from the final sparse matrix to locate the defect regions.
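The low-rank plus sparse split used in each stage can be sketched with a generic principal component pursuit solver. This is a minimal ADMM sketch of standard RPCA, not the authors' exact cascaded algorithm; the parameter defaults follow common conventions and the feature matrix here is illustrative.

```python
import numpy as np

def shrink(X, tau):
    # Elementwise soft-thresholding (proximal operator of the l1 norm)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def svd_shrink(X, tau):
    # Singular-value thresholding (proximal operator of the nuclear norm)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, mu=None, n_iter=200):
    """Decompose D into a low-rank part L and a sparse part S via ADMM."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or m * n / (4.0 * np.abs(D).sum())
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)   # low-rank (defect-free) update
        S = shrink(D - L + Y / mu, lam / mu)       # sparse (defect) update
        Y = Y + mu * (D - L - S)                   # dual ascent on the constraint
    return L, S
```

In the paper's setting the sparse component of the first decomposition would then serve as the prior guiding the second one.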
Findings
The proposed method was evaluated on public fabric image databases. Compared with the ground truth, an average detection rate of 98.26% was obtained, which is superior to the state of the art.
Originality/value
The cascaded low-rank decomposition was first proposed and applied to fabric defect detection here. The quantitative results show the effectiveness of the detection method. Hence, the proposed method can be used for accurate defect detection in an automated analysis system.
Qizi Huangpeng, Wenwei Huang, Hanyi Shi and Jun Fan
Abstract
Purpose
Vehicle counting can be used to evaluate traffic conditions and facilitate traffic control, an important task in intelligent transportation systems. The paper aims to propose a vehicle-counting method based on the analysis of surveillance videos.
Design/methodology/approach
The paper proposes a novel two-step method using low-rank representation (LRR) detection and locality-constrained linear coding (LLC) classification to count the number of vehicles in traffic video sequences automatically. The method consists of an offline training step that learns an LLC-based classifier from extracted features for vehicle and pedestrian classification, followed by an online counting algorithm that counts the vehicles detected in the image sequence.
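The LLC coding step can be sketched as follows. This is a minimal sketch assuming the standard approximated-LLC formulation (solve a small locality-constrained least-squares problem over the k nearest codewords); the codebook, `k` and `beta` are illustrative, not taken from the paper.

```python
import numpy as np

def llc_code(x, B, k=5, beta=1e-4):
    """Approximated locality-constrained linear coding.
    x: (d,) descriptor; B: (K, d) codebook. Returns a length-K sparse code."""
    K = B.shape[0]
    d2 = ((B - x) ** 2).sum(axis=1)        # squared distances to all codewords
    idx = np.argsort(d2)[:k]               # locality: keep the k nearest
    z = B[idx] - x                         # shift local codewords to the origin
    C = z @ z.T                            # local covariance
    C += beta * np.trace(C) * np.eye(k)    # regularise for numerical stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                           # enforce the sum-to-one constraint
    code = np.zeros(K)
    code[idx] = w
    return code
```

The resulting codes are sparse and local, which is what makes the downstream linear classifier cheap enough for online counting.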
Findings
The proposed method supports both per-frame estimation (counting the vehicles visible in each frame) and estimation of the total number of vehicles that appear in the scene. The paper compares the proposed method with similar methods on three public data sets. The experimental results show that the proposed method is competitive and effective in terms of computational speed and evaluation accuracy.
Research limitations/implications
The proposed method does not model illumination changes. Hence, the results might be unsatisfactory under low-lighting conditions. Therefore, researchers are encouraged to add a term that controls for illumination changes in the energy function of vehicle detection in future work.
Originality/value
The paper bridges the gap between LRR detection and vehicle counting by taking advantage of existing LLC classification algorithm to distinguish different moving objects.
Gui Yuan, Shali Huang, Jing Fu and Xinwei Jiang
Abstract
Purpose
This study aims to assess the default risk of borrowers in peer-to-peer (P2P) online lending platforms. The authors propose a novel default risk classification model based on data cleaning and feature extraction, which increases risk assessment accuracy.
Design/methodology/approach
The authors use borrower data from the Lending Club and propose the risk assessment model based on low-rank representation (LRR) and discriminant analysis. Firstly, the authors use three LRR models to clean the high-dimensional borrower data by removing outliers and noise, and then the authors adopt a discriminant analysis algorithm to reduce the dimension of the cleaned data. In the dimension-reduced feature space, machine learning classifiers including the k-nearest neighbour, support vector machine and artificial neural network are used to assess and classify default risks.
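The discriminant-analysis and classification stages can be sketched with a two-class Fisher discriminant followed by a nearest-class-mean rule. This is a numpy-only sketch: the LRR cleaning stage and the real Lending Club features are abstracted into synthetic two-class data, and the classifier is a stand-in for the k-NN/SVM/ANN models in the paper.

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher discriminant direction w = Sw^-1 (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

def classify_1d(z, z_train, y_train):
    """Nearest class mean in the 1-D discriminant space."""
    m0 = z_train[y_train == 0].mean()
    m1 = z_train[y_train == 1].mean()
    return (np.abs(z - m1) < np.abs(z - m0)).astype(int)

# Synthetic stand-in for cleaned borrower features (0 = repay, 1 = default)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 20)),
               rng.normal(0.8, 1.0, (200, 20))])
y = np.array([0] * 200 + [1] * 200)

w = fisher_direction(X, y)          # dimension reduction to one discriminant axis
z = X @ w
acc = (classify_1d(z, z, y) == y).mean()
```

The point of the projection is that the classifiers then operate in a low-dimensional, discriminative space rather than on the raw high-dimensional borrower data.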
Findings
The results reveal significant noise and redundancy in the borrower data. LRR models can effectively clean such data, particularly the two LRR models with local manifold regularisation. In addition, the supervised discriminant analysis model, termed the local Fisher discriminant analysis model, can extract low-dimensional and discriminative features, which further increases the accuracy of the final risk assessment models.
Originality/value
The originality of this study is that it proposes a novel default risk assessment model, based on data cleaning and feature extraction, for P2P online lending platforms. The proposed approach is innovative and efficient in the P2P online lending field.
Wendong Zheng, Huaping Liu, Bowen Wang and Fuchun Sun
Abstract
Purpose
For robots to more actively interact with the surrounding environment in object manipulation tasks or walking, they must understand the physical attributes of objects and surface materials they encounter. Dynamic tactile sensing can effectively capture rich information about material properties. Hence, methods that convey and interpret this tactile information to the user can improve the quality of human–machine interaction. This paper aims to propose a visual-tactile cross-modal retrieval framework to convey tactile information of surface material for perceptual estimation.
Design/methodology/approach
The tactile information of a new, unknown surface material can be used to retrieve a perceptually similar surface from an available set of visual surface samples by associating tactile information with visual information of material surfaces. For the proposed framework, the authors propose an online low-rank similarity learning method, which can effectively and efficiently capture the cross-modal relative similarity between the visual and tactile modalities.
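The low-rank similarity learning can be sketched as stochastic gradient descent on a factored bilinear score s(v, t) = vᵀ(ABᵀ)t with a triplet hinge loss, where the factorisation of the similarity matrix enforces the low rank. This is an OASIS-style sketch under assumed factors A and B, not the authors' exact online updates; dimensions, rank and learning rate are illustrative.

```python
import numpy as np

def train_bilinear_similarity(triplets, dv, dt, rank=4, lr=0.05, epochs=20):
    """Learn a low-rank bilinear similarity s(v, t) = v @ A @ B.T @ t from
    triplets (v, t_pos, t_neg) by SGD on a hinge loss over the factors."""
    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.1, size=(dv, rank))
    B = rng.normal(scale=0.1, size=(dt, rank))
    for _ in range(epochs):
        for v, tp, tn in triplets:
            d = tp - tn
            if 1.0 - v @ A @ B.T @ d > 0:      # margin violated
                A += lr * np.outer(v, B.T @ d)  # push s(v, tp) above s(v, tn)
                B += lr * np.outer(d, A.T @ v)
    return A, B
```

Because only the rank-r factors are stored and updated, each online step is cheap compared with learning a full dv × dt similarity matrix.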
Findings
Experimental results on the Technischen Universität München Haptic Texture Database demonstrate the effectiveness of the proposed framework and method.
Originality/value
This paper provides a visual-tactile cross-modal perception method for recognizing material surfaces. With this method, a robot can convey and interpret information about surface material properties to the user, further improving the quality of robot interaction.
Junjie Cao, Nannan Wang, Jie Zhang, Zhijie Wen, Bo Li and Xiuping Liu
Abstract
Purpose
The purpose of this paper is to present a novel method for fabric defect detection.
Design/methodology/approach
The method is based on joint low-rank and sparse matrix recovery: since patterned fabric is manufactured according to a set of predefined symmetry rules, a fabric image can be seen as the superposition of sparse defective regions and low-rank defect-free regions. A robust principal component analysis (RPCA) model with a noise term is designed to handle fabric images with diverse patterns robustly. The authors also estimate a defect prior and use it to guide the matrix recovery process for accurate extraction of various fabric defects.
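The role of the defect prior in guiding recovery can be sketched as entrywise weighted soft-thresholding: a higher prior lowers the sparsity penalty at that pixel, so likely-defective pixels survive into the sparse (defect) component. This is an illustrative mechanism consistent with prior-guided RPCA in general, not the paper's exact model; `lam` and `eps` are assumed parameters.

```python
import numpy as np

def weighted_shrink(X, lam, prior, eps=0.1):
    """Soft-thresholding with an entrywise threshold lam / (prior + eps).
    prior in [0, 1]: high-prior entries are shrunk less and so are more
    likely to be retained in the sparse defect map."""
    tau = lam / (prior + eps)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)
```

Inside an RPCA iteration this operator would simply replace the uniform shrinkage of the sparse-term update.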
Findings
Experiments on plain and twill, dot-, box- and star-patterned fabric images with various defects demonstrate that the method is more efficient and robust than previous methods.
Originality/value
The authors present an RPCA-based model for fabric defect detection and show how to incorporate a defect prior to improve the detection results. They also show that more robust detection and lower running time can be obtained by introducing a noise term into the model.
Christine M. Shea, Mary Fran T. Malone, Justin R. Young and Karen J. Graham
Abstract
Purpose
The purpose of this paper is to describe the development, implementation and impact evaluation of an interactive theater-based workshop by the ADVANCE program at the University of New Hampshire (UNH). The workshop is part of a larger institutional transformation program funded by the National Science Foundation.
Design/methodology/approach
This institutional transformation program relied upon a systems approach to diagnose potential causes for the underrepresentation of women faculty in certain disciplines. This revealed that increasing awareness of, and reducing, implicit gender bias among members of faculty search committees could, in time, contribute to increasing the representation of women faculty at UNH. A committee charged with developing a faculty workshop to achieve this change identified interactive theater as an effective faculty training approach. The committee oversaw the development of customized scripts, and the hiring of professional actors and a facilitator to implement the workshop.
Findings
The workshop’s effectiveness in fulfilling its goals was assessed using faculty hiring and composition data, program evaluations, participant interviews and questions in an annual faculty climate survey. Findings indicate that the representation of women faculty increased significantly at UNH since the implementation of the interactive theater workshop. Analysis of the multiple sources of data provides corroborating evidence that a significant portion of the increase is directly attributable to the workshop.
Originality/value
This paper demonstrates the effectiveness of interactive theater-based workshops in an academic environment and of the systems approach in diagnosing and solving organizational problems.
Shahidha Banu S. and Maheswari N.
Abstract
Purpose
Background modelling plays an imperative role in moving object detection, as the basis of foreground extraction during video analysis and surveillance in many real-time applications. It is usually done by background subtraction, which rests on a mathematical model with a fixed static background: the background image is fixed, with the foreground object moving over it. This image is taken as the background model and compared against every new frame of the input video sequence. In this paper, the authors present a renewed background modelling method for foreground segmentation. The principal objective of the work is to perform foreground object detection only in a premeditated region of interest (ROI), which is calculated using the proposed reducing and raising by half (RRH) algorithm. In this algorithm, the coordinates of a circle with the frame width as its diameter are traversed to find pixel differences; a change in pixel intensity is taken to indicate the foreground object, whose position is determined from the pixel location. Most techniques update the pixels of the complete frame, which may increase the false rate. The proposed system addresses this flaw by restricting processing to the ROI (the only region where background subtraction is performed), thereby categorising pixels correctly as foreground and extracting the precise foreground object. The experimental results and evaluation parameters of the proposed approach were compared against the most recent background subtraction approaches. Moreover, the efficiency of the method is analysed in different situations to show that it is suitable for real-time videos as well as videos from the 2014 change detection challenge data set.
Design/methodology/approach
In this paper, the authors present a fresh background modelling method for foreground segmentation. The main objective of the work is to perform foreground object detection only in the premeditated ROI, which is calculated using the proposed RRH algorithm. Most techniques update the pixels of the complete frame, which may increase the false rate; a particularly challenging case is a slow-moving object that is absorbed into the background model too quickly for the foreground region to be detected. The proposed system addresses this flaw by restricting processing to the ROI (the only region where background subtraction is performed), thereby categorising pixels correctly as foreground and extracting the precise foreground object.
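The ROI-restricted background subtraction can be sketched as follows. The circle placement (centred in the frame) and the difference threshold are assumptions for illustration; the abstract does not fully specify the RRH traversal, so this shows only the general mechanism of limiting subtraction to a circular ROI.

```python
import numpy as np

def circular_roi_mask(h, w):
    """Binary mask of a circle whose diameter equals the frame width,
    centred in the frame (placement assumed for this sketch)."""
    yy, xx = np.mgrid[:h, :w]
    cy, cx, r = h / 2.0, w / 2.0, w / 2.0
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

def foreground_in_roi(frame, background, thresh=25):
    """Background subtraction restricted to the circular ROI: pixel
    differences outside the ROI are ignored entirely."""
    mask = circular_roi_mask(*frame.shape)
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    return diff & mask
```

Restricting the update and comparison to the ROI is what reduces the false detections that full-frame updating produces.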
Originality/value
The algorithm used in the work was proposed by the authors and is used for the experimental evaluations.
Omid Rafieian and Hema Yoganarasimhan
Abstract
This chapter reviews the recent developments at the intersection of personalization and AI in marketing and related fields. We provide a formal definition of personalized policy and review the methodological approaches available for personalization. We discuss scalability, generalizability, and counterfactual validity issues and briefly touch upon advanced methods for online/interactive/dynamic settings. We then summarize the three evaluation approaches for static policies – the Direct method, the Inverse Propensity Score (IPS) estimator, and the Doubly Robust (DR) method. Next, we present a summary of the evaluation approaches for special cases such as continuous actions and dynamic settings. We then summarize the findings on the returns to personalization across various domains, including content recommendation, advertising, and promotions. Next, we discuss the work on the intersection between personalization and welfare. We focus on four of these welfare notions that have been studied in the literature: (1) search costs, (2) privacy, (3) fairness, and (4) polarization. We conclude with a discussion of the remaining challenges and some directions for future research.
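The IPS and DR estimators summarized above can be sketched for logged bandit data. These are the generic off-policy evaluation formulas; the variable names and the logged data in the example are illustrative.

```python
import numpy as np

def ips_value(rewards, propensities, policy_probs):
    """Inverse Propensity Score estimate of the target policy's value:
    the mean of r * pi(a|x) / mu(a|x) over the logged samples."""
    return np.mean(rewards * policy_probs / propensities)

def dr_value(rewards, propensities, policy_probs, q_logged, q_policy):
    """Doubly Robust estimate: the direct-method prediction under the
    target policy plus an importance-weighted correction on the residual
    of the outcome model. Unbiased if either the propensities or the
    outcome model is correct."""
    correction = (rewards - q_logged) * policy_probs / propensities
    return np.mean(q_policy + correction)
```

When the outcome model is accurate, the DR correction term has small variance, which is why DR typically dominates plain IPS in practice.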
Nageswara Rao Eluri, Gangadhara Rao Kancharla, Suresh Dara and Venkatesulu Dondeti
Abstract
Purpose
Gene selection is considered a fundamental process in the bioinformatics field. Existing methodologies for cancer classification are mostly clinically based, and their diagnostic capability is limited. Nowadays, significant problems in cancer diagnosis are solved using gene expression data, and researchers have introduced many approaches to diagnose cancer appropriately and effectively. This paper aims to develop cancer data classification using gene expression data.
Design/methodology/approach
The proposed classification model involves three main phases: "(1) Feature extraction, (2) Optimal feature selection and (3) Classification". Initially, five benchmark gene expression datasets are collected, and features are extracted from them. To diminish the length of the feature vectors, optimal feature selection is performed using a new meta-heuristic algorithm termed the quantum-inspired immune clone optimization algorithm (QICO). Once the relevant features are selected, classification is performed by a deep learning model, a recurrent neural network (RNN). The experimental analysis reveals that the proposed QICO-based feature selection model outperforms the other heuristic-based feature selection methods, and the optimized RNN outperforms the other machine learning methods.
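The optimal feature selection stage can be sketched as a search over binary feature masks scored by a fitness function. Here a random-search baseline and a simple nearest-centroid fitness stand in for the paper's method: QICO explores this same mask space with quantum-inspired clone and mutation operators and scores masks via its RNN, so everything below is an illustrative simplification.

```python
import numpy as np

def fitness(mask, X, y):
    """Fitness of a binary feature mask: resubstitution accuracy of a
    nearest-centroid classifier on the selected features (a cheap stand-in
    for the paper's RNN-based evaluation)."""
    Xs = X[:, mask.astype(bool)]
    if Xs.shape[1] == 0:
        return 0.0
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y.astype(bool)).mean()

def random_search_select(X, y, n_iter=200, seed=0):
    """Random search over masks; a meta-heuristic such as QICO explores the
    same search space, just with smarter update operators."""
    rng = np.random.default_rng(seed)
    best_mask, best_fit = None, -1.0
    for _ in range(n_iter):
        mask = rng.integers(0, 2, X.shape[1])
        f = fitness(mask, X, y)
        if f > best_fit:
            best_mask, best_fit = mask, f
    return best_mask, best_fit
```

Any mask-based meta-heuristic plugs into this skeleton by replacing the random proposal with its own population update.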
Findings
The proposed QICO-RNN achieves the best outcomes at every learning percentage. At a learning percentage of 85, the accuracy of the proposed QICO-RNN was 3.2% better than RNN, 4.3% better than RF, 3.8% better than NB and 2.1% better than KNN for Dataset 1. For Dataset 2, at a learning percentage of 35, the accuracy of the proposed QICO-RNN was 13.3% better than RNN, 8.9% better than RF and 14.8% better than NB and KNN. Hence, the developed QICO algorithm performs well in accurately classifying cancer data using gene expression data.
Originality/value
This paper introduces a new optimal feature selection model using QICO, combined with an RNN, for effective classification of cancer data using gene expression data; to the authors' knowledge, it is the first work to do so.