Search results
1 – 10 of over 12,000
Junying Chen, Zhanshe Guo, Fuqiang Zhou, Jiangwen Wan and Donghao Wang
Abstract
Purpose
Because of the limited energy of wireless sensor networks (WSNs), energy-efficient data-gathering algorithms are required. This paper proposes a compressive data-gathering algorithm based on double sparse structure dictionary learning (DSSDL). The purpose of this paper is to reduce the energy consumption of WSNs.
Design/methodology/approach
Historical data are used to construct a sparse representation basis. In the dictionary-learning stage, the sparse representation matrix is decomposed into the product of two sparse matrices. Then, in the dictionary-update stage, the sparse representation matrix is orthogonalized and normalized. The resulting double sparse structure dictionary is applied to compressive data gathering in WSNs.
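The orthogonalize-and-normalize update described above can be sketched with a QR factorization, a standard way to obtain orthonormal columns. The abstract does not specify the paper's exact procedure, so the use of QR and the 8x8 matrix size below are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of the dictionary-update step: the sparse representation
# matrix is orthogonalized and normalized. QR factorization stands in for
# the paper's unspecified procedure; the 8x8 size is arbitrary.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))   # stand-in sparse representation matrix
Q, _ = np.linalg.qr(A)            # columns of Q are orthonormal
gram = Q.T @ Q                    # close to the identity matrix
```

After this step, each column has unit norm and the columns are mutually orthogonal, which is one conventional reading of "orthogonalized and unitized".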
Findings
The dictionary obtained by the proposed algorithm has better sparse representation ability. The experimental results show that the sparse representation error can be reduced by at least 3.6% compared with other dictionaries. In addition, the better sparse representation ability lets the WSNs achieve fewer measurements under the same data-gathering accuracy, which means more energy saving. According to the simulation results, the proposed algorithm can reduce energy consumption by at least 2.7% compared with other compressive data-gathering methods under the same data-gathering accuracy.
Originality/value
In this paper, the double sparse structure dictionary is introduced into the compressive data-gathering algorithm in WSNs. The experimental results indicate that the proposed algorithm has good performance on energy consumption and sparse representation.
Jian Zhou and Jianli Liu
Abstract
Purpose
Visual quality control of raw textile fabrics is a vital process in weaving factories to ensure that their exterior quality (visual defects or imperfections) satisfies customer requirements. Commonly, this critical process is conducted manually by human inspectors, who can hardly provide fast and reliable inspection results due to fatigue and subjective errors. To meet modern production needs, there is strong demand for an automated defect inspection system that replaces human eyes with computer vision.
Design/methodology/approach
As a structural texture, a fabric texture can be effectively represented by a linear combination of basic elements (a dictionary). To create a robust representation of a fabric texture in an unsupervised manner, a smooth constraint is imposed on the dictionary learning model. Such a representation is robust to defects when it is used to recover a defective image. Thus an abnormal map (the likelihood of defective regions) can be computed by measuring the similarity between the recovered version and the original image. Finally, a total variation (TV) based model is built to segment defects on the abnormal map.
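The recover-and-compare idea above can be sketched in a few lines: a defect-free patch is well reconstructed by the learned dictionary, so its reconstruction residual (the basis of the abnormal map) is small. The dictionary `D`, patch size, and least-squares coding below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

# Illustrative sketch, not the authors' exact model: score how "abnormal"
# a patch is by the residual of its reconstruction from a dictionary D.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))     # 64-dim patches, 16 learned atoms
patch = D @ rng.standard_normal(16)   # defect-free patch lies in span(D)
coef, *_ = np.linalg.lstsq(D, patch, rcond=None)
residual = patch - D @ coef           # near zero for defect-free texture
abnormal_score = float(np.linalg.norm(residual))
```

A defective patch would leave a large residual, producing a high value in the abnormal map at that location.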
Findings
Different from the traditional dictionary learning method, a smooth constraint is introduced in dictionary learning that is not only able to create a robust representation of fabric textures but also avoids the selection of dictionary size. In addition, a TV based model is designed according to the defects' characteristics. The experimental results demonstrate that (1) the dictionary with the smooth constraint can generate a more robust representation of fabric textures compared to a traditional dictionary; and (2) the TV based model can achieve a robust and good segmentation result.
Originality/value
The major original contributions of the proposed method are: (1) the dictionary size can be set as a constant instead of being selected empirically; and (2) a total variation based model is built that can enhance less salient defects, improving segmentation performance significantly.
Bo Li, Jian ming Wang, Qi Wang, Xiu yan Li and Xiaojie Duan
Abstract
Purpose
The purpose of this paper is to explore gas/liquid two-phase flow, which exists widely in industrial fields, especially chemical engineering. Electrical resistance tomography (ERT) is considered to be one of the most promising techniques for monitoring the transient flow process because of its advantages, such as fast response speed and cross-sectional imaging. However, maintaining high spatial resolution together with low cost is still challenging for two-phase flow imaging because of the ill-conditioning of the ERT inverse problem.
Design/methodology/approach
In this paper, a sparse reconstruction (SR) method based on a learned dictionary is proposed for ERT, to accurately monitor the transient flow process of gas/liquid two-phase flow in a pipeline. The high-level representation of the conductivity distributions for typical flow regimes can be extracted by a denoising deep extreme learning machine (DDELM) model, which is used as prior information for dictionary learning.
Findings
The results from simulation and dynamic experiments indicate that the proposed algorithm efficiently improves the quality of the reconstructed images compared to typical algorithms such as Landweber and SR-discrete Fourier transform/discrete cosine transform. Furthermore, SR-DDELM has also been used to estimate important parameters of the chemical process, a case in point being the volume flow rate. Therefore, SR-DDELM is considered an ideal candidate for online monitoring of gas/liquid two-phase flow.
Originality/value
This paper presents a novel approach to effectively monitor gas/liquid two-phase flow in pipelines. One deep learning model and one adaptive dictionary are each trained on the same prior conductivity distributions. The model is used to extract the high-level representation, and the dictionary is used to represent the features of the flow process. SR and extraction of the high-level representation are performed iteratively. The new method can markedly improve monitoring accuracy and save calculation time.
Ushapreethi P and Lakshmi Priya G G
Abstract
Purpose
To develop a successful human action recognition (HAR) system for unmanned environments.
Design/methodology/approach
This paper describes the key technology of an efficient HAR system. In this paper, the advancements for three key steps of the HAR system are presented to improve the accuracy of the existing HAR systems. The key steps are feature extraction, feature descriptor and action classification, which are implemented and analyzed. The usage of the implemented HAR system in the self-driving car is summarized. Finally, the results of the HAR system and other existing action recognition systems are compared.
Findings
This paper presents the proposed modifications and improvements to the HAR system, namely a skeleton-based spatiotemporal interest point (STIP) feature, an improved discriminative sparse descriptor for the identified feature, and linear action classification.
Research limitations/implications
The experiments are carried out on captured benchmark data sets and need to be analyzed in a real-time environment.
Practical implications
The middleware support between the proposed HAR system and the self-driving car system opens several further challenging research opportunities.
Social implications
The authors’ work provides a way to take a step forward in machine vision, especially in self-driving cars.
Originality/value
The method for extracting the new feature and constructing an improved discriminative sparse feature descriptor has been introduced.
Lei Zeng, Xiaofeng Li and Jin Xu
Abstract
Purpose
The purpose of this paper is to introduce an improved method for joint training of low‐ and high‐resolution dictionaries for single image super resolution. The proposed method is then evaluated with simulations.
Design/methodology/approach
Sparse representations of low‐resolution image patches are used to reconstruct the high‐resolution image patches with the high‐resolution dictionary. The scheme weights the dictionaries in the high‐ and low‐resolution spaces with different factors during training; better reconstructed images can reasonably be achieved by placing more emphasis on the high‐resolution space.
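The weighting idea above can be sketched as stacking the high- and low-resolution patch vectors with separate weight factors before joint dictionary training. The weights `w_h`, `w_l` and the patch sizes are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Minimal sketch: weight the high- and low-resolution training matrices
# differently, then stack them for joint (e.g. K-SVD) dictionary training.
rng = np.random.default_rng(3)
X_high = rng.standard_normal((81, 100))   # 9x9 high-res patches, 100 samples
X_low = rng.standard_normal((36, 100))    # 6x6 low-res patches, same samples
w_h, w_l = 0.8, 0.2                       # heavier emphasis on high-res space
X_joint = np.vstack([w_h * X_high, w_l * X_low])   # input to joint training
```

A larger `w_h` biases the learned joint dictionary toward fitting the high-resolution space, which is the emphasis the abstract describes.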
Findings
An improved joint training algorithm based on K‐SVD is developed with flexible weight factors on the dictionaries in the high‐ and low‐resolution spaces. The experimental results show that the proposed scheme outperforms classic bicubic interpolation and the neighbor‐embedding learning‐based method.
Originality/value
By using flexible weight factors in joint training of the dictionaries for super resolution, better reconstruction results can be achieved.
Mohamad Javad Baghiat Esfahani and Saeed Ketabi
Abstract
Purpose
This study attempts to evaluate the effect of the corpus-based inductive teaching approach with multiple academic corpora (PICA, CAEC and Oxford Corpus of Academic English) and the conventional deductive teaching approach (i.e., multiple-choice items, gap-filling, matching and underlining) on the learning of academic collocations by Iranian advanced EFL learners (students learning English as a foreign language).
Design/methodology/approach
This is a quasi-experimental, quantitative and qualitative study.
Findings
The results showed that the experimental group significantly outperformed the control group. The experimental group also shared their perceptions of the advantages and disadvantages of the corpus-assisted language teaching approach.
Originality/value
Despite growing progress in language pedagogy, methodologies and language curriculum design, there are still many teachers who experience poor performance in their students' vocabulary, whether in comprehension or production. In Iran, for example, even though mandatory English education begins at the age of 13, which is junior and senior high school, students still have serious problems in language production and comprehension when they reach university levels.
Chunlei Li, Ruimin Yang, Zhoufeng Liu, Guangshuai Gao and Qiuli Liu
Abstract
Purpose
Fabric defect detection plays an important role in textile quality control. The purpose of this paper is to propose a fabric defect detection algorithm using learned dictionary-based visual saliency.
Design/methodology/approach
First, the test fabric image is split into image blocks, and a dictionary of normal and defective samples is learned by selecting the image-block local binary pattern features with the highest or lowest similarity to the average feature vector; second, the first L largest correlation coefficients between each test image block and the dictionary are kept, and the other correlation coefficients are set to zero; third, the sum of the non-zero coefficients corresponding to defective samples is used to generate a saliency map; finally, an improved valley-emphasis method efficiently segments the defect region.
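The second step above can be sketched as keeping only the L largest-magnitude correlation coefficients and zeroing the rest. The coefficient vector and the value of L below are invented for illustration.

```python
import numpy as np

# Sketch of the top-L selection: retain the L largest correlation
# coefficients between a test block and the dictionary, zero the others.
rng = np.random.default_rng(4)
corr = rng.standard_normal(20)       # correlations with 20 dictionary atoms
L = 5
kept = np.zeros_like(corr)
top = np.argsort(np.abs(corr))[-L:]  # indices of the L largest magnitudes
kept[top] = corr[top]
```

The saliency value for a block would then be the sum of the retained coefficients that correspond to defective dictionary samples.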
Findings
Experimental results demonstrate that the saliency map generated by the proposed method can effectively highlight the defect region compared with the state-of-the-art, and the segmentation results can precisely localize the defect region.
Originality/value
In this paper, a novel fabric defect detection scheme is proposed via learned dictionary-based visual saliency.
Qi Wang, Pengcheng Zhang, Jianming Wang, Qingliang Chen, Zhijie Lian, Xiuyan Li, Yukuan Sun, Xiaojie Duan, Ziqiang Cui, Benyuan Sun and Huaxiang Wang
Abstract
Purpose
Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction for EIT is a nonlinear problem. A generalized inverse operator is usually ill-posed and ill-conditioned. Therefore, the solutions for EIT are not unique and highly sensitive to the measurement noise.
Design/methodology/approach
This paper develops a novel image reconstruction algorithm for EIT based on patch-based sparse representation. The sparsifying dictionary optimization and the image reconstruction are performed alternately. Two kinds of patch-based sparsity, namely square-patch sparsity and column-patch sparsity, are discussed and compared with global sparsity.
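The square-patch idea can be sketched as cutting the conductivity image into non-overlapping square blocks and vectorizing each one, so that sparsity is enforced patch by patch rather than globally. The 8x8 image and 4x4 patch size below are made up, not from the paper.

```python
import numpy as np

# Illustrative square-patch extraction for patch-based sparsity.
img = np.arange(64.0).reshape(8, 8)   # stand-in conductivity image
p = 4                                 # patch side length
patches = [img[i:i + p, j:j + p].ravel()
           for i in range(0, 8, p) for j in range(0, 8, p)]
P = np.stack(patches, axis=1)         # one 16-dim column per patch
```

Each column of `P` would then be sparsely coded against the learned dictionary during the alternating reconstruction.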
Findings
Both simulation and experimental results indicate that the patch-based sparsity method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages.
Originality/value
EIT image is reconstructed based on patch-based sparse representation. Square-patch sparsity and column-patch sparsity are proposed and compared. Sparse dictionary optimization and image reconstruction are performed alternately. The new method tolerates a relatively high level of noise in measured voltages.
Abstract
The author suggests we know a lot less about learning than we normally admit – in fact there is no generally accepted definition of what it is. He suggests that this is a big problem with designing so‐called e‐Learning, most of which is really just "e‐Teaching." There is a need, the author argues, for much more differentiation of learning, especially by type of material to be learned. Much of the available research on learning is not about how people learn, but about how they learn in groups – i.e. classes, or, as the author calls them, herds. "Herding" introduces all sorts of learning problems, and, to be successful, any e‐Learning must take one‐on‐one tutoring, which is two standard deviations more effective than classroom teaching, as its base.
A. Valli Bhasha and B.D. Venkatramana Reddy
Abstract
Purpose
The problems of super resolution are broadly discussed in diverse fields. Despite the progress of super resolution models for real-time images, operating on hyperspectral images still remains a challenging problem.
Design/methodology/approach
This paper aims to develop an enhanced image super-resolution model using “optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT), and Optimized Deep Convolutional Neural Network”. After converting the HR images into LR images, the NSSR images are generated by the optimized NSSR. Then the ADWT is used to generate the subbands of both the NSSR and HRSB images. The residual image with this information is obtained by the optimized Deep CNN. All the improvements to the algorithms are made by the Opposition-based Barnacles Mating Optimization (O-BMO), with the objective of attaining a multi-objective function concerning the “Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) index”. Extensive analysis on benchmark hyperspectral image datasets shows that the proposed model achieves superior performance over other typical existing super-resolution models.
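The objective above combines PSNR and SSIM; PSNR alone is shown here as a minimal, standard computation. The 8-bit peak value and the toy images are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Standard PSNR between a reference and a test image (8-bit peak assumed).
def psnr(ref, test, peak=255.0):
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100.0)
b = np.full((4, 4), 110.0)   # uniform error of 10 gray levels
value = psnr(a, b)           # 20*log10(255/10), about 28.13 dB
```

A higher PSNR means a smaller mean squared error against the reference, which is the sense in which the reported percentage improvements should be read.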
Findings
From the overall comparison of the proposed and conventional super resolution models, the PSNR of the improved O-BMO-(NSSR+DWT+CNN) was 38.8% better than bicubic, 11% better than NSSR, 16.7% better than DWT+CNN, 1.3% better than NSSR+DWT+CNN, and 0.5% better than NSSR+FF-SHO-(DWT+CNN). Hence, it is confirmed that the developed O-BMO-(NSSR+DWT+CNN) performs well in converting LR images to HR images.
Originality/value
This paper adopts a recent optimization algorithm called O-BMO, together with optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT) and an optimized deep convolutional neural network, to develop the enhanced image super-resolution model. This is the first work that uses an O-BMO-based deep CNN for image super-resolution model enhancement.