Search results

1 – 10 of over 3000
Article
Publication date: 24 November 2020

Qi Xiao, Rui Wang, Hongyu Sun and Limin Wang

Abstract

Purpose

The paper aims to build a new objective evaluation method of fabric pilling by combining an integrated image analysis technology with a deep learning algorithm.

Design/methodology/approach

A series of image analysis techniques was adopted. First, a Fourier transform converted the images into the frequency domain, and the optimal resolution matrix of an exponential high-pass filter was determined in combination with an energy algorithm. Second, a multidimensional discrete wavelet transform determined the optimal decomposition level. Third, an iterative threshold method was used to enhance the images and obtain complete and clear pilling-ball images. Finally, a deep learning algorithm was trained on the pilling-ball images, and the pilling levels were classified according to the learned features.
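The iterative thresholding step can be sketched as follows. This is a minimal Ridler-Calvard-style illustration on a toy image, not the authors' implementation; the image, noise level and tolerance are hypothetical:

```python
import numpy as np

def iterative_threshold(image, tol=0.5):
    """Ridler-Calvard style iteration: move the threshold to the midpoint
    of the two class means until it stops changing."""
    t = image.mean()
    while True:
        new_t = 0.5 * (image[image <= t].mean() + image[image > t].mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Toy image: dark background with one bright "pilling ball" region.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 5.0, (64, 64))
img[20:30, 20:30] += 200.0
t = iterative_threshold(img)
mask = img > t   # binary pilling mask
```

The threshold settles near the midpoint of the two class means, so the binary mask isolates the bright region.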

Findings

The paper provides new insight into how to evaluate fabric pilling grades objectively. Experimental results indicate that the proposed method obtains clear and complete pilling information, and that the deep learning algorithm, structured with a rectified linear unit (ReLU) activation function, four hidden layers, cross-entropy learning rules and a regularization method, achieves a classification accuracy of 94.2%.
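As an illustration only, a network with the reported structure (ReLU activations, four hidden layers, cross-entropy loss with regularization) could look like the following sketch; the layer widths, class count and L2 weighting are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 64 input features, four ReLU hidden layers,
# 5 output classes (pilling grades 1-5).
sizes = [64, 128, 128, 64, 32, 5]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return softmax(x @ weights[-1] + biases[-1])

def loss(probs, labels, l2=1e-4):
    # cross-entropy plus an L2 regularization term on the weights
    ce = -np.log(probs[np.arange(len(labels)), labels]).mean()
    return ce + l2 * sum((w ** 2).sum() for w in weights)

x = rng.normal(size=(8, 64))
p = forward(x)
L = loss(p, rng.integers(0, 5, 8))
```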

Research limitations/implications

Because the methodology of the paper is based on woven fabric, the results may lack generalizability. Researchers are therefore encouraged to test other kinds of fabric, such as knitted and non-woven fabrics.

Originality/value

By combining a series of image analysis techniques, the integrated method can effectively extract clear and complete pilling information from pilled fabrics. Pilling grades can then be classified by the deep learning algorithm trained on this pilling information.

Details

International Journal of Clothing Science and Technology, vol. 33 no. 4
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 19 September 2016

Ziqiang Cui, Qi Wang, Qian Xue, Wenru Fan, Lingling Zhang, Zhang Cao, Benyuan Sun, Huaxiang Wang and Wuqiang Yang

Abstract

Purpose

Electrical capacitance tomography (ECT) and electrical resistance tomography (ERT) are promising techniques for multiphase flow measurement due to their high speed, low cost, non-invasiveness and visualization capability. There are two major difficulties in image reconstruction for ECT and ERT: the "soft-field" effect, and the ill-posedness of the inverse problem, which is both under-determined and unstable, i.e. very sensitive to measurement errors and noise. This paper aims to summarize and evaluate the various reconstruction algorithms that have been studied and developed around the world over many years, and to provide a reference for further research and application.

Design/methodology/approach

In the past 10 years, various image reconstruction algorithms have been developed to deal with these problems, in fields including industrial multiphase flow measurement and biomedical diagnosis.

Findings

This paper reviews existing image reconstruction algorithms, together with the new algorithms proposed by the authors, for electrical capacitance tomography and electrical resistance tomography in multiphase flow measurement and biomedical diagnosis.

Originality/value

The authors systematically summarize and evaluate the various reconstruction algorithms that have been studied and developed around the world over many years, providing a valuable reference for practical applications.

Article
Publication date: 19 June 2017

Qi Wang, Pengcheng Zhang, Jianming Wang, Qingliang Chen, Zhijie Lian, Xiuyan Li, Yukuan Sun, Xiaojie Duan, Ziqiang Cui, Benyuan Sun and Huaxiang Wang

Abstract

Purpose

Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction for EIT is a nonlinear problem, and the generalized inverse operator is usually ill-posed and ill-conditioned. Therefore, the solutions for EIT are not unique and are highly sensitive to measurement noise.

Design/methodology/approach

This paper develops a novel image reconstruction algorithm for EIT based on patch-based sparse representation. The sparsifying dictionary optimization and the image reconstruction are performed alternately. Two patch-based sparsity models, namely square-patch sparsity and column-patch sparsity, are discussed and compared with global sparsity.
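A rough sketch of the patch-based idea, using a greedy matching-pursuit coder over square patches; the patch size, dictionary and sparsity level are hypothetical, and this is not the authors' alternating dictionary-optimization scheme:

```python
import numpy as np

def extract_patches(img, p):
    """Collect all p x p patches as columns (square-patch sparsity)."""
    h, w = img.shape
    cols = [img[i:i + p, j:j + p].ravel()
            for i in range(h - p + 1) for j in range(w - p + 1)]
    return np.array(cols).T

def matching_pursuit(D, y, k):
    """Greedy sparse coding of y over dictionary D with at most k atoms."""
    r, x = y.copy(), np.zeros(D.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ r))   # most correlated atom
        coef = D[:, j] @ r
        x[j] += coef
        r -= coef * D[:, j]              # remove its contribution
    return x

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
P = extract_patches(img, 4)              # 4x4 patches -> 16 x 169 matrix
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x = matching_pursuit(D, P[:, 0], k=3)    # sparse code of one patch
```

Each patch is coded with a few dictionary atoms; in the paper the dictionary itself is also optimized, which this sketch omits.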

Findings

Both simulation and experimental results indicate that the patch-based sparsity method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages.

Originality/value

EIT image is reconstructed based on patch-based sparse representation. Square-patch sparsity and column-patch sparsity are proposed and compared. Sparse dictionary optimization and image reconstruction are performed alternately. The new method tolerates a relatively high level of noise in measured voltages.

Details

Sensor Review, vol. 37 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 10 June 2014

Radosław Wajman and Robert Banasiak

Abstract

Purpose

The purpose of this paper is to introduce a significant modification of the sensitivity map calculation process using electric field distribution analysis. A sensitivity matrix is typically a crucial part of the deterministic image reconstruction process in three-dimensional capacitance tomography (3D ECT) and largely determines the final image quality. Commonly used sensitivity matrix computation methods mostly provide acceptable results and additionally allow sensitivity maps to be recalculated according to a changing permittivity distribution.

Design/methodology/approach

A new “tunnel-based” algorithm is proposed which traces the surfaces constructed along the electric field lines. The new solution is developed and tested using experimental data.

Findings

To fully validate the new technique, both linear and non-linear image reconstruction processes were performed, and criteria for image error estimation were discussed. The paper presents preliminary results of the image reconstruction process using the proposed algorithm, which demonstrate its increased accuracy.

Originality/value

The presented image reconstruction results with the new sensitivity matrix, compared against the classic matrix, prove that the new solution improves the convergence and stability of the image reconstruction process for 3D ECT imaging.

Details

Sensor Review, vol. 34 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 8 June 2012

Mohammad Vaezi, Chee Kai Chua and Siaw Meng Chou

Abstract

Purpose

Today, medical models can be made from medical imaging systems through modern image processing methods and rapid prototyping (RP) technology. In ultrasound imaging systems, images are not layered and are of lower quality compared to those of computerized tomography (CT) and magnetic resonance imaging (MRI), so making physical models requires a series of intermediate processes, and fabricating a model from ultrasound images is a challenge due to the inherent limitations of the ultrasound imaging process. The purpose of this paper is to make high-quality physical models from medical ultrasound images by combining modern image processing methods with RP technology.

Design/methodology/approach

A novel and effective semi-automatic method was developed to improve the quality of the 2D image segmentation process. In this new method, a partial histogram of the 2D images was used and ideal boundaries were obtained. A 3D model was built from the exact boundaries and then converted into the stereolithography (STL) format, suitable for RP fabrication. As a case study, the foetus was chosen, since ultrasound imaging is commonly used for foetal imaging so as not to harm the baby. Finally, the 3D Printing (3DP) and PolyJet processes, two types of RP technique, were used to fabricate the 3D physical models.
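As a hedged illustration of histogram-window thresholding (not the authors' exact partial-histogram method), one can restrict a standard Otsu-style criterion to a chosen intensity window:

```python
import numpy as np

def otsu_on_window(img, lo, hi, bins=64):
    """Otsu's between-class-variance criterion restricted to a partial
    intensity window [lo, hi) -- a stand-in for partial-histogram selection."""
    hist, edges = np.histogram(img[(img >= lo) & (img < hi)],
                               bins=bins, range=(lo, hi))
    p = hist / max(hist.sum(), 1)
    best_t, best_var = float(lo), -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * edges[:k]).sum() / w0        # class means (left edges)
        m1 = (p[k:] * edges[k:bins]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2             # between-class variance
        if var > best_var:
            best_var, best_t = var, edges[k]
    return best_t

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 5, 500), rng.normal(160, 5, 500)])
t = otsu_on_window(pixels, 0, 255)
```

On a bimodal intensity distribution the criterion places the threshold in the valley between the two modes; restricting the window is what makes the selection robust to intensities outside the region of interest.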

Findings

The physical models made in this way proved to have sufficient quality and shortened the process time considerably.

Originality/value

It is still a challenge to fabricate an exact physical model from ultrasound images. The current commercial histogram-based segmentation method is time-consuming and results in less than optimum 3D model quality. In this research work, a novel and effective semi-automatic method was developed to select the optimum threshold value easily.

Details

Rapid Prototyping Journal, vol. 18 no. 4
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 5 June 2019

Gang Li, Shuo Jia and Hong-Nan Li

Abstract

Purpose

The purpose of this paper is to provide a comprehensive theoretical efficiency evaluation of a nonlinear analysis method based on the Woodbury formula, considering both the efficiency of solving the linear equations in each incremental step and the selected iterative algorithms.

Design/methodology/approach

First, this study employs time complexity theory to quantitatively compare the efficiency of the Woodbury formula with that of the LDLT factorization method, a commonly used method for solving linear equations. Moreover, the performance of the iterative algorithm also significantly affects the efficiency of the analysis. Thus, a three-point method with a convergence order of eight is employed to solve the equilibrium equations of the nonlinear analysis method based on the Woodbury formula, aiming to improve on the iterative performance of the Newton–Raphson (N–R) method.
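For reference, the Woodbury identity lets a low-rank modification of a system be solved with only a small dense solve, reusing the inverse of the base matrix; a minimal numerical check with hypothetical sizes (diagonal base matrix chosen for simplicity, k playing the role of the few inelastic DOFs):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5   # n DOFs, k "inelastic" DOFs (low-rank update), k << n

A = np.diag(rng.uniform(1.0, 2.0, n))       # stiffness-like base matrix
U = rng.normal(size=(n, k))
C = np.diag(rng.uniform(1.0, 2.0, k))
V = rng.normal(size=(k, n))
b = rng.normal(size=n)

# Woodbury: (A + U C V)^-1 b computed with only a k x k dense inverse,
# reusing A^-1 (trivially cheap here because A is diagonal).
Ainv = np.diag(1.0 / np.diag(A))
small = np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U)   # k x k system
x_woodbury = Ainv @ b - Ainv @ U @ small @ V @ (Ainv @ b)

x_direct = np.linalg.solve(A + U @ C @ V, b)             # full n x n solve
```

Because only the k x k block changes between increments, the per-step cost scales with k rather than n, which is the source of the efficiency advantage for local nonlinearity.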

Findings

First, the results show that the asymptotic time complexity of the Woodbury formula is much lower than that of the LDLT factorization method when the number of inelastic degrees of freedom (IDOFs) is much smaller than the number of DOFs, indicating that the Woodbury formula is more efficient for local nonlinear problems. Moreover, the time complexity comparison of the N–R method and the three-point method indicates that the three-point method is more efficient for local nonlinear problems with large-scale structures or a larger ratio of the number of IDOFs to the number of DOFs.

Originality/value

This study theoretically evaluates the efficiency of the nonlinear analysis method based on the Woodbury formula and quantitatively shows the conditions under which each compared method applies. The comparison provides a theoretical basis for selecting algorithms for different nonlinear problems.

Details

Engineering Computations, vol. 36 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 27 March 2008

H. Ahmadi‐Noubari, A. Pourshaghaghy, F. Kowsary and A. Hakkaki‐Fard

Abstract

Purpose

The purpose of this paper is to reduce the destructive effects of the unavoidable noise contaminating temperature data in inverse heat conduction problems (IHCP) using wavelets.

Design/methodology/approach

For noise reduction, the sensor data were fed into a filter bank for signal decomposition via the discrete wavelet transform. A wavelet denoising algorithm was then applied to the wavelet coefficients of the signal components at different resolution levels. Both the noisy and the de-noised temperature measurements were then used as input data to a numerical IHCP experiment. The inverse problem deals with the estimation of an unknown surface heat flux in a 2D slab and is solved by the variable metric method.
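The decompose-threshold-reconstruct scheme can be sketched with a one-level Haar transform and soft thresholding; the transform depth, wavelet and threshold value here are illustrative choices, not the paper's:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal discrete Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(signal, thresh):
    a, d = haar_step(signal)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    return haar_inverse(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 3 * t)          # smooth "temperature" signal
noisy = clean + rng.normal(0, 0.3, 256)
denoised = denoise(noisy, thresh=0.4)
```

A smooth signal concentrates in the approximation coefficients while noise spreads evenly, so shrinking the detail coefficients removes noise with little distortion of the underlying signal.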

Findings

Comparison of the heat fluxes estimated using de-noised data with those estimated using the original sensor data indicates that wavelet noise reduction has the potential to be a powerful tool for improving IHCP results.

Originality/value

Noise reduction using wavelets, while very easy to implement, may also significantly reduce the need for (or even eliminate) the conventional regularization schemes commonly used in IHCP.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 18 no. 2
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 27 April 2020

Yongxiang Wu, Yili Fu and Shuguo Wang

Abstract

Purpose

This paper aims to design a deep neural network for object instance segmentation and six-dimensional (6D) pose estimation in cluttered scenes and apply the proposed method in real-world robotic autonomous grasping of household objects.

Design/methodology/approach

A novel deep learning method is proposed for instance segmentation and 6D pose estimation in cluttered scenes. An iterative pose refinement network is integrated with the main network to obtain more robust final pose estimates for robotic applications. To train the network, a technique is presented to quickly generate abundant annotated synthetic data, consisting of RGB-D images and object masks, without any hand-labeling. For robotic grasping, offline grasp planning based on an eigengrasp planner is performed and combined with online object pose estimation.

Findings

Experiments on standard pose benchmark data sets showed that the method achieves better pose estimation and time efficiency than state-of-the-art methods with depth-based ICP refinement. The proposed method was also evaluated on a seven-DOF Kinova Jaco robot with an Intel RealSense RGB-D camera; the grasping results illustrate that the method is accurate and robust enough for real-world robotic applications.

Originality/value

A novel 6D pose estimation network based on an instance segmentation framework is proposed, and a neural network-based iterative pose refinement module is integrated into the method. The proposed method exhibits satisfactory pose estimation accuracy and time efficiency for robotic grasping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 24 August 2010

Behnam Salimi and David R. Hayhurst

Abstract

Purpose

The purpose of this paper is to seek improved solution techniques for combined boundary-initial value problems (BV-IVPs) associated with the time-dependent creep deformation and rupture of engineering structures at high temperatures, and hence to reconfigure a parallel iterative preconditioned conjugate gradient (PCG) solver and the DAMAGE XXX software for 3D finite element creep continuum damage mechanics (CDM) analysis.

Design/methodology/approach

The potential to speed up the computer numerical solution of the combined BV-IVPs is addressed using parallel computers. Since the computational bottleneck is associated with the matrix solver, the parallelisation of a direct and an iterative solver has been studied. The creep deformation and rupture of a tension bar has been computed for a range of numbers of degrees of freedom (ndf), and the performance of the two solvers is compared and assessed.

Findings

The results show the superior scalability of the iterative solver compared to the direct solver, with larger speed-ups gained by the PCG solver for higher numbers of degrees of freedom. Also, a new algorithm for the first trial solution of the PCG solver provides additional speed-ups.

Research limitations/implications

The results show that the ideal parallel speed-up of the iterative solver of 16, relative to two processors, is achieved when using 32 processors for a mesh of ndf = 153,238.

Originality/value

Techniques have been established in this paper for the parallelisation of CDM creep analysis software using an iterative equation solver. The significant computational speed-ups achieved will enable the analysis of failures in weldments of industrial significance.
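A minimal sketch of a preconditioned conjugate gradient solver with a Jacobi (diagonal) preconditioner, illustrating the kind of iterative solver discussed; the matrix and preconditioner are toy stand-ins, not the DAMAGE XXX configuration:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for SPD systems A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r           # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
n = 100
Q = rng.normal(size=(n, n))
A = Q @ Q.T + n * np.eye(n)      # symmetric positive definite "stiffness"
b = rng.normal(size=n)
x = pcg(A, b, 1.0 / np.diag(A))
```

Each iteration needs only matrix-vector products, which is what makes the method scale well in parallel compared with a direct factorization.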

Details

Engineering Computations, vol. 27 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 June 2000

Stephen Robertson and Stephen Walker

Abstract

A major problem in using current best‐match methods in a filtering task is that of setting appropriate thresholds, which are required in order to force a binary decision on notifying a user of a document. We discuss methods for setting such thresholds and adapting them as a result of feedback information on the performance of the profile. These methods fit within the probabilistic approach to retrieval, and are applied to a probabilistic system. Some experiments, within the framework of the TREC‐7 adaptive filtering track, are described.
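A toy illustration of feedback-driven threshold adaptation (not Robertson and Walker's probabilistic method): raise the score threshold when observed precision falls below a target, lower it when nothing is being delivered. The rule, target and step size are hypothetical:

```python
def adapt_threshold(threshold, delivered, relevant,
                    target_precision=0.3, step=0.05):
    """Toy rule for adapting a filtering profile's score threshold
    from relevance feedback on delivered documents."""
    if delivered == 0:
        return threshold - step      # too strict: deliver more documents
    precision = relevant / delivered
    if precision < target_precision:
        return threshold + step      # too lax: deliver fewer documents
    return threshold

t1 = adapt_threshold(0.5, delivered=10, relevant=1)  # precision 0.1 < 0.3
t2 = adapt_threshold(t1, delivered=0, relevant=0)    # nothing delivered
```

The binary notify/ignore decision then becomes `score > threshold`, with the threshold drifting toward the operating point the feedback supports.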

Details

Journal of Documentation, vol. 56 no. 3
Type: Research Article
ISSN: 0022-0418
