Personalized 3D mannequin reconstruction based on 3D scanning

PengPeng Hu (College of Textiles, Donghua University, Shanghai, China)
Duan Li (College of Textiles, Donghua University, Shanghai, China)
Ge Wu (College of Textiles, Donghua University, Shanghai, China)
Taku Komura (Institute of Perception, Action and Behaviour, University of Edinburgh, Edinburgh, UK)
Dongliang Zhang (International Design Institute, Zhejiang University, Hangzhou, China)
Yueqi Zhong (College of Textiles, Donghua University, Shanghai, China) (Key Laboratory of Textile Science & Technology, Ministry of Education, Donghua University, Shanghai, China)

International Journal of Clothing Science and Technology

ISSN: 0955-6222

Publication date: 16 April 2018

Abstract

Purpose

Currently, mannequins are commonly reconstructed from body measurements or body features, which preserve only the body size and lack accurate geometric information about the body shape. However, identical body measurements do not imply identical body shapes, which may result in a garment that does not fit the target human body. The purpose of this paper is to propose a novel scanning-based pipeline for reconstructing a personalized mannequin that preserves both body size and body shape information.

Design/methodology/approach

The authors first capture the body of a subject via 3D scanning, and a statistical body model is fit to the scanned data. This results in a skinned articulated model of the subject. The scanned body is then adjusted to be pose-symmetric via linear blend skinning, and the mannequin part is extracted. Finally, a slice-based method is proposed to generate a shape-symmetric 3D mannequin.

Findings

A personalized 3D mannequin can be reconstructed from the scanned body. Compared with conventional methods, the proposed method preserves both the size and the shape of the original scanned body. The reconstructed mannequin can be imported directly into apparel CAD software. The proposed method is a step toward digitizing apparel manufacturing.

Originality/value

Compared with conventional methods, the main advantage of the authors’ system is that it preserves both the size and the geometry of the original scanned body. The main contributions of this paper are as follows: decomposing mannequin reconstruction into pose symmetry and shape symmetry; proposing a novel scanning-based pipeline to reconstruct a 3D personalized mannequin; and presenting a slice-based method for the symmetrization of a 3D mesh.

Citation

Hu, P., Li, D., Wu, G., Komura, T., Zhang, D. and Zhong, Y. (2018), "Personalized 3D mannequin reconstruction based on 3D scanning", International Journal of Clothing Science and Technology, Vol. 30 No. 2, pp. 159-174. https://doi.org/10.1108/IJCST-05-2017-0067

Publisher: Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited


1. Introduction

Fitting clothes to customers remains the top issue in the apparel industry. In modern fashion manufacturing and sales, products can be divided into three categories: ready-to-wear, made-to-measure, and bespoke. Made-to-measure and bespoke apparel can cost 10 to 100 times more than ready-to-wear clothes; even for luxury brands such as Chanel, Gucci, and Versace, the price of ready-to-wear cannot compare with that of typical made-to-measure and bespoke garments. Savile Row in London is famous for the high quality of its bespoke men’s suits and attracts high-end customers from around the world. Bespoke and made-to-measure garments provide much higher customer satisfaction than ready-to-wear, but fitting the clothes to customers takes much longer. Recently, there has been an emerging trend of developing digital customization tools for fitting customers. Digital manufacturing technologies such as 3D scanning and CAD can significantly improve the efficiency of the clothing fitting and manufacturing process, so that the overall cost of apparel manufacturing can be reduced. The 3D mannequin plays an important role in computer-aided apparel design (Au and Yuen, 2000; Au and Ma, 2010; Hsiao and Chen, 2015; Au and Yuen, 1999; McCartney et al., 2005). There is ample room for creativity in the design, manufacture, display, and performance stages of the industry. However, in this process, how to obtain the required mannequin is a problem that cannot be ignored.

Currently, two main methods are used to develop clothes: plane cutting and three-dimensional cutting. In plane-cutting methods, the basic pieces required for apparel design are generated from mannequin forms (Au and Yuen, 1999; McCartney et al., 2005), and patterning is performed by adjusting the scale of the shaped cutting pieces. By contrast, in three-dimensional cutting methods, designers work directly on a 3D mannequin to obtain the shaped cuttings of the 2D pieces required for production. Therefore, no matter which method is adopted for apparel design, the 3D mannequin is an important medium for the concretization of design ideas.

Mannequins can be classified into two categories by application: the standard mannequin and the display mannequin. The standard mannequin is used by artists to design garments, while the display mannequin is used to demonstrate dressing effects to customers. Figure 1 shows the two types of mannequins. The display mannequin can flexibly take any pose and stature. In contrast, the standard mannequin serves a more professional purpose and should be symmetric.

3D body scanning has become a revolutionary technology that is changing many aspects of the apparel industry (Istook and Hwang, 2001). With recent improvements in scanning technology and its declining cost, commercial RGBD cameras are becoming widely available. With the help of 3D scanning, hundreds of body measurements can be extracted automatically in several seconds (Simmons, 2001; Zhong and Xu, 2006). The main application of 3D body scanning in the apparel industry is made-to-measure. Made-to-measure techniques are based on the traditional grading method, which modifies the sizes and shapes of 2D patterns according to the input body sizes. Therefore, the main information used for made-to-measure is body measurement data, while the shape information of the scanned body is seldom utilized for apparel design. The same body measurements do not determine a unique mannequin form, which may result in a garment that does not fit the target user. The main challenge of using the scanned data directly is converting an asymmetric scanned body into a symmetric one (e.g. the left shoulder may be higher or lower than the right shoulder, and the shape of the torso may be asymmetric due to the distribution of muscle and fat).

In this paper, a novel scanning-based pipeline is proposed to reconstruct a 3D personalized mannequin. We model it as a body symmetry problem with two parts: pose symmetry and shape symmetry. First, a user is scanned, and a statistical template body is fit to the scanned data, which results in a skinned articulated model of the user. A pose-symmetric body model is then obtained via linear blend skinning (Kavan and Žára, 2005). Next, the mannequin part is extracted, and a slice-based method is proposed to generate a shape-symmetric mannequin. The reconstructed mannequin can be imported into apparel CAD software for apparel design.

The rest of the paper is organized as follows. Related work is reviewed in Section 2. In Section 3, linear blend skinning is applied to adjust the pose of the scanned body, and a slice-based method is proposed to generate a shape-symmetric mannequin from the updated scanned data. Experimental results are presented in Section 4, and we conclude our work in Section 5.

2. Previous work

The parametric mannequin is popular because its shape can be changed flexibly. A mannequin model is usually reconstructed from semantic features and parametric surfaces, and the surface can then be unfolded to obtain the 2D pieces required by plane-cutting techniques, which is an important function of a 3D mannequin (Au and Yuen, 1999; McCartney et al., 2005). Reverse engineering is also a popular solution (Au and Yuen, 2000; Au and Ma, 2010; Hsiao and Chen, 2015): once a mannequin template is generated, a parametric method is used to obtain a parametric mannequin. The parametric mannequin is accepted in apparel design because its shape can be rapidly changed to fit the various body sizes of users. However, the parametric mannequin is unreliable, especially when the number of input body measurements is small, because the same body size does not imply the same body geometry. In addition, preparing an appropriate mannequin template is not straightforward.

Many researchers have presented 3D scanning systems based on a single RGBD camera (Izadi et al., 2011; Henry et al., 2012). However, these methods have to capture many frames from various directions to cover the whole body, which is very time consuming. To capture the subject rapidly, systems with multiple RGBD cameras have been presented that can finish the capture in several seconds (Tong et al., 2012; Hu et al., 2015). Some researchers have explored processing scanned human bodies and designing 3D garments for them. Decaudin et al. (2006) and Apeagyei (2010) reviewed 3D body scanning technologies with application to the fashion and apparel industry. Olaru et al. (2012) developed a dress garment simulation based on a virtual mannequin obtained by 3D body scanning to test, using specialized 3D software, whether 2D patterns fit the virtual mannequin. Wang et al. (2003) proposed a prototype system using fuzzy logic concepts for constructing a feature human model; a feature-based mesh generation algorithm is applied to the point cloud to construct the mesh surface of the human model. Hsiao and Chen (2015) scanned a mannequin and reconstructed a new mannequin based on feature curves. Xu et al. (2002) first extracted body measurements from the scanned body, then used B-spline curves to connect and smooth the body curves; from the scanned data, a body form can be constructed using linear Coons surfaces. Kim and Kang (2003) proposed a system for automatic garment pattern design using the scanned body model: the surface geometry of a standard garment model used in the apparel industry is reconstructed by a stereovision technique and converted into a mesh structure, and the surface of the 3D garment is then flattened into 2D patterns. However, these methods do not offer a solution for using the scanned body directly as a mannequin.

In light of the related research above, the goal of this study is to reconstruct a personalized mannequin from the scanned data. First, a scanned body is obtained. Second, the pose of the scanned body is made symmetric via linear blend skinning. The new pose-symmetric body is segmented to extract the parts for regenerating the mannequin. Earlier work on body segmentation aimed to extract body measurements (Simmons, 2001). Nurre (1997) considered the automatic segmentation of the human body into its functional parts. Ju et al. (2000) presented an efficient method for body segmentation by slicing the scanned body model. Zhong and Xu (2006) developed a segmentation and measurement system for the scanned body. In our work, we use the method of Ju et al. (2000) to segment our body model. Finally, a slice-based method is proposed to generate a shape-symmetric 3D mannequin. An overview of our method is shown in Figure 2.

3. Methodology

A two-step solution is proposed to reconstruct a mannequin from the scanned data. The body model can be easily obtained with RGBD cameras, as in the work of Tong et al. (2012); in our work, the tested bodies are captured by the body scanning system developed by Hu et al. (2015). In the pose-based symmetry step, a template body is used to automatically rig the scanned data, so that the pose of the scanned body can be adjusted via linear blend skinning; the body model is then segmented. In the shape-based symmetry step, a slice-based method is proposed to generate a shape-symmetric mannequin from the scanned body, which preserves the size and stature of the original scanned body.

3.1 Pose-based symmetry

In this section, we describe the process of fitting a statistical body model to the scanned body and rigging it with a skeleton model.

3.1.1 Construction of the statistical body model

A morphable human model is constructed from a Japanese body database (Yamazaki et al., 2013; Hu et al., 2017): using a rigged template mesh and a set of different 3D human models in the A-pose, a PCA-based statistical body model is defined.

The template mesh is denoted by X = {V, T, J}, where V = {v1, …, v|V|} are the vertices, T = {t1, …, t|T|} are the triangles, and J = {j1, …, j|J|} are the joints of the body. A set of skin weights W(vi) = {w1(vi), …, w|J|(vi)} is defined for each vertex vi in X, which is used for changing the pose of the template mesh. The joint centers are specified manually by inserting a skeleton using existing modeling software; this needs to be done only once for the template model.

During pre-processing, the template mesh is fit to different body scans by non-rigid registration (Yamazaki et al., 2013). As a result, different body models with the same mesh topology are obtained, which form the database used for building the statistical body model.

The skeleton of each model in the database can be calculated via mean value coordinates (MVC) (Ju et al., 2005). The assumption is that the MVC of the joint positions are consistent among the different body models in the database. Using the MVC of each joint jn, n = 1, …, |J|, in the rigged template mesh, the joint position can be computed by:

(1) $j_n = \sum_{i} m_n(v_i)\, v_i$

where mn(vi) is the mean value coordinate of jn with respect to vertex vi. Thus, as the shape parameters change, the new joint locations can be inferred from the new body mesh V'. Figure 3 gives some results of predicting the skeletons of different template bodies. The skin weights can be copied directly because the topology of the mesh remains the same.
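Once the MVC weights are available, Equation (1) is just a weighted sum over the mesh vertices. The following is a minimal NumPy sketch of that step (computing the MVC weights themselves, per Ju et al. (2005), is assumed to be done offline; the function name and toy data are our own illustration):

```python
import numpy as np

def predict_joints(vertices, mvc_weights):
    """Infer joint positions from a deformed mesh via Equation (1).

    vertices:    (|V|, 3) array of mesh vertex positions V'.
    mvc_weights: (|J|, |V|) array; row n holds the precomputed mean value
                 coordinates m_n(v_i) of joint j_n w.r.t. every vertex
                 (each row sums to 1).
    Returns a (|J|, 3) array of joint positions j_n = sum_i m_n(v_i) * v_i.
    """
    return mvc_weights @ vertices

# Toy example: two "joints" defined as weighted averages of four vertices.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])
M = np.array([[0.25, 0.25, 0.25, 0.25],   # joint 0: centroid of the quad
              [0.50, 0.50, 0.00, 0.00]])  # joint 1: midpoint of bottom edge
print(predict_joints(V, M))  # joint 0 -> (0.5, 0.5, 0); joint 1 -> (0.5, 0, 0)
```

Because the database meshes share the template topology, the same weight matrix can be reused for every body model.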

In this way, a statistical PCA body model Sk(β) is constructed, where β is a vector of coefficients of the PCA bases μ computed from the Japanese body database:

(2) $S_k(\beta) = \mathrm{mean} + \sum_{i=1}^{k} \beta_i \mu_i$

where $\mathrm{mean}$ is the average body and k = 15 in our experiments. A new body shape can be constructed given a coefficient vector β (see Figure 5). Before computing the statistical body model, the pose of each body model is slightly adjusted using linear blend skinning, such that the joint angles are the same for every body model.
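An informal sketch of how such a PCA model can be built and evaluated, per Equation (2) (this is not the authors' implementation; the helper names are our own, and the pose normalization is assumed to have been applied already):

```python
import numpy as np

def build_pca_model(bodies, k=15):
    """Build the statistical model of Equation (2) from registered scans.

    bodies: (N, 3|V|) array; each row is one body mesh with the shared
            template topology, flattened to a vector (poses normalized).
    Returns (mean, mu): the mean body and the first k PCA bases.
    """
    mean = bodies.mean(axis=0)
    centered = bodies - mean
    # SVD of the centered data matrix gives the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def synthesize(mean, mu, beta):
    """S_k(beta) = mean + sum_i beta_i * mu_i."""
    return mean + beta @ mu
```

With β = 0 the model reproduces the mean body; varying individual coefficients sweeps the dominant shape variations in the database.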

3.1.2 Fitting the statistical body model to the body scans

We now fit the statistical body model to the scanned body by simultaneously estimating the low-dimensional body parameters β and the joint angles θ. Using linear blend skinning, we can deform the pose of the morphable body model by changing the joint angles θ:

(3) $D(\theta, v_i) = \sum_{l=1}^{|J|} w_l(v_i)\, R_l(\theta)\, v_i$

where Rl(θ) is the global bone transformation of joint jl, computed using the skeletal hierarchy and the joint angles θ. Since vi is a function of β in the statistical body model (see Equation (2)), the morphable model is denoted by M(θ, β).
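Equation (3) can be sketched in a few lines, assuming the global bone matrices R_l(θ) have already been composed through the skeletal hierarchy (a hedged illustration, not the authors' code):

```python
import numpy as np

def lbs(vertices, weights, bone_transforms):
    """Linear blend skinning, Equation (3).

    vertices:        (|V|, 3) rest-pose positions v_i.
    weights:         (|V|, |J|) skin weights w_l(v_i), rows summing to 1.
    bone_transforms: (|J|, 4, 4) global bone matrices R_l(theta), already
                     composed through the skeletal hierarchy.
    Returns (|V|, 3) deformed positions D(theta, v_i).
    """
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (|V|, 4)
    # Per-vertex blended matrix: sum_l w_l(v_i) * R_l(theta).
    blended = np.einsum('vj,jab->vab', weights, bone_transforms)
    out = np.einsum('vab,vb->va', blended, homo)
    return out[:, :3]

# A vertex weighted half to a static bone and half to a translated bone
# moves by half the translation.
I = np.eye(4)
T = np.eye(4); T[0, 3] = 2.0          # translate +2 along x
verts = np.array([[1.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
print(lbs(verts, w, np.stack([I, T])))  # -> [[2.0, 0.0, 0.0]]
```

Blending transformation matrices rather than transformed points is what makes LBS linear in the weights; its well-known artifacts at large joint rotations are the motivation for alternatives such as the spherical blending of Kavan and Žára (2005).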

The model M(θ, β) is then fit to the scanned naked body Y. The fitting is implemented by optimizing θ and β to minimize the mean square error between the paired correspondences (Figure 4). The mean template model (shown in Figure 5) is used to initialize this optimization, which is formulated as:

(4) $(\theta^{*}, \beta^{*}) = \arg\min_{\theta, \beta} \left\| M(\theta, \beta) - Y \right\|$

After solving Equation (4), M becomes an approximation of Y. The skeleton can be transferred directly from M to Y. The skin weights need further processing because M and Y have different numbers of vertices. To compute the skin weights w(zi) for Y, each vertex zi is projected onto the closest triangle of M, and the skin weight of each vertex in Y is then calculated by interpolating with the barycentric coordinates of the corresponding closest triangle in M.

Once the scanned body is rigged, its pose is interactively adjusted into symmetry by controlling the joint angles of the skeleton as shown in Figure 6.
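The barycentric weight transfer described above can be sketched as follows. This is a minimal illustration under our own naming; the closest-triangle search over M is assumed to be handled upstream (e.g. with a spatial index), and only the projection and interpolation for one scanned vertex are shown:

```python
import numpy as np

def barycentric_project(p, a, b, c):
    """Project point p onto the plane of triangle (a, b, c) and return its
    barycentric coordinates (alpha, beta, gamma)."""
    ab, ac, ap = b - a, c - a, p - a
    d00, d01, d11 = ab @ ab, ab @ ac, ac @ ac
    d20, d21 = ap @ ab, ap @ ac
    denom = d00 * d11 - d01 * d01
    beta = (d11 * d20 - d01 * d21) / denom
    gamma = (d00 * d21 - d01 * d20) / denom
    return 1.0 - beta - gamma, beta, gamma

def transfer_weights(z, tri_vertices, tri_weights):
    """Interpolate skin weights for scanned vertex z from its closest
    triangle in M (the closest-triangle search is assumed done upstream).

    tri_vertices: (3, 3) positions of the triangle corners in M.
    tri_weights:  (3, |J|) skin weights at those corners.
    Returns the (|J|,) interpolated weight vector w(z).
    """
    a, b, c = tri_vertices
    coords = np.array(barycentric_project(z, a, b, c))
    return coords @ tri_weights
```

Because the barycentric coordinates sum to 1, the interpolated weights remain a convex combination of valid weight vectors and still sum to 1 per vertex.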

3.2 Shape-based symmetry

In this section, we describe the process of reconstructing a shape-symmetric mannequin from the updated pose-symmetric body. The pose-symmetric body is segmented to extract the mannequin part using the method of Ju et al. (2000). As shown in Figure 7, each part is represented by a different color.

Inspired by the work of Dekker et al. (1999), a slice-based method is proposed to symmetrize a 3D mesh. First, the model is cut by a set of parallel planes to obtain a series of cross-sections (Figure 8). The cross-sections are then analyzed to build a symmetric model. Depending on the topology of the body, there are two types of symmetry (Figure 8): axial symmetry, as in the torso; and bilateral symmetry, as in the leg and arm parts.

The process of axial symmetry can be described as follows:

  • Step 1: the bounding box of each cross-section is defined by its minimal and maximal coordinates (T(x)min, T(x)max, T(y)min, T(y)max). The center point of each cross-section is then calculated using the following equations:

    (5) $T(x_o) = \frac{T(x)_{\min} + T(x)_{\max}}{2}$
    (6) $T(y_o) = \frac{T(y)_{\min} + T(y)_{\max}}{2}$

  • Step 2: n rays cast from the center point intersect the cross-section curve. In all our experiments, n is set to 360, so the angle between two adjacent rays is 2π/n. The two intersection points lying on the Y-axis are removed, as they have no corresponding mirror points. The cross-section is thus discretized into n−2 points in continuous order.

  • Step 3: adjust the coordinates of each point Ti using the following equations:

(7) $T_i(x)' = T(x_o) + \frac{T_i(x) - T_{(n-1)-i}(x)}{2}$
(8) $T_i(y)' = \frac{T_i(y) + T_{(n-1)-i}(y)}{2}$
where i = 1, 2, …, n−2, (Ti(x)', Ti(y)') is the new coordinate of Ti, and Ti and T(n−1)−i are mirror partners. The new points are symmetric with respect to the Y-axis.
  • Step 4: scale the new cross-section so that its perimeter equals that of the original cross-section.

    For bilateral symmetry, two separate cross-sections are obtained after cutting by a plane, as shown in Figure 9(b). The process of bilateral symmetry is as follows:

  • Step 1: calculate the center of each cross-section using the same method as the first step of axial symmetry.

  • Step 2: divide each cross-section into n ordered points using the same method as in the second step of axial symmetry. The two points lying on the Y-axis are again removed.

  • Step 3: adjust the coordinates of these points using the following equations:

(9) $L_i(x)' = L(x_{c1}) + \frac{(L_i(x) - L(x_{c1})) + (R_i(x) - R(x_{c2}))}{2}$
(10) $L_i(y)' = \frac{L_i(y) + R_i(y)}{2}$
(11) $R_i(x)' = R(x_{c2}) + (L_i(x)' - L(x_{c1}))$
(12) $R_i(y)' = L_i(y)'$
where i = 1, 2, …, n−2, $L_i'$ and $R_i'$ are the new coordinates of Li and Ri, and L(xc1) and R(xc2) are the x-coordinates of the centers of the two bilateral cross-sections.
  • Step 4: scale both new cross-sections to preserve the perimeters of the original cross-sections.
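The axial-symmetry steps above can be sketched as follows. This is a minimal NumPy illustration under our own naming; it assumes the cross-section has already been resampled into ordered points (Step 2) with the mirror pairing i ↔ (n−1)−i, here realized by reversing the array:

```python
import numpy as np

def perimeter(points):
    """Total length of the closed polyline through the ordered points."""
    return np.linalg.norm(np.roll(points, -1, axis=0) - points, axis=1).sum()

def symmetrize_axial(points):
    """Slice-based axial symmetrization (Steps 1-4, Equations (5)-(8)).

    points: (n, 2) cross-section samples in continuous order, where point i
            and point (n-1)-i are mirror partners across the Y-axis (the two
            samples lying on the Y-axis are assumed already removed).
    """
    # Step 1: x-coordinate of the bounding-box center, Equation (5).
    x_o = (points[:, 0].min() + points[:, 0].max()) / 2
    # Step 3: combine each point with its mirror partner, Equations (7)-(8).
    partner = points[::-1]
    new = np.empty_like(points)
    new[:, 0] = x_o + (points[:, 0] - partner[:, 0]) / 2
    new[:, 1] = (points[:, 1] + partner[:, 1]) / 2
    # Step 4: scale about the center so the perimeter is preserved.
    scale = perimeter(points) / perimeter(new)
    center = np.array([x_o, new[:, 1].mean()])
    return center + (new - center) * scale
```

The output curve is exactly mirror-symmetric about the vertical axis through the slice center while keeping the original slice perimeter; the bilateral case proceeds analogously on the paired left/right cross-sections with Equations (9)-(12).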

At this point, a symmetric 3D mannequin has been generated in the form of a point cloud, and a triangular mesh is constructed from it (Fabio, 2003). To obtain a high-quality mesh, the number of slicing layers should be chosen appropriately; in our practice, we cut the torso into 100 layers and the legs into 50. Figure 10 shows the symmetric mannequin produced by our method.

4. Experimental results and discussion

A novel scanning-based pipeline for reconstructing a 3D personalized mannequin is proposed. The generated 3D mannequin preserves both the size and the stature of the original scanned body. To validate our pose-symmetry method, some participants were asked to keep an unnatural pose during scanning (Figure 2, upper, and Figure 13(f)). These models can also be well adjusted to be pose-symmetric, which demonstrates the effectiveness of our method.

The results of our shape-symmetry method can be seen in Figure 10(b); it is intuitively clear that the stature is preserved well. In Figure 11, we compare our results with mirror-symmetric results. Mirroring half of the body model is a direct way to obtain a perfectly symmetric body model. However, compared with the original scanned body, the hip width of the left-mirrored model becomes smaller while that of the right-mirrored model becomes larger; in addition, the shapes of the crotch in Figure 11(c) and Figure 11(d) differ. In contrast, the mannequin produced by our method preserves the stature well compared with the original scanned body.

Besides the shape, preserving the size of the original scanned body is equally important. As described in Section 3.2, we scale the new symmetric cross-section so that its perimeter matches that of the original cross-section. This simple but efficient method preserves the size of the original scanned body well, as shown in Table I.

In computer-aided technologies, a 3D body model can also be reconstructed from body measurements. BodyLabs (www.bodylabs.com), one of the leading companies engaged in building accurate body shapes and designing 3D garments, implemented such measurement-based body reconstruction. As shown in Table I, the body sizes of the scanned body and the mannequin generated by our method were measured in Clo3D (www.clo3d.com). The body measurements of the original scanned body were then imported into BodyLabs to generate the corresponding body shape. As can be seen in Figure 12, the body from BodyLabs loses the accurate body shape information of the original scanned body, while our result preserves it. This comparison validates that the same body measurements do not imply the same body stature, and that our result preserves both the size and the stature of the original scanned body. More experimental results are given in Figure 13.

Once the mannequin is available, it is easy to design clothes using CAD software such as Clo3D (www.clo3d.com). For example, a tight garment can be designed by converting the body surface directly into a garment surface; seam lines drawn on the garment then allow the 3D surface patches to be flattened into 2D patterns. A loose garment can also be created: 2D patches are trimmed to fit the 3D mannequin and then sewn together via physically based simulation to obtain a complete 3D garment. As shown in Figure 14, a stand and improved rendering are added for a better visual result.

5. Conclusion and future work

This paper proposes a novel scanning-based pipeline for reconstructing a personalized 3D mannequin from the scanned body. We convert it into a body symmetry problem and give a two-step solution. In pose symmetry, a template body is fit to the scanned body, which makes it possible to change the pose of the scanned body. In shape symmetry, a slice-based method is presented to generate a shape-symmetric mannequin. Experimental results validate that our method preserves both the size and the stature of the user, whereas conventional methods preserve only the size.

The main contributions of this paper are as follows:

  1. decompose the process of the mannequin reconstruction into pose symmetry and shape symmetry;

  2. propose a novel scanning-based pipeline to reconstruct a 3D personalized mannequin; and

  3. present a slice-based method for the symmetrization of the 3D mesh.

Future work will proceed in three directions. First, we will reconstruct a 3D personalized mannequin from a scanned dressed body. Second, to verify whether a designed garment is comfortable to wear, it is necessary to study garment fit. Third, a fitting robot that can mimic various 3D mannequin models will be developed for testing the fit of real garments.

Figures

Figure 1. Two types of mannequins

Figure 2. Overview of our method

Figure 3. Japanese body models with the corresponding morphable skeletons fitted

Figure 4. Correspondences between the PCA mean model (white body) and the scanned body (green body)

Figure 5. 3D morphable human models (left) and the mean model (right)

Figure 6. Our pose-symmetry result

Figure 7. Body segmentation

Figure 8. Slicing

Figure 9. Two types of cross-sections

Figure 10. Our shape-symmetry result

Figure 11. Comparison of our result with mirrored results

Figure 12. Comparison of our result with the BodyLabs result

Figure 13. More experimental results

Figure 14. Two types of mannequins using our method

Table I. Male body measurements (weight: 67 kg; height: 170 cm; inseam: 69 cm)

Measurement (cm) | Original scanned body | Pose-symmetry (our method) | Shape-symmetry (our method) | Body from BodyLabs
Chest girth | 94 | 94 | 94 | 94
Waist girth | 85 | 85 | 85 | 85
Hip girth | 101 | 101 | 101 | 101

References

Apeagyei, P.R. (2010), “Application of 3D body scanning technology to human measurement for clothing fit”, Change, Vol. 4 No. 7, pp. 58-68.

Au, C.K. and Ma, Y.S. (2010), “Garment pattern definition, development and application with associative feature approach”, Computers in Industry, Vol. 61 No. 6, pp. 524-531.

Au, C.K. and Yuen, M.M.F. (1999), “Feature-based reverse engineering of mannequin for garment design”, Computer-Aided Design, Vol. 31 No. 12, pp. 751-759.

Au, C.K. and Yuen, M.M.F. (2000), “A semantic feature language for sculptured object modelling”, Computer-Aided Design, Vol. 32 No. 1, pp. 63-74.

Decaudin, P., Julius, D., Wither, J., Boissieux, L., Sheffer, A. and Cani, M.P. (2006), “Virtual garments: a fully geometric approach for clothing design”, Computer Graphics Forum, Vol. 25 No. 3, pp. 625-634.

Dekker, L., Douros, I., Buxton, B.F. and Treleaven, P. (1999), “Building symbolic information for 3D human body modeling from range data”, Second International Conference on 3-D Digital Imaging and Modeling, pp. 388-397.

Fabio, R. (2003), “From point cloud to surface: the modeling and visualization problem”, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 34 No. 5, p. W10.

Henry, P., Krainin, M., Herbst, E., Ren, X. and Fox, D. (2012), “RGB-D mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments”, The International Journal of Robotics Research, Vol. 31 No. 5, pp. 647-663.

Hsiao, S.W. and Chen, R.Q. (2015), “Applying Kinect on the development of a customized 3d mannequin”, Evaluation, Vol. 39 No. 7, p. 61530.

Hu, P., Zhong, Y., Wu, G. and Li, D. (2015), “Two-step registration in 3D human body scanning based on multiple RGB-D sensors”, Journal of Fiber Bioengineering and Informatics, Vol. 8 No. 4, pp. 705-712.

Hu, P., Komura, T., Holden, D. and Zhong, Y. (2017), “Scanning and animating characters dressed in multiple-layer garments”, The Visual Computer, Vol. 33 Nos 6-8, pp. 961-969.

Istook, C.L. and Hwang, S.J. (2001), “3D body scanning systems with application to the apparel industry”, Journal of Fashion Marketing and Management: An International Journal, Vol. 5 No. 2, pp. 120-132.

Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A. and Fitzgibbon, A. (2011), “KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera”, Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 559-568.

Ju, T., Schaefer, S. and Warren, J. (2005), “Mean value coordinates for closed triangular meshes”, ACM Transactions on Graphics, Vol. 24 No. 3, pp. 561-566.

Ju, X., Werghi, N. and Siebert, J.P. (2000), “Automatic segmentation of 3D human body scans”, Proceedings of the Eleventh IASTED International Conference on Computer Graphics and Imaging, Las Vegas, NV.

Kavan, L. and Žára, J. (2005), “Spherical blend skinning: a real-time deformation of articulated models”, Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games, ACM, pp. 9-16.

Kim, S.M. and Kang, T.J. (2003), “Garment pattern generation from body scan data”, Computer-Aided Design, Vol. 35 No. 7, pp. 611-618.

McCartney, J., Hinds, B.K. and Chong, K.W. (2005), “Pattern flattening for orthotropic materials”, Computer-Aided Design, Vol. 37 No. 6, pp. 631-644.

Nurre, J.H. (1997), “Locating landmarks on human body scan data”, Proceedings, International Conference on Recent Advances in 3-D Digital Imaging and Modeling, IEEE, pp. 289-295.

Olaru, S., Filipescu, E., Filipescu, E. et al. (2012), “3D fit garment simulation based on 3D body scanner anthropometric data”, 8th International DAAAM Baltic Conference on Industrial Engineering, p. 146.

Simmons, K.P. (2001), Body Measurement Techniques: A Comparison of Three-Dimensional Body Scanning and Physical Anthropometric Methods, Seoul.

Tong, J., Zhou, J., Liu, L., Pan, Z. and Yan, H. (2012), “Scanning 3d full human bodies using Kinects”, IEEE Transactions on Visualization and Computer Graphics, Vol. 18 No. 4, pp. 643-650.

Wang, C.C.L., Chang, T.K.K. and Yuen, M.M.F. (2003), “From laser-scanned data to feature human model: a system based on fuzzy logic concept”, Computer-Aided Design, Vol. 35 No. 3, pp. 241-253.

Xu, B., Huang, Y., Yu, W. and Chen, T. (2002), “Body scanning and modeling for custom fit garments”, Journal of Textile and Apparel, Technology and Management, Vol. 2 No. 2, pp. 1-11.

Yamazaki, S., Kouchi, M. and Mochimaru, M. (2013), “Markerless landmark localization on body shape scans by non-rigid model fitting”, Proceedings of the 2nd Digital Human Modeling Symposium, Ann Arbor, MI.

Zhong, Y. and Xu, B. (2006), “Automatic segmenting and measurement on scanned human body”, International Journal of Clothing Science and Technology, Vol. 18 No. 1, pp. 19-30.

Acknowledgements

This work is supported by the Natural Science Foundation of China (Grant No. 61572124).

Corresponding author

Yueqi Zhong can be contacted at: zhyq@dhu.edu.cn