Vision-guided robotic automation of vat polymerization additive manufacturing production: design, calibration and verification

Wenzhen Yang (Department of Mechanical Engineering, Technical University of Denmark, Lyngby, Denmark) (School of Mechanical Engineering, Jiangnan University, Wuxi, China)
Johan K. Crone (Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark)
Claus R. Lønkjær (Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark)
Macarena Mendez Ribo (Department of Mechanical Engineering, Technical University of Denmark, Lyngby, Denmark)
Shuo Shan (Department of Mechanical Engineering, Technical University of Denmark, Lyngby, Denmark)
Flavia Dalia Frumosu (Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark)
Dimitrios Papageorgiou (Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark)
Yu Liu (School of Mechanical Engineering, Jiangnan University, Wuxi, China)
Lazaros Nalpantidis (Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark)
Yang Zhang (Department of Mechanical Engineering, Technical University of Denmark, Lyngby, Denmark)

Journal of Intelligent Manufacturing and Special Equipment

ISSN: 2633-6596

Article publication date: 18 April 2023

Issue publication date: 29 June 2023


Abstract

Purpose

This study aims to present a vision-guided robotic system for vat photopolymerization additive manufacturing (AM), enabling the vat photopolymerization AM process to be combined with injection molding in a hybrid production chain.

Design/methodology/approach

In the system, a robot equipped with a camera and a custom-made gripper, and driven by a visual servoing (VS) controller, perceives the target object, handles positional variation, connects the process steps of the soft tooling chain and automates vat photopolymerization AM. Meanwhile, the vat photopolymerization AM printer is customized in both hardware and software to interact with the robotic system.

Findings

Using the ArUco marker-based vision-guided robotic system, the printing platform can be manipulated quickly and robustly from an arbitrary initial position, which constitutes the first step toward automating vat photopolymerization AM combined with the soft tooling process.

Originality/value

The vision-guided robotic system monitors and controls the vat photopolymerization AM process and has the potential to combine vat photopolymerization AM with other mass production methods, for instance, injection molding.


Citation

Yang, W., Crone, J.K., Lønkjær, C.R., Ribo, M.M., Shan, S., Frumosu, F.D., Papageorgiou, D., Liu, Y., Nalpantidis, L. and Zhang, Y. (2023), "Vision-guided robotic automation of vat polymerization additive manufacturing production: design, calibration and verification", Journal of Intelligent Manufacturing and Special Equipment, Vol. 4 No. 2, pp. 85-98. https://doi.org/10.1108/JIMSE-01-2023-0001

Publisher: Emerald Publishing Limited

Copyright © 2023, Wenzhen Yang, Johan K. Crone, Claus R. Lønkjær, Macarena Mendez Ribo, Shuo Shan, Flavia Dalia Frumosu, Dimitrios Papageorgiou, Yu Liu, Lazaros Nalpantidis and Yang Zhang

License

Published in Journal of Intelligent Manufacturing and Special Equipment. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

The Fourth Industrial Revolution is driving the manufacturing industry toward intelligent manufacturing, which means flexible, mass-customized production (Zhong et al., 2017). Additive manufacturing (AM), also commonly referred to as 3D printing, is a prominent customized-manufacturing technology (Abdulhameed et al., 2019) that can produce complex customized products from digital models. With the advantages of short lead times and the ability to produce complex features, it is applied in a multitude of fields, such as aerospace (Khorasani et al., 2021), medical applications (Salmi, 2021) and industrial manufacturing (Haleem and Javaid, 2019). However, AM technologies cannot meet the demands of industrial mass production due to low production speed and poor repeatability, which motivates the development of automated AM (Ashima et al., 2021). Automated AM is an emerging research area that aims to overcome the limitations of traditional AM by integrating it with other technologies.

Initially, some researchers combined AM with other processes through robotic technologies. Keating and Oxman (2013) fixed a material extrusion-based 3D printing head and a milling head in the workspace and used a KUKA robot to move the building platform between additive and subtractive process steps. Similarly, Li et al. (2018) proposed a hybrid additive-subtractive system that switches between a material extrusion-based 3D printing head and a milling head mounted at the end of a 6-axis robot to improve the surface quality of the workpiece.

Later, researchers integrated robotic technologies with various types of AM to enhance their manufacturing capability (Urhal et al., 2019). Typically, robot-assisted AM applies multi-axis robots to eliminate the spatial limitations of the AM process. Barnett and Gosselin (2015) developed a cable-suspended robotic 3D printer with 6 degrees of freedom (DOF), which can print large-scale structures, for instance, a 2.16 m tall statue. Gosselin et al. (2016) installed a material extrusion-based 3D printing head at the end of an ABB 6620 6-axis robot to print large-scale concrete structures, extending AM to the field of architecture. Wu et al. (2017) employed a 6 DOF robot under the material extrusion-based 3D printing head to increase the system's degrees of freedom and enable printing without support structures.

In recent years, Internet of Things (IoT) technologies have been combined with AM. For example, Barbosa and Aroca (2017) used Bluetooth Low Energy (BLE) technology to transfer information from an AM printer to a mobile application, enabling monitoring and intervention from anywhere.

Artificial intelligence (AI) technologies have also been combined with AM to realize automated AM. Aroca et al. (2017) integrated computer vision and a communication interface into a fused deposition modeling (FDM) 3D printer to monitor the printing status and used a custom-made 2D robot to remove the finished workpiece, enabling continuous printing. Jin et al. (2020) integrated FDM AM with deep learning (DL) algorithms to detect delamination of the printed workpiece in real time, which improves the repeatability of AM. Li et al. (2021) designed a detect-remove-replace system that integrates DL-based vision and a 6-axis UR robotic arm into an FDM printer to monitor the printing condition and automatically extract the finished workpiece. Liu et al. (2022) integrated a vision-based, robot-assisted post-process into the powder bed fusion (PBF) 3D printing process to remove unfused powder around printed workpieces of various shapes instead of relying on a human operator.

In this study, we propose a vision-guided robotic system for application in vat photopolymerization AM. Its goal is to integrate visual positioning, a communication interface and robotic technologies into vat photopolymerization AM, automatically connecting process steps such as vat photopolymerization printing and post-washing in the soft tooling process chain (Zhang et al., 2018) to improve the working environment and spare operators repetitive movements. The building platform of the home-made vat photopolymerization AM printer (Ribo, 2020) is specifically designed to be handled by the 6 DOF robotic system, which communicates with the AM printer through a communication interface and is driven by a visual servoing (VS) controller (Siciliano et al., 2008; Pomares, 2019) to position the platform throughout the different process steps. The contributions of the study pertain to:

  1. The integration of vat photopolymerization AM, communication interface and vision-guided robotic technology.

  2. The design and manufacturing of a customized building platform for the robotic system to facilitate automatic vat photopolymerization printing.

  3. The design and implementation of a VS system for positioning the robot end-effector.

  4. The definition of performance metrics related to accuracy and precision.

  5. The experimental validation of the obtained designs.

The remainder of the paper is structured as follows: Section 2 presents the main methods used in this study in relation to the equipment and robotic system designs. Section 3 presents the redesign of the vat photopolymerization AM printer. Section 4 presents the design of the VS controller. Section 5 presents the VS control process. Section 6 presents and discusses the experimental results. Section 7 presents the main conclusion of the paper.

2. System description and methodology

The proposed design is illustrated in Figure 1. A 6-axis UR5e robot is fitted with a custom-made end-effector that matches the newly designed building platform. VS is a vision-guided, closed-loop control technique implemented in robotics to control motion, and it is divided into two schemes: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). The latter is easier to realize with a 2D camera (Wu et al., 2016). In our system, an IBVS controller is applied to accurately locate the newly designed building platform rather than the 3D printed workpiece, so that manipulation of the completed job is independent of the printed geometry. A UR controller based on ROS (Ding et al., 2018) and the Universal_Robots_ROS_Driver (Andersen, 2015) controls the robot. All devices are linked by integrated communication interfaces such as RS232, USB and Ethernet.

3. Redesign of the home-made vat photopolymerization AM printer

To match the goal of automatic vat photopolymerization printing, we redesigned the software and hardware. On the one hand, a serial communication interface was integrated into the home-made vat photopolymerization AM printer to send printing status messages such as "printing job is finished" and to receive remote control commands. On the other hand, a new building platform with an integrated permanent electromagnet was designed so that it can be disassembled and reassembled repeatedly and accurately. In addition, hardware interaction ports were prepared.
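A minimal sketch of such a serial link, assuming pyserial and a simple newline-terminated ASCII protocol; the port name, baud rate and message strings ("JOB_FINISHED", "START") are placeholders, not the printer's actual firmware commands.

```python
# Sketch of the printer-side status/command link over RS232, assuming pyserial
# and a hypothetical newline-terminated ASCII protocol.
import serial

printer = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0)  # placeholder port

def wait_for_job_finished():
    """Block until the printer reports that the current job is done."""
    while True:
        line = printer.readline().decode("ascii", errors="ignore").strip()
        if line == "JOB_FINISHED":
            return

def start_new_print():
    """Send a remote-control command to begin the next printing job."""
    printer.write(b"START\n")
```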

As shown in Figure 2, the old building platform is permanently fixed to the alignment system, and workpiece removal is completely manual. Automatic printing, however, requires the building platform to be detachable, repositionable and compatible with the robot. The newly designed building platform therefore consists of part A, an upper mold plate, a lower mold plate, part B and the printing platform. The upper and lower mold plates are a matched pair manufactured to micrometer tolerances and are embedded into the new building platform. The upper mold plate contains an electromagnet used to separate the upper and lower mold plates; since the flatness of the opposite plate is unknown and might introduce positioning errors, the electromagnet is engineered to have no contact with it. An ArUco marker is printed and fixed on the front of the new building platform. This marker, or identifier, helps the robot perceive the target and reposition itself automatically. Microswitches are installed on the left and right at the end of part A to define the limit position of the end-effector. To disable the electromagnet for extraction, the left and right microswitches form a series circuit powered by a 24 V supply. Guides on both sides of part B allow insertion of the end-effector, and slots on top of the guides provide the interlocking mechanism. To meet the repositioning requirements, the clearance between the guides and the prongs of the end-effector is no more than 1 mm. The end-effector consists of a base with a two-pronged fork and a UR5e adapter, which mounts the tool on the UR5e robot. Convex keys on top of the fork interlock with the slots of the guides on part B. The extraction of the printing platform proceeds as follows: when the prongs of the end-effector are inserted into the guide holes of part B, the two microswitches are triggered and output a 24 V signal to the UR5e robot, which then disables the permanent electromagnet; afterward, part B separates from part A due to gravity and the slots are pulled down over the convex keys. The insertion process is the reverse of the extraction process.

After assembly, the flatness and parallelism of the printing platform were measured according to ISO 1101 (Standard, 2004) using a Zeiss Prismo CMM equipped with a VAST XT scanning probe at 20.3 °C. If repeated detachment and reattachment cause the building platform to tilt, the initial layers of the printed workpiece deform as illustrated in Figure 3. To verify this assumption, ten 40 mm diameter discs were printed with the old and with the newly designed building platform, respectively, using FunToDo Black photopolymer resin. The disc circularities were measured using a DeMeet-400 CMM.

4. Visual servoing

The FLIR 2D camera BFS-U3-31S4C-C was fixed on the UR5e robot. The camera was then calibrated to relate pixels to millimeters. The calibration is based on a 7 × 9 chessboard pattern with 20 mm × 20 mm squares, printed on A4 paper at a 1:1 ratio. For the calibration, the pinhole camera model (Figure 4) was used, which is a widely used model of digital cameras (Zhang, 2004). Based on it, the relationship between a point (Xc, Yc, Zc) in the camera frame and the corresponding point (u, v) in the pixel coordinate system, namely the camera's intrinsic parameters, can be obtained.
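A minimal calibration sketch with OpenCV under these assumptions (the image paths and the interpretation of "7 × 9" as the inner-corner grid are ours); it recovers the intrinsic parameters fx, fy, Cx, Cy used in the next section.

```python
# Sketch of intrinsic calibration from chessboard images using the pinhole model.
import glob
import cv2
import numpy as np

pattern_size = (7, 9)   # inner corners per row/column (assumption)
square_size = 20.0      # mm, as printed at 1:1 on A4

# 3D corner positions in the board frame (Z = 0 plane)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):          # placeholder image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K holds fx, fy, Cx, Cy; dist holds the lens distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nintrinsics:\n", K)
```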

Consider $f(t) \triangleq [u_1(t)\ v_1(t)\ \cdots\ u_N(t)\ v_N(t)]^T$, a vector of N image features with coordinates $(u_i(t), v_i(t))$, $i = 1, \dots, N$, at time t, and the constant vector $f_d \triangleq [u_{1,d}\ v_{1,d}\ \cdots\ u_{N,d}\ v_{N,d}]^T$, which contains the coordinates of the same image features on the normalized image plane (with its origin at the upper left corner) when the desired configuration between the robotic end-effector and the printer is achieved. The dynamics of the positioning error $e \triangleq f - f_d$ is then given by Siciliano et al. (2008). Its derivative $\dot{e}$ is the velocity error. The relationship between the velocity error and the end-effector velocity is established by a Jacobian matrix, as shown in equation (1).

(1) $\dot{e} = L_e v_c$

where $v_c$ is the end-effector velocity screw and $L_e \triangleq [L_{e,1}^T \cdots L_{e,N}^T]^T$ is the image Jacobian. The latter is a 2N × 6 matrix and constitutes a linear mapping between the rate of change of the positioning error in the image frame and the end-effector velocity screw. Each 2 × 6 row block $L_{e,i}$ of the image Jacobian is associated with the rate of change of the i-th feature's coordinates and is given by

(2) $L_{e,i} = \begin{bmatrix} -\dfrac{1}{Z_{c,i}} & 0 & \dfrac{x_i}{Z_{c,i}} & x_i y_i & -(1 + x_i^2) & y_i \\ 0 & -\dfrac{1}{Z_{c,i}} & \dfrac{y_i}{Z_{c,i}} & 1 + y_i^2 & -x_i y_i & -x_i \end{bmatrix}$

where $(X_{c,i}, Y_{c,i}, Z_{c,i})$ are the feature's Cartesian coordinates expressed in the end-effector frame,

(3) $x_i = \dfrac{X_{c,i}}{Z_{c,i}} = \dfrac{u_i - C_x}{f_x}$

(4) $y_i = \dfrac{Y_{c,i}}{Z_{c,i}} = \dfrac{v_i - C_y}{f_y}$

and $C_x, C_y, f_x, f_y$ are the camera's intrinsic parameters. Under the assumption that the left Moore–Penrose pseudoinverse of $L_e$ is full rank (at least three features), the end-effector velocity command

(5) $v_c = -\lambda L_e^{+} e, \quad \lambda > 0$

ensures stability of the origin for the solutions of (1). In particular, if $L_e$ is square and nonsingular (exactly three features are chosen and $\det(L_e) \neq 0,\ \forall t \geq 0$), the origin is an exponentially stable equilibrium point of the system in (1).

Although exponential convergence of the positioning error to zero is highly desirable, in practice more than 3 feature points are selected to avoid singularities (Nayak and Briot, 2020).
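To make the mapping concrete, the sketch below implements equations (1)-(5) in plain NumPy: it stacks the per-feature interaction blocks into the 2N × 6 image Jacobian and returns the velocity screw $v_c = -\lambda L_e^{+} e$. The feature pixels, desired pixels and depths are assumed to come from the marker-based estimation described next; the function and variable names are ours, not the authors'.

```python
# Sketch of the IBVS control law of equations (1)-(5).
import numpy as np

def interaction_row(x, y, Z):
    """2x6 block L_{e,i} of equation (2) for a normalized feature (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z,  x * y,     -(1 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z,  1 + y**2,  -x * y,      -x],
    ])

def ibvs_velocity(features, desired, depths, intrinsics, lam=0.5):
    """Velocity screw v_c = -lambda * pinv(L_e) * e for pixel features (u, v).

    The linear part is returned in the same length unit as the depths per second.
    """
    fx, fy, cx, cy = intrinsics
    L_blocks, errors = [], []
    for (u, v), (ud, vd), Z in zip(features, desired, depths):
        # equations (3)-(4): pixel coordinates to normalized image coordinates
        x, y = (u - cx) / fx, (v - cy) / fy
        xd, yd = (ud - cx) / fx, (vd - cy) / fy
        L_blocks.append(interaction_row(x, y, Z))
        errors.extend([x - xd, y - yd])
    L_e = np.vstack(L_blocks)                 # 2N x 6 image Jacobian
    e = np.asarray(errors)
    return -lam * np.linalg.pinv(L_e) @ e     # 6-vector: [vx, vy, vz, wx, wy, wz]
```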

To calculate the image Jacobian, the 3D parameters of the chosen image features have to be estimated. Here, the fiducial marker method is used: a marker with known characteristics allows the 3D parameters of points in space to be estimated.

Kalaitzakis et al. (2021) tested the capabilities of some of the most popular fiducial markers, including ArUco, AprilTag and ARTag, and found that the ArUco marker has a lower computational cost and higher detection rates across various poses.
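A minimal sketch of the marker detection and 3D parameter estimation, assuming the classic cv2.aruco module of opencv-contrib-python (releases from 4.7 onward expose the same functionality through cv2.aruco.ArucoDetector); the dictionary and marker side length are placeholders for the marker actually printed on the building platform.

```python
# Sketch of ArUco-based feature and depth estimation for the image Jacobian.
import cv2
import numpy as np

MARKER_LENGTH_MM = 50.0                                   # placeholder side length
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_marker_features(gray, K, dist):
    """Return corner pixels (features), depths Z_{c,i} and the marker pose (t, r)."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICTIONARY)
    if ids is None:
        return None
    rvec, tvec, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH_MM, K, dist)
    # Corner positions in the marker frame, transformed to the camera frame for Z
    half = MARKER_LENGTH_MM / 2.0
    marker_pts = np.array([[-half, half, 0], [half, half, 0],
                           [half, -half, 0], [-half, -half, 0]], dtype=np.float64)
    R, _ = cv2.Rodrigues(rvec[0])
    cam_pts = (R @ marker_pts.T).T + tvec[0].reshape(1, 3)
    features = corners[0].reshape(4, 2)        # pixel (u, v) of each corner
    depths = cam_pts[:, 2]                     # Z_{c,i} for the image Jacobian
    return features, depths, tvec[0].reshape(3), rvec[0].reshape(3)
```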

The entire VS process, from detection of the ArUco marker and estimation of the 3D parameters to calculation of the error and driving of the robot, is depicted in the flowchart in Figure 5. The image of the ArUco marker at the goal position is acquired and saved manually before starting the VS controller, and an arbitrary initial robot pose must be available in which the ArUco marker lies clearly within the camera's field of view. Once the VS controller runs, images of the ArUco marker are acquired in real time to estimate the Jacobian matrix $L_{e,i}$ and the error between the desired and the current position in normalized coordinates. If the error exceeds the threshold, where the translation and angle tolerances are set to ±1 mm and ±0.017 rad, respectively, the controller outputs a velocity screw containing the spatial linear and angular velocities in the Cartesian coordinate system to drive the robot and reduce the error until it falls within tolerance.
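The loop below sketches this flowchart, reusing the helpers above together with a hypothetical camera wrapper and a send_velocity_screw function (one possible ROS realization is sketched in the next section); the stop condition applies the ±1 mm / ±0.017 rad tolerances to the marker pose estimate.

```python
# Sketch of the servo loop of Figure 5, under the assumptions stated above.
import numpy as np

TRANS_TOL_MM = 1.0
ANGLE_TOL_RAD = 0.017

def servo_to_goal(camera, K, dist, desired_features, goal_t, goal_r):
    """desired_features, goal_t and goal_r are recorded while the robot is
    jogged manually to the desired configuration before the controller starts."""
    while True:
        gray = camera.grab_grayscale()                    # hypothetical camera wrapper
        result = detect_marker_features(gray, K, dist)
        if result is None:
            continue                                      # marker must stay in view
        features, depths, t_cur, r_cur = result
        within_tol = (np.all(np.abs(t_cur - goal_t) < TRANS_TOL_MM) and
                      np.all(np.abs(r_cur - goal_r) < ANGLE_TOL_RAD))
        if within_tol:
            send_velocity_screw(np.zeros(6))              # stop the robot at the goal
            return
        v_c = ibvs_velocity(features, desired_features, depths,
                            (K[0, 0], K[1, 1], K[0, 2], K[1, 2]))
        send_velocity_screw(v_c)
```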

5. Scheme of VS process

Instead of tuning the VS process by trial and error on the physical setup, a simulation platform was designed on the Ubuntu operating system; its architecture is shown in Figure 6. The VS controller is based on ROS and integrated with a scene created in Unity 3D to display the relocation results in real time. The UR controller is built on a real-time kernel for real-time capability, and URSim is the simulator provided by Universal Robots. A benefit of the simulation is that the virtual and the real robot system are interchangeable by simply switching the communication cable between the real UR5e robot and URSim, which share the same IP address. When a simulation starts, an image containing the ArUco marker is generated from the camera position and converted by the VS script into a velocity screw. The UR controller translates it into commands for URSim/the UR robot and drives URSim/the UR robot to a new pose, which in turn changes the next camera image. This process continues until the error between the goal ArUco marker image and the generated image is smaller than the set value. The process is shown at https://youtu.be/mZZe8m6wARI.
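One possible realization of the velocity interface on the ROS side is sketched below, assuming the Cartesian twist_controller shipped with the Universal_Robots_ROS_Driver is loaded; the topic name and message type follow that driver, but the authors' actual bridge between the VS script and the UR controller may differ.

```python
#!/usr/bin/env python
# Sketch of forwarding the velocity screw to URSim or the real UR5e over ROS.
# The same node talks to either target simply by pointing the driver at the
# corresponding IP address.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("vs_velocity_bridge")
pub = rospy.Publisher("/twist_controller/command", Twist, queue_size=1)

def send_velocity_screw(v_c):
    """Publish [vx, vy, vz, wx, wy, wz]; convert mm/s to m/s upstream if the
    image Jacobian was built with depths in millimetres."""
    msg = Twist()
    msg.linear.x, msg.linear.y, msg.linear.z = v_c[:3]
    msg.angular.x, msg.angular.y, msg.angular.z = v_c[3:]
    pub.publish(msg)
```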

Finally, the vision-guided robotic system is built by connecting the vat photopolymerization AM printer, the VS controller, the UR controller and the UR robot through the communication interface and electrical ports.

6. Experimental results and discussion

6.1 Test scenarios and performance indices

The vision-guided robotic system is first tested on the simulation platform of the VS process and then validated in detail in realistic scenarios. The performance indices are (1) the tolerance of the parts printed in the newly designed system, (2) the accuracy of the vision-guided robot manipulation and (3) whether the design enables the vat photopolymerization AM process to be combined automatically with the other steps of the soft tooling process. The experimental process is illustrated in Figure 7.

6.2 Results

The flatness and parallelism of the printing platform were each measured 10 times. The average flatness was estimated to be 278.3 µm with a standard deviation of 1.6 µm, and the average parallelism was estimated to be 355.6 µm with a standard deviation of 95.36 µm.

Afterward, for consistency, ten discs were printed with the old building platform and with the newly designed building platform, respectively, and their circularity was measured to quantify the impact of the new design. The discs printed with the old building platform have an average circularity of 663 µm with a standard deviation of 166 µm, and those printed with the newly designed building platform have an average circularity of 799 µm with a standard deviation of 112 µm.

Debugging and testing of the accuracy of the vision-guided robot manipulation took place first on the designed simulation platform, with a pre-selected target position and several assumed initial positions obtained by changing the starting position of the ArUco marker. The results showed that with a high value of the gain λ the VS controller immediately stopped the URSim robot due to the safety limits, whereas with a low λ it took more than 2 min to converge on the target position. The solution was to make λ depend on the distance Z between the camera and the ArUco marker; the corrected λ is given in equation (6). Three simulation results of the VS process are provided in the supplementary material as Figure S1.

(6) $\lambda = 0.05\,Z$
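As a usage note, the gain schedule of equation (6) can simply replace the fixed gain passed to the IBVS step; the unit of Z is not stated in the text, so metres are assumed in this sketch.

```python
# Gain schedule of equation (6), used in place of the fixed "lam" argument of
# ibvs_velocity; Z is the camera-to-marker distance, assumed here in metres.
def adaptive_gain(Z_m):
    return 0.05 * Z_m
```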

Subsequently, the same tests were performed on the real UR5e robot. The results shown in Figure 8 illustrate that the VS controller can drive the robot to converge on the goal position from three arbitrary initial positions after approximately 25 s.

To quantify the error, ten tests were carried out from ten arbitrary initial positions. The goal position (X, Y, Z, RX, RY, RZ) is (691.1 mm, 50.89 mm, 527.2 mm, 0.059 rad, 1.575 rad, −0.011 rad), measured using the UR PolyScope controller. The differences between the goal position and the actual positions are listed in Table 1. The composite index S, defined in equation (7), summarizes the positioning accuracy.

(7) $S = \sqrt{\dfrac{X^2 + Y^2 + Z^2 + R_X^2 + R_Y^2 + R_Z^2}{6}}$
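For illustration, substituting the Test 1 residuals from Table 1 into equation (7) (mixing millimetre and radian terms, as in the table) reproduces the tabulated value:

$S_1 = \sqrt{\dfrac{0.02^2 + 0.23^2 + (-0.15)^2 + 0.001^2 + 0.001^2 + 0.002^2}{6}} \approx \sqrt{\dfrac{0.0758}{6}} \approx 0.11\ \text{mm}$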

6.3 Discussion

In the measurements of the printed discs, despite the parallelism of 355.6 µm, the detachable newly designed building platform introduced an error of about 116 µm into the circularity of the vat photopolymerization printed parts. The low stiffness of the plastic material used in some parts of the building platform may contribute to this error.

The results on the accuracy of robot manipulation indicate that the robot could move the end-effector to the requested position within an error of 260 µm, which guarantees the smooth implementation of the automated vat photopolymerization AM process.

Based on these results, when a printing job is completed in the vat photopolymerization AM printer, a "job finished" signal is delivered to the VS controller over the RS232 communication interface, and the VS controller then starts to drive the robot toward the target position based on the acquired image. After reaching the goal position, the end-effector is inserted into the guides of the newly designed building platform until the microswitches are triggered. The UR5e robot then receives a 24 V voltage signal and in turn applies an appropriate voltage to the electromagnet to detach the printing platform. Under the influence of gravity, the printing platform is pulled down and its slots interlock with the convex keys of the end-effector. Subsequently, the UR5e robot extracts the printing platform and places it at the preset station of the following process step in the soft tooling chain, such as a washing station, along a pre-defined path. Eventually, a new printing platform is picked up from where the previous one was placed, and a new printing process is started by sending a "start" signal to the vat photopolymerization AM printer. Ultimately, the vat photopolymerization AM process can be combined automatically with the other steps of the soft tooling process. The hybrid process experiment can be found at https://youtube.com/shorts/suaJ-mckq-4?feature=share.
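The sequence can be summarized as the sketch below, which strings together the helpers sketched earlier (wait_for_job_finished, servo_to_goal, start_new_print) with hypothetical wrappers around the UR driver's digital I/O and motion interfaces; only the ordering of the steps is taken from the text.

```python
# High-level sketch of the automated hand-over between printing and washing.
# wait_for_microswitches, set_electromagnet and move_along are hypothetical
# wrappers; the taught waypoints stand in for the pre-defined robot paths.
def automated_handover(camera, K, dist, goal, waypoints):
    wait_for_job_finished()                       # RS232 "job finished" from the printer
    servo_to_goal(camera, K, dist, *goal)         # VS drives the fork into the guides
    wait_for_microswitches()                      # both switches closed: 24 V input high
    set_electromagnet(holding=False)              # apply the release voltage; part B drops
    move_along(waypoints["to_washing_station"])   # extract and place at the next station
    move_along(waypoints["pick_fresh_platform"])  # fetch and insert a new platform
    set_electromagnet(holding=True)               # re-engage the magnet after insertion
    start_new_print()                             # RS232 "start" command to the printer
```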

7. Conclusion

A vision-guided robotic system for vat photopolymerization AM was proposed in this work. A VS controller was designed to drive the end-effector of a UR robotic manipulator to the desired workpiece position. The implementation and tuning of the control loop were facilitated by the designed simulation platform of the VS process. Experimental results demonstrated successful robot control with a positioning error of less than 300 µm. The importance of manipulating the printing platform instead of the printed workpieces is that the manipulation is robust to the variety of printed shapes and thus more generalizable for automation. This study constitutes the first step in exploring the automation of the vat photopolymerization AM process combined with the soft tooling process, which facilitates hybridizing vat photopolymerization AM with other mass production processes. In future work, the other steps in the soft tooling process, such as shape measurement and injection molding, will be integrated automatically with the vat photopolymerization AM process to realize automated production from drawing to ready-to-use product.

Figures

Figure 1: Illustration of the vision-guided robotic system
Figure 2: The building platform before and after the redesign
Figure 3: Illustration showing the result from the building platform
Figure 4: The pinhole camera model
Figure 5: Flowchart of the VS process
Figure 6: Architecture of the designed simulation platform of the VS process
Figure 7: The experiment process
Figure 8: Tests of the target position with three arbitrary initial positions
Figure S1: Simulation of the target position with three arbitrary initial positions

Table 1: Difference between the goal position and the actual position

Test    X/mm     Y/mm     Z/mm     RX/rad    RY/rad    RZ/rad    S/mm
1        0.02     0.23    −0.15     0.001     0.001     0.002    0.11
2       −0.14     0.40     0.70     0.001     0.006     0.002    0.33
3       −0.16     0.44     0.10     0.001     0.002     0.002    0.19
4        0.01     0.05    −0.34    −0.001     0.001    −0.001    0.14
5       −0.28    −0.37     0.10    −0.001     0.006    −0.001    0.19
6       −0.45    −0.01     0.14    −0.001     0.002    −0.001    0.19
7       −0.29    −0.65     0.04    −0.003     0.001    −0.002    0.29
8       −0.14    −0.66     0.19    −0.004     0.006    −0.004    0.28
9       −0.62    −0.72     0.03    −0.002     0.002    −0.001    0.38
10       0.00     0.57     0.57     0.001     0.001     0.003    0.32
Ave     −0.21    −0.07     0.14    −0.001     0.002    −0.001    0.24

Source(s): Authors’ own work

Supplementary material

Figure S1

References

Abdulhameed, O., Al-Ahmari, A., Ameen, W. and Mian, S.H. (2019), “Additive manufacturing: challenges, trends, and applications”, Advances in Mechanical Engineering, Vol. 11 No. 2, doi: 10.1177/1687814018822880.

Andersen, T.T. (2015), “Optimizing the universal robots ros driver”, available at: https://orbit.dtu.dk/en/publications/optimizing-the-universal-robots-ros-driver

Aroca, R.V., Ventura, C.E.H., De Mello, I. and Pazelli, T.F.P.A.T. (2017), “Sequential additive manufacturing: automatic manipulation of 3D printed parts”, Rapid Prototyping Journal, Vol. 23 No. 4, pp. 653-659, doi: 10.1108/Rpj-02-2016-0029.

Ashima, R., Haleem, A., Bahl, S., Javaid, M., Mahla, S.K. and Singh, S. (2021), “Automation and manufacturing of smart materials in Additive Manufacturing technologies using Internet of Things towards the adoption of Industry 4.0”, Materials Today: Proceedings, Vol. 45, pp. 5081-5088, doi: 10.1016/j.matpr.2021.01.583.

Barbosa, G. and Aroca, R. (2017), “An IoT-based solution for control and monitoring of additive manufacturing processes”, Journal of Powder Metallurgy and Mining, Vol. 6 No. 158, p. 2, doi: 10.4172/2168-9806.1000158.

Barnett, E. and Gosselin, C. (2015), “Large-scale 3D printing with a cable-suspended robot”, Additive Manufacturing, Vol. 7, pp. 27-44, doi: 10.1016/j.addma.2015.05.001.

Ding, C., Wu, J., Xiong, Z. and Liu, C. (2018), “A reconfigurable pick-place system under robot operating system”, International Conference on Intelligent Robotics and Applications, pp. 437-448, Springer, doi: 10.1007/978-3-319-97589-4_37.

Gosselin, C., Duballet, R., Roux, P., Gaudilliere, N., Dirrenberger, J. and Morel, P. (2016), “Large-scale 3D printing of ultra-high performance concrete - a new processing route for architects and builders”, Materials and Design, Vol. 100, pp. 102-109, doi: 10.1016/j.matdes.2016.03.097.

Haleem, A. and Javaid, M. (2019), “Additive manufacturing applications in industry 4.0: a review”, Journal of Industrial Integration and Management-Innovation and Entrepreneurship, Vol. 4 No. 4, 1930001, doi: 10.1142/S2424862219300011.

Jin, Z.Q., Zhang, Z.Z. and Gu, G.X. (2020), “Automated real-time detection and prediction of interlayer imperfections in additive manufacturing processes using artificial intelligence”, Advanced Intelligent Systems, Vol. 2 No. 1, doi: 10.1002/aisy.201900130.

Kalaitzakis, M., Cain, B., Carroll, S., Ambrosi, A., Whitehead, C. and Vitzilaios, N. (2021), “Fiducial markers for pose estimation”, Journal of Intelligent and Robotic Systems, Vol. 101 No. 4, p. 71, doi: 10.1007/s10846-020-01307-9.

Keating, S. and Oxman, N. (2013), “Compound fabrication: a multi-functional robotic platform for digital design and fabrication”, Robotics and Computer-Integrated Manufacturing, Vol. 29 No. 6, pp. 439-448, doi: 10.1016/j.rcim.2013.05.001.

Khorasani, M., Ghasemi, A., Rolfe, B. and Gibson, I. (2021), “Additive manufacturing a powerful tool for the aerospace industry”, Rapid Prototyping Journal, Vol. 28 No. 1, doi: 10.1108/Rpj-01-2021-0009.

Li, L., Haghighi, A. and Yang, Y.R. (2018), “A novel 6-axis hybrid additive-subtractive manufacturing process: design and case studies”, Journal of Manufacturing Processes, Vol. 33, pp. 150-160, doi: 10.1016/j.jmapro.2018.05.008.

Li, H.Y., Ma, Y.C., Bin Miswadi, M.N.A., Luu, L.N.N., Yang, L.J., Foong, S.H., Soh, G.S., Sivertsen, E. and Tan, U.X. (2021), “Detect-remove-replace: a robotic solution that enables unmanned continuous 3D printing”, IEEE Robotics and Automation Magazine, Vol. 29 No. 2, pp. 33-45, doi: 10.1109/Mra.2021.3103478.

Liu, Z.W., Geng, J.Y., Dai, X.K., Swierzewski, T. and Shimada, K. (2022), “Robotic depowdering for additive manufacturing via pose tracking”, IEEE Robotics and Automation Letters, Vol. 7 No. 4, pp. 10770-10777, doi: 10.1109/Lra.2022.3195189.

Nayak, A. and Briot, S. (2020), “Singularities in the image-based visual servoing of five points”, International Symposium on Advances in Robot Kinematics, pp. 150-157, Springer, doi: 10.1007/978-3-030-50975-0_19.

Pomares, J. (2019), “Visual servoing in robotics”, Electronics, Vol. 8 No. 11, doi: 10.3390/electronics8111298.

Ribo, M.M. (2020), “Vat Photopolymerization Process Chain”, PhD thesis, available at: https://orbit.dtu.dk/en/publications/vat-photopolymerization-process-chain

Salmi, M. (2021), “Additive manufacturing processes in medical applications”, Materials, Vol. 14 No. 1, p. 191, doi: 10.3390/ma14010191.

Siciliano, B., Khatib, O. and Kröger, T. (2008), Springer Handbook of Robotics, Springer, Berlin, Heidelberg.

Standard, I. (2004), ISO 1101:2004 Geometrical Product Specifications (GPS) – Geometrical Tolerancing – Tolerances of Form, Orientation, Location and Run-Out, International Organization for Standardization, Geneva.

Urhal, P., Weightman, A., Diver, C. and Bartolo, P. (2019), “Robot assisted additive manufacturing: a review”, Robotics and Computer-Integrated Manufacturing, Vol. 59, pp. 335-345, doi: 10.1016/j.rcim.2019.05.005.

Wu, H.Y., Andersen, T.T., Andersen, N.A. and Ravn, O. (2016), “Visual servoing for object manipulation: a case study in slaughterhouse”, 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 1-6, IEEE, doi: 10.1109/ICARCV.2016.7838841.

Wu, C., Dai, C., Fang, G., Liu, Y.-J. and Wang, C.C. (2017), “RoboFDM: a robotic system for support-free fabrication using FDM”, 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1175-1180, IEEE, doi: 10.1109/ICRA.2017.7989140.

Zhang, Z. (2004), “Camera calibration with one-dimensional objects”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26 No. 7, pp. 892-899, doi: 10.1109/TPAMI.2004.21.

Zhang, Y., Pedersen, D.B., Mischkot, M., Calaon, M., Baruffi, F. and Tosello, G. (2018), “A soft tooling process chain for injection molding of a 3D component with micro pillars”, Jove-Journal of Visualized Experiments, Vol. 138, doi: 10.3791/57335.

Zhong, R.Y., Xu, X., Klotz, E. and Newman, S.T. (2017), “Intelligent manufacturing in the context of industry 4.0: a review”, Engineering, Vol. 3 No. 5, pp. 616-630, doi: 10.1016/J.ENG.2017.05.015.

Acknowledgements

This work was supported by the Innovation Fund Denmark Project [Grant number 8057-00031B], the China Scholarship Council [Grant number 202106790096] and the Postgraduate Research and Practice Innovation Program of Jiangsu Province [Grant number KYCX21-2032].

Corresponding author

Wenzhen Yang can be contacted at: yang_wen_sky@163.com
