
An Efficient and Robust Complex Weld Seam Feature Point Extraction Method for Seam Tracking and Posture Adjustment

Yunkai Ma, Junfeng Fan, Huizhen Yang, Hongliang Wang, Shiyu Xing, Fengshui Jing, and Min Tan

Abstract

To realize high-quality robotic welding, an efficient and robust complex weld seam feature point extraction method based on a deep neural network (Shuffle-YOLO) is proposed for seam tracking and posture adjustment. The Shuffle-YOLO model can accurately extract the feature points of butt joints, lap joints, and irregular joints, and the model can also work well despite strong arc radiation and spatters. Based on the nearest neighbor algorithm and cubic B-spline curve-fitting algorithm, the position and posture models of the complex spatially curved weld seams are established. The robot welding posture adjustment and high-precision seam tracking of complex spatially curved weld seams are realized. Experiments show that the method proposed in this article can extract weld seam feature points quickly and robustly, which enables welding robots to accurately track the weld seams and adjust the welding torch postures simultaneously.

Index Terms-Complex spatially curved weld seam, laser visual sensor, posture adjustment, robot welding, seam tracking, weld seam feature point (WSFP) extraction.

I. Introduction

Gas metal arc welding is an important manufacturing technology in industrial production. Owing to the harsh welding environment, welders are gradually being replaced by robots. However, most welding robots today need to be taught
manually or programmed offline [1]. Affected by the workpiece machining error, assembly error, and welding thermal deformation, the teaching path of the robot cannot adapt to the change of the welding position, which has an adverse effect on the quality of the robotic welding. Therefore, it is urgent to improve the intelligence level of welding robots to ensure welding quality.
The intelligent welding technology mainly includes the following aspects: weld seam initial point guidance [2], [3], [4], welding quality control and evaluation [5], [6], [7], [8], [9], [10], [11], and seam tracking [12], [13], [14], [15], [16], [17], [18]. So far, some weld seam initial point guidance methods have been proposed. In [2], workpiece images were captured by a monocular camera at two different positions, and the automatic identification and guidance of weld seam initial points were realized through image processing. In [3], the point cloud of the workpiece was obtained by a linear structured light sensor. Through point cloud segmentation and plane fitting, the identification of the weld seam initial point was realized. In [4], the weld seam images were acquired by a laser vision sensor with an additional LED light, and the initial point alignment of the narrow weld was realized. A lot of research has also been conducted on welding quality control and evaluation. To optimize the welding parameters, the multiple response surface method was proposed in [6] to determine the relationship between welding speed, current, and voltage. To reduce the welding deformation, the plastic-strain range memorization method was applied to the finite-element model of the butt joint to simulate the residual stress of the welded joint [7]. In [8], a method to determine the modeling parameters of the thermal-mechanical behavior of the finite-element model was proposed to ensure the authenticity of the simulation results. In [9], soft computing and machine learning techniques were combined with finite-element methods to determine the sequence and values of welding parameters to reduce strains and deformations. In addition, machine vision technology was used to detect and classify weld defects of thin-walled metal cans [10], and deep learning technology was used to process multisource sensor images to realize arc welding process detection and penetration detection [11].
The position accuracy of robot welding can be guaranteed by seam tracking [12]. Compared with arc sensors, sound sensors [13], and inductive sensors, visual sensors are precise,

Fig. 1. Different weld seam images. (a) Structured light image of V-grooved weld seam under strong spatter interference. (b) Serious deformation of laser stripes in spatially curved weld seam image. (c) Structured light image of an irregularly shaped weld seam.
information-rich, and fast [14]. Vision sensors are further divided into passive light vision and active light vision. A passive vision-based seam tracking system was proposed in [15]. The centerline of the groove was obtained during the welding process. Combined with the trajectory controller, the seam tracking of the V-grooved weld was realized. However, methods based on passive vision are easily disturbed by arc noise, so laser visual sensors are widely used in seam tracking due to their strong anti-interference ability. A seam tracking method for spatial circular welds based on a laser visual sensor was proposed in [16]. The tracking error model was established by fitting a spatial circle, and the robust seam tracking of the spatial circular weld was realized. In [17], a template matching algorithm was proposed to obtain the position of the weld seam, and seam tracking was realized by the first-in first-out queue method. A spatiotemporal background tracking algorithm was proposed in [18] to detect the weld seam feature points (WSFPs), and a model reference adaptive control method was used to realize seam tracking.
This article mainly focuses on seam tracking and posture adjustment based on the laser visual sensor, which is a key technology of intelligent welding. The accurate extraction of WSFPs is the premise of seam tracking. However, the extraction of WSFPs faces many challenges. For example, weld seam images are polluted by strong arc radiation and spatter, as shown in Fig. 1(a); the laser stripe contour of the spatially curved weld seam is seriously deformed, as shown in Fig. 1(b); and some irregularly shaped weld seams are difficult to extract, as shown in Fig. 1(c). The feature points of the weld seam images can be extracted by using the methods of geometric feature extraction [19], [20] and shape matching [21]. However, the geometric features designed manually in the above methods are not adaptive, so each welding type requires a specific extraction method. In addition, the above methods are effective under the condition of weak noise, and they may not work properly under the extreme condition of strong arc radiation and spatter. There are also some off-the-shelf laser-based seam tracking systems on the market, such as Meta and Servo-Robot. However, the off-the-shelf seam tracking systems have some shortcomings. First, the extraction frame rate is still low; the extraction frame rates of Meta and Servo-Robot are 25 and 30 Hz, respectively. Second, a lot of parameters need to be set for different types of weld seams.
For a better recognition effect, a target tracking framework was proposed [22]. An online learning process was added to the framework to achieve a more reliable tracking effect. In [23],

based on the target tracking algorithm of Continuous Convolution Operator Tracker, a seam tracking system was realized. In [24], an efficient convolution operator (ECO) method was used to extract the coordinates of the WSFP. However, the above target tracking methods need to determine the target region of the first frame image. In addition, the target tracking algorithms suffer from model drift, and the training samples polluted by strong noise will lead to error accumulation.
Recently, deep learning has made rapid progress in target detection. The deep learning methods include anchor-free methods, such as CornerNet and CenterNet [25], and anchor-based methods, which include two-stage (TS) methods and one-stage (OS) methods. The anchor-free methods have the disadvantage of low positioning accuracy. The TS methods divide the extraction of candidate regions and class regression into two parts, which affects the detection speed. The OS methods directly obtain the position and category information of all the prediction boxes. Compared with the TS methods, the OS methods run faster and occupy less memory. TS methods include the Faster region-convolutional neural network (R-CNN) [26], RetinaNet [27], and EfficientDet [28]. OS methods include the single-shot multibox detector (SSD) and the you only look once (YOLO) series (V1-V5) [29]. The MobileNet (V1-V3) [30] and ShuffleNet (V1-V2) [31] networks were proposed for smaller model sizes. To reduce tracking model drift, the Faster R-CNN was used to reinitialize the object filter to improve the accuracy of WSFP detection [32]. In [33], a seam detection and tracking framework was built based on SSD. However, the model sizes of Faster R-CNN and SSD are 108 and 91 MB, respectively, and the detection times are 113 and 33 ms, respectively, so the seam extraction efficiency needs to be further improved.
For complex spatially curved weld seams, it is necessary to realize posture adjustment. In [34], the influence of welding torch posture on welding quality was studied. It was found that the angle of the welding torch will affect the penetration and welding strength of the weld bead. A weld seam posture extraction algorithm based on binocular-coded structured light was proposed in [35]. However, the method cannot be used for the real-time adjustment of welding posture during the welding process. To solve this problem, a real-time posture estimation method was proposed in [36]. However, the performance of the posture estimation method is only verified through plane curve workpieces, and no posture adjustment experiment is carried out for spatially curved workpieces. In addition, the ECO method was used to extract WSFPs in [36]; the error caused by model drift will affect the accuracy of WSFP extraction.
To overcome the disadvantages of the existing methods in weld seam extraction, an efficient and robust complex WSFP extraction method for seam tracking and posture adjustment based on deep learning is proposed. The contribution of this article can be summarized as follows.
1) We designed and developed a WSFP extraction framework based on deep learning, which integrates data collection, data augmentation, data annotation, model training, and model deployment.
2) A Shuffle-YOLO model is proposed in this article to extract WSFPs of various weld types quickly and robustly. The model has a strong feature extraction ability and is

Fig. 2. Laser visual sensor. (a) 3-D model. (b) Internal structure.

Fig. 3. Vision model.
suitable for environments with strong arc radiation and spatters.

3) A real-time seam tracking and posture adjustment method is also proposed for complex spatially curved weld seams, which enables welding robots to accurately track the weld seams and adjust the welding postures simultaneously.

II. Vision Model

A. Sensor Structure

As shown in Fig. 2, the self-designed linear structured light sensor mainly includes an industrial camera, a linear diode laser, an optical filter, protective glass, etc. The model of the industrial camera is MER-131-75 from Daheng Image. The wavelength of the diode laser is about 635 nm. The pass wavelength of the optical filter is also 635 nm, so the filter can filter out light other than 635 nm.

B. Vision Model

In Fig. 3, the relationship between the 2-D image coordinate $p_{2d}$ and the 3-D coordinate $\boldsymbol{P}_{3d}$ in the camera coordinate system can be expressed by

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_c}\begin{bmatrix} k_x & 0 & u_0 \\ 0 & k_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} \tag{1}$$

where $(u_0, v_0)$ is the intersection of the optical axis centerline and the imaging plane, and $k_x$ and $k_y$ are the magnification factors of the $x$-axis and the $y$-axis, respectively. The above parameters can be obtained by the calibration method proposed in [37].

Fig. 4. Framework of WSFP extraction.
Suppose that the equation of the laser plane is

$$z_c = \alpha x_c + \beta y_c + \gamma \tag{2}$$

where $\alpha$, $\beta$, and $\gamma$ can be calculated from [38].

According to (1) and (2), $\boldsymbol{P}_{3d}(x_c, y_c, z_c)$ can be calculated by

$$\begin{cases} z_c = \gamma k_x k_y /\left(k_x k_y + \alpha k_y (u_0 - u) + \beta k_x (v_0 - v)\right) \\ x_c = z_c (u - u_0)/k_x \\ y_c = z_c (v - v_0)/k_y \end{cases} \tag{3}$$
The hand-eye transformation matrix $\boldsymbol{T}_{eh}$ can be calculated from [39]. $\boldsymbol{T}_h$ is the transformation matrix between the tool coordinate system and the base coordinate system (BCS), which can be read and calculated from the robot controller. The 3-D coordinate $\boldsymbol{P}_b(x_b, y_b, z_b)$ in the BCS can be obtained by

$$\boldsymbol{P}_b' = \boldsymbol{T}_h \times \boldsymbol{T}_{eh} \times \boldsymbol{P}_{3d}' \tag{4}$$

where $\boldsymbol{P}_b'$ is $[x_b, y_b, z_b, 1]^{\mathrm{T}}$ and $\boldsymbol{P}_{3d}'$ is $[x_c, y_c, z_c, 1]^{\mathrm{T}}$.
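To make the chain from a stripe pixel to a robot base coordinate concrete, the following minimal Python sketch implements (1)-(4). The intrinsic, laser-plane, and extrinsic values are placeholders standing in for the calibrations of [37], [38], and [39], not the values used in this article.

```python
import numpy as np

# Placeholder calibration results: intrinsics (u0, v0, kx, ky) from [37],
# laser plane z = alpha*x + beta*y + gamma from [38].
u0, v0, kx, ky = 640.0, 512.0, 2400.0, 2400.0
alpha, beta, gamma = 0.05, -0.02, 350.0

def pixel_to_camera(u, v):
    """Eq. (3): back-project a laser-stripe pixel to the camera frame."""
    zc = gamma * kx * ky / (kx * ky + alpha * ky * (u0 - u) + beta * kx * (v0 - v))
    xc = zc * (u - u0) / kx
    yc = zc * (v - v0) / ky
    return np.array([xc, yc, zc, 1.0])        # homogeneous P'_3d

def camera_to_base(p3d_h, T_h, T_eh):
    """Eq. (4): camera frame -> base frame via the tool and hand-eye matrices."""
    return (T_h @ T_eh @ p3d_h)[:3]

# Identity extrinsics for illustration only; T_h comes from the robot
# controller and T_eh from the hand-eye calibration of [39].
T_h, T_eh = np.eye(4), np.eye(4)
print(camera_to_base(pixel_to_camera(700.0, 540.0), T_h, T_eh))
```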

III. WSFP Extraction Based on Deep Learning

Fast and accurate WSFP extraction is the premise of seam tracking and posture adjustment. As shown in Fig. 4, the WSFP extraction framework based on deep learning adopts the following steps: data collection, data augmentation, data annotation, model training, and model deployment.

A. Data Collection

In this article, the weld seam images consist of two parts. One part is from actual welding experiments, and the other part is from data augmentation.

B. Data Augmentation

As shown in Fig. 5, to improve the robustness of the Shuffle-YOLO model, some data augmentation algorithms, such as image flipping, noise adding, brightness adjustment, image filtering, and affine transformation, are selected to increase the number of images in the datasets.
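For illustration, a minimal OpenCV sketch of the five augmentations in Fig. 5 might look as follows; the noise level, brightness offset, kernel size, and rotation angle are assumed values rather than the settings used in this article.

```python
import cv2
import numpy as np

def augment(img):
    """Return one illustrative variant per augmentation named in Fig. 5."""
    flipped = cv2.flip(img, 1)                                   # (b) image flipping
    noisy = np.clip(img + np.random.normal(0, 10, img.shape),
                    0, 255).astype(np.uint8)                     # (c) Gaussian noise
    bright = cv2.convertScaleAbs(img, alpha=1.0, beta=40)        # (d) brightness adjustment
    filtered = cv2.medianBlur(img, 5)                            # (e) median filtering
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 5, 1.0)
    affine = cv2.warpAffine(img, M, (w, h))                      # (f) affine transformation
    return [flipped, noisy, bright, filtered, affine]
```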

Fig. 5. Weld seam data augmentation. (a) Raw image. (b) Image flipping. (c) Gaussian noise. (d) Brightness adjustment. (e) Median filtering. (f) Affine transformation.
Algorithm 1: Automatic Annotation of the Image Dataset.
    Unified naming of dataset images.
    Geometric feature point extraction [19].
    repeat
        WSFP extraction based on ECO [24].
        Save the coordinates of the WSFP to a csv file.
        Save the top, bottom, left, and right boundaries of the
        target box with the width of $T_w$ pixels into a csv file.
    until All the images are annotated
    Read data from a csv file and write to an xml file
Fig. 6. Structure of the Shuffle-YOLO model.

C. Data Annotation

Data annotation is divided into automatic annotation and manual annotation. Weld seam images that have weak arc radiation and spatters (about 90%) were automatically annotated, and the rest (about 10%) were manually annotated. To improve the annotation accuracy and quality, a new annotation method based on ECO is adopted, as shown in Algorithm 1.

D. Model Training

1) Network Structure: The structure of the Shuffle-YOLO model is shown in Fig. 6. Convolution (Conv), batch normalization (BN), and activation functions constitute the CBL module. The Focus module is an image slicing operation that divides and fuses images according to the pixel lattice. The CSP module mainly includes the CBL, Conv, and Bottleneck modules, which further enhances the integration ability of neural networks [29]. The Shuffle1 and Shuffle2 modules are feature extraction modules

from ShuffleNet-V2 [31], where the channel splitting operator divides the input channels into two branches to improve the performance of ShuffleNet, and the channel shuffle module ensures that information can be exchanged between the two merged branches. The DWConv module is a depthwise separable convolution. The loss function of Shuffle-YOLO is expressed as
$$L_{\text{box}} = \lambda_{\text{coord}} \sum_{i=0}^{s^2} \sum_{j=0}^{b} I_{i,j}^{\text{obj}} \left(2 - w_i \times h_i\right) \left[\left(x_i-\hat{x}_i^j\right)^2+\left(y_i-\hat{y}_i^j\right)^2+\left(w_i-\hat{w}_i^j\right)^2+\left(h_i-\hat{h}_i^j\right)^2\right] \tag{5}$$

$$L_{\text{cls}} = \lambda_{\text{class}} \sum_{i=0}^{s^2} \sum_{j=0}^{b} I_{i,j}^{\text{obj}} \sum_{c \in \text{classes}} p_i(c) \log\left(\hat{p}_i(c)\right) \tag{6}$$

$$L_{\text{obj}} = \lambda_n \sum_{i=0}^{s^2} \sum_{j=0}^{b} I_{i,j}^{n} \left(C_i-\hat{C}_i\right)^2 + \lambda_o \sum_{i=0}^{s^2} \sum_{j=0}^{b} I_{i,j}^{\text{obj}} \left(C_i-\hat{C}_i\right)^2 \tag{7}$$

$$\text{Loss} = L_{\text{box}} + L_{\text{cls}} + L_{\text{obj}} \tag{8}$$

where $L_{\text{box}}$ is the box loss (localization loss), $L_{\text{cls}}$ is the class loss (classification loss), and $L_{\text{obj}}$ is the object loss (confidence loss). $\lambda_{\text{coord}}$, $\lambda_{\text{class}}$, $\lambda_n$, and $\lambda_o$ are coefficients. $x$ and $y$ represent the coordinates of the prediction center point, $w$ and $h$ represent the width and height of the anchor box, $p_i(c)$ represents the probability that the target is of class $c$, and $C$ is the confidence. $\hat{x}, \hat{y}, \hat{w}, \hat{h}, \hat{p}_i(c), \hat{C}$ indicate the corresponding true values. $s$ indicates dividing the input image into $s \times s$ grids and predicting $b$ anchors in each grid. If the anchor box at $(i, j)$ contains the target, $I_{i,j}^{\text{obj}}$ is 1; otherwise, the value is 0. If the anchor box at $(i, j)$ does not contain the target, $I_{i,j}^{n}$ is 1; otherwise, the value is 0.
Compared to the recently widely used YOLOV5 model [29], Shuffle-YOLO reduces the network parameters from 7 068 936 to 445 160, the floating point operations (FLOPs) from 16.4G FLOPs to 2.4G FLOPs, and the model size from 14.1 to 1.29 MB.
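For reference, the sketch below shows a stride-1 ShuffleNet-V2-style unit in PyTorch with the channel split, depthwise convolution (DWConv), and channel shuffle described above; the channel counts are illustrative and not the exact Shuffle-YOLO configuration.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Interleave channels so the two merged branches can exchange information."""
    n, c, h, w = x.size()
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

class ShuffleUnit(nn.Module):
    """Stride-1 unit: split channels, transform one branch with
    1x1 -> depthwise 3x3 -> 1x1, concatenate, then channel-shuffle."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # DWConv
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                    # channel split
        return channel_shuffle(torch.cat((a, self.branch(b)), dim=1))

print(ShuffleUnit(64)(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```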

2) Shuffle-YOLO Model Training: Prepare the weld seam dataset, which includes the bmp weld seam images and the xml files obtained by data annotation. The Shuffle-YOLO model is trained from the weights of the Common Objects in Context dataset. Loading pretrained weights for network training helps to shorten the training time and achieve higher accuracy. Finally, the pt weight file is obtained by Shuffle-YOLO model training.

E. Model Deployment

First, the weight file is converted to a torchscript.pt file, and the weld seam image is converted to a tensor. Second, the C++ interface provided by LibTorch is used for image feature area prediction, and the center point of the predicted area is regarded as the image coordinate of the WSFP. According to (3) and (4), the coordinates of the WSFP in the robot BCS can be calculated.
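A minimal sketch of this workflow is given below. The export and inference are shown in Python via torch.jit for brevity; the deployment in this article uses the equivalent LibTorch C++ interface (torch::jit::load), and the small network here only stands in for the trained Shuffle-YOLO weights.

```python
import torch
import torch.nn as nn

# Stand-in for the trained Shuffle-YOLO network; in practice the pt weight
# file from training is loaded here.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()

# Export: trace to TorchScript with the 640x640 input size of Table III.
scripted = torch.jit.trace(model, torch.randn(1, 3, 640, 640))
scripted.save("torchscript.pt")

# Inference: load the TorchScript file and predict feature areas; the center
# of the predicted box is then taken as the WSFP image coordinate.
net = torch.jit.load("torchscript.pt")
with torch.no_grad():
    pred = net(torch.rand(1, 3, 640, 640))  # weld seam image as a tensor
print(pred.shape)
```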

Fig. 7. Seam tracking method of complex spatially curved weld seam.

IV. Seam Tracking and Posture Adjustment

Based on the accurate extraction of the WSFPs by the Shuffle-YOLO model, a seam tracking and posture adjustment method for complex spatially curved weld seams is proposed.

A. Seam Tracking

1) Nearest Neighbor Algorithm: As shown in Fig. 7, the distance between the laser stripe and the welding torch is called the leading distance (LD), and the number of WSFPs collected within the LD is $k$. The coordinates of the V-grooved WSFPs include the bottom feature points (BPs), left feature points (LPs), and right feature points (RPs). The $k$ BPs, LPs, and RPs are stored in containers named COB, COL, and COR, respectively. To improve computational efficiency, only the latest $k$ feature points are kept in each container. Let the current tool center point (TCP) coordinate of the robot be $P_t(x_t, y_t, z_t)$, and $P_i(x_i, y_i, z_i)$ be the $i$th WSFP in container COB. To accurately acquire the welding deviation of the robot, the Euclidean distance between $P_t$ and $P_i$ is calculated as

$$D(i)=\sqrt{(x_t-x_i)^2+(y_t-y_i)^2+(z_t-z_i)^2}. \tag{9}$$
Find the number $i$ that corresponds to the minimum value of $D(i)$. Point $P_i$ is the point nearest to the current position of the welding torch on the scanning path.
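A minimal sketch of this nearest-neighbor step over container COB, with illustrative coordinates, is as follows.

```python
import numpy as np

def nearest_feature_point(tcp, cob):
    """Eq. (9): index of the WSFP in COB closest to the current TCP.
    tcp: (3,) array; cob: (k, 3) array of the latest k bottom feature points."""
    return int(np.argmin(np.linalg.norm(cob - tcp, axis=1)))

cob = np.array([[0.0, 10.0, 5.0],
                [0.0, 12.0, 5.1],
                [0.1, 14.0, 5.2]])
print(nearest_feature_point(np.array([0.05, 12.4, 5.0]), cob))  # -> 1
```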

2) Cubic B-Spline Fitting for COB: The cubic B-spline fitting of the collected WSFPs makes the calculation of the seam tracking error more accurate. Four adjacent coordinate points $P_1(x_1,y_1,z_1)$, $P_2(x_2,y_2,z_2)$, $P_3(x_3,y_3,z_3)$, and $P_4(x_4,y_4,z_4)$ in container COB are selected for fitting

$$P_\tau = \frac{1}{6}\begin{bmatrix}\tau^3 & \tau^2 & \tau & 1\end{bmatrix}\begin{bmatrix}-1 & 3 & -3 & 1\\ 3 & -6 & 3 & 0\\ -3 & 0 & 3 & 0\\ 1 & 4 & 1 & 0\end{bmatrix}\begin{bmatrix}P_1\\ P_2\\ P_3\\ P_4\end{bmatrix} \tag{10}$$
where $0 \leq \tau \leq 1$. The component form is

$$\begin{cases} x(\tau)=\lambda_0+\lambda_1\tau+\lambda_2\tau^2+\lambda_3\tau^3\\ y(\tau)=\mu_0+\mu_1\tau+\mu_2\tau^2+\mu_3\tau^3\\ z(\tau)=\nu_0+\nu_1\tau+\nu_2\tau^2+\nu_3\tau^3 \end{cases} \tag{11}$$

Fig. 8. V-grooved WSFP. (a) Spatial view. (b) Section view.
where

$$\begin{bmatrix}\lambda_0 & \mu_0 & \nu_0\\ \lambda_1 & \mu_1 & \nu_1\\ \lambda_2 & \mu_2 & \nu_2\\ \lambda_3 & \mu_3 & \nu_3\end{bmatrix} = \begin{bmatrix}\frac{1}{6} & \frac{2}{3} & \frac{1}{6} & 0\\ -\frac{1}{2} & 0 & \frac{1}{2} & 0\\ \frac{1}{2} & -1 & \frac{1}{2} & 0\\ -\frac{1}{6} & \frac{1}{2} & -\frac{1}{2} & \frac{1}{6}\end{bmatrix}\begin{bmatrix}x_1 & y_1 & z_1\\ x_2 & y_2 & z_2\\ x_3 & y_3 & z_3\\ x_4 & y_4 & z_4\end{bmatrix}. \tag{12}$$
3) Calculation of Welding Deviation: $P_e(x_e, y_e, z_e)$ is the desired welding point coordinate, and $E_r(\triangle e_x, \triangle e_y, \triangle e_z)$ is the deviation between the TCP coordinate and $P_e$. Based on the nearest neighbor algorithm, $P_i$ is obtained. Therefore, the coefficients $\lambda_0, \lambda_1, \lambda_2, \lambda_3, \mu_0, \mu_1, \mu_2, \mu_3, \nu_0, \nu_1, \nu_2$, and $\nu_3$ of point $P_i$ are obtained from (12). Assuming that the main welding direction of the robot is the $y$-axis, the $y$-axis coordinate $y_t$ of point $P_t$ is substituted into $y(\tau)=\mu_0+\mu_1\tau+\mu_2\tau^2+\mu_3\tau^3$, and the value of parameter $\tau$ can be obtained and marked as $\tau_0$ ($0 \leq \tau_0 \leq 1$). Therefore, the value of $P_e$ is calculated:

$$x_e = x(\tau_0)=\lambda_0+\lambda_1\tau_0+\lambda_2\tau_0^2+\lambda_3\tau_0^3 \tag{13}$$

$$z_e = z(\tau_0)=\nu_0+\nu_1\tau_0+\nu_2\tau_0^2+\nu_3\tau_0^3. \tag{14}$$
Therefore, the welding deviation $E_r$ is

$$\begin{bmatrix}\triangle e_x\\ \triangle e_z\end{bmatrix} = \begin{bmatrix}x_e-x_t\\ z_e-z_t\end{bmatrix}. \tag{15}$$
Assuming that the main welding direction of the robot is the $x$-axis, the welding deviation $E_r$ is

$$\begin{bmatrix}\triangle e_y\\ \triangle e_z\end{bmatrix} = \begin{bmatrix}y_e-y_t\\ z_e-z_t\end{bmatrix}. \tag{16}$$
According to (15) and (16), the welding deviation is sent to the robot controller to offset the welding torch. In this way, the real-time and accurate seam tracking of complex spatially curved weld seams is realized.
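The following Python sketch ties (10)-(15) together for one spline segment: it builds the coefficients of (12) from four adjacent COB points, solves $y(\tau) = y_t$ for $\tau_0$, and returns the deviation of (15). It assumes the main welding direction is the $y$-axis, and the coordinates are illustrative.

```python
import numpy as np

# Uniform cubic B-spline basis of (10)/(12); row r gives the coefficients of tau^r.
M = np.array([[1, 4, 1, 0],
              [-3, 0, 3, 0],
              [3, -6, 3, 0],
              [-1, 3, -3, 1]]) / 6.0

def welding_deviation(tcp, p4):
    """tcp: (3,) TCP coordinate; p4: (4, 3) adjacent WSFPs from container COB."""
    c = M @ p4                                   # columns: x(tau), y(tau), z(tau)
    # Solve y(tau) = y_t on [0, 1]; np.roots expects highest power first.
    roots = np.roots([c[3, 1], c[2, 1], c[1, 1], c[0, 1] - tcp[1]])
    tau0 = next(r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0)
    t = np.array([1.0, tau0, tau0**2, tau0**3])
    xe, ze = t @ c[:, 0], t @ c[:, 2]            # eqs. (13) and (14)
    return xe - tcp[0], ze - tcp[2]              # eq. (15): (delta e_x, delta e_z)

p4 = np.array([[0.0, 0.0, 5.0], [0.1, 2.0, 5.1], [0.2, 4.0, 5.0], [0.4, 6.0, 4.9]])
print(welding_deviation(np.array([0.10, 3.0, 5.05]), p4))
```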

B. Posture Adjustment

1) Establishment of the Weld Seam Posture Model: As shown in Fig. 8, the feature points of the V-grooved weld seam include $LP$, $BP$, and $RP$. Vectors $\overrightarrow{BP_iLP_i}$ and $\overrightarrow{BP_iRP_i}$ constitute vectors $v_{l_i}$ and $v_{r_i}$, respectively. Vector $v_{m_i}$ is the unit vector in the direction of the sum of $v_{l_i}$ and $v_{r_i}$

$$v_{m_i} = \frac{v_{l_i}+v_{r_i}}{\left|v_{l_i}+v_{r_i}\right|}. \tag{17}$$
The posture model is shown in Fig. 9, where $P_i(x_i, y_i, z_i)$ are the coordinates of the point nearest to the TCP on the scanning path, $o_i$ is the direction vector of point $P_i$, $a_i$ is the approach vector of point $P_i$, and $n_i$ is the normal vector of point $P_i$. Vectors $o_i$, $a_i$, and $n_i$ form the desired coordinate system of the welding

Fig. 9. Posture model.
Algorithm 2: Convert $R_d$ to Euler angles $\psi, \theta, \phi$.
    Input: $R_d$;
    Output: $\psi, \theta, \phi$;
    if $n_{d_{iz}} \neq \pm 1$ then
        $\theta = -\arcsin(n_{d_{iz}})$
        $\psi = \arctan2\left(o_{d_{iz}}/\cos(\theta),\ a_{d_{iz}}/\cos(\theta)\right)$
        $\phi = \arctan2\left(n_{d_{iy}}/\cos(\theta),\ n_{d_{ix}}/\cos(\theta)\right)$
    else
        $\phi = 0$; can be anything.
        if $n_{d_{iz}} == -1$ then
            $\theta = \pi/2$, $\psi = \phi + \arctan2(o_{d_{ix}}, a_{d_{ix}})$.
        else
            $\theta = -\pi/2$, $\psi = -\phi + \arctan2(-o_{d_{ix}}, -a_{d_{ix}})$.
        end if
    end if

Fig. 10. Teaching posture and desired posture.
torch. $o_i$ can be obtained from the tangent direction of the $i$th sampling point of the spatially curved weld seam

$$o_i = \frac{\frac{\partial p_{i_x}}{\partial t} i + \frac{\partial p_{i_y}}{\partial t} j + \frac{\partial p_{i_z}}{\partial t} k}{\left|\frac{\partial p_{i_x}}{\partial t} i + \frac{\partial p_{i_y}}{\partial t} j + \frac{\partial p_{i_z}}{\partial t} k\right|}. \tag{18}$$
The approach vector $a_i$ is calculated as follows:

$$a_i = \frac{v_{m_i} - \left(v_{m_i} \cdot o_i\right) o_i}{\left|v_{m_i} - \left(v_{m_i} \cdot o_i\right) o_i\right|}. \tag{19}$$
The normal vector $n_i$ is calculated as follows:

$$n_i = o_i \times a_i. \tag{20}$$
2) Calculation of Posture Deviation: As shown in Fig. 10, the teaching posture of the welding torch can be expressed by $n_{t_i}$, $o_{t_i}$, and $a_{t_i}$, and the desired posture of the welding torch can be expressed by $n_{d_i}$, $o_{d_i}$, and $a_{d_i}$, where $n_{d_i}$, $o_{d_i}$, and $a_{d_i}$ can be calculated by (18)-(20) from points $BP_i$, $LP_i$, and $RP_i$,

Fig. 11. Experimental system.
TABLE I
Experimental Equipment

| Equipment | Model | Equipment | Model |
| :--- | :---: | :--- | :---: |
| Industrial computer | ZOTAC | Robot controller | DX200 |
| Visual sensor | Self-developed | Wire feeder | Kaierda |
| Welding robot | MA1440 | Welding machine | RD500S |
| Welding material | Q235 | Shielding gas | CO2+Ar |
respectively. The desired rotation matrix $R_d$ is

$$R_d = \begin{bmatrix} n_{d_i} & o_{d_i} & a_{d_i} \end{bmatrix} = \begin{bmatrix} n_{d_{ix}} & o_{d_{ix}} & a_{d_{ix}} \\ n_{d_{iy}} & o_{d_{iy}} & a_{d_{iy}} \\ n_{d_{iz}} & o_{d_{iz}} & a_{d_{iz}} \end{bmatrix}. \tag{21}$$
Through $R_d$, the Euler angles $\psi$, $\theta$, and $\phi$ in the robot BCS can be calculated by Algorithm 2. Assume that the current robot posture $R_t$ is $(R_{t_x}, R_{t_y}, R_{t_z})$. Therefore, the real-time welding posture deviations $\Delta R_x$, $\Delta R_y$, and $\Delta R_z$ are as follows:

$$\begin{bmatrix} \Delta R_x \\ \Delta R_y \\ \Delta R_z \end{bmatrix} = \begin{bmatrix} R_{t_x} - \psi \\ R_{t_y} - \theta \\ R_{t_z} - \phi \end{bmatrix}. \tag{22}$$
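A minimal sketch of (17)-(22) is given below: it builds $R_d = [n\ o\ a]$ from one feature-point triple and a given tangent $o_i$, then extracts the Euler angles with the regular branch of Algorithm 2. The geometry is illustrative, and the singular branch ($n_{d_{iz}} = \pm 1$) is omitted for brevity.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def desired_posture(bp, lp, rp, o_i):
    """Eqs. (17), (19)-(21): desired rotation matrix R_d = [n o a].
    o_i is the unit tangent of the fitted seam curve at this point, eq. (18)."""
    v_m = unit(unit(lp - bp) + unit(rp - bp))            # eq. (17)
    a_i = unit(v_m - np.dot(v_m, o_i) * o_i)             # eq. (19)
    n_i = np.cross(o_i, a_i)                             # eq. (20)
    return np.column_stack((n_i, o_i, a_i))              # eq. (21)

def euler_from_Rd(Rd):
    """Regular branch of Algorithm 2: psi, theta, phi from R_d = [n o a]."""
    n, o, a = Rd[:, 0], Rd[:, 1], Rd[:, 2]
    theta = -np.arcsin(n[2])
    psi = np.arctan2(o[2] / np.cos(theta), a[2] / np.cos(theta))
    phi = np.arctan2(n[1] / np.cos(theta), n[0] / np.cos(theta))
    return psi, theta, phi

bp = np.array([0.0, 0.0, 0.0])                                   # BP_i
lp, rp = np.array([-5.0, 0.0, 4.0]), np.array([5.0, 0.0, 4.0])   # LP_i, RP_i
Rd = desired_posture(bp, lp, rp, o_i=np.array([0.0, 1.0, 0.0]))
print(euler_from_Rd(Rd))   # subtract from the current R_t as in eq. (22)
```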
3) Automatic Posture Adjustment: According to (22), the posture deviation is sent to the robot controller for real-time posture adjustment. The MotoPlus program runs in the robot controller DX200, and the robot continuously compensates the posture deviations to realize the automatic adjustment of the desired welding posture.

V. Experiments and Results

A. Experiment Setup

As shown in Fig. 11, the experimental system mainly includes the industrial computer, laser visual sensor, welding robot, robot controller, and welding machine. The ZOTAC industrial computer features an Intel i7-10700 CPU and an NVIDIA GeForce RTX-3070 GPU. The laser vision sensor, industrial computer, and robot controller communicate via Ethernet. The experimental equipment used in the experiments is shown in Table I. The experimental workpieces include workpieces with a V-grooved joint, a lap joint, and a fillet joint. According to the welding manual and experimental tests, the welding parameters with better welding effects are shown in Table II.
TABLE II
Welding Parameters of Different Workpieces

| Workpiece type | V-grooved joint | Lap joint | Fillet joint |
| :--- | :---: | :---: | :---: |
| Welding speed (mm/s) | 8 | 5 | 5 |
| Welding voltage (V) | 22 | 24 | 24 |
| Welding current (A) | 200 | 230 | 235 |
TABLE III
Training Parameters of the Shuffle-YOLO Model

| Parameters | Values | Parameters | Values |
| :--- | :---: | :--- | :---: |
| Image size | 640 × 640 | Batch size | 8 |
| Epochs | 400 | Initial learning rate | 0.01 |
| Box loss gain | 0.05 | IoU training threshold | 0.20 |

Fig. 12. Training results. (a) Loss curves. (b) Precision and mAP 0.5 curves.

Fig. 13. Extraction results of WSFP by Shuffle-YOLO.
The operating system running on the industrial computer is Windows 11. Since Windows is not a real-time operating system, the real-time extension suite Kithara was introduced to ensure real-time performance. As a real-time system, the industrial computer sends measurement data to the robot controller every 30 ms, and the control cycle of the robot is 40 ms.

B. Training of the Shuffle-YOLO Model

1) Model Training: 500 raw weld seam images were collected during robot welding. Then, 2500 weld seam images were obtained by data augmentation, where 500 images are from noise adding, 500 images are from image filtering, 500 images are from image flipping, 500 images are from brightness adjustment, and 500 images are from affine transformation. The dataset with 3000 images was divided into training and test datasets in a ratio of 9:1. The training parameters are shown in Table III.
2) Training Results: As shown in Fig. 12, after 400 epochs of training, the box loss reaches 0.010, the object loss reaches 0.003, and the class loss reaches 0.0003. The mean average precision (mAP) 0.5 and the precision of all classes reach 0.996 and 0.999, respectively. The extraction results of the V-grooved spatially curved weld seam are shown in Fig. 13.

Fig. 14. Welding torch trajectory of the V-grooved joint. (a) Spatial view. (b) Front view. (c) Side view.

Fig. 15. Welding torch trajectory of the lap joint. (a) Spatial view. (b) Front view. (c) Side view.

Fig. 16. Welding torch trajectory of the fillet joint. (a) Spatial view. (b) Front view. (c) Side view.

Fig. 17. Seam tracking errors of the V-grooved joint. (a) $x$-axis. (b) $z$-axis.

Fig. 18. Seam tracking errors of the lap joint. (a) $x$-axis. (b) $z$-axis.

C. Test of Seam Tracking Performance

We carried out 15 groups of seam tracking experiments. Since the results of each experiment are consistent, only one group of welding torch trajectories is analyzed. The seam tracking trajectories of the V-grooved joint, lap joint, and fillet joint under different perspectives are shown in Figs. 14-16, respectively. The blue lines are teaching trajectories, the green lines are desired welding trajectories, and the red lines are seam tracking trajectories. First, the workpiece is taught manually to obtain the

Fig. 19. Seam tracking errors of the fillet joint. (a) $x$-axis. (b) $z$-axis.
TABLE IV
Seam Tracking Error of the V-Grooved Joint

| Axis | Max error /mm | Mean error /mm | RMSE /mm |
| :---: | :---: | :---: | :---: |
| X-axis | 0.801 | 0.281 | 0.195 |
| Z-axis | 0.775 | 0.263 | 0.169 |
TABLE V
Seam Tracking Error of the Lap Joint

| Axis | Max error /mm | Mean error /mm | RMSE /mm |
| :---: | :---: | :---: | :---: |
| X-axis | 0.838 | 0.377 | 0.428 |
| Z-axis | 0.734 | 0.282 | 0.337 |
TABLE VI
Seam Tracking Error of the Fillet Joint

| Axis | Max error /mm | Mean error /mm | RMSE /mm |
| :---: | :---: | :---: | :---: |
| X-axis | 0.565 | 0.236 | 0.245 |
| Z-axis | 0.452 | 0.214 | 0.196 |

Fig. 20. Spatially curved welding. (a) V-grooved joint. (b) Lap joint. (c) Fillet joint. (d) Welding effect of the V-grooved joint. (e) Welding effect of the lap joint. (f) Welding effect of the fillet joint.
teaching trajectory. Then, the workpiece is moved artificially. The offset workpiece is taught manually, and the TCP coordinates are recorded in real time as the desired welding trajectory. Finally, the weld seam is tracked through the teaching program, and the real-time coordinates of the robot's TCP are recorded as the seam tracking trajectory.
The seam tracking error is the deviation between the desired trajectory and the tracking trajectory. As shown in Figs. 17-19 and Tables IV-VI, the max error is no higher than 0.84 mm, the mean error is no higher than 0.38 mm, and the root-mean-square error (RMSE) is less than 0.43 mm. The workpieces and the effect of seam tracking are shown in Fig. 20.
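For reference, the per-axis statistics reported in Tables IV-VI can be reproduced from the recorded TCP coordinates with a few lines of NumPy. The following is a minimal sketch, assuming the desired and tracking trajectories have already been resampled to matching sample points along the seam; the names `tracking_errors`, `desired_xyz`, and `tracked_xyz` are illustrative, not the article's code.

```python
import numpy as np

def tracking_errors(desired: np.ndarray, tracked: np.ndarray) -> dict:
    """Per-axis max error, mean error, and RMSE between two trajectories.

    Both inputs are N x 3 arrays of TCP coordinates (x, y, z in mm),
    resampled so that row i of each array refers to the same point
    along the seam (the resampling step is not shown here).
    """
    err = np.abs(tracked - desired)               # N x 3 absolute deviations
    return {
        "max": err.max(axis=0),                   # worst-case deviation per axis
        "mean": err.mean(axis=0),                 # average deviation per axis
        "rmse": np.sqrt((err ** 2).mean(axis=0)),
    }

# Hypothetical usage with the recorded trajectories:
# stats = tracking_errors(desired_xyz, tracked_xyz)
# print(stats["max"][0], stats["mean"][0], stats["rmse"][0])  # x-axis values
```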
As shown in Fig. 21, the metallographic structure of the V-grooved joint, lap joint, and fillet joint shows that the chamfer

Fig. 21. Metallographic analysis. (a) V-grooved joint. (b) Lap joint. (c) Fillet joint.

Fig. 22. Posture adjustment of the welding torch. (a) Rx. (b) Ry. (c) Rz.

Fig. 23. Posture adjustment error of the welding torch.
TABLE VII
Error of Posture Adjustment

| Axis | Max error /° | Mean error /° | RMSE /° |
| :---: | :---: | :---: | :---: |
| Rx | 1.523 | 0.742 | 0.811 |
| Ry | 1.667 | 0.723 | 0.830 |
| Rz | 1.442 | 0.709 | 0.845 |
filling is satisfactory. The results show that the method proposed in this article can ensure the quality of robot welding.

D. Test of Posture Adjustment Performance

We carried out 15 groups of posture adjustment experiments on the V-grooved workpiece with complex spatial curves. Since the results of each experiment are consistent, only one group of welding torch postures is analyzed, as shown in Fig. 22. The postures of 20 points were collected and analyzed. The blue lines are the teaching postures, the green lines are the desired welding postures, and the red lines are the seam tracking postures. First, the workpiece is laid flat, and the welding robot is taught to obtain the teaching posture. Then, one end of the workpiece is offset and raised. During welding, the torch posture is automatically adjusted, and the tracking posture is recorded.
The posture adjustment error is the deviation between the desired posture and the tracking posture. The desired posture can be calculated by (21). As shown in Fig. 23 and Table VII, the max posture adjustment error is less than $1.67^{\circ}$, the mean error is less than $0.75^{\circ}$, and the RMSE is less than $0.85^{\circ}$. Experimental results show that the proposed posture adjustment method can accurately adjust the robot's welding postures.
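The angle statistics in Table VII can be computed analogously to the positional errors, with one extra step: angle differences should be wrapped so that values on either side of ±180° compare correctly. A minimal sketch, assuming both postures are recorded as (Rx, Ry, Rz) angles in degrees at matching sample points (the function name is illustrative):

```python
import numpy as np

def posture_errors(desired_rpy: np.ndarray, tracked_rpy: np.ndarray) -> dict:
    """Per-axis max error, mean error, and RMSE of torch orientation.

    Inputs are N x 3 arrays of (Rx, Ry, Rz) angles in degrees, sampled
    at the same N points along the seam. Differences are wrapped into
    [-180, 180) so that, e.g., 179 deg vs. -179 deg counts as 2 deg.
    """
    diff = tracked_rpy - desired_rpy
    diff = (diff + 180.0) % 360.0 - 180.0    # wrap to [-180, 180)
    err = np.abs(diff)
    return {
        "max": err.max(axis=0),
        "mean": err.mean(axis=0),
        "rmse": np.sqrt((err ** 2).mean(axis=0)),
    }
```

Comparing full rotation matrices via the geodesic angle is an alternative that avoids Euler-angle ambiguities; the per-axis form above matches how Table VII is reported.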

Fig. 24. WSFP extraction. (a) Shuffle-YOLO model. (b) Geometric feature method. (c) ECO method.

Fig. 25. Irregular weld seam extraction. (a) Irregular lap weld seam. (b) Shuffle-YOLO. (c) Geometric method. (d) ECO. (e) Irregular fillet weld seam. (f) Shuffle-YOLO. (g) Geometric method. (h) ECO.

E. Comparative Experiments

1. Robustness Performance of WSFP Extraction: As shown in Fig. 24(a) and (b), the Shuffle-YOLO model can work well despite strong arc radiation and spatters. However, the geometric-based method [20] shows detection deviation under strong arc radiation and spatters. As shown in Fig. 24(c), the ECO method [24] is effective under weak noise. However, it may not work well under strong arc radiation and spatters.
As shown in Fig. 25(a) and (e), some irregular weld seams are difficult to extract because they cannot be readily represented by geometric features. As shown in Fig. 25(c) and (g), the geometric-based method [20] extracts them inaccurately. As shown in Fig. 25(d) and (h), the ECO-based method [24] suffers from a positional shift when the shape of the weld seam changes. In contrast, the Shuffle-YOLO model can extract irregular weld seams accurately, as shown in Fig. 25(b) and (f), which improves the robustness and adaptability of seam tracking.
TABLE VIII
WSFP Extraction Frame Rate

| Methods | Ours | In [20] | In [24] | Meta | Servo-Robot |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Frame Rate (Hz) | 105.26 | 35.70 | 52.91 | 25 | 30 |
TABLE IX
Comparison Experiment With Deep Learning Models

| Method | Model Size /MB | Detection Time /ms | Mean Error /pixel | RMSE /pixel |
| :--- | :---: | :---: | :---: | :---: |
| CenterNet [25] | 124 | 57.11 | 3.651 | 1.982 |
| Faster R-CNN [26] | 108 | 113.36 | 6.974 | 4.748 |
| RetinaNet [27] | 139 | 77.06 | 3.797 | 1.700 |
| EfficientDet [28] | 15 | 75.72 | 4.325 | 1.980 |
| YOLOv5 [29] | 14.10 | 15.96 | 2.416 | 1.821 |
| MobileNetV3 [30] | 7.03 | 11.30 | 2.160 | 1.140 |
| SSD [33] | 91.63 | 33.91 | 5.759 | 1.232 |
| Shuffle-YOLO | 1.29 | 9.50 | 2.089 | 1.096 |
TABLE X
Comparison Experiment With Non-Deep-Learning Methods

| Method | Detection Time /ms | Mean Error /pixel | RMSE /pixel |
| :--- | :---: | :---: | :---: |
| Geometric method [20] | 28.01 | 3.831 | 3.924 |
| ECO method [24] | 18.90 | 3.508 | 1.387 |
| Shuffle-YOLO | 9.50 | 2.089 | 1.096 |
2. Efficiency Performance of WSFP Extraction: To test the efficiency of the Shuffle-YOLO model, the execution time of each algorithm is compared. The experimental platform is equipped with a 10700 CPU, an RTX 3070 GPU, and 32 GB of RAM. The efficiency of the Shuffle-YOLO model is compared with that of the weld seam extraction algorithms proposed in [20] and [24]. In addition, two commercial platforms widely used in seam tracking are also considered: Meta and Servo-Robot. The experimental comparison results are presented in Table VIII.
Experimental results show that the weld seam extraction frequency of the Shuffle-YOLO model reaches 105 Hz, which is much higher than that of Meta and Servo-Robot. The Shuffle-YOLO model is 2.94 times faster than the geometric feature method [20] and 1.98 times faster than the target tracking method [24]. The Shuffle-YOLO model therefore has an obvious advantage in the speed of WSFP extraction.
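Frame rates of this kind are typically measured by timing repeated inference on a fixed input after a warm-up phase. The sketch below shows one way to do this with PyTorch; it is an illustration, not the benchmarking code used in the article, and `model` and `image` stand for the trained detector and a preprocessed laser-stripe frame.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, image: torch.Tensor, warmup: int = 20, runs: int = 200) -> float:
    """Average inference frequency (Hz) of a detector on a single frame.

    Warm-up iterations let the GPU reach a steady state; CUDA work is
    synchronized before reading the clock so the timing is not skewed
    by asynchronous kernel launches.
    """
    model.eval()
    for _ in range(warmup):
        model(image)
    if image.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(image)
    if image.is_cuda:
        torch.cuda.synchronize()
    return runs / (time.perf_counter() - start)
```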

F. Discussion  F. 讨论

We designed and developed a WSFP extraction framework based on deep learning. In this framework, target detection networks, such as CenterNet, Faster R-CNN, RetinaNet, EfficientDet, SSD, and YOLO, can also be used to obtain WSFPs. To reduce the model size and improve the detection speed, the Shuffle-YOLO model was proposed. To verify the performance of the proposed model, the Shuffle-YOLO model was compared with existing deep-learning-based methods and classical (non-deep-learning) methods. As shown in Fig. 26 and Table IX, compared with the existing deep learning models, the Shuffle-YOLO model has a smaller size and higher detection speed and accuracy. As shown in Table X, compared with non-deep-learning methods, the Shuffle-YOLO

Fig. 26. Comparison of the WSFP extraction.
model presents better feature point extraction speed and accuracy. In summary, compared with the existing weld seam extraction methods (deep learning methods and classical methods) and commercial products (Meta and Servo-Robot), the proposed Shuffle-YOLO shows higher robustness and a more satisfying detection speed. In addition, a real-time seam tracking and posture adjustment method for V-grooved spatially curved weld seams is proposed, which is also applicable to weld types such as the lap joint and fillet joint when appropriate WSFPs are selected.
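In such a framework, the WSFP is obtained directly from the detector output. The following minimal sketch assumes a YOLO-style detector that returns (x1, y1, x2, y2, confidence, class) boxes and takes the center of the highest-confidence box per class as the feature point; the function name and array layout are assumptions, not the article's exact implementation.

```python
import numpy as np

def boxes_to_wsfp(detections: np.ndarray, conf_thresh: float = 0.5) -> dict:
    """Convert detector output to WSFP pixel coordinates.

    `detections` is an M x 6 array with rows (x1, y1, x2, y2,
    confidence, class_id). For each class, the center of the
    highest-confidence box is taken as that class's feature point.
    """
    points = {}  # class_id -> (u, v, confidence)
    for x1, y1, x2, y2, conf, cls in detections:
        if conf < conf_thresh:
            continue
        cls = int(cls)
        if cls not in points or conf > points[cls][2]:
            points[cls] = ((x1 + x2) / 2.0, (y1 + y2) / 2.0, conf)
    return {c: (u, v) for c, (u, v, _) in points.items()}
```

The resulting pixel coordinates would then be mapped to 3-D seam positions through the calibrated laser plane and hand-eye transformation.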
To sum up, compared with the existing WSFP extraction methods, the Shuffle-YOLO network has the following advantages. First, it has high robustness and strong anti-interference performance: the Shuffle-YOLO model can work properly despite strong arc radiation and spatters, because the deep neural network directly extracts high-dimensional features from the weld seam image and accurately extracts the WSFP coordinates. Second, the detection speed is fast: the frequency of the Shuffle-YOLO-based method reaches 105 Hz, and the higher WSFP extraction speed enables welding robots to adapt to faster seam tracking. Third, the Shuffle-YOLO-based method does not need preset welding feature parameters, which makes it convenient to use; for new weld seam types, WSFPs can be extracted quickly and accurately once a dataset has been established, annotated, and used for training. More importantly, the Shuffle-YOLO model can be continuously updated. Therefore, the extraction

performance of WSFP can be continuously improved by training on new weld seam datasets.

VI. Conclusion  六、结论

In this article, an efficient and robust complex WSFP extraction method for seam tracking and posture adjustment based on the Shuffle-YOLO model was proposed. The main conclusions are as follows.
  1. A WSFP extraction framework based on deep learning was designed and developed. In this framework, the target detection network based on deep learning is directly used to extract WSFPs.
2. The Shuffle-YOLO model can quickly and accurately extract feature points of different weld seams. The model has strong feature extraction ability and can work well despite strong arc radiation and spatters. In addition, the model is also suitable for lap weld seams, fillet weld seams, and irregular weld seams.
  3. The proposed seam tracking and posture adjustment method can not only realize the high-precision seam tracking of the complex spatially curved weld seam but also automatically adjust the posture of the welding torch to guarantee the welding quality.
4. Through experiments, the frequency of the Shuffle-YOLO-based algorithm reaches 105 Hz, and the mean detection error is less than 2.1 pixels. The mean error of seam tracking is not higher than 0.38 mm, and the mean error of posture adjustment is less than $0.75^{\circ}$. Experimental results show that the proposed method can realize high-quality welding of complex spatially curved weld seams.

Welding quality control mainly includes welding parameter optimization and welding deformation reduction, which is also an important aspect of intelligent welding. Building on the application of the deep-learning-based WSFP extraction method to seam tracking and posture adjustment, we will continue to study welding parameter optimization methods and welding deformation reduction methods based on the finite-element model. In the future, we hope to combine seam tracking with welding quality control to further improve the intelligence level of robot welding.

References

[1] S. Zhang, S. Wang, F. Jing, and M. Tan, "A sensorless hand guiding scheme based on model identification and control for industrial robot," IEEE Trans. Ind. Informat., vol. 15, no. 9, pp. 5204-5213, Sep. 2019.

[2] S. Wei, H. Ma, T. Lin, and S. Chen, "Autonomous guidance of initial welding position with 'single camera and double positions' method," Sens. Rev., vol. 30, no. 1, pp. 62-68, 2010.

[3] P. Kiddee et al., "Point cloud based three-dimensional reconstruction and identification of initial welding position," in Transactions on Intelligent Welding Manufacturing. Singapore: Springer, 2018, pp. 61-77.

[4] J. Fan et al., "An initial point alignment and seam-tracking system for narrow weld," IEEE Trans. Ind. Informat., vol. 16, no. 2, pp. 877-886, Feb. 2020.

[5] A. Olabi, R. Lostado, and K. Benyounis, "Review of microstructures, mechanical properties, and residual stresses of ferritic and martensitic stainless-steel welded joints," in Comprehensive Materials Processing. Oxford, U.K.: Elsevier, 2014, pp. 181-192.

[6] R. L. Lorza, R. E. García, M. N. M. Calvo, and R. M. Vidal, "Improvement in the design of welded joints of EN 235JR low carbon steel by multiple response surface methodology," Metals, vol. 6, no. 9, 2016, Art. no. 205.

[7] R. L. Lorza, M. C. Bobadilla, M. N. M. Calvo, and P. V. Roldán, "Residual stresses with time-independent cyclic plasticity in finite element analysis of welded joints," Metals, vol. 7, 2017, Art. no. 136.

[8] R. L. Lorza, R. E. García, R. F. Martinez, and M. N. M. Calvo, "Using genetic algorithms with multi-objective optimization to adjust finite element models of welded joints," Metals, vol. 8, no. 4, 2018, Art. no. 230.

[9] R. Lostado, R. F. Martinez, B. M. Donald, and P. Villanueva, "Combining soft computing techniques and the finite element method to design and optimize complex welded products," Integr. Comput.-Aided Eng., vol. 22, pp. 153-170, 2015.

[10] J. Sun, C. Li, X.-J. Wu, V. Palade, and W. Fang, "An effective method of weld defect detection and classification based on machine vision," IEEE Trans. Ind. Informat., vol. 15, no. 12, pp. 6322-6333, Dec. 2019.

[11] Y. Feng, Z. Chen, D. Wang, J. Chen, and Z. Feng, "DeepWelding: A deep learning enhanced approach to GTAW using multisource sensing images," IEEE Trans. Ind. Informat., vol. 16, no. 1, pp. 465-474, Jan. 2020.

[12] A. Rout, B. Deepak, and B. Biswal, "Advances in weld seam tracking techniques for robotic welding: A review," Robot. Comput. Integr. Manuf., vol. 56, pp. 12-37, 2019.

[13] W. Ren, G. Wen, B. Xu, and Z. Zhang, "A novel convolutional neural network based on time-frequency spectrogram of arc sound and its application on GTAW penetration classification," IEEE Trans. Ind. Informat., vol. 17, no. 2, pp. 809-819, Feb. 2021.

[14] Z. Wang, "Unsupervised recognition and characterization of the reflected laser lines for robotic gas metal arc welding," IEEE Trans. Ind. Informat., vol. 13, no. 4, pp. 1866-1876, Aug. 2017.

[15] A. Weis et al., "Automated seam tracking system based on passive monocular vision for automated linear robotic welding process," in Proc. IEEE 15th Int. Conf. Ind. Informat., 2017, pp. 305-310.

[16] Y. Ma et al., "A fast and robust seam tracking method for spatial circular weld based on laser visual sensor," IEEE Trans. Instrum. Meas., vol. 70, 2021, Art. no. 5015311.

[17] Y. Ding, W. Huang, and R. Kovacevic, "An on-line shape-matching weld seam tracking system," Robot. Comput. Integr. Manuf., vol. 42, pp. 103-112, 2016.

[18] Y. Zou, X. Chen, G. Gong, and J. Li, "A seam tracking system based on a laser vision sensor," Measurement, vol. 127, pp. 489-500, 2018.

[19] Y. Ma et al., "Efficient and accurate start point guiding and seam tracking method for curve weld based on structure light," IEEE Trans. Instrum. Meas., vol. 70, 2021, Art. no. 3001310.

[20] R. Xiao, Y. Xu, Z. Hou, C. Chen, and S. Chen, "An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding," Sens. Actuators A: Phys., vol. 297, 2019, Art. no. 111533.

[21] X. Li, X. Li, S. S. Ge, M. O. Khyam, and C. Luo, "Automatic welding seam tracking and identification," IEEE Trans. Ind. Electron., vol. 64, no. 9, pp. 7261-7271, Sep. 2017.

[22] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-learning-detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 7, pp. 1409-1422, Jul. 2012.

[23] Y. Zou and T. Chen, "Laser vision seam tracking system based on image processing and continuous convolution operator tracker," Opt. Lasers Eng., vol. 105, pp. 141-149, 2018.

[24] J. Fan, S. Deng, Y. Ma, C. Zhou, F. Jing, and M. Tan, "Seam feature point acquisition based on efficient convolution operator and particle filter in GMAW," IEEE Trans. Ind. Informat., vol. 17, no. 2, pp. 1220-1230, Feb. 2021.

[25] X. Zhou, V. Koltun, and P. Krähenbühl, "Tracking objects as points," in Proc. Eur. Conf. Comput. Vis., 2020, pp. 474-490.

[26] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137-1149, Jun. 2017.

[27] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 2999-3007.

[28] M. Tan, R. Pang, and Q. V. Le, "EfficientDet: Scalable and efficient object detection," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 10778-10787.

[29] Ultralytics, "YOLOv5," 2021. [Online]. Available: https://github.com/ultralytics/yolov5

[30] A. Howard et al., "Searching for MobileNetV3," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 1314-1324.

[31] N. Ma, X. Zhang, H. T. Zheng, and J. Sun, "ShuffleNet V2: Practical guidelines for efficient CNN architecture design," in Proc. Eur. Conf. Comput. Vis., 2018, pp. 116-131.

[32] Y. Zou, R. Lan, X. Wei, and J. Chen, "Robust seam tracking via a deep learning framework combining tracking and detection," Appl. Opt., vol. 59, no. 14, pp. 4321-4331, 2020.

[33] Y. Zou, M. Zhu, and X. Chen, "A robust detector for automated welding seam tracking system," J. Dyn. Syst. Meas. Control, vol. 143, no. 7, 2021, Art. no. 071001.

[34] H. Lee, C. Ji, and J. Yu, "Effects of welding current and torch position parameters on bead geometry in cold metal transfer welding," J. Mech. Sci. Technol., vol. 32, no. 9, pp. 4335-4343, 2018.

[35] L. Yang, Y. Liu, J. Peng, and Z. Liang, "A novel system for off-line 3D seam extraction and path planning based on point cloud segmentation for arc welding robot," Robot. Comput. Integr. Manuf., vol. 64, 2020, Art. no. 101929.

[36] Y. Zou, J. Chen, and X. Wei, "Research on a real-time pose estimation method for a seam tracking system," Opt. Lasers Eng., vol. 127, 2020, Art. no. 105947.

[37] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330-1334, Nov. 2000.

[38] J. Fan, F. Jing, Z. Fang, and Z. Liang, "A simple calibration method of structured light plane parameters for welding robots," in Proc. IEEE 35th Chin. Control Conf., 2016, pp. 6127-6132.

[39] R. Y. Tsai and R. K. Lenz, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Trans. Robot. Autom., vol. 5, no. 3, pp. 345-358, Jun. 1989.

Yunkai Ma received the B.S. degree in intelligent science and technology from Qingdao University, Qingdao, China, in 2017, and the M.S. degree in control engineering from the Harbin Institute of Technology, Harbin, China, in 2019. He is currently working toward the Ph.D. degree in technology of computer applications with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.
His research interests include machine vision and welding automation.

Junfeng Fan received the B.S. degree in mechanical engineering and automation from the Beijing Institute of Technology, Beijing, China, in 2014, and the Ph.D. degree in control theory and control engineering from the Institute of Automation, Chinese Academy of Sciences (IACAS), Beijing, in 2019.
He is currently an Associate Professor with the State Key Laboratory of Management and Control for Complex Systems, IACAS. His research interests include industrial robotics.

Huizhen Yang received the B.S. degree in automation and the M.S. and Ph.D. degrees in control theory and control engineering from Northwestern Polytechnical University, Xi’an, China, in 1995, 1998, and 2005, respectively.
From 2008 to 2009, she was a Visiting Scholar with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA. She is currently an Associate Professor with the School of Marine Science and Technology, Northwestern Polytechnical University. Her research interests include cooperative target tracking by multiple autonomous underwater vehicles.

Hongliang Wang received the junior college degree in welding technology and automation from Lanzhou Petrochemical Vocational and Technical College, Lanzhou, China, in 2010, and the B.S. degree in electrical engineering and automation from Ningxia University, Yinchuan, China, in 2019.
He is a Senior Welder Technician, a Chief Technician of Shanghai, and the owner of an innovation studio in Jiading District, Shanghai. He has been a Commissioning Engineer with Yaskawa Shougang Robot Co., Ltd., Shanghai, China, for 11 years.
Mr. Wang was honored as the first advanced individual of Yaskawa Shougang Robot Co. in 2011. He was the recipient of the Science and Technology Progress Award of Jiading District, Shanghai, in 2016, and the title of Technical Model in Jiading District, Shanghai, in 2020.

Shiyu Xing received the B.S. degree in automotive engineering from Nanchang University, Nanchang, China, in 2016, and the M.S. degree in mechanical engineering from Washington University in Saint Louis, Saint Louis, MO, USA, in 2018. He is currently working toward the Ph.D. degree in technology of computer applications with the Institute of Automation, Chinese Academy of Sciences, Beijing, China.
From 2018 to 2020, he was a Car2X Engineer with Mercedes Benz R&D, Beijing. His research interests include robotic 3-D reconstruction, state estimation, and calibration.

Fengshui Jing received the Ph.D. degree in control theory and control engineering from the Institute of Automation, Chinese Academy of Sciences (IACAS), Beijing, China, in 2002.
He is currently a Professor with the State Key Laboratory of Management and Control for Complex Systems, IACAS. His research interests include robotics and manufacturing systems.

Min Tan received the B.S. degree from Tsinghua University, Beijing, China, in 1986, and the Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences (IACAS), Beijing, in 1990, both in control science and engineering.
He is currently a Professor with the State Key Laboratory of Management and Control for Complex Systems, IACAS. He has authored more than 100 papers in journals, books, and conference proceedings. His research interests include robotics and control systems.

Manuscript received 4 October 2022; revised 11 January 2023; accepted 27 January 2023. Date of publication 1 February 2023; date of current version 19 September 2023. This work was supported in part by the National Natural Science Foundation of China under Grant 62173327 and Grant 62003341 and in part by the Youth Innovation Promotion Association, Chinese Academy of Sciences, under Grant 2022130. Paper no. TII-22-4159. (Corresponding author: Junfeng Fan.)
    Yunkai Ma, Shiyu Xing, Fengshui Jing, and Min Tan are with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China, and also with the Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: mayunkai2019@ia.ac.cn; xingshiyu2020@ia.ac.cn; fengshui.jing@ia.ac.cn; min.tan@ia.ac.cn).
    Junfeng Fan is with the Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: junfeng.fan@ia.ac.cn).

    Huizhen Yang is with the School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China (e-mail: rainsun_ly@nwpu.edu.cn).
    Hongliang Wang is with the Yaskawa Shougang Robot Co., Ltd., Shanghai 201815, China (e-mail: wanghongliang@ysr-motoman.cn).

Color versions of one or more figures in this article are available at https://doi.org/10.1109/TII.2023.3241595.
    Digital Object Identifier 10.1109/TII.2023.3241595