
A Robust Visual SLAM Method for Additive Manufacturing of Vehicular Parts Under Dynamic Scenes



Abstract:

Additive manufacturing has significant advantages in the production of complex vehicular parts. Because additive manufacturing is a precise production activity, the different components of the manufacturing instruments need to be located in appropriate positions to ensure accuracy, and visual Simultaneous Localization and Mapping (SLAM) is a practical means for this purpose. Considering the dynamic characteristics of additive manufacturing scenarios, this paper constructs a deep learning-enhanced robust SLAM approach for production monitoring of additive manufacturing. The proposed method combines a semantic segmentation technique with a motion-consistency detection algorithm. First, a Transformer-based backbone network segments the images to establish a priori semantic information about dynamic objects. Next, the feature points belonging to dynamic objects are rejected by the motion-consistency detection algorithm. Then, the remaining static feature points are adopted for feature matching and pose estimation. In addition, we conducted a series of experiments to test the proposed method. The obtained results show that the proposal performs well enough to support realistic additive manufacturing processes; numerically, it improves the image segmentation effect by about 10% to 15% in visual SLAM-based additive manufacturing scenarios.
Society Section: IEEE Vehicular Technology Society Section
Published in: IEEE Access ( Volume: 11)
Page(s): 22114 - 22123
Date of Publication: 02 March 2023
Electronic ISSN: 2169-3536

SECTION I.

Introduction

Additive manufacturing is an emerging processing technology based on the principle of discrete stacking [1], [2], [3], which breaks with the traditional subtractive (material-removal) and formative (equal-material) production methods [4], [5]. It is a new manufacturing technology that requires neither the collaboration of jigs and fixtures nor machining by machine tools and equipment [6], [7], [8]. With the unification of production standards and the maturation of raw material technology, additive manufacturing is strongly boosted by intelligent technology and by its intersection with fields such as the automotive industry, aerospace, and bio-engineering [9], [10]. At the same time, its growing prevalence can also foster technical breakthroughs in many cross-disciplinary applications [11], [12]. Therefore, it is believed to have vast market potential in smart manufacturing [13], [14], [15].

Additive manufacturing technology first uses computer-aided design (CAD) software to build 3D models of mechanical parts. Then, slicing software slices the 3D models according to the part parameters. On this basis, computer algorithms precisely connect each layer to form a layer stack, quickly realizing the additive manufacturing of parts [16], [17]. The application of additive manufacturing technology has become increasingly common in the automotive industry [18], [19]. In this context, several brand-name automotive companies currently use additive manufacturing at the vehicle development stage to rapidly verify and optimize components [20], [21]. The small-batch production of automotive parts often involves complex parts with thin walls and internal cavities; the traditional forging and casting processes have limitations here and cannot meet the production requirements [22], [23]. Owing to the point-by-point, line-by-line, and domain-by-domain local forming characteristics of additive manufacturing, highly flexible near-net-shape fabrication of complex parts becomes possible [24], [25]. Therefore, additive manufacturing technology has significant advantages in the manufacturing of complex parts, and its application prospects are very promising [26], [27].

A. Motivation

In additive manufacturing, the use of vision sensors for accurate positioning and mapping of parts is key to realizing automation. With the continuous development of research, robots are equipped with increasingly diverse sensors, including vision, laser, radar, and multi-sensor fusion setups. Using the sensors they carry, robots can perceive their environment, estimate the state of their own systems, and make decisions autonomously. These capabilities require accurate and robust localization together with the ability to progressively build and maintain models of the world scene. In this work, localization refers to obtaining the internal system state of the robot's motion, including position, orientation, and velocity, while mapping refers to sensing the state of the external environment and capturing information about the surroundings, including the geometry, appearance, and semantic information of a 2D or 3D scene [28]. These components can estimate the internal or external state individually, or jointly as in simultaneous localization and mapping (SLAM) [29], so as to facilitate control decisions about the robot's poses.

The localization and mapping problem has been studied for decades, and various sophisticated hand-designed models and algorithms have been developed, such as odometry estimation, image-based localization, place recognition, SLAM, and structure from motion (SfM) [29], [30]. Under ideal conditions, these sensors and models can estimate the system state accurately regardless of time and environmental constraints. In reality, however, sensor measurement errors, system modeling errors, complex environmental dynamics, and unrealistic constraints affect the accuracy and reliability of manually designed systems [31]. Although modern visual SLAM systems are quite mature and perform satisfactorily [32], the aforementioned classical SLAM systems assume that the observed scene is static, and their detection and handling of dynamic objects are very limited.

However, in actual indoor and outdoor scenes, moving objects cannot be avoided [33]. In this case, unexpected changes in the surrounding environment may severely affect camera pose estimation, increase the trajectory error, or even cause system failure. Thus, detecting moving objects and correctly segmenting dynamic regions have become important research topics for visual SLAM in dynamic scenes. The limitations of model-based solutions and the rapid development of machine learning, especially deep learning, have prompted researchers to consider data-driven learning methods as an alternative approach to this issue. Such methods treat the relationship between sensor inputs (e.g., visual, inertial, LiDAR, or other sensor data) and target outputs (e.g., position, orientation, scene geometry, or semantics) as a mapping function to be learned [34], [35].

B. Contributions

While traditional model-based solutions are implemented by manually designing algorithms, learning-based approaches construct this mapping function by learning from large amounts of data. The learning-based approach has three advantages. First, it can automatically discover task-relevant features using a highly expressive deep neural network as a general-purpose approximator, which enables trained models to adapt to challenging scenarios (e.g., featureless scenes, high-speed dynamic scenes, motion blur, or imprecise camera calibration) [36], [37]. Second, the learning approach allows learning from past experience and actively exploiting new information. By building a general data-driven model, researchers can solve domain-specific problems without having to encode complete knowledge of the underlying mathematical and physical rules when building the model. Third, deep neural networks can be scaled to large-scale problems. When trained on large data sets with back-propagation and gradient descent, the large number of parameters in a DNN can be automatically optimized by minimizing a loss function. Thus, harnessing the power of data and computation to solve localization and mapping is potentially achievable.

Multi-sensor fusion schemes need to combine information from different sensor sources and face multiple difficulties such as data association, signal synchronization, and fusion processing, which greatly increases system complexity; meanwhile, dense scene flow approaches are computationally intensive and pose a great challenge for real-time operation [38], [39]. Multi-sensor fusion schemes can construct semantic maps to enrich the robot's understanding of the environment and thus obtain advanced perception, but they may misjudge movable objects. To address this, this paper proposes a robust SLAM algorithm for dynamic scenes that uses deep learning to quickly identify dynamic object frames and combines it with sparse-feature optical flow computation to make further dynamic judgments; the scenario of the proposed method is shown in Figure 1. Edge detection algorithms are used to segment the edges of dynamic objects effectively, ensuring that not too many static feature points are mistakenly removed, and a 3D point cloud map of the static environment without dynamic objects is constructed, realizing a strong sensing capability for autonomous robots.

FIGURE 1. A typical example that illustrates scenarios of robust visual SLAM for additive manufacturing.

To sum up, the main contributions of this paper can be stated as the following three aspects:

  • This work targets the additive manufacturing of automobile parts and explores deep learning-based visual sensing to enhance the manufacturing process.

  • This work proposes a robust visual SLAM method for additive manufacturing of vehicular parts under dynamic scenes.

  • This work conducts simulation experiments on real-world scenarios to evaluate the performance of the proposal and provides corresponding discussions.

SECTION II.

Methodology

In our study, a Transformer-based real-time target detection algorithm is used to quickly obtain the rough rectangular regions of potentially dynamic semantic objects in the input three-channel image, while ORB feature points are extracted and the optical flow field is computed for them, which greatly reduces the time that would be needed to compute optical flow for all pixels. By combining the semantic information with the dynamic feature points filtered by the optical-flow computation, the true motion of each object is obtained. Then, the Canny operator is applied to the detected dynamic objects to extract their edge data, and the camera pose is estimated by minimizing the re-projection error of the static feature points outside the dynamic objects. Finally, the map is constructed using the key frames with the dynamic objects removed. The overall flow is shown in Figure 2, and a high-level sketch of the per-frame logic is given below.
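The following Python sketch outlines this per-frame workflow at a high level. It is a minimal sketch only; the helper names (detect_dynamic_boxes, extract_orb, compute_optical_flow, refine_dynamic_mask, estimate_pose, map_builder) are hypothetical placeholders, not the authors' actual implementation.

# Hypothetical per-frame pipeline sketch of the workflow described above.
def process_frame(frame, prev_frame, map_builder,
                  detect_dynamic_boxes, extract_orb, compute_optical_flow,
                  refine_dynamic_mask, estimate_pose):
    # 1. Transformer-based detector gives coarse boxes of potentially dynamic objects.
    boxes = detect_dynamic_boxes(frame)

    # 2. Extract ORB features and compute sparse optical flow only for those features.
    keypoints, descriptors = extract_orb(frame)
    flow = compute_optical_flow(prev_frame, frame, keypoints)

    # 3. Combine semantic boxes with motion consistency (flow) to confirm true motion,
    #    then tighten the dynamic region with Canny-based edge refinement.
    dynamic_mask = refine_dynamic_mask(frame, boxes, keypoints, flow)

    # 4. Keep only static features for pose estimation by re-projection error minimization.
    static = [(kp, d) for kp, d in zip(keypoints, descriptors)
              if not dynamic_mask[int(kp.pt[1]), int(kp.pt[0])]]
    pose = estimate_pose(static, map_builder.local_map())

    # 5. Key frames with dynamic objects removed are used to grow the point cloud map.
    map_builder.maybe_insert_keyframe(frame, pose, dynamic_mask)
    return pose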

FIGURE 2. The workflow of the main steps of the proposed methodology.

A. Real-Time Target Detection Based on Transformer

The Detection Transformer (DEtection TRansformer, DETR) [40], [41] uses a set-based global loss that enforces unique predictions through bipartite matching, together with a classical encoder-decoder architecture. It contains three components: a CNN-based backbone that extracts feature representations, a Transformer that enhances the features, and a simple feed-forward network (FFN) that performs the object detection prediction. The detailed structure is shown in Figure 3. Starting from an initial image $x_{img}\in R^{3\times H_{0} \times W_{0}}$ (3 color channels; the input images in a batch are zero-padded so that they all share the same dimensions ($H_{0}$, $W_{0}$) as the largest image in the batch), a convolutional network generates a lower-resolution activation map $f\in R^{C \times H \times W}$.

FIGURE 3. Sketch map for the technical structure of a Transformer-based vision sensing approach.

First, the channel dimension of the high-level activation map $f$ is reduced from $C$ to $d$ using a $1 \times 1$ convolution, which yields a new feature map $z_{0} \in R^{d\times H \times W}$ . Since the encoder expects a sequence as input, the spatial dimensions of $z_{0}$ are collapsed to obtain a feature map of dimension $d\times HW$ . Each encoder layer consists of a multi-head self-attention module and an FFN. Because the transformer architecture is permutation-invariant (order-independent), a fixed positional encoding [8], [9] is added to the input of each attention layer.
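As an illustration of this input preparation, the PyTorch sketch below reduces a backbone activation map with a 1x1 convolution, flattens it into a sequence, adds a positional encoding, and feeds it to a generic transformer encoder. The dimensions (C = 2048, d = 256) are typical DETR settings assumed for illustration, not values reported in this paper, and the positional encoding here is a random placeholder added only at the input rather than at every attention layer.

import torch
import torch.nn as nn

C, d, H, W = 2048, 256, 25, 34
f = torch.randn(1, C, H, W)                # backbone activation map f

proj = nn.Conv2d(C, d, kernel_size=1)      # 1x1 conv: C -> d
z0 = proj(f)                               # (1, d, H, W)
seq = z0.flatten(2).permute(2, 0, 1)       # collapse spatial dims -> (HW, 1, d)

pos = torch.randn(H * W, 1, d)             # positional encoding (placeholder)
encoder_layer = nn.TransformerEncoderLayer(d_model=d, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
memory = encoder(seq + pos)                # (HW, 1, d) encoder output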

The decoder transforms $N$ embeddings of size $d$ using multi-head attention mechanisms. Unlike the original transformer model, which predicts the output sequence auto-regressively one element at a time, the DETR decoder in [40] decodes the $N$ objects in parallel. Because the decoder is also permutation-invariant (order-independent), the $N$ input embeddings must differ from each other in order to produce different results. These input embeddings are learned positional encodings, referred to as object queries, and they are added to the input of each attention layer. The decoder transforms the $N$ object queries into output embeddings, which are then decoded independently by an FFN into bounding box coordinates and category labels, yielding the $N$ final predictions. Through self-attention over these embeddings, the model exploits pairwise relationships between all objects to perform global reasoning over them.

The final predictions are computed by a three-layer feed-forward network with hidden dimension $d$ , followed by a linear projection layer. The FFN predicts the normalized center coordinates, height, and width of each bounding box with respect to the input image, and the projection layer predicts the category label through a softmax function in the last layer. A fixed-size set of $N$ bounding boxes is thus predicted, where $N$ is typically larger than the number of targets of interest in the original input. In addition, a special category label is used to indicate that no target is detected in a slot (e.g., when there are no targets of interest in the image or the targets do not fill all $N$ slots). This category plays a role similar to the “background” category in standard target detection methods.
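The sketch below continues the previous one and illustrates the decoding stage described above: N learned object queries are decoded against the encoder memory, and two heads produce class logits (including a "no object" slot) and normalized boxes. The values N = 100 and 91 classes follow the common COCO-style DETR configuration and are assumptions, not settings reported in this paper.

import torch
import torch.nn as nn

d, N, num_classes = 256, 100, 91                      # N object queries, classes + "no object" slot

object_queries = nn.Parameter(torch.randn(N, 1, d))   # learned positional embeddings (queries)
decoder_layer = nn.TransformerDecoderLayer(d_model=d, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

memory = torch.randn(25 * 34, 1, d)                   # stand-in for the encoder output
hs = decoder(object_queries, memory)                  # (N, 1, d): one embedding per query

class_head = nn.Linear(d, num_classes + 1)            # logits over classes incl. "no object"
bbox_head = nn.Sequential(                            # 3-layer FFN -> normalized (cx, cy, w, h)
    nn.Linear(d, d), nn.ReLU(),
    nn.Linear(d, d), nn.ReLU(),
    nn.Linear(d, 4), nn.Sigmoid(),
)
logits = class_head(hs)                               # (N, 1, num_classes + 1)
boxes = bbox_head(hs)                                 # (N, 1, 4), values in [0, 1]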

B. ORB Feature Extraction

In order to analyze the static and dynamic state of objects while saving computational cost and ensuring real-time performance, this study estimates the motion state of the extracted ORB feature points from the optical flow field. ORB feature extraction mainly consists of two parts, FAST corner extraction and BRIEF descriptor calculation [33], which proceed as follows (a minimal OpenCV-based sketch is given after this list):

  1. Construct the image pyramid and extract FAST corner points at each pyramid layer using a uniform, quadtree-based extraction strategy [41]. The specific procedure is as follows:

    • Step 1:

      Select pixel p in the image and obtain its luminance, assumed to be Ip;

    • Step 2:

      set the threshold $T=I_{p}\times 0.2$ ;

    • Step 3:

      Traverse the 16 pixel points lying on a circle of radius 3 centered on pixel p;

    • Step 4:

      Let the brightness of each traversed point be $I_{cp}$ . If there are $N$ consecutive points with $I_{cp}>I_{p}+T$ or $I_{cp} < I_{p}-T$ , the point is considered a feature point; $N$ is 12 in this study;

    • Step 5:

      Repeat the above operations for every pixel in the image.

  2. Calculate the rotation angle of the FAST corner point using the gray-scale centroid method. Define the moments of the image as:\begin{equation*} m_{ab}= \sum p^{a} q^{b} \cdot I(p,q) \tag{1}\end{equation*} where $I(p,q)$ is the gray value of the FAST corner point $(p,q)$ and $a$ , $b$ are the orders of the moments. The image centroid coordinates are:\begin{equation*} C= (m_{10}/m_{00}, m_{01}/m_{00}) \tag{2}\end{equation*}

    The rotation angle is:\begin{equation*} \theta = \arctan (m_{01}/m_{10}) \tag{3}\end{equation*}

  3. Calculate the rotated BRIEF descriptor. Choose a window $W$ of size $S \times S$ and define:\begin{align*} \tau \left ({{I;p,q} }\right) = {\begin{cases} 1,\quad if{\mathrm{ }}I(p) < I(q)\\ 0,\quad else \end{cases}} \tag{4}\end{align*} where $I(p)$ is the grayscale value at $p$ . Randomly select $n$ pairs of points $(p_{i},q_{i})$ in the window and generate an $n$ -dimensional BRIEF descriptor vector:\begin{equation*} {f_{n}}(W) = \sum \limits _{1 \le i \le n} {2^{i-1}\tau (I;p_{i},q_{i})} \tag{5}\end{equation*}
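The steps above correspond to what OpenCV's ORB implementation performs internally (pyramid construction, FAST detection, intensity-centroid orientation, rotated BRIEF). A minimal sketch using OpenCV is shown below; the file name and parameter values are illustrative assumptions, not the configuration used in this paper.

import cv2

# Read a frame in grayscale (placeholder file name).
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(
    nfeatures=1000,     # maximum number of keypoints to retain
    scaleFactor=1.2,    # pyramid scale factor between levels
    nlevels=8,          # number of pyramid levels
    fastThreshold=20,   # FAST corner threshold
)
keypoints, descriptors = orb.detectAndCompute(img, None)

# Each keypoint carries the orientation computed by the gray-scale centroid method,
# and each descriptor row is a 256-bit (32-byte) rotated BRIEF vector.
print(len(keypoints), descriptors.shape if descriptors is not None else None)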

C. Edge Detection

Directly removing the whole rectangular region of a dynamic object removes too much of the static scene, which is detrimental to accurate camera positioning and map construction. In order to extract the edges of dynamic objects more accurately, this study applies the Canny operator to the filtered dynamic objects. Canny is a differential edge-detection operator [42] that extracts image edges from extrema of the intensity gradient; it detects strong and weak edges separately and retains weak edges only when they are connected to strong edges, so that true weak edges can be recovered. The detailed steps are described as follows (a minimal OpenCV-based sketch follows the list):

  1. Eliminate image noise. First, the image is smoothed with a Gaussian function. Let $f(p,q)$ be the input image, $O(p,q)$ the output image, and $g(p,q)$ the Gaussian kernel, where:\begin{align*} g(p,q) &= \frac {1}{{2\pi {\sigma ^{2}}}}\exp \left({- \frac {{p^{2} + {q^{2}}}}{{2{\sigma ^{2}}}}}\right) \tag{6}\\ O(p,q) &= f(p,q) * g(p,q) \tag{7}\end{align*} where $*$ denotes convolution.

  2. Calculate the gradient magnitude and direction. On the Gaussian-filtered image, a suitable gradient operator computes the gradient magnitude and direction of each pixel from the first-order differences between adjacent pixels. Here, $A_{p}$ , $A_{q}$ are the Sobel gradient operators and $E_{p}$ , $E_{q}$ are the differences in the horizontal and vertical directions, respectively. The gradient $E(p,q)$ and direction $\theta (p,q)$ are written as:\begin{align*} {E_{p}}(p,q) &= {A_{p}} * O(p,q) \tag{8}\\ {E_{q}}(p,q) &= {A_{q}} * O(p,q) \tag{9}\\ E(p,q) &= {(E_{p}^{2} + E_{q}^{2})^{1/2}} \tag{10}\\ \theta (p,q) &= \arctan \frac {E_{q}(p,q)}{E_{p}(p,q)} \tag{11}\end{align*}

  3. Suppress non-maximum values. Because Gaussian filtering may broaden the edges, Non-Maximum Suppression (NMS) is adopted to filter out points that are not edges. For each pixel, its gradient magnitude is compared with those of its two neighbors along the gradient direction; if the magnitude is not greater than both neighbors, the pixel cannot be an edge point and its gray value is suppressed to 0; otherwise it is kept as a candidate edge point.

  4. Double-threshold detection and edge linking. The above steps yield only candidate edge points, so an upper and a lower threshold are then used to eliminate pseudo-edge points: points above the upper threshold are marked as edge points, points below the lower threshold are marked as non-edge points, and points between the two thresholds are treated as weak edge points, which are accepted as edges only if they are adjacent to a pixel already identified as an edge point; otherwise they are discarded.
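The four steps above map directly onto Gaussian smoothing followed by OpenCV's Canny function, which internally performs gradient computation, non-maximum suppression, and hysteresis thresholding. The sketch below is illustrative only; the region name, file name, and threshold values are assumptions rather than settings from this paper.

import cv2

# Placeholder crop of a detected dynamic-object rectangle.
dynamic_roi = cv2.imread("dynamic_roi.png", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(dynamic_roi, (5, 5), sigmaX=1.4)   # step 1: noise suppression
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)     # steps 2-4: gradient, NMS, hysteresis

# `edges` is a binary map; its contours can be used to tighten the dynamic-object
# mask so that static feature points near the box border are not discarded.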

D. Location Estimation and Point Cloud Overlay

After determining the exact contour of the dynamic objects, the dynamic points inside the objects are excluded, and only the stable feature points in the non-dynamic regions are used for a more accurate camera pose solution. Let $(u_{c}^{i},v_{c}^{i})$ be the pixel coordinates of a static point in the current frame $c$ ; with its depth value $z_{c}^{i}$ , the 3D spatial point coordinates $P_{c}^{i}(p_{c}^{i},q_{c}^{i},z_{c}^{i})$ are obtained as:\begin{equation*} P_{c}^{i}(p_{c}^{i},q_{c}^{i},z_{c}^{i}) = \left({z_{c}^{i}\frac {{u_{c}^{i} - {c_{p}}}}{f_{p}},z_{c}^{i}\frac {{v_{c}^{i} - {c_{q}}}}{f_{q}},z_{c}^{i}}\right) \tag{12}\end{equation*} where $(f_{p}$ , $f_{q})$ are the focal lengths of the camera and $(c_{p},c_{q})$ are the principal point coordinates of the camera.
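As a concrete illustration of Eq. (12), the following sketch back-projects a pixel with known depth into camera coordinates. The intrinsic values below are illustrative placeholders, not calibration results from this paper.

import numpy as np

fp, fq = 525.0, 525.0        # focal lengths (pixels), assumed values
cp, cq = 319.5, 239.5        # principal point, assumed values

def back_project(u, v, z):
    """Return the 3D camera-frame point P = (p, q, z) for pixel (u, v) with depth z, per Eq. (12)."""
    p = z * (u - cp) / fp
    q = z * (v - cq) / fq
    return np.array([p, q, z])

print(back_project(400.0, 300.0, 1.2))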

Building a 3D point cloud map of the environment provides better visualization, and the semantic information carried by the point cloud can support robot navigation and obstacle avoidance [22], [23]. When constructing point clouds, large pose errors make the maps overlap with obvious misalignments, which is harmful for navigation. This problem can be effectively alleviated by overlaying the point clouds with the dynamic objects removed. The ORB_SLAM2 algorithm is used to obtain key-frames, but superimposing the point clouds of all key-frames is overly complicated and redundant [35], [43], [44]. In the key-frame screening process, the following two strategies are considered: 1) key-frame validity judgment: if the area of the rejected point cloud exceeds half of the current key-frame area, the key-frame is considered to contain insufficient valid information and does not take part in the overlay; 2) key-frame redundancy judgment: feature points observed by multiple key-frames are called co-visible landmark points. Let $F$ be the set of key-frames already used for mapping, $L$ the set of landmark points they observe, and $L_{c}$ the set of landmark points observed in the current key-frame; if the size of $L\cap L_{c}$ exceeds half of $L_{c}$ , the current key-frame contains too many co-visible landmarks, its information is redundant, and it does not participate in the superposition. If the key-frame passes both checks, $F$ and $L$ are updated, which ensures that new point cloud information is introduced while enough static environment information is retained.
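The two screening rules can be summarized in the following sketch. The data structures (sets of landmark ids, mask areas) and the function name are hypothetical simplifications of the strategy described above, not the authors' code.

def keep_keyframe(dynamic_area, frame_area, observed_landmarks, map_landmarks):
    """Return True if the key-frame should contribute to the point cloud overlay."""
    # Rule 1: validity - reject if more than half of the frame was removed as dynamic.
    if dynamic_area > 0.5 * frame_area:
        return False
    # Rule 2: redundancy - reject if more than half of its landmarks are already mapped.
    covisible = observed_landmarks & map_landmarks
    if len(covisible) > 0.5 * len(observed_landmarks):
        return False
    return True

# Usage: when a key-frame passes both checks, add it to the overlay and update the
# mapped landmark set L (and the key-frame set F).
map_landmarks = {1, 2, 3, 4}
current = {3, 4, 5, 6, 7, 8}
if keep_keyframe(dynamic_area=0.2, frame_area=1.0,
                 observed_landmarks=current, map_landmarks=map_landmarks):
    map_landmarks |= current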

SECTION III.

Experiments and Analysis

In order to evaluate the actual performance and effectiveness of the proposed ORB_SLAM2_Transformer system, it is tested in three aspects: the segmentation performance of the transformer network, the performance of dynamic feature point rejection, and the localization performance in dynamic scenes.

A. Experimental Data and Settings

The “freiburg2_desk_with_person” sequence from the Vision Group of the Technical University of Munich (TUM), Germany, was selected as the open-source dataset; it contains a total of 4067 frames with static tables and chairs and several slowly moving human targets [45]. This dataset is designed to check the robustness of a SLAM system to dynamic objects and people and to examine changes in the scene, which meets the requirements of the experiments in this paper. To further test the robustness of the proposed method, an additional “DataSet_Factory” data set based on real scenes was constructed [36]. It was captured by a camera fixed on a mobile experimental platform equipped with LiDAR and contains a static industrial assembly line and several slowly moving human targets, for a total of 1715 frames, in exactly the same format as the TUM data set.

For the analysis of the semantic segmentation results, this paper adopts four mainstream evaluation criteria, namely pixel accuracy, class mean accuracy, mean IoU, and frequency-weighted IoU, which measure pixel-level accuracy and region overlap [31]. Let $n_{ij}$ be the number of pixels of class $i$ predicted as class $j$ , $t_{i} = \sum _{j} n_{ij}$ the total number of pixels of class $i$ , and $n_{cl}$ the number of classes. The metrics are defined as follows:\begin{align*} Pixelacc &= \frac {\sum _{i} {n_{ii}}}{\sum _{i} {t_{i}}} \tag{13}\\ Meanacc &= \frac {1}{n_{cl}}\sum _{i} \frac {n_{ii}}{t_{i}} \tag{14}\\ MeanIoU &= \frac {1}{n_{cl}}\sum _{i} \frac {n_{ii}}{t_{i} + \sum \nolimits _{j} n_{ji} - n_{ii}} \tag{15}\\ FreqweightIoU &= \frac {\sum \nolimits _{i} \dfrac {t_{i}\,n_{ii}}{t_{i} + \sum \nolimits _{j} n_{ji} - n_{ii}}}{\sum \nolimits _{k} t_{k}} \tag{16}\end{align*}
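For reference, Eqs. (13)-(16) can be computed from a confusion matrix as in the small illustrative implementation below (not the authors' evaluation code); conf[i, j] counts pixels of ground-truth class i predicted as class j.

import numpy as np

def segmentation_metrics(conf):
    conf = conf.astype(np.float64)
    n_ii = np.diag(conf)                 # correctly classified pixels per class
    t_i = conf.sum(axis=1)               # total pixels of each ground-truth class
    pred_i = conf.sum(axis=0)            # total pixels predicted as each class
    n_cl = conf.shape[0]

    pixel_acc = n_ii.sum() / t_i.sum()                    # Eq. (13)
    mean_acc = np.mean(n_ii / t_i)                        # Eq. (14)
    iou = n_ii / (t_i + pred_i - n_ii)                    # per-class IoU
    mean_iou = iou.sum() / n_cl                           # Eq. (15)
    freq_weighted_iou = (t_i * iou).sum() / t_i.sum()     # Eq. (16)
    return pixel_acc, mean_acc, mean_iou, freq_weighted_iou

conf = np.array([[50, 2, 1], [3, 40, 5], [0, 4, 45]])     # toy 3-class example
print(segmentation_metrics(conf))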

In this paper, we calculate the relative pose error (RPE) and absolute pose error (APE) to evaluate the SLAM performance of ORB_SLAM2_Transformer in a dynamic environment. The relative pose error is computed from the difference between the estimated SLAM pose and the ground-truth camera pose at the same time instant, and it mainly describes the accuracy of the pose difference between two key frames separated by a fixed time interval $\Delta t$ . The RPE for the $i$ -th key frame is defined as:\begin{equation*} {E_{i}} = {(Q_{i}^{ - 1}{Q_{i + \Delta t}})^{ - 1}}(P_{i}^{ - 1}{P_{i + \Delta t}}) \tag{17}\end{equation*} where $Q_{i}$ is the ground-truth trajectory pose and $P_{i}$ is the key frame pose estimated by the system. The Root Mean Squared Error (RMSE) is used to evaluate the error and is defined as follows:\begin{equation*} RMSE({E_{1:m}},\Delta t) = {\left({\frac {1}{m}\sum \nolimits _{i = 1}^{m} {\left \|{ {trans({E_{i}})} }\right \|} ^{2}}\right)^{1/2}} \tag{18}\end{equation*}
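The sketch below is an illustrative re-implementation of Eqs. (17) and (18) for pose sequences given as 4x4 homogeneous matrices (ground truth Q and estimates P); it is not the official TUM evaluation script, and only the translational part of the error is accumulated.

import numpy as np

def relative_pose_errors(Q, P, delta=1):
    errors = []
    for i in range(len(Q) - delta):
        dQ = np.linalg.inv(Q[i]) @ Q[i + delta]       # ground-truth relative motion
        dP = np.linalg.inv(P[i]) @ P[i + delta]       # estimated relative motion
        E = np.linalg.inv(dQ) @ dP                    # Eq. (17)
        errors.append(np.linalg.norm(E[:3, 3]))       # translational part trans(E)
    return np.array(errors)

def rpe_rmse(Q, P, delta=1):
    e = relative_pose_errors(Q, P, delta)
    return np.sqrt(np.mean(e ** 2))                   # Eq. (18)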

In addition, one typical image segmentation method is introduced as the baseline. Since this work explores image segmentation-based enhancement of the SLAM-assisted manufacturing process, the fully convolutional network [46], FCN for short, one of the most representative image segmentation methods in this area, is chosen; the proposal in this paper is therefore compared with an FCN-based backbone network to measure its performance.

B. Numerical Results and Analysis

First, the transformer-based segmentation model used in this paper was analyzed. The detailed performance is shown in Table 1: its pixel accuracy, class mean accuracy, and mean region overlap reach 71.101%, 89.512%, and 58.157%, respectively. The performance comparison in Figure 4 indicates that this model is significantly better than the other network models and that its feature maps retain more detailed features.

TABLE 1. Display of segmentation performance results obtained by different methods.
FIGURE 4. Display of segmentation performance results obtained by different methods.

Running efficiency is also evaluated, because introducing the transformer-based semantic segmentation model increases the complexity of the system, and the resulting longer feature point extraction time affects real-time performance. As shown in Table 2, the average feature point extraction time per image reaches 0.21757 seconds due to the addition of semantic segmentation and dynamic feature point rejection to the system. Although the extraction time increases significantly compared with the ORB_SLAM2 system, the proposed system still achieves an average rate of about 5 frames per second, which basically preserves real-time performance.

TABLE 2. Display of running efficiency results obtained by different methods.

Figure 5 compares the ORB feature point extraction results of the two systems on the TUM dataset. In ORB_SLAM2, the extracted feature points still cover the dynamic human target regions, whereas in the proposed system the points inside the dynamic human regions are completely eliminated, and the number of rejected feature points gradually increases as more human targets enter the picture, which achieves the expected goal. Figure 6 compares the ORB feature point extraction results of the two systems in the real scene: the feature points in the dynamic target regions are again completely eliminated, showing that the method remains applicable in real scenes. Table 3 reports the RMSE, maximum error (MAX), mean absolute error (MAE), and standard deviation of the relative pose errors, and the corresponding statistics of the absolute pose errors are presented in Table 4:

TABLE 3. Comparison among experimental methods with respect to RPE performance results.

TABLE 4. Comparison among experimental methods with respect to APE performance results.
FIGURE 5. Typical examples of the results of dynamic point rejection on the TUM data set.

FIGURE 6. Typical examples of the results of dynamic point rejection on the DataSet_Factory data set.

As shown in Figure 7, for the relative pose error, the proposed ORB_SLAM2_Transformer has a higher maximum error than ORB_SLAM2, but its RMSE, MAE, and standard deviation are reduced by 11.038%, 15.257%, and 2.309%, respectively. As shown in Figure 8, for the absolute trajectory error, ORB_SLAM2_Transformer again has a higher maximum error than ORB_SLAM2 but a smaller overall error, with the four error parameters reduced by 18.450%, 27.%, 18.177%, and 19.492%, respectively. Therefore, ORB_SLAM2_Transformer has an overall smaller positioning error and better relative and absolute pose errors. We believe that ORB_SLAM2_Transformer achieves the goal of removing dynamic feature points to reduce the camera tracking localization error, thus alleviating the camera tracking drift caused by dynamic targets.

FIGURE 7. Main results for the RPE performance of the experimental methods on the TUM data set.

FIGURE 8. Main results for the APE performance of the experimental methods on the DataSet_Factory data set.

C. Discussion

In this work, deep learning-based image segmentation is employed to enhance the SLAM-assisted manufacturing process for automobile parts, with a Transformer used as the backbone network. The performance of the image segmentation method directly determines the efficiency of the subsequent additive manufacturing operations. Hence, the proposal is compared with a typical image segmentation method, FCN, for performance evaluation.

To better verify the performance of the proposal, evaluation metrics from four aspects are introduced to present the algorithm performance numerically: segmentation effect, time complexity, RPE performance, and APE performance. After simulation experiments on real-world scenes of SLAM-based additive manufacturing, the obtained results show that the proposal achieves good segmentation performance, which in turn supports the subsequent manufacturing operations.

Although the proposal performs properly in the SLAM-based additive manufacturing process, there is still some distance to practical industrial application. Deep learning has developed greatly in recent years and has brought much insight into many computer vision tasks; however, deep learning algorithms mostly face the problem of computational complexity, which imposes relatively high hardware requirements [47]. For our proposal, improving running efficiency and reducing computational complexity is therefore the direction of our future work.

SECTION IV.

Conclusion

In this paper, in order to achieve robust SLAM in dynamic scenes, a transformer-based visual SLAM method is proposed. The method combines a segmentation technique with a motion-consistency detection algorithm. First, the transformer network semantically segments the image to establish a priori semantic information about dynamic objects; then the feature points belonging to dynamic objects are rejected by the motion-consistency detection algorithm; finally, the static feature points are utilized for pose estimation and point cloud overlay. Simulation experiments are conducted to test the proposed method. The obtained results show that, for additive manufacturing of vehicular parts, the absolute trajectory error and relative pose error can be reduced compared with the traditional ORB_SLAM2 system.
