Paper 13207-8
Utilizing synthetic data for object segmentation on autonomous heavy machinery in dynamic unstructured environments
16 September 2024 • 14:20 - 14:40 BST | Lowther
Abstract
Traditional deep learning datasets often lack representations of unstructured environments, making it difficult to acquire the ground truth data needed to train models. We therefore present a novel approach that relies on platform-specific synthetic training data. To this end, we use an excavator simulation based on the Unreal Engine to accelerate data generation for object segmentation tasks in unstructured environments. We focus on barrels, a typical example of deformable objects with varying styles and shapes that are commonly encountered in hazardous environments.
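The abstract states that the simulation is based on the Unreal Engine but does not describe how image/ground-truth pairs are exported. The sketch below is one plausible way to do this with UnrealCV, which can grab aligned photorealistic renders and object-ID masks from a running Unreal scene; the use of UnrealCV, the camera index, and the output paths are assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: exporting aligned RGB frames and object-ID masks
# from an Unreal Engine scene via UnrealCV. Camera index and paths are assumed.
from unrealcv import client


def capture_pair(frame_id: int, out_dir: str = "synthetic_barrels/raw"):
    """Request one photorealistic render and its object-ID mask from the simulation."""
    if not client.isconnected():
        client.connect()  # connect to the UnrealCV server embedded in the Unreal game
    # 'lit' is the photorealistic pass; 'object_mask' colour-codes every object instance.
    client.request(f"vget /camera/0/lit {out_dir}/rgb_{frame_id:05d}.png")
    client.request(f"vget /camera/0/object_mask {out_dir}/seg_{frame_id:05d}.png")


if __name__ == "__main__":
    for i in range(100):
        capture_pair(i)
```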
Through extensive experiments with different state-of-the-art (SOTA) models for semantic segmentation, we demonstrate the effectiveness of our approach in overcoming the limitations of small training sets and show that photorealistic synthetic data substantially improves model performance, even in corner cases such as occluded or deformed objects and varying lighting conditions, which is crucial for ensuring robustness in real-world applications.
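The abstract does not name the segmentation models or the layout of the synthetic dataset. As a minimal sketch, the snippet below fine-tunes torchvision's DeepLabV3 on rendered image/mask pairs; the directory structure, two-class encoding (background/barrel), and fixed render size are illustrative assumptions only.

```python
# Minimal sketch (assumptions): training a generic semantic segmentation model
# on synthetic image/mask pairs. Paths, class count, and model choice are placeholders.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50


class SyntheticBarrelDataset(Dataset):
    """Loads rendered RGB frames and per-pixel barrel masks (hypothetical layout)."""

    def __init__(self, root: str):
        self.images = sorted(Path(root, "images").glob("*.png"))
        self.masks = sorted(Path(root, "masks").glob("*.png"))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = self.to_tensor(Image.open(self.images[idx]).convert("RGB"))
        # Assumed mask encoding: 0 = background, 1 = barrel; renders share one resolution.
        mask = torch.from_numpy(np.array(Image.open(self.masks[idx]))).long()
        return image, mask


def train(root="synthetic_barrels/train", epochs=10, device="cuda"):
    model = deeplabv3_resnet50(weights=None, num_classes=2).to(device)
    loader = DataLoader(SyntheticBarrelDataset(root), batch_size=4, shuffle=True, num_workers=4)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    model.train()
    for epoch in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            logits = model(images)["out"]  # [B, 2, H, W]
            loss = criterion(logits, masks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
    return model
```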
In addition, we demonstrate the practicality of this approach in a real-world instance segmentation application, combined with a ROS-based barrel-grasping pipeline for our excavator platform.
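The abstract mentions a ROS-based pipeline but gives no topic names or message layout. The sketch below shows one way such a perception node could be wired up in ROS 1 (rospy): it subscribes to a camera topic, runs the segmentation model, and republishes a binary barrel mask for downstream grasp planning. The topic names, the reuse of the semantic (rather than instance) model, and the omission of the actual grasping logic are simplifications and assumptions.

```python
# Hypothetical ROS node: camera image in, barrel mask out. Topic names are assumed;
# the downstream grasp planning of the excavator platform is not shown.
import numpy as np
import rospy
import torch
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from torchvision.models.segmentation import deeplabv3_resnet50


class BarrelSegmentationNode:
    def __init__(self, model):
        self.model = model.eval()
        self.bridge = CvBridge()
        self.mask_pub = rospy.Publisher("/barrel_mask", Image, queue_size=1)
        rospy.Subscriber("/camera/color/image_raw", Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
        tensor = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            logits = self.model(tensor)["out"]  # [1, 2, H, W]
        mask = logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8) * 255
        out = self.bridge.cv2_to_imgmsg(mask, encoding="mono8")
        out.header = msg.header  # keep frame id and timestamp for the grasping pipeline
        self.mask_pub.publish(out)


if __name__ == "__main__":
    rospy.init_node("barrel_segmentation")
    model = deeplabv3_resnet50(weights=None, num_classes=2)  # trained weights would be loaded here
    BarrelSegmentationNode(model)
    rospy.spin()
```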
Presenter
Miguel Granero
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany)
Miguel Granero completed his bachelor's degree in Electronics, Robotics and Mechatronics in 2021 at the University of Seville and received a master’s degree in Automation and Robotics from Universidad Politécnica de Madrid in 2023. He is currently working as a research associate at the Fraunhofer Institute of Optronics, System Technologies, and Image Exploitation (IOSB) in Karlsruhe, Germany. His research focuses on vision- and LIDAR-based perception and deep learning.