
Few-Shot Semantic Segmentation for Complex Driving Scenes


Abstract:

The main objective of few-shot semantic segmentation (FSSS) is to segment novel objects within query images by leveraging a limited set of support images. The capability to segment novel classes plays an essential role in the development of perception functions for automated vehicles. However, existing few-shot semantic segmentation work strives to improve model performance on object-centric datasets. In our work, we evaluate few-shot semantic segmentation on the more challenging task of driving scene understanding. As a use-case-specific study, we give a systematic analysis of the disparity between commonly used FSSS datasets and driving datasets. Based on that, we propose methodologies to integrate knowledge from the class hierarchy of the datasets, utilize more effective feature extraction, and choose more representative support images during inference. These approaches are evaluated extensively on the Cityscapes and Mapillary datasets to demonstrate their effectiveness. We point out the remaining challenges of training, evaluating, and employing FSSS models for complex road scenes in real practice.
Date of Conference: 02-05 June 2024
Date Added to IEEE Xplore: 15 July 2024
Conference Location: Jeju Island, Korea, Republic of
I. Introduction

Automated driving represents the forefront of future transportation technologies, promising enhanced safety and efficiency. Semantic segmentation is a pivotal task in automated driving perception. By assigning a specific class label to each pixel in an image, it enables vehicles to comprehend driving scenes, detect obstacles, and navigate accordingly. Achieving robust scene understanding is therefore crucial for safe and accurate driving decisions. However, training a reliable semantic segmentation model demands extensive data, which can be time-consuming and expensive to collect and annotate [1]. Additionally, certain objects encountered on the road may be rare or uncommon, leaving insufficient information for these classes in the training dataset; consequently, the model may struggle to provide accurate predictions for such objects [2]. To address these challenges, researchers have explored Few-Shot Semantic Segmentation (FSSS), which uses a limited number of annotated examples of previously unknown classes to segment those novel object classes; this line of research was initiated by One-Shot Learning for Semantic Segmentation (OSLSM) [3]. While significant progress has been made in FSSS research, existing work has primarily focused on object-centric datasets like Pascal VOC [4] or COCO [5], whose scenes are far less complex than those in road scene datasets like Cityscapes [1] and Mapillary [6].
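To make the FSSS setting concrete, the sketch below illustrates one common episode structure: a class prototype is computed from a support image's features via masked average pooling, and each query pixel is labeled by its cosine similarity to that prototype. This is a minimal NumPy illustration of the general prototype-matching idea found in the FSSS literature, not the specific method proposed in this paper; the function names, the fixed threshold, and the toy feature maps are all assumptions for demonstration.

```python
import numpy as np

def masked_average_prototype(support_feat, support_mask):
    """Masked average pooling: mean feature over the support foreground.

    support_feat: (C, H, W) feature map of the support image.
    support_mask: (H, W) binary mask of the novel class.
    Returns a (C,) class prototype vector.
    """
    mask = support_mask.astype(np.float32)
    summed = (support_feat * mask).sum(axis=(1, 2))
    return summed / np.maximum(mask.sum(), 1.0)

def segment_query(query_feat, prototype, threshold=0.5):
    """Label each query pixel by cosine similarity to the prototype.

    query_feat: (C, H, W) feature map of the query image.
    Returns (binary prediction (H, W), similarity map (H, W)).
    """
    c, h, w = query_feat.shape
    q = query_feat.reshape(c, -1)
    q = q / np.maximum(np.linalg.norm(q, axis=0, keepdims=True), 1e-8)
    p = prototype / np.maximum(np.linalg.norm(prototype), 1e-8)
    sim = (p @ q).reshape(h, w)
    return (sim > threshold).astype(np.uint8), sim

# Toy example: foreground pixels carry feature [1, 0], background [0, 1].
C, H, W = 2, 4, 4
support_feat = np.zeros((C, H, W))
support_feat[1] = 1.0                  # background feature everywhere
support_feat[0, :2, :] = 1.0           # top half is the novel object
support_feat[1, :2, :] = 0.0
support_mask = np.zeros((H, W))
support_mask[:2, :] = 1.0

prototype = masked_average_prototype(support_feat, support_mask)
prediction, similarity = segment_query(support_feat, prototype)
```

In practice the feature maps come from a pretrained backbone rather than raw pixels, and real methods refine this matching step considerably; the sketch only shows why a single annotated support image can suffice to localize a novel class in a query image.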

