
CNN Hyperspectral Image Classification Using Training Sample Augmentation with Generative Adversarial Networks



Abstract:

A major challenge in hyperspectral image recognition is performing pixel classification when only a few labeled training pixels are available. In this research we built Generative Adversarial Networks (GANs) that generate additional virtual training pixels from features extracted from the originally labeled pixels in the training dataset. The experiments show that a classifier based on Deep Convolutional Neural Networks (DCNNs) performs better when GANs are used for training sample augmentation than without them: the DCNN classifier scores 95.32% correct classification with GAN augmentation versus 92.94% without, demonstrating the clear advantage of the presented approach.
Date of Conference: 18-20 June 2020
Date Added to IEEE Xplore: 16 July 2020
Conference Location: Bucharest, Romania

I. Introduction

Generative Adversarial Networks (GANs) are used in more than 1,000 applications, such as human face generation, face aging, text-to-image translation, and video prediction. New elements are generated from an existing distribution of samples while preserving the features of that distribution. Generation is performed by an ensemble of two networks, one called "the Generator" and the other called "the Discriminator". These networks are trained in an unsupervised manner, and their performance is evaluated by the number of errors they make.
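To make the Generator/Discriminator interplay concrete, below is a minimal sketch of GAN-based augmentation of labeled spectral pixels, assuming PyTorch. The network sizes, number of spectral bands, and training settings are illustrative assumptions, not the configuration used in this paper.

# A minimal sketch (not the authors' code) of a GAN that generates
# virtual hyperspectral pixels for training-sample augmentation.
# N_BANDS, LATENT_DIM, layer widths, and hyperparameters are assumptions.
import torch
import torch.nn as nn

N_BANDS = 200      # assumed number of spectral bands per pixel
LATENT_DIM = 32    # assumed size of the generator's noise input

class Generator(nn.Module):
    """Maps random noise to a synthetic spectral vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_BANDS), nn.Tanh(),  # spectra scaled to [-1, 1]
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a spectral vector as real (1) or generated (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BANDS, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def train_gan(real_pixels, epochs=200, batch_size=64):
    """Adversarial training loop; real_pixels is an (N, N_BANDS) tensor."""
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()
    for _ in range(epochs):
        idx = torch.randint(0, real_pixels.size(0), (batch_size,))
        real = real_pixels[idx]
        fake = G(torch.randn(batch_size, LATENT_DIM))
        # Discriminator step: distinguish real from generated spectra.
        d_loss = bce(D(real), torch.ones(batch_size, 1)) + \
                 bce(D(fake.detach()), torch.zeros(batch_size, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: try to fool the discriminator.
        g_loss = bce(D(fake), torch.ones(batch_size, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G

One plausible way to use such a model for augmentation, consistent with the approach described above, is to train one GAN per class on that class's labeled pixels and append the generated virtual pixels to the training set before fitting the DCNN classifier.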
