I. Introduction
Over the past decade, neural networks have become the state-of-the-art problem-solving method for many image processing tasks in both industry and academia. Many of these models have found their way into real-world applications and lately, thanks to optimized architectures, even into handheld devices. In some areas, neural networks can even surpass humans (e.g., plant classification based on leaf photographs [2], [1]). However, as with any model, the quality of the output depends strongly on the quality of the input image [3]. Images captured in unfavorable lighting conditions, with a shaky camera, or from an unusual angle may fail to be classified (or segmented, tracked, etc.; in the rest of this paper we focus on classification). In the case of a mobile app, a simple warning message can prompt the user to retake the photo (in a specific way), but in many scenarios this is not possible (e.g., images of stellar bodies, stained microscopic images, etc.). Besides such engineering workarounds, the obvious remedy is to add various deliberately damaged images to the training set and re-run the training process. This paper proposes an alternative approach that rivals training-set augmentation without any need to alter or touch the existing, trained network. This makes it an attractive enhancement for preexisting systems and offers an interesting new angle for designing new architectures and for further research.