Generative Facial Prior Generative Adversarial Networks based Restoration of Degraded Facial images in Comparison of PSNR with Photo Upsampling via Latent Space Exploration


Abstract:

Applying a novel Generative Facial Prior Generative Adversarial Network (GFPGAN), this research aims to restore damaged face photographs. Using Peak Signal-to-Noise Ratio (PSNR) as the metric, we compare the quality of the recovered pictures against those produced by the Photo Upsampling via Latent Space Exploration (PULSE) approach. Two groups, each containing 232 samples, were used, for a total of 464 samples in this research: one group used the novel GFPGAN, while the other used the PULSE technique. The research procedure involves importing pre-trained models and implementing and executing the novel GFPGAN code in Google Colab. The sample size was determined using the F-score from prior research and an online statistical tool (clincalc.com), with a pretest power of 80% and an alpha of 0.05. According to the findings, the novel GFPGAN achieved a highest PSNR value of 0.32, while the PULSE PSNR value was 0.25, with a significance level of 0.001 (p < 0.05). Based on these PSNR values, the novel GFPGAN considerably outperforms the PULSE technique on the given dataset.
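As a rough illustration of the evaluation described above, the sketch below computes PSNR between a reference and a restored image and then compares two groups of PSNR scores with an independent-samples t-test at alpha = 0.05. The group means, score arrays, and helper names are hypothetical placeholders for illustration only, not the authors' actual pipeline or data.

```python
# Minimal sketch of the evaluation: PSNR per image pair, then an
# independent-samples t-test between the GFPGAN and PULSE groups.
# All values below are illustrative placeholders, not measured results.
import numpy as np
from scipy import stats


def psnr(original: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)


# Hypothetical PSNR scores for the two groups (232 samples each in the paper).
rng = np.random.default_rng(0)
gfpgan_scores = rng.normal(loc=0.32, scale=0.05, size=232)  # placeholder values
pulse_scores = rng.normal(loc=0.25, scale=0.05, size=232)   # placeholder values

# Independent-samples t-test at alpha = 0.05.
t_stat, p_value = stats.ttest_ind(gfpgan_scores, pulse_scores, equal_var=False)
print(f"mean GFPGAN PSNR = {gfpgan_scores.mean():.3f}")
print(f"mean PULSE  PSNR = {pulse_scores.mean():.3f}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```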
Date of Conference: 18-19 April 2024
Date Added to IEEE Xplore: 23 May 2024
Conference Location: Chennai, India

I. Introduction

Despite the tremendous advancements in mobile and camera technology over the last several years, there are still problems when photographing people at both formal and casual occasions. These images are essential for future use, yet camera shake can leave them blurry [1]. With the increasing need for high-quality visual material across several industries, enhancing facial detail in degraded photographs has become a top priority in the field of image restoration. Everyone now expects digital media, from social networks to business applications, to include aesthetically pleasing and finely detailed depictions of faces. Consequently, academics and practitioners are focusing on ways to improve visual communication by tackling image degradation, which may be caused by compression artifacts, noise interference, or low-resolution capture. When shooting in low light, or under very high exposure from the sun, the camera's sensor may overexpose the scene or produce noisy photographs. Image restoration technology [2] can fix these issues.

Portraits of people, and more specifically their faces, are the main subject of this study. For blind face restoration, a Generative Adversarial Network (GAN) is used that exploits the diverse and rich priors of a pretrained network. This work uses the Generative Facial Prior (GFP) approach for precise face reconstruction [3]. The main objective of blind face restoration is to repair low-quality (LQ) facial regions affected by issues such as noise, blur, and compression artifacts. Applying it to real-world scenes is quite challenging owing to a number of factors, including intricate artifacts, numerous poses, and facial expressions [4].

The bulk of methods that aim to recover accurate facial features use priors that are specific to the face, such as facial feature maps, generated from very LQ input photos; these priors are severely lacking in restoration-related texture information. Alternatively, reference priors, high-quality (HQ) face photos, or facial dictionaries may be used to produce realistic results, but such priors are severely limited in diversity, since they are restricted to the facial features contained in the dictionary. GANs, by contrast, can generate HQ faces with a great deal more information about texture, colour, lighting, sharpness, and so on [3]. Implementing such priors in the restoration process, however, is tricky. Previous techniques that used GAN inversion produce realistic results, but they typically yield images with low fidelity.
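For context, the sketch below shows how a degraded face photo might be restored with the publicly released GFPGAN package (github.com/TencentARC/GFPGAN), whose pretrained generative facial prior underlies the approach described above. This is only a minimal sketch: the exact constructor arguments can differ between package versions, and the model and image paths are placeholders, not the authors' setup.

```python
# Sketch of blind face restoration with the public GFPGAN package.
# Paths are placeholders; constructor arguments may vary by version.
import cv2
from gfpgan import GFPGANer

# Load a pretrained GFPGAN restorer (weights downloaded separately).
restorer = GFPGANer(
    model_path="experiments/pretrained_models/GFPGANv1.3.pth",  # placeholder path
    upscale=2,             # upscaling factor for the output
    arch="clean",          # generator architecture variant
    channel_multiplier=2,
    bg_upsampler=None,     # no background upsampling in this sketch
)

# Read a degraded input photo and run blind face restoration.
input_img = cv2.imread("inputs/degraded_face.jpg", cv2.IMREAD_COLOR)
cropped_faces, restored_faces, restored_img = restorer.enhance(
    input_img,
    has_aligned=False,      # input is an unaligned whole photo
    only_center_face=False,
    paste_back=True,        # paste restored faces back into the image
)

cv2.imwrite("results/restored_face.jpg", restored_img)
```

In the setting described in the abstract, a step of this kind would be run in Google Colab for each degraded sample, and PSNR would then be computed between the restored output and the corresponding reference image.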
