
Towards Accurate Post-Training Quantization of Vision Transformers via Error Reduction


Abstract:

Post-training quantization (PTQ) for vision transformers (ViTs) has received increasing attention from both academic and industrial communities due to its minimal data needs and high time efficiency. However, many current methods fail to account for the complex interactions between quantized weights and activations, resulting in significant quantization errors and suboptimal performance. This paper presents ERQ, an innovative two-step PTQ method specifically crafted to reduce quantization errors arising from activation and weight quantization sequentially. The first step, Activation quantization error reduction (Aqer), first applies Reparameterization Initialization to mitigate the initial quantization errors of high-variance activations. It then further reduces the errors by formulating a Ridge Regression problem, which updates the weights, still maintained at full precision, via a closed-form solution. The second step, Weight quantization error reduction (Wqer), first applies Dual Uniform Quantization to handle weights with numerous outliers, which arise from the adjustments made during Reparameterization Initialization, thereby reducing the initial weight quantization errors. It then tackles the remaining errors iteratively: in each iteration, Rounding Refinement uses an empirically derived, efficient proxy to refine the rounding directions of the quantized weights, complemented by a Ridge Regression solver that further reduces the errors. Comprehensive experimental results demonstrate ERQ’s superior performance across various ViT variants and tasks. For example, ERQ surpasses the state-of-the-art GPTQ by a notable 36.81% in accuracy for W3A4 ViT-S.
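To make the closed-form Ridge Regression step concrete, the following Python/NumPy snippet is a minimal sketch, not the authors' implementation. It assumes the Aqer-style compensation minimizes the output discrepancy of a linear layer between full-precision activations X_fp and their quantized counterparts X_q, with a ridge penalty lam (a hypothetical hyperparameter) keeping the updated weights close to the originals; the exact objective and regularization in ERQ may differ.

import numpy as np

def ridge_compensate(W, X_fp, X_q, lam=1e-2):
    # W    : (d_in, d_out) full-precision weights of a linear layer
    # X_fp : (n, d_in) full-precision calibration activations
    # X_q  : (n, d_in) the same activations after quantization
    # Solves  min_W'  ||X_q W' - X_fp W||_F^2 + lam * ||W' - W||_F^2,
    # whose closed form is  (X_q^T X_q + lam I) W' = X_q^T X_fp W + lam W.
    d_in = W.shape[0]
    A = X_q.T @ X_q + lam * np.eye(d_in)   # ridge term keeps A well conditioned
    b = X_q.T @ (X_fp @ W) + lam * W
    return np.linalg.solve(A, b)           # updated full-precision weights

# Toy usage with random data standing in for a real calibration set.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
X_fp = rng.normal(size=(256, 64))
X_q = X_fp + rng.normal(scale=0.05, size=X_fp.shape)  # stand-in for quantized activations
W_new = ridge_compensate(W, X_fp, X_q)

Because the system matrix depends only on the quantized activations, such an update can be computed once per layer from a small calibration set, which is consistent with the data- and time-efficiency goals of PTQ.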
Page(s): 2676 - 2692
Date of Publication: 13 January 2025

PubMed ID: 40031001


I. Introduction

In the realm of computer vision, vision transformers (ViTs) [1] have emerged as the new fundamental backbone models, significantly challenging convolutional neural networks (CNNs). By leveraging the multi-head self-attention (MHSA) mechanism to capture long-range relationships, ViTs exhibit strong and flexible representation capacity, resulting in impressive progress on a variety of vision tasks [2], [3], [4], [5], [6], [7]. However, ViTs’ great power comes with considerable complexity. Their intricate architecture and large number of parameters lead to high computational and memory demands. As a result, deploying ViTs in resource-constrained environments such as mobile phones poses a major challenge [8], [9], [10], [11], [12], [13], [14].
