1. INTRODUCTION
The Continuous Sign Language Recognition (CSLR) task aims to recognise a gloss1 sequence from a sign language video [1], [2], [3]. To capture both the manual and non-manual expressions of a signer, recent works fuse RGB frames with other modalities such as depth [4], infrared maps [5], and optical flow [6], or explicitly extract multi-cue features [2], [7], [8], [9] or human keypoints [10] using off-the-shelf detectors. However, relying on such extra components introduces bottlenecks in both training and inference.
In addition, most CSLR datasets provide only sentence-level gloss labels, without frame- or gloss-level annotations [2], [11], [12]. To cope with this lack of supervision, the Connectionist Temporal Classification (CTC) loss [13] has traditionally been adopted, since it considers all possible underlying alignments between the input and target sequences. However, using the CTC loss without true frame-level supervision produces temporally spiky attention, which can cause the model to fail to localise important temporal segments [14].
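Concretely, with x denoting the input video of T frames and y the target gloss sequence, the CTC objective of [13] marginalises over every frame-level alignment path π (drawn from the gloss vocabulary augmented with a blank symbol) that collapses to y under the mapping B, which removes blanks and repeated labels. The notation below is ours, but the formulation is the standard one:

\[
p(\mathbf{y} \mid \mathbf{x}) \;=\; \sum_{\pi \in \mathcal{B}^{-1}(\mathbf{y})} \prod_{t=1}^{T} p(\pi_t \mid \mathbf{x}),
\qquad
\mathcal{L}_{\mathrm{CTC}} \;=\; -\log p(\mathbf{y} \mid \mathbf{x}).
\]

Since only this sequence-level likelihood is supervised, no individual frame receives a direct label, which underlies the spiky alignment behaviour described above.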