Chung-Chi Tsai - IEEE Xplore Author Profile


Image deblurring aims to remove undesired blurs from an image captured in a dynamic scene. Much research has been dedicated to improving deblurring performance through model architectural designs. However, there is little work on data augmentation for image deblurring. Since continuous motion causes blurred artifacts during image exposure, we aspire to develop a groundbreaking blur augmentation me...
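The abstract is truncated before the augmentation scheme itself, so the sketch below only illustrates how motion blur is commonly synthesized for augmentation in general: convolving a sharp image with a linear motion kernel whose length and angle are sampled at random. The kernel parameters and helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def linear_motion_kernel(length=15, angle_deg=30.0):
    """Build a normalized linear motion-blur kernel with the given length and angle."""
    kernel = np.zeros((length, length), dtype=np.float64)
    center = (length - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    # Rasterize a line through the kernel center by sampling points along the angle.
    for t in np.linspace(-center, center, num=4 * length):
        x = int(round(center + t * np.cos(theta)))
        y = int(round(center + t * np.sin(theta)))
        if 0 <= x < length and 0 <= y < length:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def augment_with_motion_blur(image, length=15, angle_deg=30.0):
    """Convolve each channel of a float image in [0, 1] with the motion kernel."""
    kernel = linear_motion_kernel(length, angle_deg)
    if image.ndim == 2:
        return convolve(image, kernel, mode="reflect")
    return np.stack(
        [convolve(image[..., c], kernel, mode="reflect") for c in range(image.shape[-1])],
        axis=-1,
    )

# Usage: blur a random "sharp" image with a randomly sampled blur direction.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64, 3))
blurred = augment_with_motion_blur(sharp, length=11, angle_deg=rng.uniform(0, 180))
```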
Image motion blur results from a combination of object motions and camera shakes, and such a blurring effect is generally directional and non-uniform. Previous research attempted to solve non-uniform blurs using self-recurrent multi-scale, multi-patch, or multi-temporal architectures with self-attention to obtain decent results. However, using self-recurrent frameworks typically leads to a longer in...
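As a rough illustration of the self-recurrent multi-scale pattern mentioned above (and why it tends to lengthen inference), the sketch below applies one shared, deliberately tiny placeholder network coarse-to-fine, feeding each scale's upsampled output into the next pass. It is not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDeblurNet(nn.Module):
    """Placeholder CNN shared across scales (stands in for a real deblurring backbone)."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels * 2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, blurry, prev_estimate):
        # Predict a residual on top of the previous (upsampled) estimate.
        return prev_estimate + self.body(torch.cat([blurry, prev_estimate], dim=1))

def multi_scale_deblur(net, blurry, num_scales=3):
    """Apply the shared network coarse-to-fine, reusing each scale's output."""
    h, w = blurry.shape[-2:]
    estimate = None
    for s in reversed(range(num_scales)):          # coarsest scale first
        size = (max(1, h // (2 ** s)), max(1, w // (2 ** s)))
        blurry_s = F.interpolate(blurry, size=size, mode="bilinear", align_corners=False)
        estimate = blurry_s if estimate is None else F.interpolate(
            estimate, size=size, mode="bilinear", align_corners=False)
        estimate = net(blurry_s, estimate)
    return estimate

net = TinyDeblurNet()
out = multi_scale_deblur(net, torch.rand(1, 3, 64, 64))
```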
Group re-identification (G-ReID) is an important yet less-studied task. Its challenges lie not only in the appearance changes of individuals, but also in group layout and membership changes. To address these issues, the key task of G-ReID is to learn group representations robust to such changes. Nevertheless, unlike ReID tasks, there is still a lack of comprehensive, publicly available G-ReID datasets, m...
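The truncation hides the proposed method, so purely for illustration, one generic way to obtain a group representation that tolerates layout and membership changes is an order-invariant pooling over per-member ReID features; the pooling choice and dimensions below are assumptions, not the paper's approach.

```python
import torch

def group_representation(member_features: torch.Tensor) -> torch.Tensor:
    """Pool per-member ReID features of shape (N, D) into one permutation-invariant vector.

    Mean + max pooling is order-invariant, so shuffling members or changing the
    group layout leaves the descriptor unchanged, while membership changes only
    shift its statistics rather than breaking the representation entirely.
    """
    mean_feat = member_features.mean(dim=0)
    max_feat = member_features.max(dim=0).values
    rep = torch.cat([mean_feat, max_feat], dim=0)
    return rep / rep.norm().clamp_min(1e-12)

# Usage: a group of 4 people, each with a 256-D appearance feature.
group = torch.randn(4, 256)
print(group_representation(group).shape)   # torch.Size([512])
```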
Due to the problems of power-hungry displays and limited battery life in electronic devices, the concept of “green computing,” which entails a reduction in power consumption, has been proposed. One commonly seen green-computing technique is power-constrained contrast enhancement (PCCE), yet it is much more challenging because of the noticeable local intensity suppression in images. This paper aims at developing...
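For intuition only, the toy sketch below assumes an emissive display whose power scales with mean pixel intensity and searches a family of gamma curves for the strongest contrast boost that stays within a power budget. The gamma search is a stand-in for a real PCCE algorithm, not the method developed in this paper.

```python
import numpy as np

def enhance_with_power_budget(image, power_ratio=0.8, gammas=np.linspace(0.5, 3.0, 50)):
    """Toy power-constrained contrast enhancement.

    Assumes display power is roughly proportional to the mean pixel intensity.
    Picks, among simple gamma curves, the one with the largest standard deviation
    (a crude contrast proxy) whose mean intensity stays within the power budget.
    """
    x = np.clip(image.astype(np.float64), 0.0, 1.0)
    budget = power_ratio * x.mean()
    best, best_contrast = x, x.std()
    for g in gammas:
        candidate = x ** g
        if candidate.mean() <= budget and candidate.std() > best_contrast:
            best, best_contrast = candidate, candidate.std()
    return best

rng = np.random.default_rng(0)
img = rng.random((128, 128))
out = enhance_with_power_budget(img, power_ratio=0.85)
print(img.mean(), out.mean(), img.std(), out.std())
```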
High dynamic range imaging requires fusing a set of low dynamic range (LDR) images at different exposure levels. Existing works combine the LDRs either by assigning each LDR a weighting map based on texture metrics at the pixel level or by transferring the images into a semantic space at the feature level, while neglecting the fact that both texture calibration and semantic consistency are required. In ...
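The pixel-level, texture-metric style of fusion that the abstract contrasts against can be sketched as classical weight-map exposure fusion: each LDR receives a weight from local contrast and well-exposedness, and the stack is averaged with normalized weights. The metrics and parameters below are illustrative assumptions, not the authors' semantically consistent method.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def fuse_ldr_stack(ldrs, sigma=0.2, eps=1e-12):
    """Fuse an LDR exposure stack with per-pixel weight maps.

    Each grayscale LDR (H, W) in [0, 1] gets a weight map combining local
    contrast (magnitude of the Laplacian) and well-exposedness (closeness to
    mid-gray); weights are normalized across exposures and used for a weighted
    average of the stack.
    """
    ldrs = [np.clip(np.asarray(l, dtype=np.float64), 0.0, 1.0) for l in ldrs]
    weights = []
    for l in ldrs:
        contrast = np.abs(laplace(gaussian_filter(l, 1.0)))
        well_exposed = np.exp(-((l - 0.5) ** 2) / (2 * sigma ** 2))
        weights.append(contrast * well_exposed + eps)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * np.stack(ldrs)).sum(axis=0)

# Usage: three synthetic exposures of the same scene.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
stack = [np.clip(scene * gain, 0, 1) for gain in (0.4, 1.0, 2.5)]
fused = fuse_ldr_stack(stack)
```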
Learning interpretable data representation has been an active research topic in deep learning and computer vision. While representation disentanglement is an effective technique for addressing this task, existing works cannot easily handle the problems in which manipulating and recognizing data across multiple domains are desirable. In this paper, we present a unified network architecture of Multi...
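As a minimal illustration of representation disentanglement (not the unified multi-domain architecture the paper presents), the toy encoder below splits an image embedding into a content code, meant to be shared across domains, and a separate domain code; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Toy encoder that splits an image embedding into content and domain codes."""
    def __init__(self, content_dim=64, domain_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_content = nn.Linear(64, content_dim)   # intended to be domain-invariant
        self.to_domain = nn.Linear(64, domain_dim)     # captures domain-specific factors

    def forward(self, x):
        h = self.backbone(x)
        return self.to_content(h), self.to_domain(h)

enc = DisentangledEncoder()
content, domain = enc(torch.rand(2, 3, 32, 32))
print(content.shape, domain.shape)   # torch.Size([2, 64]) torch.Size([2, 8])
```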
Image co-saliency detection via fusion-based or learning-based methods faces cross-cutting issues. Fusion-based methods often combine saliency proposals using a majority voting rule. Their performance hence highly depends on the quality and coherence of individual proposals. Learning-based methods typically require ground-truth annotations for training, which are not available for co-saliency dete...
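The majority-voting fusion rule mentioned above can be sketched in a few lines: binarize each saliency proposal and keep the pixels on which more than half of the proposals agree. This illustrates the baseline whose quality dependence the abstract criticizes, not the proposed method.

```python
import numpy as np

def majority_vote_fusion(proposals, threshold=0.5):
    """Fuse saliency proposals (a list of H x W maps in [0, 1]) by majority voting.

    Each proposal is binarized at `threshold`; a pixel is kept as salient when
    more than half of the proposals agree on it.
    """
    votes = np.stack([np.asarray(p) >= threshold for p in proposals]).astype(np.float64)
    agreement = votes.mean(axis=0)          # fraction of proposals marking each pixel
    return (agreement > 0.5).astype(np.float64), agreement

rng = np.random.default_rng(0)
maps = [rng.random((32, 32)) for _ in range(5)]
fused_mask, agreement = majority_vote_fusion(maps)
```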
This paper reviews the 3rd NTIRE challenge on single-image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. The challenge had one track, aimed at the real-world single-image super-resolution problem with an unknown scaling factor. Participants were mapping low-resolution images captured by a DSLR camera with a shorter...
We present a novel computational model for simultaneous image co-saliency detection and co-segmentation that concurrently explores the concepts of saliency and objectness in multiple images. It has been shown that co-saliency detection that aggregates multiple saliency proposals from diverse visual cues can better highlight the salient objects; however, the optimal proposals are typically region...
We address two issues hindering existing image co-saliency detection methods. First, it has been shown that object boundaries can help improve saliency detection, but segmentation may suffer from significant intra-object variations. Second, aggregating the strength of different saliency proposals via fusion helps saliency detection cover entire object areas; however, the optimal saliency propos...
Co-saliency detection aims at discovering the common and salient objects in multiple images. It explores not only intra-image but also inter-image visual cues, and hence compensates for the shortcomings of single-image saliency detection. The performance of co-saliency detection substantially relies on the explored visual cues. However, the optimal cues typically vary from region to region. To address t...