Christian Micheloni - IEEE Xplore Author Profile


Blind image super-resolution (SR) aims to recover high-resolution (HR) images from low-resolution (LR) inputs hindered by unknown degradation. Existing blind SR methods exploit computationally demanding explicit degradation estimators hinging on the availability of ground-truth information about the degradation process, thus introducing a severe limitation in real-world scenarios where this is inh...
This paper reviews the NTIRE 2024 challenge on image super-resolution (×4), highlighting the solutions proposed and the outcomes obtained. The challenge involves generating corresponding high-resolution (HR) images, magnified by a factor of four, from low-resolution (LR) inputs using prior information. The LR images originate from bicubic downsampling degradation. The aim of the challenge is to ob...
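The bicubic downsampling degradation mentioned above is built on a standard cubic convolution kernel. A minimal sketch of that kernel, assuming the common Keys formulation with a = -0.5 (the variant used by most resize implementations; this is an illustrative standalone function, not code from the challenge):

```python
def bicubic_kernel(x, a=-0.5):
    """Keys cubic convolution weight for a sample at distance x.

    Assumes the common a = -0.5 parameterization; piecewise cubic,
    with support on |x| < 2.
    """
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0
```

The weight is 1 at offset 0 and 0 at the other integer offsets, so bicubic interpolation reproduces the input samples exactly; downsampling by ×4 amounts to evaluating this kernel on a 4× coarser grid.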
Blind image super-resolution (SR) aims to recover a high-resolution (HR) image from its low-resolution (LR) counterpart under the assumption of unknown degradations. Many existing blind SR methods rely on supervision from ground-truth kernels, referred to as explicit degradation estimators. However, it is very challenging to obtain the ground truths for different degradation kernels. Moreover, most of ...
Skiing is a popular winter sport discipline with a long history of competitive events. In this domain, computer vision has the potential to enhance the understanding of athletes’ performance, but its application lags behind other sports due to limited studies and datasets. This paper takes a step forward in filling such gaps. A thorough investigation is performed on the task of skier tracking in a...

The First Visual Object Tracking Segmentation VOTS2023 Challenge Results

Matej Kristan;Jiří Matas;Martin Danelljan;Michael Felsberg;Hyung Jin Chang;Luka Čehovin Zajc;Alan Lukežič;Ondrej Drbohlav;Zhongqun Zhang;Khanh-Tung Tran;Xuan-Son Vu;Johanna Björklund;Christoph Mayer;Yushan Zhang;Lei Ke;Jie Zhao;Gustavo Fernández;Noor Al-Shakarji;Dong An;Michael Arens;Stefan Becker;Goutam Bhat;Sebastian Bullinger;Antoni B. Chan;Shijie Chang;Hanyuan Chen;Xin Chen;Yan Chen;Zhenyu Chen;Yangming Cheng;Yutao Cui;Chunyuan Deng;Jiahua Dong;Matteo Dunnhofer;Wei Feng;Jianlong Fu;Jie Gao;Ruize Han;Zeqi Hao;Jun-Yan He;Keji He;Zhenyu He;Xiantao Hu;Kaer Huang;Yuqing Huang;Yi Jiang;Ben Kang;Jin-Peng Lan;Hyungjun Lee;Chenyang Li;Jiahao Li;Ning Li;Wangkai Li;Xiaodi Li;Xin Li;Pengyu Liu;Yue Liu;Huchuan Lu;Bin Luo;Ping Luo;Yinchao Ma;Deshui Miao;Christian Micheloni;Kannappan Palaniappan;Hancheol Park;Matthieu Paul;HouWen Peng;Zekun Qian;Gani Rahmon;Norbert Scherer-Negenborn;Pengcheng Shao;Wooksu Shin;Elham Soltani Kazemi;Tianhui Song;Rainer Stiefelhagen;Rui Sun;Chuanming Tang;Zhangyong Tang;Imad Eddine Toubal;Jack Valmadre;Joost van de Weijer;Luc Van Gool;Jash Vira;Stèphane Vujasinović;Cheng Wan;Jia Wan;Dong Wang;Fei Wang;Feifan Wang;He Wang;Limin Wang;Song Wang;Yaowei Wang;Zhepeng Wang;Gangshan Wu;Jiannan Wu;Qiangqiang Wu;Xiaojun Wu;Anqi Xiao;Jinxia Xie;Chenlong Xu;Min Xu;Tianyang Xu;Yuanyou Xu;Bin Yan;Dawei Yang;Ming-Hsuan Yang;Tianyu Yang;Yi Yang;Zongxin Yang;Xuanwu Yin;Fisher Yu;Hongyuan Yu;Qianjin Yu;Weichen Yu;YongSheng Yuan;Zehuan Yuan;Jianlin Zhang;Lu Zhang;Tianzhu Zhang;Guodongfang Zhao;Shaochuan Zhao;Yaozong Zheng;Bineng Zhong;Jiawen Zhu;Xuefeng Zhu;Yueting Zhuang;ChengAo Zong;Kunlong Zuo

2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
Year: 2023 | Conference Paper
Cited by: Papers (1)
The Visual Object Tracking Segmentation VOTS2023 challenge is the eleventh annual tracker benchmarking activity of the VOT initiative. This challenge is the first to merge short-term and long-term as well as single-target and multiple-target tracking with segmentation masks as the only target location specification. A new dataset was created; the ground truth has been withheld to prevent overfitti...


Trajectories are fundamental to winning in alpine skiing. Tools enabling the analysis of such curves can enhance the training activity and enrich broadcasting content. In this paper, we propose SkiTraVis, an algorithm to visualize the sequence of points traversed by a skier during the performance. SkiTraVis works on monocular videos and constitutes a pipeline of a visual tracker to model the skier...
How to combine the complementary capabilities of an ensemble of different algorithms has been of central interest in visual object tracking. Significant progress has been achieved on this problem, but only in short-term tracking scenarios. Long-term tracking settings, instead, have been substantially ignored by existing solutions. In this paper, we explicitly consider long-term tracking scenario...
The current deep image super-resolution methods usually assume that a Low Resolution (LR) image is a bicubically downscaled version of a High Resolution (HR) image. However, such an ideal bicubic downsampling process is different from real LR degradations, which usually come from complicated combinations of different degradation processes, such as camera blur, sensor noise, sharpening artifacts, ...
Automatic image colourisation studies how to colourise greyscale images. Existing approaches exploit convolutional layers that extract image-level features learning the colourisation on the entire image, but miss entity-level ones due to pooling strategies. We believe that entity-level features are of paramount importance to deal with the intrinsic multimodality of the problem (i.e., the same ob...
Vehicle reidentification has seen increasing interest, thanks to its fundamental impact on intelligent surveillance systems and smart transportation. The visual data acquired from monitoring camera networks come with severe challenges, including occlusions, color and illumination changes, as well as orientation issues (a vehicle can be seen from the side/front/rear due to different camera viewpoin...
Vehicle re-identification (re-id) is a challenging task due to the presence of high intra-class and low inter-class variations in the visual data acquired from monitoring camera networks. Unique and discriminative feature representations are needed to overcome the existence of several variations including color, illumination, orientation, background and occlusion. The orientations of the vehicles ...
At the state of the art, Capsule Networks (CapsNets) have been shown to be a promising alternative to Convolutional Neural Networks (CNNs) in many computer vision tasks, due to their ability to encode object viewpoint variations. Network capsules provide maps of votes that focus on entity presence in the image and their pose. Each map is the point of view of a given capsule. To compute such votes, Ca...

The Ninth Visual Object Tracking VOT2021 Challenge Results

Matej Kristan;Jiří Matas;Aleš Leonardis;Michael Felsberg;Roman Pflugfelder;Joni-Kristian Kämäräinen;Hyung Jin Chang;Martin Danelljan;Luka Čehovin Zajc;Alan Lukežič;Ondrej Drbohlav;Jani Käpylä;Gustav Häger;Song Yan;Jinyu Yang;Zhongqun Zhang;Gustavo Fernández;Mohamed Abdelpakey;Goutam Bhat;Llukman Cerkezi;Hakan Cevikalp;Shengyong Chen;Xin Chen;Miao Cheng;Ziyi Cheng;Yu-Chen Chiu;Ozgun Cirakman;Yutao Cui;Kenan Dai;Mohana Murali Dasari;Qili Deng;Xingping Dong;Daniel K. Du;Matteo Dunnhofer;Zhen-Hua Feng;Zhiyong Feng;Zhihong Fu;Shiming Ge;Rama Krishna Gorthi;Yuzhang Gu;Bilge Gunsel;Qing Guo;Filiz Gurkan;Wencheng Han;Yanyan Huang;Felix Järemo Lawin;Shang-Jhih Jhang;Rongrong Ji;Cheng Jiang;Yingjie Jiang;Felix Juefei-Xu;Yin Jun;Xiao Ke;Fahad Shahbaz Khan;Byeong Hak Kim;Josef Kittler;Xiangyuan Lan;Jun Ha Lee;Bastian Leibe;Hui Li;Jianhua Li;Xianxian Li;Yuezhou Li;Bo Liu;Chang Liu;Jingen Liu;Li Liu;Qingjie Liu;Huchuan Lu;Wei Lu;Jonathon Luiten;Jie Ma;Ziang Ma;Niki Martinel;Christoph Mayer;Alireza Memarmoghadam;Christian Micheloni;Yuzhen Niu;Danda Paudel;Houwen Peng;Shoumeng Qiu;Aravindh Rajiv;Muhammad Rana;Andreas Robinson;Hasan Saribas;Ling Shao;Mohamed Shehata;Furao Shen;Jianbing Shen;Kristian Simonato;Xiaoning Song;Zhangyong Tang;Radu Timofte;Philip Torr;Chi-Yi Tsai;Bedirhan Uzun;Luc Van Gool;Paul Voigtlaender;Dong Wang;Guangting Wang;Liangliang Wang;Lijun Wang;Limin Wang;Linyuan Wang;Yong Wang;Yunhong Wang;Chenyan Wu;Gangshan Wu;Xiao-Jun Wu;Fei Xie;Tianyang Xu;Xiang Xu;Wanli Xue;Bin Yan;Wankou Yang;Xiaoyun Yang;Yu Ye;Jun Yin;Chengwei Zhang;Chunhui Zhang;Haitao Zhang;Kaihua Zhang;Kangkai Zhang;Xiaohan Zhang;Xiaolin Zhang;Xinyu Zhang;Zhibin Zhang;Shaochuan Zhao;Ming Zhen;Bineng Zhong;Jiawen Zhu;Xue-Feng Zhu

2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
Year: 2021 | Conference Paper
Cited by: Papers (63)
The Visual Object Tracking challenge VOT2021 is the ninth annual tracker benchmarking activity organized by the VOT initiative. Results of 71 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The VOT2021 challenge was composed of four sub-challenges focusing on different tracking domains: (i) VOT-ST2021 challen...


Understanding human-object interactions is fundamental in First Person Vision (FPV). Tracking algorithms which follow the objects manipulated by the camera wearer can provide useful cues to effectively model such interactions. Visual tracking solutions available in the computer vision literature have significantly improved their performance in recent years for a large variety of target objects a...
Person re-identification (re-id) aims to retrieve images of the same identities across different camera views. Resolution mismatch occurs due to varying distances between the person of interest and the cameras; this significantly degrades the performance of re-id in real-world scenarios. Most of the existing approaches treat the re-id task as a low-resolution problem in which a low-resolution query image is s...
Recently, most state-of-the-art single image super-resolution (SISR) methods have attained impressive performance by using deep convolutional neural networks (DCNNs). The existing SR methods have limited performance due to fixed degradation settings, i.e. usually a bicubic downscaling of the low-resolution (LR) image. However, in real-world settings, the LR degradation process is unknown which ca...
This paper reviews the NTIRE 2021 challenge on burst super-resolution. Given a noisy RAW burst as input, the task in the challenge was to generate a clean RGB image with 4 times higher resolution. The challenge contained two tracks: Track 1 evaluating on synthetically generated data, and Track 2 using real-world bursts from a mobile camera. In the final testing phase, 6 teams submitted results using ...
Image colourisation is an ill-posed problem, with multiple correct solutions which depend on the context and object instances present in the input datum. Previous approaches attacked the problem either by requiring intense user-interactions or by exploiting the ability of convolutional neural networks (CNNs) in learning image-level (context) features. However, obtaining human hints is not always f...
Recent research has shown promising results for person re-identification by focusing on several trends. One is designing efficient metric learning loss functions such as the triplet loss family to learn the most discriminative representations. The other is learning local features by designing part-based architectures to form an informative descriptor from semantically coherent parts. Some efforts adju...
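The triplet-loss family mentioned above reduces to a simple hinge on pairwise distances. A minimal sketch, assuming precomputed anchor-positive and anchor-negative distances and an illustrative margin value (names and margin are not from the paper):

```python
def triplet_loss(d_ap, d_an, margin=0.3):
    """Hinge penalty: positive when the anchor-positive distance d_ap
    fails to be smaller than the anchor-negative distance d_an by at
    least `margin`; zero for well-separated triplets."""
    return max(0.0, d_ap - d_an + margin)
```

A well-separated triplet (e.g. d_ap = 0.2, d_an = 1.0) incurs zero loss, so training focuses on triplets that still violate the margin.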
A more stationary and discriminative embedding is necessary for robust classification of images. We focus our attention on the novel CapsNet model and propose the angular margin loss function in composition with the margin loss. We define a fixed classifier implemented with fixed weight vectors obtained from the vertex coordinates of a simplex polytope. The advantage of using a simplex polytope is tha...
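One standard way to obtain such fixed, equiangular weight vectors is from the vertices of a regular simplex. A minimal sketch under the assumption of a centered-basis construction (standard basis vectors shifted to zero mean, then L2-normalized; the paper's exact construction may differ):

```python
import math

def simplex_vertices(k):
    """k unit vectors in R^k forming a regular (k-1)-simplex:
    every pair has the same cosine similarity, -1/(k-1)."""
    verts = []
    for i in range(k):
        # i-th standard basis vector, centered at the mean 1/k
        v = [(1.0 if j == i else 0.0) - 1.0 / k for j in range(k)]
        norm = math.sqrt(sum(x * x for x in v))
        verts.append([x / norm for x in v])
    return verts
```

Because every pair of class vectors has cosine -1/(k-1), the classes are maximally and symmetrically separated, which is the stationarity property such a fixed classifier exploits.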
Deep convolutional neural networks (CNNs) have recently achieved great success for the single image super-resolution (SISR) task due to their powerful feature representation capabilities. The most recent deep learning based SISR methods focus on designing deeper/wider models to learn the non-linear mapping between low-resolution (LR) inputs and high-resolution (HR) outputs. These existing SR methods...
Deep regression trackers are among the fastest tracking algorithms available, and are therefore suitable for real-time robotic applications. However, their accuracy is inadequate in many domains due to distribution shift and overfitting. In this letter we overcome such limitations by presenting the first methodology for domain adaptation of this class of trackers. To reduce the labeling effort we prop...
Recent progress in the field of person re-identification has shown promising improvement by designing neural networks to learn the most discriminative feature representations. Some efforts utilize similar parts from different locations to learn better representations with the help of soft attention, while others search for part-based learning methods to enhance consecutive-region relationships in th...
We conducted real proof-of-concept demonstrations of an auto-organizing sensor network composed of UAVs and ground cameras, for urban surveillance. We adopted a decentralised paradigm with tightly coupled perception and tactical behaviour algorithms. The network would reconfigure when cameras are added or removed so that high priority tasks are always served. Tracked targets could be handed over f...
To capture robust person features, learning discriminative, style- and view-invariant descriptors is a key challenge in person Re-Identification (re-id). Most deep Re-ID models learn single-scale feature representations, which are unable to grasp compact and style-invariant representations. In this paper, we present a multi-branch Siamese Deep Neural Network with multiple classifiers to overcome the ...