Abstract:
In recent years, hashing-based cross-modal retrieval methods have attracted considerable attention due to their high retrieval efficiency and low storage cost. However, most existing methods neglect the high-order relationships among data samples. In addition, most of them can only handle two modalities, e.g., image and text, and do not address scenarios involving more than two modalities. To address these issues, in this paper we propose a novel cross-modal hashing method, named Hypergraph Based Discrete Matrix Factorization Hashing (HDMFH), for multimodal retrieval. Unlike most previous approaches, our method, built on hypergraph regularization and matrix factorization, can handle cross-modal retrieval across more than two modalities, which is known as multimodal retrieval. Extensive experiments demonstrate that HDMFH outperforms state-of-the-art cross-modal hashing methods.
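The abstract only names the ingredients (hypergraph regularization and matrix factorization) without giving the formulation. As a rough sketch of the normalized hypergraph Laplacian that such regularizers are typically built on, and not HDMFH's exact objective, the snippet below constructs the Laplacian from a vertex-hyperedge incidence matrix; the function name, NumPy setup, and toy incidence matrix are assumptions for illustration.

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian:
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}

    H : (n_vertices, n_hyperedges) binary incidence matrix.
    w : optional hyperedge weights (defaults to all ones).
    """
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    de = H.sum(axis=0)                 # hyperedge degrees
    dv = H @ w                         # weighted vertex degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(de, 1e-12))
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - Theta

# Toy usage: 5 samples, 2 hyperedges grouping {0,1,2} and {2,3,4}.
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]], dtype=float)
L = hypergraph_laplacian(H)
# A hypergraph-smoothness penalty on real-valued codes V (n x r)
# would then take the form trace(V.T @ L @ V), added to a
# matrix-factorization reconstruction loss over each modality.
```

In this kind of formulation, hyperedges group several samples at once, which is how the high-order relationships mentioned in the abstract enter the objective; the exact discrete optimization used by HDMFH is described in the paper itself.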
Published in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 04-08 May 2020
Date Added to IEEE Xplore: 09 April 2020