IEEE Xplore Search Results

Showing 1-25 of 5,537 results

Unsupervised learning has grown in popularity because of the difficulty of collecting annotated data and the development of modern frameworks that allow us to learn from unlabeled data. Existing studies, however, either disregard variations at different levels of similarity or only consider negative samples from one batch. We argue that image pairs should have varying degrees of similarity, and th...
Recently, contrastive learning has emerged as a successful method for unsupervised graph representation learning, where the design of the contrastive approach plays a pivotal role in guiding encoders to learn high-performance node representations. Most existing graph contrastive learning methods adopt a single-scale contrastive approach, limiting their ability to learn knowledge at both the node a...
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphol...
Abstract—Thanks to commercially available wearable devices, unobtrusive physiological signal monitoring in real life is feasible. This allows us to collect vast amounts of unlabelled physiological data. Due to the lack of emotionally annotated datasets with physiological signals collected in the wild, machine learning models for physiology-based emotion recognition still need improvement. Theref...
We present CURLKG: CURL [1] enhanced with a knowledge-graph method. Inspired by knowledge graph embedding, environment states are represented as entities and actions as relations between knowledge entities in CURLKG. This knowledge-embedding auxiliary task gives “similarity” a more precise measure compared with previous contrastive image representation learning models. Experiments are conducted in ...
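The knowledge-graph embedding idea sketched in this abstract (states as entities, actions as relations) can be illustrated with a generic TransE-style distance score; this is a sketch of knowledge-graph embedding scoring in general, not CURLKG's actual objective, and the function name is illustrative.

```python
import math

def transe_score(head, relation, tail):
    """TransE-style plausibility score for a knowledge-graph triple:
    (head, relation, tail) is plausible when head + relation is close
    to tail, i.e. when this Euclidean distance is small."""
    return math.sqrt(sum((h + r - t) ** 2
                         for h, r, t in zip(head, relation, tail)))

# A triple whose embeddings satisfy head + relation = tail scores 0
# (maximally plausible); an implausible tail scores much higher.
plausible = transe_score([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])
implausible = transe_score([1.0, 0.0], [0.0, 1.0], [5.0, 5.0])
```

In an embedding model these vectors would be learned so that observed triples score low and corrupted triples score high.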
In this research, we propose a hierarchical three-layered perceptron model in which the middle layer contains a two-dimensional map where the topological relationships of the high-dimensional input data (the external world) are internally represented. The proposed model executes a two-phase learning algorithm in which the supervised learning of the output layer is preceded by a self-organization u...
Deep learning-based clustering methods usually regard feature extraction and feature clustering as two independent steps. In this way, the features of all images need to be extracted before feature clustering, which consumes a lot of computation. Inspired by the self-organizing map network, a self-supervised self-organizing clustering network (S³OCNet) is proposed to jointly learn fe...
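For readers unfamiliar with the self-organizing map that inspires the two results above, a minimal SOM update step might look like the following; the 1-D grid, learning rate, and neighborhood width are illustrative assumptions, not taken from either paper.

```python
import math

def som_update(weights, x, lr=0.5, sigma=1.0):
    """One self-organizing-map training step on a 1-D grid of units:
    find the best-matching unit (BMU) for input x, then pull every
    unit toward x with a strength that decays with its grid distance
    from the BMU, preserving the map's topology."""
    dists = [sum((w - xi) ** 2 for w, xi in zip(unit, x)) for unit in weights]
    bmu = dists.index(min(dists))
    for i, unit in enumerate(weights):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood kernel
        for j in range(len(unit)):
            unit[j] += lr * h * (x[j] - unit[j])
    return bmu

# Three 1-D units on a line; the input 1.0 matches the last unit,
# and its grid neighbors are dragged toward the input as well.
grid = [[0.0], [0.5], [1.0]]
bmu = som_update(grid, [1.0])
```

Repeating this step over many inputs organizes the grid so that nearby units respond to similar inputs.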
Unsupervised learning of feature representations is a challenging yet important problem for analyzing a large collection of multimedia data that do not have semantic labels. Recently proposed neural network-based unsupervised learning approaches have succeeded in obtaining features appropriate for classification of multimedia data. However, unsupervised learning of feature representations adapted ...
Self-supervised learning with a contrastive batch approach has become a powerful tool for representation learning in computer vision. The performance of downstream tasks is proportional to the quality of the visual features learned during self-supervised pre-training. Existing contrastive batch approaches heavily depend on data augmentation to learn latent information from unlabelled datasets. We a...
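A minimal sketch of the objective that contrastive batch approaches of this kind typically optimize is an InfoNCE/NT-Xent-style loss over one anchor, one augmented positive, and the other batch samples as negatives; the names and values below are illustrative, not from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nt_xent(anchor, positive, negatives, temperature=0.5):
    """NT-Xent / InfoNCE loss for a single anchor: the positive is an
    augmented view of the same image; the negatives are the other
    samples in the batch. Lower loss means the positive pair is
    already close relative to the negatives."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, neg) / temperature for neg in negatives]
    peak = max(logits)  # stabilized log-sum-exp
    log_denom = peak + math.log(sum(math.exp(l - peak) for l in logits))
    return log_denom - logits[0]  # = -log softmax(positive pair)

# The loss is small when anchor and positive nearly coincide, and
# large when the "positive" is no closer than the negatives.
anchor = [1.0, 0.0]
well_aligned = nt_xent(anchor, [1.0, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
misaligned = nt_xent(anchor, [0.0, 1.0], [[0.0, 1.0], [-1.0, 0.0]])
```

Minimizing this loss over a batch pulls augmented views together while pushing apart different images.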
Tool condition monitoring (TCM) is crucial to ensure good quality products and avoid downtime. Machine learning has proven to be vital for TCM. However, existing works are predominantly based on supervised learning, which hinders their applicability in real-world manufacturing settings, where data labeling is cumbersome and costly with in-service machines. Additionally, the existing unsupervised s...
Unsupervised visual representation learning aims to learn general features from unlabelled data. Early methods design intra-image pretext tasks as learning targets and can be achieved with low computational overhead but unsatisfactory performance. Recent methods introduce contrastive learning and achieve surprising performance, but multiple views of training data are required in one batch, resulti...
The aim of this paper is to present a novel cov-RBM model on the basis of Restricted Boltzmann Machine (RBM) and the coefficient of variation (CoV), where the learning process is instructed by the CoV eigenvalues with respect to the hidden layer. Further, a deep unsupervised representation learning architecture (DURLA) is proposed to explore the capacity for representation learning with continuous...
Unsupervised learning has become increasingly important in recent years. As one of its key components, the autoencoder (AE) aims to learn a latent feature representation of data that is more robust and discriminative. However, most AE-based methods only focus on the reconstruction within the encoder-decoder phase, which ignores the inherent relation of data, i.e., statistical and geometrical dependenc...
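The reconstruction objective that AE-based methods build on can be sketched with a tiny linear autoencoder trained by gradient descent; the data, layer sizes, and learning rate here are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 64 two-dimensional points lying exactly on a line, so a
# one-dimensional latent code is enough to reconstruct them.
X = rng.normal(size=(64, 1)) @ np.array([[2.0, 1.0]])

# Linear autoencoder: encoder W (2 -> 1), decoder V (1 -> 2), trained
# by gradient descent on the mean squared reconstruction error.
W = rng.normal(scale=0.1, size=(2, 1))
V = rng.normal(scale=0.1, size=(1, 2))

def recon_error(W, V):
    residual = X @ W @ V - X
    return float((residual ** 2).mean())

err_before = recon_error(W, V)
lr = 0.01
for _ in range(500):
    Z = X @ W                           # latent codes, shape (64, 1)
    R = Z @ V - X                       # reconstruction residual
    grad_V = Z.T @ R / len(X)           # gradient of the loss w.r.t. V
    grad_W = X.T @ (R @ V.T) / len(X)   # gradient of the loss w.r.t. W
    W = W - lr * grad_W
    V = V - lr * grad_V
err_after = recon_error(W, V)
```

After training, the 1-D code `Z` is the learned latent representation; deeper AE variants replace the linear maps with nonlinear networks.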
Deep learning methods contribute to improving the estimation accuracy in human activity recognition (HAR) using sensor data. In general, the dataset used in HAR consists of accelerometer data and activity labels. Because of the widespread use of mobile devices, large amounts of accelerometer sensor data without activity labels can be easily collected. The problem of annotation needs a large amount of...
Point cloud data have been widely explored due to their superior accuracy and robustness under various adverse situations. Meanwhile, deep neural networks (DNNs) have achieved very impressive success in various applications such as surveillance and autonomous driving. The convergence of point clouds and DNNs has led to many deep point cloud models, largely trained under the supervision of large-scale...
Graph contrastive learning has gained significant attention for its effectiveness in leveraging unlabeled data and achieving superior performance. However, prevalent graph contrastive learning methods often resort to graph augmentation, typically involving the removal of anchor graph structures. This strategy may compromise the essential graph information, constraining the adaptability of contrast...
Risk stratification (characterization) of tumors from radiology images can be more accurate and faster with computer-aided diagnosis (CAD) tools. Tumor characterization through such tools can also enable non-invasive cancer staging, prognosis, and foster personalized treatment planning as a part of precision medicine. In this paper, we propose both supervised and unsupervised machine learning stra...
Contrastive learning has gained great prominence recently, achieving excellent performance through simple augmentation invariance. However, simple contrastive pairs suffer from a lack of diversity due to the mechanical augmentation strategies. In this paper, we propose Disturbed Augmentation Invariance (DAI for short), which constructs disturbed contrastive pairs by generating appropri...
The graph auto-encoder is a framework for unsupervised learning on graph-structured data that represents graphs in a low-dimensional space. It has proven very powerful for graph analytics. In the real world, complex relationships among various entities can be represented by heterogeneous graphs, which contain more abundant semantic information than homogeneous graphs. In general, graph auto...
In this paper, we present an unsupervised contrastive representation learning method that uses contrastive views in which both spatial and temporal similarity and contrast are balanced. The balanced views are created by taking pixels from the anchor sample and from a randomly selected negative sample, balancing the ratio of pixels taken from the anchor and the negative. Then these balanced ...
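The pixel-mixing view construction described above can be sketched roughly as follows; this is a simplified illustration on flat pixel lists with an assumed `anchor_ratio` parameter, not the paper's exact recipe.

```python
import random

def balanced_view(anchor, negative, anchor_ratio=0.5, seed=0):
    """Build a 'balanced' contrastive view: keep a fixed fraction of
    pixel positions from the anchor image and fill the remaining
    positions from a negative image. Images are flat pixel lists of
    equal length; anchor_ratio controls the anchor/negative balance."""
    assert len(anchor) == len(negative)
    n = len(anchor)
    k = int(round(anchor_ratio * n))     # number of anchor pixels to keep
    rng = random.Random(seed)
    keep = set(rng.sample(range(n), k))  # positions keeping anchor pixels
    return [anchor[i] if i in keep else negative[i] for i in range(n)]

# With a 0.7 ratio, 7 of the 10 pixels come from the all-ones anchor
# and 3 from the all-zeros negative.
view = balanced_view([1] * 10, [0] * 10, anchor_ratio=0.7)
```

Varying the ratio yields views at intermediate degrees of similarity between the anchor and the negative.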
Unsupervised representation learning has achieved outstanding performances using centralized data available on the Internet. However, the increasing awareness of privacy protection limits sharing of decentralized unlabeled image data that grows explosively in multiple parties (e.g., mobile phones and cameras). As such, a natural problem is how to leverage these data to learn visual representations...
Learning representations from widely available time series is of great significance. Such representations often capture characteristics of a given phase and play an irreplaceable role in fault detection in real industrial tasks, prediction in financial models, and diagnosis on medical data, yet learning appropriate representations from unlabeled time series data is a challenging t...
Unsupervised learning of disentangled representations is a core task for discovering interpretable factors of variation in an image dataset. We propose a novel method that can learn disentangled representations with semantic explanations on natural image datasets. In our method, we guide the representation learning of a variational autoencoder (VAE) via reconstruction in a visual-semantic embeddin...
The success of machine learning algorithms generally depends on data representation. So far there has been a great deal of literature on unsupervised feature learning and joint training of deep learning. There is little specific guidance, however, on combining hand-designed features or the operations on them with features which are learned from unsupervised learning. In this paper, using MNIST (“M...
Recent works have advanced the performance of self-supervised representation learning by a large margin. The core among these methods is intra-image invariance learning. Two different transformations of one image instance are considered as a positive sample pair, where various tasks are designed to learn invariant representations by comparing the pair. Analogically, for video data, representations...