videopred
Unsupervised Learning of Spatiotemporally Coherent Metrics
Related paper
| Abstract |
|---|
| Current state-of-the-art classification and detection algorithms train deep convolutional networks using labeled data. In this work we study unsupervised feature learning with convolutional networks in the context of temporally coherent unlabeled data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity priors. We establish a connection between slow feature learning and metric learning. Using this connection we define "temporal coherence", a criterion which can be used to set hyper-parameters in a principled and automated manner. In a transfer learning experiment, we show that the resulting encoder can be used to define a more semantically coherent metric without the use of labels. |
Overview
The concepts mentioned in the paper, namely the semantic similarity of adjacent video frames, slow feature learning, and metric learning, are background topics on video that still need to be covered here.
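To make the slowness and sparsity ideas concrete, below is a minimal PyTorch sketch, my own illustration rather than the paper's implementation: an encoder maps two adjacent frames to codes, and the training loss combines reconstruction error, a slowness term that pulls the two codes together, and an L1 sparsity penalty on the codes. The network architecture, input size, and the weights `lam_slow` / `lam_sparse` are assumptions made for illustration.

```python
# Hypothetical sketch (PyTorch), not the paper's released code: the encoder is
# trained so that adjacent frames map to nearby codes (slowness) while the
# codes stay sparse (L1 penalty) and still reconstruct the input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoEncoder(nn.Module):
    def __init__(self, channels=1, code_dim=64):
        super().__init__()
        # Encoder: conv layers followed by spatial pooling; the exact
        # convolutional-pooling architecture of the paper is not reproduced here.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, code_dim, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Decoder: a simple linear map back to pixel space (placeholder).
        self.decoder = nn.Linear(code_dim, channels * 32 * 32)

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z).view(x.shape)
        return z, recon

def loss_fn(model, frame_t, frame_t1, lam_slow=1.0, lam_sparse=0.1):
    """Reconstruction + slowness + sparsity loss on a pair of adjacent frames."""
    z_t, rec_t = model(frame_t)
    z_t1, rec_t1 = model(frame_t1)
    recon = F.mse_loss(rec_t, frame_t) + F.mse_loss(rec_t1, frame_t1)
    slowness = (z_t - z_t1).pow(2).sum(dim=1).mean()   # adjacent codes stay close
    sparsity = z_t.abs().mean() + z_t1.abs().mean()    # L1 prior on the codes
    return recon + lam_slow * slowness + lam_sparse * sparsity

if __name__ == "__main__":
    model = ConvAutoEncoder()
    t, t1 = torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)
    print(loss_fn(model, t, t1).item())
```

The slowness term is what links this to metric learning: distances between encoder outputs are encouraged to be small for temporally adjacent (and presumably semantically similar) frames, so the learned code space can itself serve as a similarity metric without labels.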