Low-rank Bilinear Pooling for Fine-grained Classification
Last update: March 23, 2017.
To further compress the model, we propose a classifier co-decomposition that factorizes the collection of bilinear classifiers into a shared factor and compact per-class terms. This co-decomposition can be implemented as two convolutional layers and trained end-to-end. We also suggest a simple yet effective initialization that avoids first training and then explicitly factorizing the larger bilinear classifiers. Extensive experiments show that our model achieves state-of-the-art performance on several public fine-grained classification datasets while trained with only category labels. Importantly, our final model is an order of magnitude smaller than the recently proposed compact bilinear model, and three orders of magnitude smaller than the standard bilinear CNN model.
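The low-rank bilinear classifier underlying this factorization can be sketched in a few lines of NumPy. For a class with low-rank classifier W = U V^T, the score tr(U V^T X X^T) over a feature map X can be computed with two 1x1-conv-style projections followed by sum pooling, never forming the d x d covariance matrix. This is an illustrative sketch; the function name, shapes, and random data below are assumptions, not the paper's actual code.

```python
import numpy as np

def low_rank_bilinear_score(X, U, V):
    """Score tr(U V^T X X^T) without building the d x d covariance.

    X: (d, N) features from N spatial locations (input to a 1x1 conv)
    U, V: (d, r) low-rank factors of the bilinear classifier W = U V^T
    """
    P = U.T @ X  # (r, N) -- acts like a first 1x1 conv projection
    Q = V.T @ X  # (r, N) -- acts like a second 1x1 conv projection
    return float(np.sum(P * Q))  # pool the per-location bilinear responses

rng = np.random.default_rng(0)
d, N, r = 16, 10, 3
X = rng.standard_normal((d, N))
U = rng.standard_normal((d, r))
V = rng.standard_normal((d, r))

# Agrees with the explicit bilinear form on the covariance X X^T
explicit = np.trace(U @ V.T @ X @ X.T)
assert np.isclose(low_rank_bilinear_score(X, U, V), explicit)
```

The classifier co-decomposition goes one step further by routing all per-class factors through a common projection, which is what allows the whole classifier bank to run as two stacked convolutional layers.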
keywords: weakly supervised learning, fine-grained classification, bilinear model, bilinear classifier, low-rank, compact model, decomposition, tensorial data, second-order statistics, covariance matrix, pooling
Reference
-
S. Kong, C. Fowlkes, "Low-rank Bilinear Pooling for Fine-Grained Classification", CVPR, 2017.
[project page] [technical report] [abstract] [demo] [model] [poster] [slides]
Update checklist
-
- creating github page; [available]
- quick training using caffe, including matlab files for initialization; [available]
- hyperparameter study by low-rank and co-decomposition on the classifier parameters; [available]
- three methods of visualization; [available]
- fine-tuning the network using matconvnet; [TODO]
- others...