Signature Matching Distance for Content-based Image Retrieval
We propose a simple yet effective approach to content-based image retrieval: the signature matching distance. While recent approaches to content-based image retrieval rely on the bag-of-visual-words model, in which image descriptors are matched through a common visual vocabulary, signature-based approaches quantify image dissimilarity by a distance between signatures, i.e., image-specific bags of locally aggregated descriptors. In this paper, we focus on the signature-based approach and propose a novel distance function, the signature matching distance, which matches coincident visual properties of images based on their signatures. By investigating different descriptor matching strategies and their suitability for matching signatures, we show that our approach outperforms other signature-based approaches to content-based image retrieval. Moreover, combined with a simple color- and texture-based image descriptor, our approach competes with the majority of bag-of-visual-words approaches.
| Authors: | Beecks C., Kirchhoff S., Seidl T. |
| Published in: | Proc. ACM International Conference on Multimedia Retrieval (ICMR 2013), Dallas, Texas, USA |
| Publisher: | ACM, New York, NY, USA |
| Research area: | Exploration of Multimedia Databases |
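The abstract describes matching signatures, i.e., image-specific sets of weighted, locally aggregated descriptors, rather than quantizing descriptors against a shared vocabulary. The exact matching strategy of the proposed signature matching distance is not specified here, so the following is only a generic sketch of one plausible strategy: a symmetrized nearest-neighbor matching between two signatures with a Euclidean ground distance. All names (`ground_dist`, `signature_matching_distance`) and the tuple-based signature representation are illustrative assumptions, not the authors' implementation.

```python
import math

def ground_dist(a, b):
    # Euclidean ground distance between two descriptor centroids.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_sided_matching(sig_x, sig_y):
    # Match each weighted centroid of sig_x to its nearest centroid
    # in sig_y and accumulate the weighted ground distances.
    # A signature is a list of (centroid, weight) pairs, where the
    # centroid is a tuple of floats aggregating local descriptors.
    return sum(
        wx * min(ground_dist(cx, cy) for cy, _ in sig_y)
        for cx, wx in sig_x
    )

def signature_matching_distance(sig_x, sig_y):
    # Symmetrize, since nearest-neighbor matching alone is asymmetric.
    return one_sided_matching(sig_x, sig_y) + one_sided_matching(sig_y, sig_x)

# Two toy signatures over a 2-D feature space (hypothetical data).
sig_a = [((0.0, 0.0), 1.0), ((1.0, 0.0), 0.5)]
sig_b = [((0.0, 0.0), 1.0)]
print(signature_matching_distance(sig_a, sig_a))  # identical signatures -> 0.0
print(signature_matching_distance(sig_a, sig_b))  # 0.5: the unmatched centroid costs 0.5 * 1.0
```

Because signatures may differ in size and element positions per image, such matching-based distances avoid the fixed vocabulary required by bag-of-visual-words histograms.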