On the coupled use of signal and semantic concepts to bridge the semantic and user intention gaps for visual content retrieval


Publisher: Springer Journals
Copyright: © 2016 Springer-Verlag London
Subject: Computer Science; Multimedia Information Systems; Information Storage and Retrieval; Information Systems Applications (incl. Internet); Data Mining and Knowledge Discovery; Image Processing and Computer Vision; Computer Science, general
ISSN: 2192-6611
eISSN: 2192-662X
DOI: 10.1007/s13735-016-0101-z

Abstract

The effectiveness of image indexing and retrieval systems is negatively impacted by both the semantic and the user intention gaps. The first relates to the difficulty of characterizing visual semantic information through low-level extracted features (color, texture, ...), while the second highlights how hard it is for human users to convey their search intents through traditional relevance-feedback or query-by-example mechanisms. We address both issues by introducing vocabularies of visual concepts that are mapped to extracted low-level features through an automated learning paradigm. These are then instantiated within a semantic indexing and retrieval framework based on a Bayesian model of the joint distribution of visual and semantic concepts. To address the user intention gap and enrich the expressiveness of the retrieval module, visual and semantic concepts can be coupled within text-based queries. We are therefore able to process not only single-concept queries, as state-of-the-art solutions do, but also topic-based queries, i.e., non-trivial queries involving multiple characterizations of the visual content. We evaluate our proposal in a precision/recall-based evaluation framework on the IAPR TC-12 benchmark dataset.
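
To make the retrieval idea concrete, below is a minimal, hypothetical Python sketch of how a topic-based text query that mixes visual and semantic concepts might be scored against per-image concept probabilities. It uses a naive independence assumption (a product of per-concept posteriors) rather than the authors' actual Bayesian joint model; the function rank_images, the concept labels, and the posterior values are all illustrative.

    # Illustrative sketch only: rank images by a naive joint score over the
    # visual and semantic concepts named in a text query. Not the authors' model.

    def rank_images(query_concepts, concept_posteriors, prior=1e-6):
        """
        query_concepts:     concept labels (visual or semantic) parsed from a
                            text query, e.g. ["sky-blue", "mountain"].
        concept_posteriors: dict image_id -> {concept: P(concept | image)},
                            e.g. from classifiers trained on low-level features
                            (color, texture, ...).
        Returns image ids sorted by the product of per-concept probabilities;
        concepts not scored for an image fall back to a small prior.
        """
        scores = {}
        for image_id, posteriors in concept_posteriors.items():
            score = 1.0
            for concept in query_concepts:
                score *= posteriors.get(concept, prior)
            scores[image_id] = score
        return sorted(scores, key=scores.get, reverse=True)

    # Toy usage with made-up posteriors for a multi-concept (topic-based) query.
    posteriors = {
        "img_001": {"sky-blue": 0.9, "mountain": 0.7, "beach": 0.1},
        "img_002": {"sky-blue": 0.4, "mountain": 0.1, "beach": 0.8},
    }
    print(rank_images(["sky-blue", "mountain"], posteriors))
    # -> ['img_001', 'img_002']

A ranked list of this kind is also what a precision/recall-based evaluation, as mentioned in the abstract, would be computed over.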

Journal: International Journal of Multimedia Information Retrieval (Springer Journals)

Published: Jul 14, 2016
