Image/Video Reranking
Improve search quality by exploiting context cues in an unsupervised fashion




Proposed Ordinal Reranking

  • Apply learning to rank algorithms (e.g., ListNet, RankSVM) to learn the contextual patterns.
  • Learning to rank algorithms are by nature more effective for this task than previous classification- or burst-based reranking methods.
  • Thanks to ListNet's linear kernel, we can rerank a query's results in under 0.1 second, making the method efficient enough to run at query time.
  • When evaluated on TRECVID'05 data, ordinal reranking offers significant performance improvements over text-only methods (40% relative improvement for video search and 12% for concept detection).
  • Advantages: unsupervised, online (efficient), effective, non-intrusive context fusion
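To make the ListNet bullet above concrete, here is a minimal sketch of one gradient step of a linear ListNet ranker using top-one probabilities, as in [6]. The function name, learning rate, and single-query setting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def listnet_grad_step(X, y, w, lr=0.1):
    """One gradient step of a linear ListNet ranker (top-one probabilities).

    X : (N, M) feature matrix for one query's result list
    y : (N,) target relevance scores
    w : (M,) current weight vector of the linear scoring function

    Minimizes the cross-entropy between the softmax distributions induced
    by the target scores y and by the model scores X @ w.
    """
    def softmax(s):
        e = np.exp(s - s.max())   # shift for numerical stability
        return e / e.sum()

    p_target = softmax(y)
    p_model = softmax(X @ w)
    # Gradient of the listwise cross-entropy w.r.t. w
    grad = X.T @ (p_model - p_target)
    return w - lr * grad
```

Because scoring is just a dot product `X @ w`, reranking at query time is a single matrix-vector multiplication, which is what makes the sub-0.1-second query-time claim plausible.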

Related Projects

  • Ordinal reranking for consumer photos
    • We apply ordinal reranking to the emerging domain of community-contributed consumer photos.
    • The contextual cues utilized include visual content (visual words), geo-tags, and time metadata.
    • Publication
      • [mm08] ContextSeer: Context search and recommendation at query time for shared consumer photos (full paper)   pdf
    • Project page:
    • Data:
      • Flickr550 image id   link
      • Flickr550 query and ground truth   link
      • Flickr550 baseline results   link
      • Sample perl script to fetch Flickr photos   link

  • Ordinal reranking for broadcast videos
    • We propose ordinal reranking and apply it to improve the search results of TRECVID'05 and TRECVID'07.
    • The contextual cues utilized are LSCOM concept-detection scores.
    • Publication
      • [tv07] The NTU toolkit and framework for high-level feature detection at TRECVID 2007   pdf
    • Data:
      • tv05 video search baseline ("text-okapi" [2])   link
      • tv05 concept detection baseline (provided by Columbia University [10])   link

Algorithmic Description of Ordinal Reranking

The proposed ordinal reranking is among the first attempts to incorporate learning to rank algorithms into the reranking framework. Ordinal reranking is largely unsupervised, and it mines ordinal information more effectively and efficiently than existing classification-based reranking methods [1]-[3].

INPUT A list of objects D, and the corresponding relevance scores Y assigned by a baseline model (either for visual search or concept detection). We assume that feature extractions (e.g., concept detections [4]) for each visual object are computed in advance. For the N objects in D, the corresponding M-dimensional features form an N×M feature matrix X.
STEP 1. Concept selection: select informative concepts with wc-tf-idf, an improved feature selection measure based on c-tf-idf [5].
STEP 2. Employment of ranking algorithms: Randomly partition the data set into F folds. Hold out one fold as the test set and train the ranking algorithm (e.g., ListNet [6] or RankSVM [7], [8]) on the remaining data. Predict new relevance scores for the test set. Repeat until each fold has been held out once.
STEP 3. Rank aggregation: Linearly fuse the initial relevance scores and newly predicted scores.
OUTPUT Sort the fused scores to output a new ranked list for the target semantics.
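The STEP 2-3 pipeline above can be sketched as follows. This is a simplified illustration: the least-squares fit stands in for training ListNet or RankSVM, and the names (`ordinal_rerank`, `alpha`) and the fixed fusion weight are assumptions for the example, not part of the published method.

```python
import numpy as np

def ordinal_rerank(X, y_base, n_folds=5, alpha=0.5, seed=None):
    """Sketch of ordinal reranking (Steps 2-3) with a stand-in linear ranker.

    X       : (N, M) feature matrix (e.g., concept-detection scores)
    y_base  : (N,) relevance scores from the baseline model
    n_folds : F, the number of cross-prediction folds
    alpha   : linear fusion weight between baseline and predicted scores
    """
    rng = np.random.default_rng(seed)
    N = len(y_base)
    # STEP 2: randomly partition into F folds; predict each fold from the rest
    folds = np.array_split(rng.permutation(N), n_folds)
    y_pred = np.zeros(N)
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(N), test_idx)
        # Stand-in ranker: least-squares fit of baseline scores on features
        # (a real system would train ListNet [6] or RankSVM [7], [8] here)
        w, *_ = np.linalg.lstsq(X[train_idx], y_base[train_idx], rcond=None)
        y_pred[test_idx] = X[test_idx] @ w
    # STEP 3: linearly fuse initial and newly predicted scores
    fused = alpha * y_base + (1 - alpha) * y_pred
    # OUTPUT: sort fused scores to produce the new ranked list (best first)
    return np.argsort(-fused)
```

Because every object is scored by a ranker trained on the other folds, no manual relevance labels are needed, which is what keeps the procedure unsupervised.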


[1]  W. Hsu, L. Kennedy, and S.-F. Chang, "Video search reranking through random walk over document-level context graph," ACM Multimedia, 2007.
[2]  W. Hsu, L. Kennedy, and S.-F. Chang, "Video search reranking via information bottleneck principle," ACM Multimedia, 2006.
[3]  L. Kennedy and S.-F. Chang, "A reranking approach for context-based concept fusion in video indexing and retrieval," ACM CIVR, pp. 333-340, 2007.
[4]  M. Naphade et al., "Large-scale concept ontology for multimedia," IEEE Multimedia Magazine, pp. 86-91, 2006.
[5]  X. Li, D. Wang, J. Li, and B. Zhang, "Video search in concept subspace: a text like paradigm," ACM CIVR, pp. 603-610, 2007.
[6]  Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li, "Learning to rank: from pairwise approach to listwise approach," ICML, pp. 129-136, 2007.
[7]  R. Herbrich, T. Graepel, and K. Obermayer, "Support vector learning for ordinal regression," ICANN, pp. 97-102, 1999.
[8]  T. Joachims, "Making large-scale SVM learning practical," Advances in Kernel Methods - Support Vector Learning, MIT Press, 1999.
[9]  NIST TREC Video Retrieval Evaluation. [online]
[10]  S.-F. Chang et al., "Columbia University TRECVID-2005 video search and high-level feature extraction," NIST TRECVID workshop, 2005.

Any feedback or comments are welcome!

(last update: 2008/9/15)