1. Introduction
Determining user intent from a visual search query remains an open challenge, particularly in sketch-based image retrieval (SBIR) over millions of images, where a sketched shape can yield plausible yet unexpected matches. For example, a user's sketch of a dog might return a map of the United States that ostensibly resembles the shape (structure) drawn but is not relevant. Free-hand sketches are often incomplete and ambiguous descriptions of desired image content [8]. This limits the ability of sketches to communicate search intent, particularly over large image datasets.