Upcoming Talks

Monday, April 02, 2018
Elena Sizikova

Monday, April 23, 2018
Kyle Genova

Monday, April 30, 2018
Linguang Zhang / Yifei Shi

Previous Talks

Monday, February 05, 2018
Uncovering Perceptual Priors Using Automated Serial Reproduction Chains
Thomas Langlois

Human memory can be understood in terms of an inferential Bayesian process, where memories are the product of noisy sensory information combined with knowledge drawn from prior experience. Expectations from prior knowledge can introduce biases in the encoding of sensory information into internal representations. In this work, we used automated serial reproduction chains to simulate the transmission of information from one observer to the next, arranging Amazon Mechanical Turk participants into a large number of carefully curated transmission chains as they completed a spatial memory task. While confirming some previous findings, we demonstrate that our approach paints a much more nuanced picture of spatial memory biases, revealing that spatial memory priors are often far more intricate than previously thought.
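The core idea of a serial reproduction chain can be illustrated with a toy simulation. Under a standard conjugate-Gaussian model (this sketch is illustrative only and not the authors' actual experimental pipeline), each observer reports the posterior mean of a noisy observation of the previous observer's report, and the chain drifts from the initial stimulus toward the shared prior:

```python
import random

def reproduce(stimulus, prior_mean, prior_var, noise_var, rng):
    """One observer in the chain: see a noisy version of the stimulus,
    then report the Bayesian posterior-mean estimate under a Gaussian prior."""
    obs = stimulus + rng.gauss(0.0, noise_var ** 0.5)
    w = prior_var / (prior_var + noise_var)  # weight on the noisy observation
    return w * obs + (1.0 - w) * prior_mean

def run_chain(start, length, prior_mean=0.0, prior_var=1.0,
              noise_var=0.5, seed=0):
    """Pass a stimulus down a chain of observers; each observer's report
    becomes the next observer's stimulus. Returns the full trace."""
    rng = random.Random(seed)
    x = start
    trace = [x]
    for _ in range(length):
        x = reproduce(x, prior_mean, prior_var, noise_var, rng)
        trace.append(x)
    return trace

# A stimulus far from the prior mean (0.0) is pulled toward it link by link.
trace = run_chain(start=10.0, length=50)
```

In this idealized setting the chain's reports converge to samples from the prior, which is what lets serial reproduction be used to "read out" perceptual priors from human data.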

Monday, February 19, 2018
ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters
Xinyi Fan

Characters in traditional artwork such as children’s books or cartoon animations are typically drawn once, in fixed poses, with little opportunity to change the characters’ appearance or re-use them in a different animation. To enable these applications one can fit a consistent parametric deformable model—a puppet—to different images of a character, thus establishing consistent segmentation, dense semantic correspondence, and deformation parameters across poses. In this work we argue that a layered deformable puppet is a natural representation for hand-drawn characters, providing an effective way to deal with the articulation, expressive deformation, and occlusion that are common to this style of artwork. Our main contribution is an automatic pipeline for fitting these models to unlabeled images depicting the same character in various poses. We demonstrate that the output of our pipeline can be used directly for editing and re-targeting animations.
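The "fit a parametric model per layer" idea can be sketched in miniature. The toy below (an illustrative sketch, not the ToonCap pipeline, which fits richer deformations to images rather than keypoints) represents a puppet as named layers of 2-D points and recovers each layer's rigid pose with a closed-form Procrustes solve; the layer names and point sets are invented for the example:

```python
import math

def fit_rigid(src, dst):
    """Closed-form 2-D rigid fit (rotation theta + translation t)
    aligning src points to dst points: a tiny Procrustes solve."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered point sets
    a = b = 0.0
    for (x, y), (u, v) in zip(src, dst):
        xc, yc, uc, vc = x - sx, y - sy, u - dx, v - dy
        a += xc * uc + yc * vc
        b += xc * vc - yc * uc
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, (tx, ty)

def apply_rigid(theta, t, pts):
    """Apply a rotation-then-translation to a list of 2-D points."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + t[0], s * x + c * y + t[1]) for x, y in pts]

# Each layer (e.g. torso, arm) carries its own pose parameters,
# so articulated parts can move independently.
puppet = {"torso": [(0, 0), (1, 0), (1, 2), (0, 2)],
          "arm":   [(1, 1.5), (2, 1.5)]}
target = {"torso": [(1, 0), (2, 0), (2, 2), (1, 2)],            # shifted by (1, 0)
          "arm":   apply_rigid(math.pi / 4, (1, 0), [(1, 1.5), (2, 1.5)])}
poses = {name: fit_rigid(pts, target[name]) for name, pts in puppet.items()}
```

Fitting each layer independently is what makes the layered representation handle articulation naturally; the real system additionally models expressive (non-rigid) deformation and inter-layer occlusion.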

Monday, March 12, 2018
Nora Willett