PIXL Lunch


The PIXL lunch meets every Monday during the semester at noon in room 402 of the Computer Science building. To get on the mailing list and receive announcements, sign up for the "pixl-talks" list at lists.cs.princeton.edu.

Upcoming Talks


No talks scheduled yet.

Previous Talks


Monday, February 04, 2019
Yuke Zhu, Stanford University

Abstract
Robots and autonomous systems play a significant role in the modern economy. Custom-built robots have remarkably improved productivity, operational safety, and product quality. These robots are usually programmed for specific tasks in well-controlled environments, but are unable to perform diverse tasks in the real world. In this talk, I will present my work on building more powerful and generalizable robot autonomy by closing the perception-action loop. I will discuss examples of my research that establish a tighter coupling between perception and action at three levels of abstraction: learning primitive motor skills from raw sensory data, transferring knowledge between sequential tasks in visual environments, and learning compositional task structures from video demonstrations.


Monday, February 11, 2019
Presenting three works: from potential bias in image-to-image translation models to semi-supervised and unsupervised techniques for landmark localization.
Sina Honari

Abstract
In the first part of the talk, I discuss how image-to-image translation models, such as CycleGAN, can inject bias by matching the data distribution of the target domain regardless of critical information in the source domain.

In the second part of the talk, I present a semi-supervised model for landmark localization. The model leverages weaker labels (e.g., head pose, hand gesture) to guide the landmark localization network. It also uses an unsupervised technique based on the constraint that any transformation applied to the image should transform the predicted landmarks equivariantly.
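The equivariance constraint can be sketched as follows. This is a minimal NumPy illustration of the general idea, not code from the talk; the function names and the squared-error form of the loss are assumptions made for the example.

```python
import numpy as np

def apply_affine(points, A, t):
    """Apply a 2-D affine transform (2x2 matrix A plus translation t) to Nx2 points."""
    return points @ A.T + t

def equivariance_loss(pred_original, pred_transformed, A, t):
    """Penalize disagreement between the landmarks predicted on a transformed image
    and the transform applied to the landmarks predicted on the original image."""
    mapped = apply_affine(pred_original, A, t)
    return float(np.mean(np.sum((mapped - pred_transformed) ** 2, axis=1)))

# Toy check: if the network were perfectly equivariant, the loss would be zero.
A = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
t = np.array([5.0, 0.0])                  # translation
pred_original = np.array([[1.0, 2.0], [3.0, 4.0]])
pred_transformed = apply_affine(pred_original, A, t)  # stand-in for the network's output
print(equivariance_loss(pred_original, pred_transformed, A, t))  # → 0.0
```

Because the loss needs no landmark annotations, only the transform that was applied, it can be computed on unlabeled images.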

In the last part of the talk, I present an unsupervised technique for estimating depth at the landmarks. This is done by transforming one face onto another, unpaired face using an affine transformation, creating a bottleneck in which only the depth is unknown and is hence recovered for free. We then leverage the estimated depth for other tasks such as face rotation and face replacement.

Bio
Sina Honari is a PhD student at Mila and the University of Montreal under the supervision of Pascal Vincent and Christopher Pal. His broad research areas are deep learning and computer vision. His past work has focused on unsupervised and semi-supervised learning techniques, generative models, and representation learning in computer vision.


Monday, February 25, 2019
Everyone

Abstract
Everyone gives a 5 minute talk about what they are doing.


Monday, March 04, 2019
From relations to distributions: combining graph reasoning with generative models.
Zhiwei Deng, Simon Fraser University

Abstract
Graphs are a general formalism for representing data and are prevalent in many applications, including knowledge graphs, natural language semantics, and visual understanding. In the first part of the talk, I'll present my work on combining graph relational reasoning with deep neural networks and applying it to computer vision tasks. In the second part, I'll discuss my work on combining relational models and reusable functions with generative models. I will show that incorporating high-level modules helps build models with stronger generalization. At the same time, bringing generative models into graphs can lead to a general learning-to-reason framework for graph neural networks.

Bio
Zhiwei Deng is a PhD student at Simon Fraser University supervised by Professor Greg Mori. He is broadly interested in computer vision and machine learning. His main research focuses are video analysis, graphical models with deep learning, and generative modeling.


Monday, March 25, 2019
Deep Metric Learning for Visual Matching and Recognition
Yueqi Duan, Tsinghua University

Abstract
Deep metric learning is an important subarea of deep learning that learns the similarity of samples in a nonlinear manner and has demonstrated strong effectiveness in various vision tasks. In this talk, I will present some of my research progress in deep metric learning from two aspects: Mahalanobis deep metric learning and Hamming deep metric learning. For the first part, I will introduce sampling methods that exploit hard negative samples to improve the discriminative power of the learned metrics. For the second part, I will present several quantization methods that enhance the reliability of the learned representation in the binary feature space. Finally, I will show the effectiveness of deep metric learning techniques in several visual matching and recognition applications.
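Hard-negative sampling in metric learning can be illustrated with a batch-hard triplet loss, a standard objective in this area. This is a minimal NumPy sketch of the general technique, not the specific sampling methods from the talk; the function names and margin value are assumptions made for the example.

```python
import numpy as np

def pairwise_sq_dists(X):
    """Squared Euclidean distances between all rows of X (n x d)."""
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

def hard_negative_triplet_loss(X, labels, margin=1.0):
    """For each anchor, take the farthest same-class sample (hardest positive)
    and the closest other-class sample (hardest negative), then apply the
    standard triplet hinge: max(0, d_ap - d_an + margin)."""
    D = pairwise_sq_dists(X)
    n = len(labels)
    losses = []
    for i in range(n):
        pos = (labels == labels[i]) & (np.arange(n) != i)
        neg = labels != labels[i]
        if not pos.any() or not neg.any():
            continue
        d_ap = D[i][pos].max()   # hardest positive distance
        d_an = D[i][neg].min()   # hardest negative distance
        losses.append(max(0.0, d_ap - d_an + margin))
    return float(np.mean(losses))

# Well-separated clusters: every negative is already far, so the loss is zero.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
print(hard_negative_triplet_loss(X, labels))  # → 0.0
```

Mining the hardest negatives in this way concentrates the gradient on the sample pairs the current metric confuses, which is why such sampling improves the discriminative power of the learned embedding.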

Bio
Yueqi Duan received the B.S. degree from the Department of Automation, Tsinghua University, China, in 2014. He is currently a Ph.D. candidate in the same department. His current research interests include unsupervised learning, metric learning, binary representation learning, and 3D vision. In these areas, he has published 12 scientific papers as first author in top journals and conferences, including TPAMI (2), TIP (1), and CVPR (6). He serves as a regular reviewer for a number of journals and conferences, e.g., TPAMI, TIP, TIFS, TCSVT, CVPR, ICCV, ICME, and ICIP. He was named an Outstanding Reviewer of ICME 2018 and received the National Scholarship of Tsinghua University in 2017 and 2018.


Monday, April 22, 2019
Felix Yu, Ethan Tseng


Monday, April 29, 2019
Amir Rosenfeld