Everyday Robots spun out of X in November 2021. Mountain View, California.
Vision-language model lead at Everyday Robots. Introduced and deployed the first vision-language model (CLIP) in production for robot visual question answering (VQA). Scaled diffusion models to generate synthetic training data for CLIP. Landed an on-robot open-vocabulary object detector for recognizing novel objects.
Designed and built a multi-sensor (camera and lidar), open-vocabulary panoptic segmentation model.
Full-stack ML engineer with end-to-end ownership of the entire ML flywheel, from data collection to inference. Built a model automation pipeline covering data collection, training, evaluation, and on-robot deployment for all perception models.
Worked on experimental augmented reality. Created an environmental lighting system enabling more photorealistic lighting and reflections in augmented reality for the Tango SDK. Published a Google Developers Blog post with a usage tutorial. Also experimented with video stabilization. Experience in computer vision, computer graphics, and computational photography. Worked with C++, Unity, and Java. Mountain View, California.
Google — Software Engineering Intern, Chrome for Android [May 2015 – Aug 2015]
Served as an intern on tools and infrastructure for Chrome for Android. Wrote test infrastructure for sign-in authentication tests and created a parameterizable testing framework. All my code is open source as part of Chromium! Worked with Java, Python, and C++. Mountain View, California.
Accordion Health — Software Engineer [Aug 2014 – Jan 2015]
Applied machine learning to health care data analytics. Clustered comorbidities across several patient cohorts. Experience in data visualization. Worked with R, Python, and D3.js. Austin, Texas.
Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia
Alexander Herzog, Kanishka Rao, Karol Hausman, Yao Lu, Paul Wohlhart, Mengyuan Yan, Jessica Lin, Montserrat Gonzalez Arenas, Ted Xiao, Daniel Kappler, Daniel Ho, Jarek Rettinghouse, Yevgen Chebotar, Kuang-Huei Lee, Keerthana Gopalakrishnan, Ryan Julian, Adrian Li, Chuyuan Kelly Fu, Bob Wei, Sangeetha Ramesh, Khem Holden, Kim Kleiven, David Rendleman, Sean Kirmani, Jeff Bingham, Jon Weisz, Ying Xu, Wenlong Lu, Matthew Bennice, Cody Fong, David Do, Jessica Lam, Yunfei Bai, Benjie Holson, Michael Quinlan, Noah Brown, Mrinal Kalakrishnan, Julian Ibarz, Peter Pastor, Sergey Levine
In the proceedings of Robotics: Science and Systems (RSS), 2023.
Research in human-robot interaction in the Personal Autonomous Robotics Lab (PeARL) and Socially Intelligent Machines (SiM) Lab. Experience in behavior architectures, perception, manipulation, and machine learning. Austin, Texas.