Everyday Robots spun out of X in November 2021. Mountain View, California.
Vision-language model lead at Everyday Robots. Introduced and deployed the first vision-language model (CLIP) in production for robot visual question answering (VQA). Scaled diffusion models to generate synthetic training data for CLIP. Landed an on-robot open-vocabulary object detector for detecting novel objects.
Designed and built a multi-sensor (camera and lidar), open-vocabulary panoptic segmentation model.
Full-stack ML engineer: end-to-end ownership of the entire ML flywheel, from data collection to inference. Built a model automation pipeline for data collection, training, evaluation, and on-robot deployment of all perception models.
Early ML engineer at The Everyday Robot Project. Mountain View, California.
Early engineer on the perception team. Expert in bringing research to production in real-world systems.
Created lidar and RGB-D camera panoptic segmentation models (with their associated automation flywheels) and deployed them to the robot fleet.
Trained multimodal vision-and-action models, resulting in a publication at ICRA.
Worked on experimental augmented reality. Created an environmental lighting system enabling more photorealistic lighting and reflections in augmented reality for the Tango SDK. Published a Google Developers Blog post with a usage tutorial. Also experimented with video stabilization. Experience in computer vision, computer graphics, and computational photography. Worked with C++, Unity, and Java. Mountain View, California.
Google — Software Engineering Intern, Chrome for Android [May 2015 – Aug 2015]
Interned on tools and infrastructure for Chrome for Android. Wrote test infrastructure for sign-in authentication tests and created a parameterizable testing framework. All of my code is open source as part of Chromium. Worked with Java, Python, and C++. Mountain View, California.
Accordion Health — Software Engineer [Aug 2014 – Jan 2015]
Used machine learning for health care data analytics. Clustered co-morbidities across several patient populations. Experience in data visualization. Worked with R, Python, and D3.js. Austin, Texas.
Set up Unix servers and configured SQL databases. Developed over 20 websites over the course of the summer. Managed and maintained cloud servers. Worked with HTML, CSS, PHP, JavaScript, and jQuery. Las Vegas, Nevada.
Jiayuan Gu, Sean Kirmani, Paul Wohlhart, Yao Lu, Montserrat Gonzalez Arenas, Kanishka Rao, Wenhao Yu, Chuyuan Fu, Keerthana Gopalakrishnan, Zhuo Xu, Priya Sundaresan, Peng Xu, Hao Su, Karol Hausman, Chelsea Finn, Quan Vuong, Ted Xiao
Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, ..., Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, ..., Zhuo Xu, Zichen Jeff Cui
Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia
In Proceedings of the Conference on Robot Learning (CoRL), 2023.
Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Vuong, Paul Wohlhart, Sean Kirmani, Brianna Zitkovich, Fei Xia, Chelsea Finn, Karol Hausman
In Proceedings of the Conference on Robot Learning (CoRL), 2023.
Alexander Herzog, Kanishka Rao, Karol Hausman, Yao Lu, Paul Wohlhart, Mengyuan Yan, Jessica Lin, Montserrat Gonzalez Arenas, Ted Xiao, Daniel Kappler, Daniel Ho, Jarek Rettinghouse, Yevgen Chebotar, Kuang-Huei Lee, Keerthana Gopalakrishnan, Ryan Julian, Adrian Li, Chuyuan Kelly Fu, Bob Wei, Sangeetha Ramesh, Khem Holden, Kim Kleiven, David Rendleman, Sean Kirmani, Jeff Bingham, Jon Weisz, Ying Xu, Wenlong Lu, Matthew Bennice, Cody Fong, David Do, Jessica Lam, Yunfei Bai, Benjie Holson, Michael Quinlan, Noah Brown, Mrinal Kalakrishnan, Julian Ibarz, Peter Pastor, Sergey Levine
In Proceedings of Robotics: Science and Systems (RSS), 2023.
Research in human-robot interaction in the Personal Autonomous Robotics Lab (PeARL) and the Socially Intelligent Machines (SiM) Lab. Experience in behavior architectures, perception, manipulation, and machine learning. Austin, Texas.
Selected by Professor Joydeep Ghosh of the University of Texas Electrical and Computer Engineering department to join the Intelligent Data Exploration and Analysis Laboratory (IDEAL), which focuses on machine learning and data mining. Researched making self-driving cars a safe reality using distributed machine learning over wireless mmWave communication, in collaboration with Dr. Robert Heath. [In the news] Austin, Texas.