Hello, I am a Ph.D. student advised by Prof. Iman Soltani at LARA, University of California, Davis.
As a researcher, I am passionate about bridging the gap between robotics and human behavior. Specifically, I am interested in (i) leveraging insights from human behavior for robust robot learning and (ii) developing human-like active perception strategies for robots.
Most recently, I have been exploring how Large Vision-Language Models (LVLMs) can be used to guide robots in learning human-like behaviors.
I am actively looking for internship opportunities in 2025! If you are interested in my research, please feel free to reach out.
Ian Chuang*, Andrew Lee*, Dechen Gao, Iman Soltani (* equal contribution)
Workshop on Whole-body Control and Bimanual Manipulation @ CoRL 2024
International Conference on Robotics and Automation (ICRA) 2025
We introduce AV-ALOHA, a new bimanual teleoperation robot system that extends the ALOHA 2 robot system with Active Vision. The system provides an immersive teleoperation experience with bimanual first-person control, enabling the operator to dynamically explore and search the scene while simultaneously interacting with the environment. Our imitation learning experiments show significant improvements over fixed cameras in tasks with limited visibility.
[project page] [arXiv] [code] [code (VR)] [video] [poster (workshop)] [BibTeX]
Andrew Lee, Ian Chuang, Ling-Yuan Chen, Iman Soltani
Conference on Robot Learning (CoRL) 2024
InterACT is an imitation learning model that captures inter-dependencies between dual-arm joint positions and visual inputs. By doing so, InterACT guides the two arms to perform bimanual tasks with precision, acting independently yet in seamless coordination.
[project page] [arXiv] [code] [poster] [BibTeX]