Bot Intelligence Group (BIG)
The main theme of research in the Bot Intelligence Group (BIG) is to develop robotic intelligence, ranging from low-level autonomy to high-level cognitive abilities. We aim to develop robots that can cooperate or collaborate with humans in shared environments, learning to improve themselves over time through continual online and offline training, exploration, and interaction with humans and their environments. Toward this general goal, we strive to answer research questions on how to make robots understand the various semantic contexts of physical and social environments, act in both task-effective and socially compliant manners, and communicate their internal states to other agents in intuitive ways.

Research areas: semantic robot navigation, social robot navigation, human-robot interaction and collaboration, vision-language planning, creative AI, arts and robotics, simulation-to-real adaptation, robotic intelligence, cognitive robotics
Research projects: Social Robot Navigation | Creative AI | Semantic Robot Navigation
MEMBERS & COLLABORATORS
BOTS
Meet our robots that can follow natural language commands.
RESEARCH
Social Navigation
May 27, 2022. Jean Oh will give a talk at the Social Robot Navigation: Advances and Evaluation Workshop at the IEEE International Conference on Robotics and Automation (ICRA'22) in Philadelphia, PA.
March 22-24, 2021. Jean Oh co-organized a symposium, Machine Learning for Mobile Robot Navigation in the Wild (ML4NAV), at the AAAI 2021 Spring Symposium Series.

Jay Patrikar, Brady Moon, Jean Oh, and Sebastian Scherer. Predicting Like a Pilot: Dataset and Method to Predict Socially-Aware Aircraft Trajectories in Non-Towered Terminal Airspace. In: Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2022. [arXiv:2109.15158]
C. Mavrogiannis, P. Trautman, A. Steinfeld, D. Zhao, A. Wang, F. Baldini, and J. Oh. Core Challenges of Social Navigation: A Survey, 2021. [arXiv:2103.05668]
D. Zhao and J. Oh. Noticing Motion Patterns: Temporal CNN with a Novel Convolution Operator for Human Trajectory Prediction. In: IEEE Robotics and Automation Letters (RA-L), Special Issue on Long-Term Human Motion Prediction, 2020. [arXiv]
T.-E. Tsai and J. Oh. A Generative Approach for Socially Compliant Navigation. In: Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2020. [arXiv]
X. Yao, J. Zhang, and J. Oh. Autonomous Human-Aware Navigation in Dense Crowds. In: Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), late-breaking results, 2019.
X. Yao, J. Zhang, and J. Oh. Following Social Groups: Socially-Compliant Autonomous Navigation in Dense Crowds. In: Cognitive Vehicles Workshop at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
A. Vemula, K. Muelling, and J. Oh. Social Attention: Modeling Attention in Human Crowds. In: Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2018. *Best Paper Award in Cognitive Robotics* [arXiv]
A. Vemula, K. Muelling, and J. Oh. Modeling Cooperative Navigation in Dense Human Crowds. In: Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2017. [pdf]
A. Vemula, K. Muelling, and J. Oh. Path Planning in Dynamic Environments with Adaptive Dimensionality. In: Proc. of International Symposium on Combinatorial Search (SoCS), 2016. [pdf]
Creating Digital Twins
H. Yu and J. Oh. Anytime 3D Object Reconstruction Using Multi-Modal Variational Autoencoder. In: IEEE Robotics and Automation Letters (RA-L), vol. 7, no. 2, pp. 2162-2169, April 2022. doi:10.1109/LRA.2022.3142439. (to be presented at ICRA'22)
Chao Cao, Hongbiao Zhu, Fan Yang, Yukun Xia, Howie Choset, Jean Oh, and Ji Zhang. Autonomous Exploration Development Environment and the Planning Algorithms. In: Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2022. [arXiv]
H. Yu and J. Oh. Anchor Distance for 3D Multi-Object Distance Estimation from 2D Single Shot. In: IEEE Robotics and Automation Letters (RA-L), 2021 (to be presented at ICRA'21). [arXiv]
H. Yu and J. Oh. A Missing Data Imputation Method for 3D Object Reconstruction Using Multi-Modal Variational Autoencoder, 2021. [arXiv]
Creative AI, Arts & Robots
We have been co-organizing a series of workshops on the topic of Creative AI, held at multidisciplinary venues spanning arts, AI, machine learning, graphics, and robotics.

June 2021. Peter Schaldenbrand and Jean Oh co-organized the second Creative AI workshop, Computational Measurements of Machine Creativity (CMMC): Bridging the Gap between Subjective and Computational Measurements of Machine Creativity, at the Conference on Computer Vision and Pattern Recognition (CVPR 2021).
October 2020. Jean Oh co-organized the first Creative AI workshop, Measuring Computational Creativity: Collaboratively Designing Metrics to Evaluate Creative Machines, at the Inter-Society for Electronic Arts (ISEA 2020).
P. Schaldenbrand, Z. Liu, and J. Oh. StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation. In: Proc. of the 31st International Joint Conference on Artificial Intelligence (IJCAI'22) (to appear). [Code repo] [Demo]
P. Schaldenbrand and J. Oh. Content Masked Loss: Human-Like Brush Stroke Planning in a Reinforcement Learning Painting Agent. In: Proc. of AAAI Conference on Artificial Intelligence (AAAI), 2021. [arXiv] [Code repo]
Peter Schaldenbrand, Zhixuan Liu, Jia Chen Xu, Heera Sekhr, Jesse Ding, James McCann, and Jean Oh. Frida: A Narrative Robot Artist with Versatile Styles. Live demo at the IJCAI-ECAI 2022 Robot Exhibition, July 2022.
A. Bidgoli, M. L. De Guevara, C. Hsiung, J. Oh, and E. Kang. Artistic Style in Robotic Painting: A Machine Learning Approach to Learning Brushstroke from Human Artists. In: Proc. of the 29th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 2020. [arXiv]
Semantic Autonomous Navigation in Unknown Environments
Jonathan Francis, Nariaki Kitamura, Felix Labelle, Xiaopeng Lu, Ingrid Navarro, and Jean Oh. Core Challenges in Embodied Vision-Language Planning, 2021. [arXiv:2106.13948]
F. Labelle, X. Lu, N. Kitamura, and J. Oh. Modular Pretraining for Vision Language Navigation. In: Embodied Vision, Actions & Language (EVAL) Workshop at European Conference on Computer Vision (ECCV), 2020.
J. Tian and J. Oh. Image Captioning with Compositional Neural Module Networks. In: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2019. [pdf]
T.-H. Lin, T. Bui, D. S. Kim, and J. Oh. A Multimodal Dialogue System for Conversational Image Editing. In: The Second Workshop on Conversational AI at the Thirty-Second Conference on Neural Information Processing Systems (NeurIPS), 2018.
S.-R. Shiang, A. Gershman, and J. Oh. A Generalized Model for Multimodal Perception. In: AAAI Fall Symposium, November 2017. [pdf]
J. Hu, D. Fan, S. Yao, and J. Oh. Answer-Aware Attention on Grounded Question Answering in Images. In: AAAI Fall Symposium, November 2017. [pdf]
S. Shiang, S. Rosenthal, A. Gershman, J. Carbonell, and J. Oh. Vision-Language Fusion for Object Recognition. In: Proc. of AAAI Conference on Artificial Intelligence (AAAI), 2017. [pdf]
J. Hu, J. Oh, and A. Gershman. Learning Lexical Entries for Robotic Commands Using Crowdsourcing. In: Proc. of AAAI Conference on Human Computation (HCOMP), 2016 (short paper). [pdf]
J. Oh, M. Zhu, S. Park, T. M. Howard, M. R. Walter, D. Barber, O. Romero, A. Suppe, L. Navarro-Serment, F. Duvallet, A. Boularias, J. Vinokurov, T. Keegan, R. Dean, C. Lennon, B. Bodt, M. Childers, J. Shi, K. Daniilidis, N. Roy, C. Lebiere, M. Hebert, and A. Stentz. Integrated Intelligence for Human-Robot Teams. In: Proc. of International Symposium on Experimental Robotics (ISER), 2016. [pdf]
A. Boularias, F. Duvallet, J. Oh, and A. Stentz. Learning Qualitative Spatial Relations for Robotic Navigation. In: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2016. [pdf]
J. Oh, L. Navarro-Serment, A. Suppe, A. Stentz, and M. Hebert. Inferring Door Locations from a Teammate's Trajectory in Stealth Human-Robot Team Operations. In: Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015. [pdf]
J. Oh, A. Suppe, F. Duvallet, A. Boularias, J. Vinokurov, L. Navarro-Serment, O. Romero, R. Dean, C. Lebiere, M. Hebert, and A. Stentz. Toward Mobile Robots Reasoning Like Humans. In: Proc. of AAAI Conference on Artificial Intelligence (AAAI), 2015. [pdf]
A. Boularias, F. Duvallet, J. Oh, and A. Stentz. Learning to Ground Spatial Relations for Outdoor Robot Navigation. In: Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2015. *Best Cognitive Robotics Paper Award* [pdf]
Team pictures
ALUMNI
Acknowledgements: Our research is funded by the U.S. Army Research Laboratory, the Air Force Office of Scientific Research, the Defense Advanced Research Projects Agency, DiDi Chuxing, the U.S. Army AI Hub, the U.S. Army Ground Vehicle Systems Center (GVSC), and the Software Engineering Institute (SEI).