“Integrating Perception, Learning, and Control for Full Autonomy”
Monday, May 13th, 2024
📍Pacifico North Area, Room: G8
@ Yokohama, Japan
Objectives
Pushing the boundaries towards robotic home assistants and flexible production requires robots to act in large, human-centered environments.
Mobile manipulation sits at the intersection of static manipulation and pure navigation. Combining the two promises to vastly broaden the range of applicable tasks, spanning hospitality, logistics, gastronomy, retail, and agriculture. This requires overcoming novel challenges in reasoning, acting, and perception:
- Control of the whole body, leading to a combinatorially larger planning and action space.
- Performing tasks in large, diverse, human-centered environments containing a multitude of known and unknown objects.
The objective of the workshop is to facilitate discussion on how to build truly closed-loop systems. This raises questions such as:
- How can we combine the advantages of learning-based methods with the benefits of planning and control approaches?
- How can we integrate high-level reasoning with low-level motion execution?
- What are scalable and flexible scene representations?
- How can recent advancements in foundation models complement well-proven methods?
This workshop will give researchers in all related domains a platform to exchange and discuss new trends and ideas.
Scope
- Mobile Manipulation
- Embodied AI
- Hierarchical abstractions
- Long-horizon planning & reasoning
- Scalable scene representations
- Hybrid learning and control methods
- Whole-body motion generation
- Articulated object perception and interaction
- Active perception, unexplored and partially observable environments
- Human-robot interaction
- Safe mobile manipulation in human-centered environments
Speakers
Organizers
- Snehal Jauhri (TU Darmstadt)
- Georgia Chalvatzaki (TU Darmstadt)
- Yuqian Jiang (UT Austin)
- Weiwei Wan (Osaka University)
- Kensuke Harada (Osaka University)
- Nick Heppert (University of Freiburg)
- Daniel Honerkamp (University of Freiburg)
- Roberto Martin-Martin (UT Austin)
- Tim Welschehold (University of Freiburg)
- Abhinav Valada (University of Freiburg)
Call for Papers
We encourage participants to submit their research related to the topics of interest of this workshop as a single PDF. Submissions may be up to 4 pages long, including figures but excluding references and supplementary material. Please use the IEEE conference LaTeX template linked below.
All types of contributions, whether already published or works in progress, are welcome.
Accepted papers will be presented in a poster session, and selected papers as spotlight talks. All submitted contributions will go through a single-blind review process. The contributed papers will be made available on this website; however, this does not constitute an archival publication, and no formal workshop proceedings will be published, so contributors remain free to submit their work to archival journals or conferences.
The best paper among the spotlight presentations will receive a Best Paper Award sponsored by TOYOTA!
The best poster presented during the poster session will receive a Best Poster Award sponsored by UBITECH!
Submission Deadline (extended): 31st March 2024, 23:59 PT
Notification of Acceptance (extended): 16th April 2024
Submission Website: https://openreview.net/group?id=IEEE.org/2024/ICRA/Workshop/MoMa
LaTeX Template: http://ras.papercept.net/conferences/support/tex.php
Student travel grant sponsored by the TC on Mobile Manipulation
The MoMa TC, together with the Robot Learning TC, offers a travel grant for ICRA 2024 in conjunction with this workshop, partially covering expenses for attending the conference and the workshop. The grant is aimed at students who want to demonstrate their active interest in the mobile manipulation field, particularly those from underrepresented groups. Submit a 2-page motivation letter, an up-to-date CV, and your final paper submission as a single PDF with the filename TCMM_StudentTravelGrant_Surname.pdf to moma24@googlegroups.com with the subject line “[STG] <LastName> <OpenReview Paper ID>”.
Submission deadline for the travel grant: 23rd April 2024
Program (Monday, May 13th, 2024):
Posters
10:00-10:30 Poster Session 1:
- Scaling Robot Policy Learning via Zero-Shot Labeling with Foundation Models
- MobileAfford: Mobile Robotic Manipulation through Differentiable Affordance Learning
- What Do We Learn from a Large-Scale Study of Pre-Trained Visual Representations in Sim and Real Environments?
- Active Object Recognition with Trained Multi-view Based 3D Object Recognition Network
- Toward a Plug-and-Play Vision-Based Grasping Module for Robotics
- Evolutionary Reward Design and Optimization with Multimodal Large Language Models
- Whole-body motion planning of dual-arm mobile manipulator for compensating for door reaction force
- Robi Butler: Multimodal Remote Interaction with Household Robotic Assistants
- OK-Robot: What Really Matters in Integrating Open-Knowledge Models for Robotics
- A Task Restricted Hierarchical Control Scheme Facilitating Small Logistics
- Generating Multi-hierarchy Scene Graphs for Human-instructed Manipulation Tasks in Open-world Settings
15:30-16:00 Poster Session 2:
- MOSAIC: A Modular System for Assistive and Interactive Cooking
- OpenEQA: Embodied Question Answering in the Era of Foundation Models
- Spot-Compose: A Framework for Open-Vocabulary Object Retrieval and Drawer Manipulation in Point Clouds
- Active-Perceptive Motion Generation for Mobile Manipulation
- Fast Nonprehensile Object Transportation on a Mobile Manipulator
- KinScene: Model-Based Mobile Manipulation of Articulated Scenes
- Navigation Among Movable Obstacles with Mobile Manipulator using Learned Robot-Obstacle Interaction Model
- Boosting Robot Behavior Generation with Large Language Models and Genetic Programming
- Extremum-Seeking Active Object Recognition in Clutter Using Topological Descriptors
- TeleMoMa: A Modular and Versatile Teleoperation System for Mobile Manipulation
Sponsors
Supported by the IEEE RAS TC on Mobile Manipulation and the IEEE TC for Robot Learning