Embodied Mobile Agents Group

“We bring intelligence into the physical world (Physical AI) by enabling robots and machines to move, act, and learn through interaction. Our research spans mobile robot learning and reinforcement learning, and sits at the intersection of generative AI and robotics, building agentic systems for embodied intelligence in mobile robotics.”
Fields of Expertise
- Reinforcement learning and adaptive control for mobile and embodied robots
- Mobile robot learning
- Learning from demonstration and sim-to-real transfer
- Foundation models in mobile robotics: Vision-Language-Action (VLA) models and natural language robot interfaces
- Human-centric and agentic AI for intuitive human–robot collaboration
The Embodied Mobile Agents group focuses on advancing mobile robot learning and embodied intelligence, with an emphasis on human-centric and agentic AI systems. Our expertise spans reinforcement learning, learning from demonstration, and sim-to-real transfer for quadrupeds, humanoids, and mobile manipulators. We develop Vision-Language-Action (VLA) models and natural language task interfaces that enable intuitive collaboration between humans and robots. Applications range from rehabilitation robotics and assistive mobility to construction training, digital twinning, and safe, adaptive humanoid systems in healthcare workflows. By combining cutting-edge machine learning with applied, interdisciplinary research, we aim to create embodied AI that is intuitive, adaptive, and socially relevant.
Services
- Insight: keynotes, trainings
- AI consultancy: workshops, expert support, advice, technology assessment
- Research and development: small to large-scale collaborative projects, third-party-funded research, student projects, commercially applicable prototypes