vision-language-action topic
recogdrive
ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous Driving
awesome-vla-for-ad
🌐 Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
track2
Track 2: Social Navigation
mini-vla
A minimal, beginner-friendly VLA showing how robot policies can fuse images, text, and states to generate actions
ShowUI
[CVPR 2025] Open-source, End-to-end, Vision-Language-Action model for GUI Agent & Computer Use.
AutoVLA
[NeurIPS 2025] AutoVLA: A Vision-Language-Action Model for End-to-End Autonomous Driving with Adaptive Reasoning and Reinforcement Fine-Tuning
UniAct
[CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models"
BridgeVLA
✨✨ [NeurIPS 2025] Official implementation of BridgeVLA
WholebodyVLA
Towards Unified Latent VLA for Whole-body Loco-manipulation Control
Efficient-VLAs-Survey
🔥 A curated list of research for "A Survey on Efficient Vision-Language-Action Models". We will continue to maintain and update the repository, so follow us to keep up with the latest developments...