tue_robocup
RoboCup challenge implementations
**1. In SMACH states**

The mechanism to query people from ED is something like this (from `human_interaction/find_person_in_room.py`):

```python
image_data = self._robot.perception.get_rgb_depth_caminfo()
success, found_people_ids = self._robot.ed.detect_people(*image_data)
found_people = [self._robot.ed.get_entity(id) for id in found_people_ids]
```
The diagnostics are currently not used anywhere, so it is always assumed that all hardware is operational (which is definitely not the case). Toyota did develop a diagnostics aggregator, but...
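A minimal sketch of how the diagnostics could be consumed to decide whether hardware is operational, assuming the standard ROS `diagnostic_aggregator` output on `/diagnostics_agg`; the class below and how it would be wired into the robot object are assumptions, not existing tue_robocup code:

```python
# A minimal sketch (assumption): track which aggregated diagnostic items are
# currently not OK, so skills could ask whether a piece of hardware works.
import rospy
from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus


class HardwareStatusMonitor(object):
    def __init__(self):
        self._broken = set()
        self._sub = rospy.Subscriber("/diagnostics_agg", DiagnosticArray, self._callback)

    def _callback(self, msg):
        for status in msg.status:
            if status.level in (DiagnosticStatus.ERROR, DiagnosticStatus.STALE):
                self._broken.add(status.name)
            else:
                self._broken.discard(status.name)

    def is_operational(self, name):
        # False if the aggregated status for `name` reported an error or went stale
        return name not in self._broken


if __name__ == "__main__":
    rospy.init_node("hardware_status_monitor_example")
    monitor = HardwareStatusMonitor()
    rospy.spin()
```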
**Stage 1**

- [ ] Add downloading of released models in tue-env-targets
- [ ] Add **closed grammar** functionality to yapykaldi (refer to the Zamia Speech JSGF parser and adaptation). For...
Deduplicate assigning position and orientation in the grasp computation.
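For illustration only, a hedged sketch of what such a deduplication could look like: one helper builds the full pose, so the pre-grasp and grasp goals no longer assign position and orientation field-by-field in two places. The helper name and the values are hypothetical, not the actual grasp code:

```python
# Hypothetical helper (not the actual tue_robocup code): construct a
# PoseStamped in one place instead of assigning position and orientation
# separately at every call site.
from geometry_msgs.msg import PoseStamped


def make_pose_stamped(frame_id, x, y, z, qx, qy, qz, qw):
    pose = PoseStamped()
    pose.header.frame_id = frame_id
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.position.z = z
    pose.pose.orientation.x = qx
    pose.pose.orientation.y = qy
    pose.pose.orientation.z = qz
    pose.pose.orientation.w = qw
    return pose


# Both goals now go through the same helper (values are illustrative)
pre_grasp_goal = make_pose_stamped("base_link", 0.5, 0.0, 0.8, 0.0, 0.0, 0.0, 1.0)
grasp_goal = make_pose_stamped("base_link", 0.6, 0.0, 0.8, 0.0, 0.0, 0.0, 1.0)
```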
First step of arm splitting: move the handover function to its own class.
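A minimal sketch of the direction this could take, with the handover behaviour pulled out of the arm into a class that the arm composes; the class and method names below are hypothetical and not necessarily what the PR introduces:

```python
# Hypothetical split (names are assumptions): handover logic lives in its own
# class instead of being a method on the arm.
class HandoverDetector(object):
    def __init__(self, robot_name, side):
        self._robot_name = robot_name
        self._side = side

    def handover_to_human(self, timeout=10.0):
        # Placeholder: block until some signal indicates the human took the
        # object, or the timeout expires.
        return True

    def handover_to_robot(self, timeout=10.0):
        # Placeholder for the opposite direction.
        return True


class Arm(object):
    def __init__(self, robot_name, side):
        # The arm composes the detector instead of implementing handover itself
        self.handover_detector = HandoverDetector(robot_name, side)
```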
Addresses #911. @jlunenburg: a first small step.
In the Find smach state, it is possible that the CheckIfEntityFound state evaluates the entities in the world model before the update request from the preceding Segment state is processed. This results...
See https://github.com/tue-robotics/tue_robocup/blob/12d87e7857d0fb1c09f775a6840f9f7efd06bcf8/robot_skills/src/robot_skills/world_model_ed.py#L380
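To make the race concrete, one possible guard (illustrative only, not the actual fix) is to poll ED until the segmented entity actually appears in the world model, or a timeout passes, before CheckIfEntityFound inspects it; this assumes `robot.ed.get_entity` returns `None` while the entity is not yet known:

```python
# Illustrative guard (assumption, not the actual fix): after Segment, wait for
# the entity to show up in the world model instead of checking immediately and
# racing the pending ED update request.
import rospy


def wait_for_entity(robot, entity_id, timeout=2.0):
    deadline = rospy.Time.now() + rospy.Duration(timeout)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown() and rospy.Time.now() < deadline:
        if robot.ed.get_entity(entity_id) is not None:
            return True
        rate.sleep()
    return False
```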