RTC: Is there a mistake in eval_with_real_robot.py?
System Info
- lerobot version: 0.4.1
- Platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.31
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.7.1+cu126
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.6
- GPU model: NVIDIA GeForce RTX 3090
- Using GPU in script?: <fill in>
Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
Reproduction
```python
while not shutdown_event.is_set():
    if action_queue.qsize() <= get_actions_threshold:
        current_time = time.perf_counter()
        action_index_before_inference = action_queue.get_action_index()
        prev_actions = action_queue.get_left_over()
        inference_latency = latency_tracker.max()
        inference_delay = math.ceil(inference_latency / time_per_chunk)
```
- Isn't the `inference_delay` computation incorrect?
- Also, won't `inference_latency` be a very large value when using pi05, which has a cold-start process?
Expected behavior
```python
inference_latency = latency_tracker.max()
inference_delay = math.ceil(inference_latency)
```
@xianglunkai Hello, thanks for your comment.

Regarding the first question ("Isn't the `inference_delay` computation incorrect?") - why so? The inference delay is calculated in units of inference steps (wall-clock latency divided by the duration of one step), so it should be correct. Check the RTC paper.
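To make the units concrete, here is a small worked example (the numbers are hypothetical, not taken from the issue): dividing the wall-clock latency by the duration of one action step converts seconds into a count of action steps, which is what the delay compensation needs.

```python
import math

# Hypothetical numbers for illustration only.
time_per_chunk = 0.05     # 20 Hz control loop -> 50 ms per action step
inference_latency = 0.12  # measured wall-clock inference time, in seconds

# Number of action steps that elapse while inference runs, rounded up
# so the controller never under-estimates the delay.
inference_delay = math.ceil(inference_latency / time_per_chunk)
print(inference_delay)  # -> 3
```

Taking `math.ceil(inference_latency)` alone, as in the "Expected behavior" snippet, would instead round raw seconds, which are not in step units.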
Regarding the second question ("won't `inference_latency` be a very large value when using pi05, which has a cold-start process?") - the latency tracker has a `p95` method: it stores a sliding window of latencies and returns the 95th percentile. This approach is less efficient than just taking the max, but it fits your case perfectly.

People use very diverse setups. If you have a stable connection to the robot, a good enough processor, and the right GPU, the max approach will work fine. For a more basic environment setup, `p95` should work better.
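As a rough sketch of the behavior described above: a sliding-window tracker with a `p95` method discounts a one-off cold-start spike that would otherwise dominate `max()`. The class below is illustrative only (assumed names, not the actual lerobot implementation):

```python
import collections
import math

class LatencyTracker:
    """Minimal sketch of a sliding-window latency tracker."""

    def __init__(self, window_size=100):
        self.latencies = collections.deque(maxlen=window_size)

    def add(self, latency):
        self.latencies.append(latency)

    def max(self):
        return max(self.latencies)

    def p95(self):
        # 95th percentile via nearest-rank on the sorted window;
        # robust to one-off spikes such as a cold-start inference.
        ordered = sorted(self.latencies)
        idx = math.ceil(0.95 * len(ordered)) - 1
        return ordered[idx]

tracker = LatencyTracker()
for lat in [0.10] * 19 + [5.0]:  # one cold-start outlier of 5 s
    tracker.add(lat)
print(tracker.max())  # -> 5.0 (dominated by the outlier)
print(tracker.p95())  # -> 0.1 (outlier filtered out)
```

With `max()`, the single 5 s cold-start sample would inflate `inference_delay` for the lifetime of the window; `p95` ignores it as long as spikes stay under 5% of samples.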
@helper2424 Thanks!