Theo
## Potential Bug

When given fp16 inputs and a concentrated `index` tensor (one with many repeating indices), the `scatter_add` function becomes very slow. This does not happen for fp32 inputs. Below...
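A minimal reproduction of the setup described in the report might look like the sketch below (the tensor sizes and bucket count are assumptions, not taken from the issue): a concentrated index tensor funnels many source elements into a few output slots, so `scatter_add_` performs many colliding accumulations, which is where fp16 atomics on GPU can be far slower than fp32.

```python
import time
import torch

# Concentrated indices: 100k elements scattered into only 4 buckets,
# so almost every add collides with another (sizes are assumptions).
n, num_buckets = 100_000, 4
index = torch.randint(0, num_buckets, (n,))

def run(dtype, device="cpu"):
    src = torch.ones(n, dtype=dtype, device=device)
    out = torch.zeros(num_buckets, dtype=dtype, device=device)
    out.scatter_add_(0, index.to(device), src)
    return out

out32 = run(torch.float32)
print(int(out32.sum().item()))  # every element lands in some bucket: 100000

# On a GPU, timing fp32 against fp16 on the same index tensor
# isolates the dtype-dependent slowdown the report describes:
if torch.cuda.is_available():
    for dtype in (torch.float32, torch.float16):
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        run(dtype, device="cuda")
        torch.cuda.synchronize()
        print(dtype, time.perf_counter() - t0)
```

Spreading the same elements over many distinct indices (a flat rather than concentrated `index`) reduces collisions, which is a quick way to confirm contention is the culprit.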
How would I visualize the gripper (location, rotation, opening) predicted by a model or from a demonstration within the scene?
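One generic way to do this, independent of any particular codebase, is to turn the predicted pose (position, rotation matrix, opening width) into line segments and hand them to whatever 3D viewer renders the scene (matplotlib, Open3D, RViz markers). Everything in this sketch, including the function name and the axis conventions, is an assumption:

```python
import numpy as np

def gripper_segments(position, rotation, opening, depth=0.06):
    """Build simple line segments depicting a parallel-jaw gripper.

    position: (3,) gripper center; rotation: (3, 3) rotation matrix whose
    columns are the gripper axes (x = opening axis, z = approach axis,
    an assumed convention); opening: jaw separation in scene units.
    """
    position = np.asarray(position, dtype=float)
    rotation = np.asarray(rotation, dtype=float)
    x, z = rotation[:, 0], rotation[:, 2]
    left = position - 0.5 * opening * x
    right = position + 0.5 * opening * x
    segments = [
        (left, right),                     # palm bar spanning the opening
        (left, left + depth * z),          # left finger along approach axis
        (right, right + depth * z),        # right finger
        (position, position - depth * z),  # wrist stem behind the palm
    ]
    return np.array(segments)  # shape (4, 2, 3)

segs = gripper_segments([0.0, 0.0, 0.0], np.eye(3), opening=0.08)
print(segs.shape)  # (4, 2, 3)
```

Each `(start, end)` pair can then be drawn with `ax.plot` in matplotlib 3D or converted to an Open3D `LineSet`; the palm bar's length directly shows the predicted opening.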
Supervised fine-tuning: "RuntimeError: expected scalar type Half but found Float" during evaluation
While running supervised fine-tuning with

```
python trainer_sft.py --configs lora-llama-13b webgpt_dataset_only
```

and the following config

```yaml
lora-llama-13b:
  dtype: fp16
  log_dir: "llama_lora_log_13b"
  learning_rate: 5e-5
  model_name: openlm-research/open_llama_13b
  output_dir: llama_model_13b_lora
  weight_decay: 0.0
  ...
```
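The error in the title typically means a model cast to fp16 (per `dtype: fp16`) received fp32 inputs during the evaluation pass, so a matmul saw Half weights against Float activations. A hedged, minimal sketch of the mismatch and two common fixes (the layer and shapes here are illustrative, not from `trainer_sft.py`):

```python
import torch

# A model whose weights were cast to half precision...
layer = torch.nn.Linear(4, 4).half()
# ...fed an fp32 batch, as eval dataloaders often produce by default.
x = torch.randn(2, 4)

try:
    layer(x)
except RuntimeError as err:
    # The Half-vs-Float dtype mismatch surfaces here.
    print("raised:", type(err).__name__)

# Fix 1 (GPU-friendly): cast the eval inputs to the model's dtype,
# e.g. layer(x.half()), or wrap the eval step in torch.autocast.
# Fix 2 (shown, CPU-safe): upcast the model to fp32 for evaluation.
y = layer.float()(x)
print(y.dtype)  # torch.float32
```

Whichever direction is chosen, the key is that the evaluation path must match dtypes end to end, not just the training path.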