MrClownC

Results: 10 issues by MrClownC

Hi, I want to produce some RLBench data for my current research, but when I run this file I only get some PNG images; why is there no action data? I think most imitation learning tasks...
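
In case a concrete example helps: below is a minimal sketch of how action-related data could be pulled out of an RLBench expert demo. The attribute names (joint_positions, joint_velocities, gripper_pose, gripper_open) come from RLBench's Observation class; using the change in gripper pose as the "action" is my own placeholder choice, not something the repository prescribes.

```python
# Minimal sketch, assuming `demo` is an RLBench expert demo, i.e. a sequence of
# rlbench.backend.observation.Observation objects (e.g. from task.get_demos(...)).
import numpy as np

def extract_state_action_pairs(demo):
    """Collect proprioceptive state plus a placeholder per-step action."""
    pairs = []
    for t in range(len(demo) - 1):
        obs, next_obs = demo[t], demo[t + 1]
        pairs.append({
            "joint_positions": np.asarray(obs.joint_positions),
            "joint_velocities": np.asarray(obs.joint_velocities),
            "gripper_pose": np.asarray(obs.gripper_pose),  # x, y, z, qx, qy, qz, qw
            "gripper_open": float(obs.gripper_open),
            # Placeholder action: raw difference of consecutive gripper poses.
            # Swap in joint velocities or a proper delta-pose if your learner needs them.
            "action": np.asarray(next_obs.gripper_pose) - np.asarray(obs.gripper_pose),
        })
    return pairs
```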

Hi, I think you need to provide a more detailed description of your data, such as its range: what are the maximum and minimum values? This is necessary...

I added EMA (exponential moving average) codebook updates and codebook reset techniques to the VQ-VAE models, which improved the trained models.
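
For anyone who has not met these tricks before, here is a generic PyTorch sketch of the two techniques named above (EMA codebook updates plus resetting rarely used codes). It is only an illustration of the idea, not the code actually added for this issue, and the decay/reset_threshold values are arbitrary.

```python
import torch

class EMACodebook(torch.nn.Module):
    """Generic sketch: EMA codebook updates with dead-code reset for a VQ-VAE."""

    def __init__(self, num_codes, dim, decay=0.99, reset_threshold=1.0):
        super().__init__()
        self.decay = decay
        self.reset_threshold = reset_threshold
        self.register_buffer("codebook", torch.randn(num_codes, dim))
        self.register_buffer("ema_count", torch.ones(num_codes))
        self.register_buffer("ema_sum", self.codebook.clone())

    @torch.no_grad()
    def update(self, z_e, codes):
        """z_e: (N, dim) encoder outputs; codes: (N,) indices of their nearest codes."""
        one_hot = torch.nn.functional.one_hot(codes, self.codebook.size(0)).type_as(z_e)
        count = one_hot.sum(0)          # how often each code was used this batch
        summed = one_hot.t() @ z_e      # sum of encoder vectors assigned to each code

        # EMA statistics replace the gradient-based codebook loss.
        self.ema_count.mul_(self.decay).add_(count, alpha=1 - self.decay)
        self.ema_sum.mul_(self.decay).add_(summed, alpha=1 - self.decay)
        self.codebook.copy_(self.ema_sum / self.ema_count.clamp(min=1e-5).unsqueeze(1))

        # Codebook reset: re-seed rarely used codes with random encoder outputs.
        dead = self.ema_count < self.reset_threshold
        if dead.any():
            idx = torch.randint(0, z_e.size(0), (int(dead.sum()),), device=z_e.device)
            self.codebook[dead] = z_e[idx]
            self.ema_sum[dead] = z_e[idx]
            self.ema_count[dead] = 1.0
```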

Great work! I'd like to use your dataset in my own research. Can you add a data note explaining what the produced data includes and how it...

Hi, first of all I want to thank you for the excellent code, which has facilitated my research. I want to migrate my training code from Trainer to SFTTrainer...
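
For others with the same migration question, a rough sketch of what the SFTTrainer side can look like is below. The checkpoint name, the "text" column and the hyperparameters are placeholders, and the exact keyword arguments (e.g. whether dataset_text_field lives on SFTConfig) vary across trl versions, so check against the version you have installed.

```python
# Minimal sketch, assuming trl is installed and the training data is a
# datasets.Dataset with a single "text" column (placeholder name).
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

train_dataset = Dataset.from_dict({"text": ["example instruction and response ..."]})

# SFTConfig subclasses transformers.TrainingArguments; in recent trl versions
# it also carries SFT-specific options such as dataset_text_field.
args = SFTConfig(
    output_dir="sft-out",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    dataset_text_field="text",
)

trainer = SFTTrainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```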

Hi, thanks for your great work! I'm using the code in the llama-recipes repository to fine-tune on my own data and have a question for you: I read the examples/quickstart.ipynb notebook,...

I noticed that the predicted 3D-position Action is not output during the training process, so why not directly use the Action of the expert trajectory and the Action of...

There seems to be no mention in the paper of how long the model takes to train; could you please give me a reference figure? Thank you very much!

motion_string.append(f'' + ''.join([f'' for i in motion_list]) + f'') Why is this added?

Great job! I have a question about reward learning: is the reward calculated directly as S(φ(o_n), ψ(l)), or as S(φ(o_{n+1}), ψ(l)) − S(φ(o_n), ψ(l))? I noticed that the get_reward function of the...
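
To make the two candidate definitions above concrete, here is how I would write them; sim, phi and psi are placeholders for the similarity score S and the encoders φ and ψ, not the repository's actual get_reward helpers.

```python
# Illustration only: the two reward definitions from the question above.

def absolute_reward(o_next, l, phi, psi, sim):
    # r_n = S(phi(o_{n+1}), psi(l)): reward is the raw similarity at the new state.
    return sim(phi(o_next), psi(l))

def delta_reward(o, o_next, l, phi, psi, sim):
    # r_n = S(phi(o_{n+1}), psi(l)) - S(phi(o_n), psi(l)): reward is the
    # improvement in similarity, i.e. progress toward the language goal.
    return sim(phi(o_next), psi(l)) - sim(phi(o), psi(l))
```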