TextRL
Problems in the inference process
Nice repo!!
I completed training using the code examples and am now running predictions on the test set. I found that using `actor.predict` to generate the model's outputs on the test set is unusually slow. So I tried the dump method you provided to convert the saved model into a Hugging Face model and run inference with that instead. It is very fast, but the results are much worse than those from `actor.predict`.

Why is there such a difference, and how should I handle this?