torchchat
[Feature request] Support CPU+GPU mixed execution
The assumption right now is that this is only needed when there isn't enough GPU memory, but in some cases it might also just be faster this way.
Right now we only do tokenization on CPU, and inference can run entirely on either CPU or GPU.
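
For context, here is a minimal sketch in plain PyTorch (not torchchat code) of what per-layer mixed placement could look like: the large embedding table stays on CPU, the first blocks run on GPU, and overflow blocks fall back to CPU. The `MixedDeviceModel` class, its sizes, and the layer split are hypothetical, just to illustrate the idea.

```python
# Hypothetical sketch of CPU+GPU mixed execution in plain PyTorch.
# Not torchchat's API; the split below only illustrates offloading
# part of a model when GPU memory is tight.
import torch
import torch.nn as nn

class MixedDeviceModel(nn.Module):
    def __init__(self, vocab_size=32000, dim=512, n_gpu_layers=4, n_cpu_layers=4):
        super().__init__()
        # Keep the (large) embedding table on CPU to save GPU memory.
        self.embed = nn.Embedding(vocab_size, dim)
        # Run the first few blocks on GPU ...
        self.gpu_layers = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(n_gpu_layers)
        ).to("cuda")
        # ... and the overflow blocks on CPU.
        self.cpu_layers = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(n_cpu_layers)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)        # runs on CPU
        x = x.to("cuda")              # host-to-device copy
        for layer in self.gpu_layers:
            x = torch.relu(layer(x))  # runs on GPU
        x = x.to("cpu")               # device-to-host copy
        for layer in self.cpu_layers:
            x = torch.relu(layer(x))  # runs on CPU
        return x

if torch.cuda.is_available():
    model = MixedDeviceModel()
    out = model(torch.randint(0, 32000, (1, 16)))
    print(out.shape)
```

The trade-off this sketch makes visible: each CPU/GPU boundary costs a tensor copy, so whether mixed execution is faster than pure-CPU depends on how much compute each placed segment does relative to the transfer overhead.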