ontogpt
Published 20 hours ago • monarch-initiative
Readme
Issues
Is there an option to use a GPU for the LLM? The inference speed is a little too slow, and my GPU utilization is almost zero.
Open
doubleplusplus opened this issue on Nov 17 '23 (7 months ago) • 4 comments
on local model
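A first diagnostic for near-zero GPU utilization is to confirm that the process can even see an NVIDIA driver. The sketch below is a generic check, not an ontogpt feature; `cuda_driver_present` is a hypothetical helper name. Note that for llama.cpp-based local backends, GPU offload is typically opt-in (the wheel must be built with CUDA support and layers offloaded, e.g. via llama-cpp-python's `n_gpu_layers`); whether and how ontogpt exposes that setting should be verified in its documentation.

```python
import shutil


def cuda_driver_present() -> bool:
    """Heuristic check: nvidia-smi on PATH suggests a visible NVIDIA driver.

    This does not prove the LLM backend was compiled with GPU support;
    a CPU-only build will still show ~0% GPU utilization even when this
    returns True.
    """
    return shutil.which("nvidia-smi") is not None


if __name__ == "__main__":
    print("nvidia-smi found:", cuda_driver_present())
```

If this returns `False`, the problem is the environment (driver/container visibility) rather than ontogpt's configuration.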