LongChat
Can inference be run on consumer hardware, e.g. AMD GPUs, CPU-only, or a single GPU?
Is this all possible via FastChat?