nishanth-k-10
```python
llm = GPT4All(model=model_path, n_threads=24, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)
```

In the line above, what values are you using for `n_ctx` and `n_batch`?
Try increasing the `n_threads` parameter. For example, if you have 8 cores with 2 threads per core, you can set it as high as 8 × 2 = 16 threads.
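A minimal sketch of that advice: derive the thread count from the machine's logical core count instead of hardcoding it. The model path and the `n_ctx`/`n_batch` values below are hypothetical placeholders, not values confirmed in this thread.

```python
import os

# Logical core count = physical cores x hardware threads per core;
# e.g. 8 cores with 2 threads each reports 16. Fall back to 4 if unknown.
n_threads = os.cpu_count() or 4

# Hypothetical example values -- tune n_ctx and n_batch for your model and RAM.
llm_kwargs = {
    "model": "models/ggml-gpt4all-j.bin",  # assumed path, replace with yours
    "backend": "gptj",
    "n_ctx": 1000,
    "n_batch": 8,
    "n_threads": n_threads,
    "verbose": False,
}
print(llm_kwargs["n_threads"])
```

With the `gpt4all` langchain wrapper installed, these keyword arguments would then be passed as `GPT4All(**llm_kwargs)`.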