Add verbose parameter for llamacpp
Description: This pull request adds a `verbose` parameter to the llamacpp module. When `verbose` is True, the wrapped Llama model prints detailed logs to stderr during execution, which aids in debugging and in understanding the module's internal behavior. The parameter is a boolean that defaults to True and can be toggled off when less output is desired. It is added to the list of model parameters forwarded to the `llama_cpp.Llama` API in the `validate_environment` method of the `LlamaCpp` class:
```python
class LlamaCpp(LLM):
    ...
    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        ...
        model_param_names = [
            ...
            "verbose",  # New verbose parameter added
        ]
        ...
        values["client"] = Llama(model_path, **model_params)
        ...
```
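To illustrate how the flag flows from the validated field values into the underlying constructor, here is a minimal, dependency-free sketch of the pattern above. `FakeLlama` is a hypothetical stand-in for `llama_cpp.Llama` (it only records the keyword arguments it receives), and the `n_ctx` entry is included purely as an example of an existing model parameter:

```python
from typing import Any, Dict


class FakeLlama:
    """Hypothetical stand-in for llama_cpp.Llama; records received kwargs."""

    def __init__(self, model_path: str, **kwargs: Any) -> None:
        self.model_path = model_path
        self.kwargs = kwargs


def validate_environment(values: Dict) -> Dict:
    # Field names forwarded to the Llama constructor.
    model_param_names = [
        "n_ctx",    # example of a pre-existing parameter
        "verbose",  # new verbose parameter added by this PR
    ]
    model_params = {k: values[k] for k in model_param_names if k in values}
    values["client"] = FakeLlama(values["model_path"], **model_params)
    return values


values = validate_environment(
    {"model_path": "model.bin", "n_ctx": 512, "verbose": False}
)
print(values["client"].kwargs["verbose"])  # → False
```

With this wiring, setting `verbose=False` on the `LlamaCpp` wrapper suppresses the detailed stderr logging at the point where the underlying client is constructed, rather than after the fact.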
Issue: Not applicable.
Dependencies: No new dependencies introduced.
Maintainer: Tagging @hinthornw, as this change relates to Tools / Toolkits.
This change does not introduce any new features or integrations, so no new tests or notebooks are provided. However, existing tests should cover this new parameter.
Maintainers, please review at your earliest convenience. Thank you for considering this contribution!
thanks @teleprint-me!