YaLM-100B
Pretrained language model with 100B parameters
For example, if I can't use the transformers pipeline with it, how should I use it so that it connects to LangChain?
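If the transformers pipeline route is unavailable, one common pattern is to wrap whatever inference call you do have as a custom LangChain LLM. A minimal sketch, assuming a recent LangChain version with `langchain_core` installed; `yalm_generate` is a hypothetical placeholder for your actual YaLM-100B inference entry point, not something provided by this repo:

```python
# Sketch: exposing an arbitrary generation function to LangChain as a custom LLM.
from typing import Any, List, Optional

from langchain_core.language_models.llms import LLM


def yalm_generate(prompt: str) -> str:
    # Hypothetical: replace with a real call into your YaLM-100B inference setup
    # (e.g. the repo's generation script or a request to a serving process).
    raise NotImplementedError


class YaLMLLM(LLM):
    """Custom LangChain wrapper around a YaLM-100B inference call."""

    @property
    def _llm_type(self) -> str:
        return "yalm-100b"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,
    ) -> str:
        text = yalm_generate(prompt)
        # Honour stop sequences in the wrapper if the backend doesn't support them.
        if stop:
            for s in stop:
                text = text.split(s)[0]
        return text


# Usage: llm = YaLMLLM(); llm.invoke("Translate to French: Hello")
```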
Hello and thanks for open-sourcing the model! As there don't seem to be any ready-to-use [gguf](https://github.com/ggerganov/ggml) or [mlx](https://github.com/ml-explore/mlx) formats (for llama.cpp and macOS respectively) - is there any...
I see an attempt to use the host ssh-agent; that's a security risk.
Thank you for making your work publicly available! I am trying to test your model on 8x RTX 6000 cards, and I'm getting a timeout error: ``` > initializing model parallel...
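If the timeout is raised while torch.distributed sets up the model-parallel process group, one thing worth trying is extending the collective timeout so that slow checkpoint loading across 8 cards doesn't trip it. A minimal sketch, assuming the launcher ultimately calls `torch.distributed.init_process_group` with an env:// rendezvous; this is not the repo's exact launch code:

```python
# Sketch: raise the process-group timeout for slow multi-GPU initialization.
import datetime

import torch.distributed as dist

dist.init_process_group(
    backend="nccl",
    init_method="env://",  # uses MASTER_ADDR / MASTER_PORT / RANK / WORLD_SIZE
    timeout=datetime.timedelta(minutes=60),  # give checkpoint loading more headroom
)
```

Setting `NCCL_DEBUG=INFO` in the environment can also help show which rank is stalling during initialization.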
The number "42" was apparently chosen as a tribute to the "Hitch-hiker's Guide" books by Douglas Adams, as it was supposedly the answer to the great question of "Life, the...
Dear Yandex Team, I hope this message finds you well. I am writing to express my admiration for your work on the YaLM-100B model, which has demonstrated exceptional performance in...
It would be really useful to have a pruned version of the model (like Balaboba) that could run on weaker video-card setups.
There are 10 video cards with more than 200 GB of video memory in total. If I connect them via PCIe x1, how much will performance decrease, and does PCIe x1 or PCIe x16...
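For a rough sense of scale, a back-of-the-envelope estimate (not a measurement): PCIe 3.0 gives roughly 0.985 GB/s of usable bandwidth per lane, so an x1 link is about 16x slower than x16. That mainly bounds the one-time weight load; steady-state inference also exchanges activations between cards under model parallelism, which is exactly where a narrow link hurts. A small sketch of the arithmetic, with the 200 GB figure taken from the question:

```python
# Rough estimate of weight-transfer time over PCIe 3.0 links of different widths.
weights_gb = 200          # total model weights to move, from the question
per_lane_gbps = 0.985     # PCIe 3.0: 8 GT/s with 128b/130b encoding, per lane

for lanes in (1, 16):
    bw = per_lane_gbps * lanes
    minutes = weights_gb / bw / 60
    print(f"x{lanes}: ~{bw:.1f} GB/s -> ~{minutes:.1f} min to transfer {weights_gb} GB")
```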