
An open-source implementation of Google's PaLM models

Results: 10 PaLM issues

Will instruction fine-tuned models be made available for this as well?

```
>>> import torch
>>> model = torch.hub.load("conceptofmind/PaLM", "palm_410m_8k_v0")
/Users/sebastianperalta/simply/dev/PaLM/venv/lib/python3.11/site-packages/torch/hub.py:294: UserWarning: You are about to download and run code from an untrusted repository. In a future release, this won't be...
```

Actually use the args; also added a data_dir arg and added set_seed

Is there any workaround for running inference on CPU or on my ARM-based Mac M1? I am currently trying to run on a Mac M1 and am getting the following error: ```...
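The usual workaround is to select a fallback device when CUDA is unavailable. A minimal sketch of that logic, kept stdlib-only so it runs anywhere; in real code the two flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, and `pick_device` is a hypothetical helper, not part of this repo:

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Return the best available device string for inference.

    Preference order: CUDA GPU, then Apple's Metal backend (mps)
    on M1/M2 Macs, then plain CPU as the last resort.
    """
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"


# On an M1 Mac without CUDA, this would select the Metal backend:
print(pick_device(cuda_ok=False, mps_ok=True))  # mps
```

The selected string would then be passed to `model.to(device)` before running inference.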

I would like to fine-tune this model for a specific task. Is there a fine-tuning script available for this model?

When I run the inference logic using the following script, I get a `RuntimeError: No available kernel. Aborting execution.` error: ``` A100 GPU detected, using flash attention if input tensor is...

I want to train my own model; can you guide me?

The README suggests use of the [GPT-NeoX-20B tokenizer](https://huggingface.co/EleutherAI/gpt-neox-20b/blob/main/config.json). This tokenizer has its BoS and EoS tokens both mapped to token id 0. However, when I look at the model implementation in PaLM-rlhf-pytorch,...
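The overlap the issue describes can be sketched in plain Python: because `bos_token_id` and `eos_token_id` share id 0 in that config, any code consuming the id sequence must disambiguate the two roles by position rather than by value. This is an illustrative toy only; `wrap` is a hypothetical helper, not this repo's API:

```python
# GPT-NeoX-20B's config maps both special tokens to the same id:
BOS_EOS_ID = 0  # <|endoftext|> serves as both BoS and EoS

def wrap(token_ids: list[int], special_id: int = BOS_EOS_ID) -> list[int]:
    """Surround a token sequence with the shared BoS/EoS id.

    The first and last 0 are indistinguishable by value alone,
    so downstream code has to rely on position to tell them apart.
    """
    return [special_id] + token_ids + [special_id]


print(wrap([17, 42, 7]))  # [0, 17, 42, 7, 0]
```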

Hello! I was wondering if there is anything extra that needs to be done to get training working with the Hidet compiler. Out of the box, I seem to be running...