DOC: Add more specific GPU examples to System Requirements in README.md
The current README.md for the gemma repository provides GPU VRAM recommendations for the 2B and 7B checkpoints. However, it only states the required memory (e.g., "8GB+ RAM" and "24GB+ RAM") without providing any examples of common hardware that meets these specifications.
This can make it difficult for new users to quickly assess if their machine is capable of running the models, requiring them to search for external information on GPU memory specifications.
Proposed Solution:
I suggest updating the System Requirements section to include a few examples of popular GPUs that meet the VRAM recommendations. This would make the requirements more concrete and immediately understandable for users.
Example of proposed text: ... For GPU, we recommend 8GB+ RAM on GPU for the 2B checkpoint and 24GB+ RAM on GPU for the 7B checkpoint. For reference, GPUs with 8GB+ VRAM include models like the NVIDIA RTX 3060, while GPUs with 24GB+ VRAM include the NVIDIA RTX 4090 and AMD Radeon RX 7900 XTX.
I believe this small addition would significantly improve the user experience. I'm happy to create a pull request to implement this change once a maintainer confirms this is a welcome contribution.
Hi @monis-codes ,
Thank you so much for your valuable analysis and suggestions; we're glad you're interested in contributing to the Gemma models. Yes, the Gemma model repos are open models and accept community contributions. Please feel free to open a PR with your suggested changes.
Thanks once again for your contribution.
Thank you !! Working on it.
Hi @Balakrishna-Chennamsetti , I've created a pull request to address this issue. It includes the proposed changes to the README.md for clearer GPU specifications.
You can find the PR here: https://github.com/google-deepmind/gemma/pull/402
I look forward to your feedback.
Thank you.
Hi @Balakrishna-Chennamsetti, noting that PR #403 was created prior to this comment. It implements an improved description of the system requirements in README.md, specifically regarding GPU models for the different Gemma checkpoints. Happy to adjust and collaborate based on feedback.
Hi @Balakrishna-Chennamsetti ,
I'm writing to provide some context on my contribution. I opened my pull request, "Docs: Added hardware recommendations for running Gemma" (#402), after getting approval on this issue.
I noticed that PR #403 was opened afterward. Could you please let me know what the next steps are? I am committed to making any necessary improvements to my PR and seeing this through to completion while adhering to the project's guidelines.
Thank you!
Hi @monis-codes and @Roaimkhan ,
Thank you for your valuable contribution to the Gemma models. Please note that there are no pending actions required from your end at this time.
Thanks.