docs: Add GPU memory requirements table (Fixes #167)
Description
Adds a detailed GPU memory requirements table to README.md to help users select appropriate hardware configurations (addresses issue #167).
Changes
- Added comprehensive VRAM specifications for:
  - Gemma 2B, 7B, and 12B model variants
  - Both bf16 and int4 quantizations
- Included:
  - Minimum required VRAM
  - Recommended VRAM for optimal performance
  - Notes about framework overhead (a rough sizing sketch follows below)
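For context, the minimum figures in a table like this broadly track simple weight-size arithmetic: parameter count times bytes per parameter (2 bytes for bf16, roughly 0.5 bytes for int4), padded for framework overhead. The snippet below is a rough, hypothetical estimator and is not part of this PR's README change; the `estimate_vram_gb` helper and the 1.2 overhead factor are illustrative assumptions, and real usage also depends on batch size, sequence length, and KV cache.

```python
# Back-of-the-envelope VRAM estimate for holding model weights only.
# Illustrative assumptions, not the exact numbers in the README table.

BYTES_PER_PARAM = {
    "bf16": 2.0,   # 16-bit brain float
    "int4": 0.5,   # 4-bit integer quantization
}

def estimate_vram_gb(num_params_billion: float, dtype: str, overhead: float = 1.2) -> float:
    """Weight memory in GiB, padded by an assumed factor for framework overhead."""
    weight_bytes = num_params_billion * 1e9 * BYTES_PER_PARAM[dtype]
    return weight_bytes * overhead / 2**30

if __name__ == "__main__":
    for size in (2, 7, 12):
        for dtype in ("bf16", "int4"):
            print(f"Gemma {size}B {dtype}: ~{estimate_vram_gb(size, dtype):.1f} GiB")
```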
Impact
- Helps users avoid OOM errors by selecting compatible hardware
- Provides clear guidance for different model sizes and precisions
- Complements existing system requirements documentation
Fixes #167
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
View this failed invocation of the CLA check for more information.
For the most up to date status, view the checks section at the bottom of the pull request.
@googlebot I signed it!