Python dependency conflicts when fine-tuning Gemma 3 1B
The Hugging Face Gemma/TRL stack (which requires PyTorch, transformers<4.51, and protobuf 4.x) is incompatible with the TensorFlow/MediaPipe/AI-Edge stack (which requires protobuf>=5.x and often overwrites dependencies on install), making the two unsafe to use together in a single Colab runtime.
Is there a working set of Python dependency versions for this notebook? https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/main/codelabs/litert_inference/Gemma3_1b_fine_tune.ipynb
Hi @kartmpk ,
Welcome to the Google Gemma family of open-source models, and thanks for your patience. Please find the attached gist, which has all the necessary dependency installations without any conflicts.
Please let me know if you require any additional assistance.
Thanks.
Hi @kartmpk,
I've run into this exact protobuf nightmare before. It's basically HuggingFace wanting protobuf 4.x while TensorFlow insists on 5.x+. Really frustrating when you just want to fine-tune a model.
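If you want to see the clash for yourself, a fresh runtime will surface it immediately. This is just a sketch: mediapipe stands in here for whatever the AI Edge notebook actually installs, and the exact packages pip complains about will vary with the runtime image.

# HF side first: pins protobuf to 4.x
pip install "transformers<4.51" "protobuf==4.25.3" trl

# Now the AI Edge/MediaPipe side, which wants protobuf>=5
pip install mediapipe

# pip check lists whichever packages now have broken protobuf requirements
pip check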
Here's what actually worked for me:
Quick Fix (if you just need it working)
# This seems to work for most people. Install everything in ONE command
# so pip resolves the pins together instead of overwriting protobuf
# on each successive install.
pip install \
  protobuf==4.25.3 \
  transformers==4.50.2 \
  tensorflow==2.15.0 \
  gemma
The key is pinning protobuf to 4.25.3 - it's the sweet spot that both stacks can live with.
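After installing, it's worth a quick sanity check that the pin actually stuck and that both stacks import against the same protobuf (nothing here is specific to the notebook):

# Confirm the resolver didn't silently bump protobuf
pip show protobuf | grep Version

# Both stacks should import cleanly side by side
python -c "import google.protobuf, transformers, tensorflow as tf; print(google.protobuf.__version__, transformers.__version__, tf.__version__)"

# Report any remaining broken requirements
pip check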
Better approach for serious work
I ended up using separate environments because this conflict keeps coming back:
# Create an environment for the HF stack
conda create -n gemma-hf python=3.11 -y
conda activate gemma-hf
pip install torch transformers==4.50.2 trl protobuf==4.25.3

# Separate environment for the TF stuff
conda create -n gemma-tf python=3.11 -y
conda activate gemma-tf
pip install tensorflow protobuf gemma
Yeah it's annoying to switch environments, but at least things actually work.
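Once an environment works, freeze it so you can rebuild it instead of rediscovering the pins (the file name below is just a placeholder):

# Capture the working HF environment
conda activate gemma-hf
pip freeze > gemma-hf-requirements.txt

# Recreate it later, or on another machine
conda create -n gemma-hf python=3.11 -y
conda activate gemma-hf
pip install -r gemma-hf-requirements.txt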
Docker version (if you're deploying this)
FROM python:3.11-slim

# Install the versions that don't fight each other
RUN pip install --no-cache-dir \
    protobuf==4.25.3 \
    torch \
    transformers==4.50.2 \
    tensorflow==2.15.0 \
    gemma \
    trl

WORKDIR /app
COPY . /app
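Build and run it the usual way (the image tag and script name below are placeholders):

# Build the image
docker build -t gemma-finetune .

# Run your fine-tuning script inside the container;
# add --gpus all if you need GPU access (requires the NVIDIA Container Toolkit)
docker run --rm -it gemma-finetune python fine_tune.py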
Debugging tips
If you're still getting conflicts:
# Check what's actually installed
pip show protobuf transformers tensorflow
# Nuclear option - start fresh
pip freeze | grep -v "^-e" | xargs pip uninstall -y
pip install <your packages>
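If you want to see exactly which installed packages are constraining protobuf, the third-party pipdeptree tool prints the reverse dependency tree (an extra install, not part of the notebook):

# List every installed package that depends on protobuf, with its version constraint
pip install pipdeptree
pipdeptree --reverse --packages protobuf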
The gist @Balakrishna-Chennamsetti mentioned - I couldn't find it attached, but if you have the link, that would be helpful.
This protobuf thing is honestly one of the most annoying parts of the ML ecosystem right now. Google really needs to sort this out between their teams.
Let me know if you're still hitting issues with specific error messages.
Thanks @Balakrishna-Chennamsetti for sharing the gist!
I see you're using the bleeding-edge transformers from GitHub main - that's a clever way to sidestep the protobuf conflicts entirely. The AI Edge stack integration looks solid too.
For anyone finding this thread: both approaches work depending on your needs:
- Use the gist above if you need the latest features and MediaPipe integration
- Use the pinned versions I mentioned if you need stability and reproducible builds
Appreciate the follow-up! 👍
@anivar The pinned versions don't work for the conversion step.