Add Phi-4 Backbone
Description of the change
This is the first PR in contributing the Phi-4 model to KerasHub, and includes the backbone and its test file.
Reference
Colab Notebook
I've had some trouble getting this part to work, so I need some help. This is my Colab notebook, but the HF model has been pretty annoying to run. On CPU machines, it seems to constantly allocate all available memory (I gave up after giving it 280GB), and on an H200 on Modal, I couldn't get an output after 15 minutes. In the notebook, this line:
```python
hf_output = pt_model(**hf_sample_input)
```
at the bottom is the one I have trouble with.
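In case it's useful, below is a minimal sketch (not taken from the notebook) of loading the HF reference model in bfloat16 with `device_map="auto"`, which usually avoids materializing full-precision weights on CPU; the `microsoft/phi-4` model ID and the sample input are assumptions on my part.

```python
# A minimal sketch, assuming "microsoft/phi-4" is the reference checkpoint:
# load the weights in bfloat16 and let accelerate place them, instead of
# materializing fp32 weights on CPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
pt_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
pt_model.eval()

hf_sample_input = tokenizer("The quick brown fox", return_tensors="pt")
hf_sample_input = hf_sample_input.to(pt_model.device)
with torch.no_grad():
    hf_output = pt_model(**hf_sample_input)
```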
Checklist
- [X] I have added all the necessary unit tests for my change.
- [X] I have verified that my change does not break existing code and works with all backends (TensorFlow, JAX, and PyTorch).
- [X] My PR is based on the latest changes of the main branch (if unsure, rebase the code).
- [X] I have followed the Keras Hub Model contribution guidelines in making these changes.
- [X] I have followed the Keras Hub API design guidelines in making these changes.
- [X] I have signed the Contributor License Agreement.
I uploaded the output from the KerasHub model; could someone upload an HF version that I can compare against and add to the Colab?
P.S. A lot of this code is based on the existing code for Phi-3 (the technical report states it mostly follows the Phi-3-Medium architecture; I simply made the changes from the report and the reference implementation). Should I refactor it to inherit from Phi3Backbone?
> P.S. A lot of this code is based on the existing code for Phi-3 (the technical report states it mostly follows the Phi-3-Medium architecture; I simply made the changes from the report and the reference implementation). Should I refactor it to inherit from Phi3Backbone?
How similar is Phi-4 to Phi-3? Approximately what percentage of the code can we reuse?
I guess the Tokenizer, CausalLM, and preset files, along with the necessary test files, still need to be added?
Actually, I think we might get away with directly subclassing `Phi3Backbone` and just changing the defaults. If you run

```
diff src/models/phi3/phi3_backbone.py src/models/phi4/phi4_backbone.py
```

the only differences are the model name and the defaults. I initially made this copy anticipating architectural changes, but it seems the only ones are in the attention. From Section 3 of the paper:
> The architecture closely follows phi-3-medium, except that we now use the tiktoken tokenizer (for better multilingual support) with a padded vocabulary size of 100,352 (including unused tokens) and we use full attention over the 4K context length, rather than a 2K sliding window used in phi-3-medium.
However, I could not find that sliding-window attention anywhere in the Phi-3 code, so that part also stayed unchanged, and the tokenizer would be part of the third PR (per the contributing guidelines). Do you think it would be better if I just subclassed instead?
Yeah, this is an interesting question if the only thing that's really changing is the tokenizer. Would it work to just subclass all the Phi-3 classes as stubs that only update the class references? Like this:
```python
@keras_hub_export("keras_hub.models.Phi4CausalLM")
class Phi4CausalLM(Phi3CausalLM):
    backbone_cls = Phi4Backbone
    preprocessor_cls = Phi4CausalLMPreprocessor
```
And then define a new tokenizer? I don't think we need to worry about switching the defaults; those we can just reflect in the preset configs we upload to Kaggle.
Might be worth trying all of that in a single PR and converting a model to see if everything works. Since we'd mostly be dealing with stub classes, it shouldn't be too much code.
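For illustration, here is a minimal sketch of what the backbone stub could look like under this approach, mirroring how the existing Llama-3 stubs wrap their base classes; the import paths are assumed from the usual KerasHub package layout, and presets/docstrings are omitted.

```python
# A minimal sketch of the stub approach, mirroring how Llama3Backbone wraps
# LlamaBackbone in KerasHub; import paths assume the usual package layout.
from keras_hub.src.api_export import keras_hub_export
from keras_hub.src.models.phi3.phi3_backbone import Phi3Backbone


@keras_hub_export("keras_hub.models.Phi4Backbone")
class Phi4Backbone(Phi3Backbone):
    # No architectural overrides: the Phi-4 defaults (e.g. the 100,352-token
    # padded vocabulary and 4K context) live in the preset configs instead.
    pass
```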
Yes, that makes sense to me. I'll try getting it out over the weekend.
@yrahul3910 - are you still working on this?
Yes, so sorry for the delay! I'll push some commits soon.
/gemini review
Sorry for the delays! I've taken @mattdangerw's advice and used subclasses instead. I'm a little less confident about the tokenizer bit: the paper mentions tiktoken, but since that's just a BPE tokenizer, I used the BPE tokenizer we already have in KerasHub and followed the Llama-3 and BART model files (Phi-4 is ultimately a modified version of the Llama models, iirc, and those two use BPE as well).
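Concretely, the tokenizer stub looks roughly like the sketch below, following the Llama-3 BPE tokenizer pattern; the special-token string and pad token id are assumptions that should be checked against the reference Phi-4 tokenizer config.

```python
# A rough sketch following the Llama-3/BART BPE tokenizer pattern; the
# special-token string and pad token id below are assumptions and should be
# verified against the reference Phi-4 tokenizer config.
from keras_hub.src.api_export import keras_hub_export
from keras_hub.src.models.phi4.phi4_backbone import Phi4Backbone
from keras_hub.src.tokenizers.byte_pair_tokenizer import BytePairTokenizer


@keras_hub_export("keras_hub.models.Phi4Tokenizer")
class Phi4Tokenizer(BytePairTokenizer):
    backbone_cls = Phi4Backbone

    def __init__(self, vocabulary=None, merges=None, **kwargs):
        # Register the end-of-text marker as a special token so it is never
        # split by the BPE merges.
        self._add_special_token("<|endoftext|>", "end_token")
        self.pad_token_id = 0
        super().__init__(vocabulary=vocabulary, merges=merges, **kwargs)
```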
ETA: I'm also not sure if I'll need to subclass and modify the attention layers, because I'm not quite sure where this part comes in (from the paper):
> The architecture closely follows phi-3-medium, except that [...] we use full attention over the 4K context length, rather than a 2K sliding window used in phi-3-medium.
Fixed the tests! For some reason, the `test_early_stopping` test in `Phi4CausalLMTest` wouldn't pass unless the pad token was at position 0, even when `pad_token_id` was explicitly set correctly. Otherwise the padding mask came out all `True` when it should have had `False` at the last few positions (Llama-3, for example, handles this correctly). So when we create the config files, we'll want to make sure the pad token sits at position 0 in the vocabulary.
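To make that concrete, here's a simplified, hypothetical fragment of the test setup with the pad token placed at index 0; the actual token strings in the test vocabulary may differ.

```python
# Simplified, hypothetical test vocabulary: the pad token sits at index 0 so
# the generated padding mask has False at the padded positions.
vocab = ["<|endoftext|>", "!", "air", "Ġair", "plane", "Ġat", "port"]
vocab = {token: i for i, token in enumerate(vocab)}  # "<|endoftext|>" -> 0
```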
@yrahul3910, let us know if this PR is ready for review.
Yes, it is!
@yrahul3910, let us know once comments are addressed.
Will do! Sorry for the delay, I should get it done this week.
Done--apologies for the long delays!