
Add Phi-4 Backbone

[Open] yrahul3910 opened this issue 6 months ago • 18 comments

Description of the change

This is the first PR in contributing the Phi-4 model to KerasHub; it includes the backbone and its test file.
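Once this lands, constructing the backbone should look like the other KerasHub backbones. A minimal sketch, assuming the constructor mirrors Phi3Backbone's; the tiny sizes here are hypothetical, for smoke-testing only:

import numpy as np
from keras_hub.models import Phi4Backbone

# Tiny, randomly initialized config; real presets will use the
# full Phi-4 sizes.
backbone = Phi4Backbone(
    vocabulary_size=10,
    num_layers=2,
    num_query_heads=4,
    num_key_value_heads=2,
    hidden_dim=32,
    intermediate_dim=64,
)
input_data = {
    "token_ids": np.ones((1, 12), dtype="int32"),
    "padding_mask": np.ones((1, 12), dtype="int32"),
}
hidden_states = backbone(input_data)  # shape: (1, 12, 32)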

Reference

Colab Notebook

I've had some trouble getting this part to work, so I need some help. This is my Colab notebook; the HF model has been pretty annoying to run, though. On CPU machines, it seems to keep allocating memory until it exhausts everything available (I gave up after giving it 280 GB), and on an H200 on Modal, I couldn't get an output after 15 minutes. In the notebook, this line:

hf_output = pt_model(**hf_sample_input)

at the bottom is the one I have trouble with.
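In case it helps others reproduce this, here is a hedged sketch of a workaround for the memory blowup, assuming the "microsoft/phi-4" checkpoint on the Hub (hf_sample_input below is a stand-in for the notebook's actual input):

import torch
from transformers import AutoModelForCausalLM

pt_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",
    torch_dtype=torch.float16,  # roughly halves weight memory vs float32
    low_cpu_mem_usage=True,     # avoid a second full copy during loading
)
pt_model.eval()

# Stand-in for the notebook's real sample input.
hf_sample_input = {"input_ids": torch.tensor([[1, 2, 3]])}

# no_grad avoids building a gradient graph, which otherwise
# roughly doubles activation memory.
with torch.no_grad():
    hf_output = pt_model(**hf_sample_input)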

Checklist

  • [X] I have added all the necessary unit tests for my change.
  • [X] I have verified that my change does not break existing code and works with all backends (TensorFlow, JAX, and PyTorch).
  • [X] My PR is based on the latest changes of the main branch (if unsure, rebase the code).
  • [X] I have followed the Keras Hub Model contribution guidelines in making these changes.
  • [X] I have followed the Keras Hub API design guidelines in making these changes.
  • [X] I have signed the Contributor License Agreement.

yrahul3910 avatar May 27 '25 04:05 yrahul3910

I uploaded the output from the KerasHub model, could someone upload a HF version that I can compare and add to the Colab?
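Once both outputs exist, the comparison itself is simple; a sketch, with hypothetical .npy file names for the two saved hidden-state arrays:

import numpy as np

keras_out = np.load("keras_hub_output.npy")  # hypothetical file names
hf_out = np.load("hf_output.npy")
np.testing.assert_allclose(keras_out, hf_out, atol=1e-4, rtol=1e-4)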

yrahul3910 avatar May 27 '25 04:05 yrahul3910

P.S. A lot of this code is based on the existing code for Phi-3 (the technical report states it mostly follows the Phi-3-Medium architecture; I simply made the changes from the report and the reference implementation). Should I refactor it to inherit from Phi3Backbone?

yrahul3910 avatar May 27 '25 04:05 yrahul3910

> P.S. A lot of this code is based on the existing code for Phi-3 (the technical report states it mostly follows the Phi-3-Medium architecture; I simply made the changes from the report and the reference implementation). Should I refactor it to inherit from Phi3Backbone?

How similar is Phi-4 to Phi-3? Approximately what percentage of the code can we reuse?

sachinprasadhs avatar May 30 '25 20:05 sachinprasadhs

I guess the tokenizer, CausalLM, and preset file, along with the necessary test files, still need to be added?

sachinprasadhs avatar May 30 '25 20:05 sachinprasadhs

Actually, I think we might get away with directly subclassing Phi3Backbone and changing the defaults. If you do

diff src/models/phi3/phi3_backbone.py src/models/phi4/phi4_backbone.py

The only differences are the model name and the defaults; initially I did this copy anticipating architectural changes, but it seems the only ones are in the attention. From the paper's Section 3:

> The architecture closely follows phi-3-medium, except that we now use the tiktoken tokenizer (for better multilingual support) with a padded vocabulary size of 100,352 (including unused tokens) and we use full attention over the 4K context length, rather than a 2K sliding window used in phi-3-medium.

However, I could not find this sliding-window attention in the KerasHub Phi-3 code, so that also remained unchanged; the tokenizer would be part of the third PR (per the contribution guidelines). Do you think it's better if I just subclassed Phi3Backbone instead?
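For concreteness, the subclassing option I'm asking about would look roughly like this; the parameter names are assumed to mirror Phi3Backbone's constructor, and the remaining required arguments pass through unchanged:

from keras_hub.models import Phi3Backbone


class Phi4Backbone(Phi3Backbone):
    def __init__(
        self,
        vocabulary_size=100_352,   # padded tiktoken vocabulary (paper, Section 3)
        max_sequence_length=4096,  # full attention over the 4K context
        **kwargs,
    ):
        super().__init__(
            vocabulary_size=vocabulary_size,
            max_sequence_length=max_sequence_length,
            **kwargs,
        )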

yrahul3910 avatar Jun 01 '25 19:06 yrahul3910

Yeah, this is an interesting question if the only thing that's really changing is the tokenizer. Would it work to subclass all the Phi-3 classes as stubs that only swap in the Phi-4 classes? Like this:

@keras_hub_export("keras_hub.models.Phi4CausalLM")
class Phi4CausalLM(Phi3CausalLM):
    backbone_cls = Phi4Backbone
    preprocessor_cls = Phi4CausalLMPreprocessor

And then define a new tokenizer? I don't think we need to worry about switching the defaults; we can just reflect those in the preset configs we upload to Kaggle.

Might be worth trying all of that in a single PR and converting a model to see if everything works. Since we'd mostly be dealing with stub classes, it shouldn't be too much code.
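A hedged sketch of the remaining stubs in the same spirit; the import paths and export decorator are assumptions modeled on the Phi-3 files:

from keras_hub.api_export import keras_hub_export  # path is an assumption
from keras_hub.models import Phi3Backbone, Phi3CausalLMPreprocessor


@keras_hub_export("keras_hub.models.Phi4Backbone")
class Phi4Backbone(Phi3Backbone):
    pass


@keras_hub_export("keras_hub.models.Phi4CausalLMPreprocessor")
class Phi4CausalLMPreprocessor(Phi3CausalLMPreprocessor):
    backbone_cls = Phi4Backbone
    # tokenizer_cls = Phi4Tokenizer  # set once the new tokenizer lands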

mattdangerw avatar Jun 13 '25 19:06 mattdangerw

Yes, that makes sense to me. I'll try getting it out over the weekend.

yrahul3910 avatar Jun 13 '25 20:06 yrahul3910

@yrahul3910 - are you still working on this?

abheesht17 avatar Jul 10 '25 19:07 abheesht17

Yes, so sorry for the delay! I'll push some commits soon.

yrahul3910 avatar Jul 10 '25 20:07 yrahul3910

/gemini review

divyashreepathihalli avatar Jul 10 '25 23:07 divyashreepathihalli

Sorry for the delays! I've taken @mattdangerw's advice and used subclasses instead. I'm a little less confident about the tokenizer bit: the paper mentions tiktoken, but since tiktoken is just a BPE tokenizer, I used the BPE tokenizer we have in KerasHub and followed the Llama-3 and BART model files (Phi-4 is ultimately a modified Llama-family model, iirc, and those two use BPE as well).
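Roughly, the tokenizer stub looks like this; a sketch assuming BytePairTokenizer's constructor, with a placeholder special token (Phi-4's real ones come from the converted tiktoken vocabulary):

from keras_hub.tokenizers import BytePairTokenizer


class Phi4Tokenizer(BytePairTokenizer):
    def __init__(self, vocabulary=None, merges=None, **kwargs):
        super().__init__(
            vocabulary=vocabulary,
            merges=merges,
            # Placeholder; not necessarily Phi-4's actual special token.
            unsplittable_tokens=["<|endoftext|>"],
            **kwargs,
        )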

ETA: I'm also not sure if I'll need to subclass and modify the attention layers, because I'm not quite sure where this part comes in (from the paper):

> The architecture closely follows phi-3-medium, except that [...] we use full attention over the 4K context length, rather than a 2K sliding window used in phi-3-medium.
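To make the quoted difference concrete, here is a plain NumPy illustration (not KerasHub code) of full causal attention versus a sliding window:

import numpy as np


def causal_mask(seq_len, sliding_window=None):
    # True where query position i may attend to key position j.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    mask = j <= i  # causal: never attend to future positions
    if sliding_window is not None:
        # phi-3-medium style: only the last `sliding_window` positions.
        mask &= (i - j) < sliding_window
    return mask


full = causal_mask(4096)                           # Phi-4, per the paper
windowed = causal_mask(4096, sliding_window=2048)  # phi-3-medium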

yrahul3910 avatar Jul 17 '25 02:07 yrahul3910

Fixed the tests! For some reason, the test_early_stopping test in Phi4CausalLMTest wouldn't work unless the pad token was at position 0, even when pad_token_id was explicitly set correctly. This meant the padding mask came out all True when it should have had False at the last few positions (Llama-3, for example, gets this right). So when we create the config files, we'll want to make sure the pad token is at position 0 in the vocabulary.
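To make the expected behavior concrete, a toy example of the padding mask the test relies on (the ids here are hypothetical):

import numpy as np

pad_token_id = 3
token_ids = np.array([[4, 1, 2, 3, 3]])  # two pad tokens at the end
padding_mask = token_ids != pad_token_id
# -> [[ True,  True,  True, False, False]]
# The failure mode above was this mask coming out all True.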

yrahul3910 avatar Jul 20 '25 23:07 yrahul3910

@yrahul3910, let us know if this PR is ready for review.

sachinprasadhs avatar Aug 13 '25 04:08 sachinprasadhs

Yes, it is!

yrahul3910 avatar Aug 13 '25 23:08 yrahul3910

@yrahul3910, let us know once comments are addressed.

sachinprasadhs avatar Sep 08 '25 18:09 sachinprasadhs

Will do! Sorry for the delay, I should get it done this week.

yrahul3910 avatar Sep 11 '25 02:09 yrahul3910

Done--apologies for the long delays!

yrahul3910 avatar Sep 24 '25 03:09 yrahul3910