
Add Habana Gaudi (HPU) Support

BartoszBLL opened this pull request 9 months ago • 2 comments

This pull request adds support for running inference on Habana Gaudi (HPU) processors by introducing a new directory dedicated to the Gaudi-specific implementation. It includes setup instructions, scripts for downloading GPT-2 models, a Jupyter notebook for running inference, and the necessary supporting files.

Changes Introduced

  • New directory: setup/05_accelerator_processors/01_habana_processing_unit/
  • Documentation:
    • README.md: Instructions for setting up and running GPT-2 inference on Habana Gaudi.
  • Notebook:
    • inference_on_gaudi.ipynb: Jupyter notebook demonstrating how to run inference on Gaudi, including performance comparisons against CPU.

Key Features

  • Provides setup instructions for installing necessary drivers and libraries.
  • Links to Habana documentation for further reading.
  • Implements an inference workflow optimized for Habana Gaudi (see the minimal sketch after this list).
  • Includes performance monitoring tools for CPU vs. HPU comparisons.
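For readers skimming this PR, the sketch below shows roughly what HPU inference involves on the PyTorch side. It is an illustration based on the public Habana/Intel Gaudi documentation, not code taken from the notebook in this PR; the `habana_frameworks` bridge, the `"hpu"` device string, and the tiny stand-in model are all assumptions.

```python
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch

device = torch.device("hpu")

# Tiny stand-in for a GPT-2-style model (the actual PR loads a real GPT-2)
model = nn.Sequential(
    nn.Embedding(50257, 768),
    nn.Linear(768, 50257),
).eval().to(device)

input_ids = torch.randint(0, 50257, (1, 16), device=device)  # dummy token IDs

with torch.no_grad():
    logits = model(input_ids)
    htcore.mark_step()  # in lazy mode, flushes the accumulated graph for execution

print(logits.shape)  # expected: torch.Size([1, 16, 50257])
```

Apart from importing the bridge (which is what makes the `"hpu"` device visible) and the optional `mark_step()` call, the inference code looks like ordinary PyTorch.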

Testing

  • Verified inference runs successfully on Gaudi HPU.

BartoszBLL avatar Mar 17 '25 22:03 BartoszBLL

Hi @BartoszBLL, thanks for the PR, and sorry for the slow response. I haven't used Habana Gaudi accelerators yet and wanted to think about this carefully.

So, it looks like this PR supplies alternative code files for people who want to use Gaudi accelerators. I think this may be a better fit for an external GitHub repo that you could set up; we could then link to it in the Show and Tell section of the GitHub Discussions.

In addition, rather than just supplying the code, I think what would be interesting is to explain how the existing code can be/needs to be adjusted to work on Habana Gaudi chips. This could be done for, e.g., Chapter 5. What I have in mind is something similar to how this is structured: https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/10_llm-training-speed

Specifically, the bonus section folder could contain the original Chapter 5 code and the Habana-modified code so that readers can check out the changes side-by-side and understand what they need to change to use Habana accelerators. This could also include a side-by-side comparison of running the code on CPU, HPU, and GPU. What do you think?
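To make the idea concrete, a device-selection change along the following lines is roughly what such a Habana-modified Chapter 5 copy might contain. This is only a sketch based on the public Gaudi PyTorch bridge (the `habana_frameworks` package and `torch.hpu` API), not code from this PR or from the book:

```python
import torch

try:
    # Importing the bridge registers the "hpu" backend if the Gaudi stack is installed
    import habana_frameworks.torch.core  # noqa: F401
    hpu_available = torch.hpu.is_available()
except ImportError:
    hpu_available = False

if torch.cuda.is_available():
    device = torch.device("cuda")
elif hpu_available:
    device = torch.device("hpu")
else:
    device = torch.device("cpu")

print(f"Running on {device}")
```

With a fallback like this, the same script could be timed on CPU, HPU, and GPU for the side-by-side comparison mentioned above.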

rasbt avatar Apr 04 '25 17:04 rasbt

I am closing this for now due to inactivity. But please let me know if you want to revisit this.

rasbt avatar Jun 13 '25 15:06 rasbt

Hi @rasbt, sorry for not replying after submission. I created this as part of an earlier collaboration with Intel (which, btw, was not a very fruitful one). Thanks for your reply and feedback. I just wanted to say that I'm a fan of your book (THE book; seriously, building an LLM from scratch is phenomenal) and your work in general.

Thanks again, BR

brgsk avatar Jun 13 '25 23:06 brgsk

Thanks for the comment and no worries! I've actually never used Habana processors and know only very little about them. It's a bummer that the internship wasn't a more fruitful one.

PS: thanks for the kind words about the book :)

rasbt avatar Jun 14 '25 12:06 rasbt
