
Feature Request: Add support for Raspberry Pi AI Kit

Open • beingminimal opened this issue 1 year ago • 5 comments

Prerequisites

  • [X] I am running the latest code. Mention the version if possible as well.
  • [X] I carefully followed the README.md.
  • [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [X] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Add support for the Raspberry Pi AI Kit to run llamafile.

Motivation

Running LLMs would be much faster on such a small, inexpensive device.

Possible Implementation

None that I'm aware of.

beingminimal commented Aug 20 '24 17:08

llamafile already works great on Raspberry Pi 5. It goes very fast with CPU alone. We put a lot of work into making that happen.

Their AI accelerator module is cool, but supporting it isn't on our roadmap. I'm not even sure it has functionality that would help transformer models. If anyone here understands its capabilities and can explain how it could potentially make our matrix multiplications go faster, I'm all ears. But I'm willing to bet we're already offering you the best RPi support that's possible today.

jart commented Aug 20 '24 17:08

@jart This is just a little effort from my side, so I'm only posting an AI-generated reply. I'm not that much of a technical person in this area.

The Raspberry Pi 5 AI Kit can potentially help make your matrix multiplications go faster for LLMs in several ways, leveraging its hardware and software capabilities:

Hardware Acceleration

  1. RP2040 Microcontroller: While not as powerful as a GPU, the RP2040's dual-core architecture can be used to parallelize certain parts of the matrix multiplication process, offloading some compute from the main CPU.
  2. Neural Compute Stick: If included in the AI Kit, the Neural Compute Stick can provide dedicated hardware acceleration for certain types of matrix operations, especially those commonly found in neural networks.

Software Optimization

  1. TensorFlow Lite: The Raspberry Pi 5 is optimized to run TensorFlow Lite, a lightweight version of TensorFlow designed for edge devices. TensorFlow Lite includes optimized kernels for matrix operations that can significantly improve performance on the Raspberry Pi 5.
  2. Python Libraries: NumPy and other Python libraries offer optimized functions for matrix operations that can take advantage of the Raspberry Pi 5's hardware (see the small sketch after this list).
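
For a rough sense of why this matters, here is a tiny assumed benchmark (an illustration only, not llamafile code): NumPy's `@` operator dispatches to whatever BLAS library the platform provides, which is where most of the CPU matrix-multiplication speed comes from:

```python
# Assumed illustration: time one BLAS-backed float32 matmul in NumPy.
import time
import numpy as np

n = 1024
A = np.random.rand(n, n).astype(np.float32)
B = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
C = A @ B  # single call into the BLAS sgemm kernel
dt = time.perf_counter() - t0

# An n x n matmul costs roughly 2 * n^3 floating-point operations.
print(f"{2 * n**3 / dt / 1e9:.1f} GFLOP/s")
```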

Specific Techniques

  1. Quantization: Quantization reduces the precision of the numbers used in the matrix calculations, allowing for faster computations and reduced memory usage. TensorFlow Lite supports quantization, which can be particularly effective on the Raspberry Pi 5 (a minimal sketch follows this list).
  2. Pruning: Pruning removes less important connections within the neural network, leading to smaller matrices and faster calculations.
  3. Sparsity: Sparsity leverages the fact that many matrices in LLMs contain a large number of zero values. Optimized sparse matrix operations can significantly speed up calculations.
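
To make item 1 concrete, here is a minimal NumPy sketch of symmetric int8 quantization. This is a hypothetical illustration only, not llamafile's or llama.cpp's actual scheme (those use more elaborate block-wise quantization formats):

```python
# Hypothetical sketch: quantize weights and activations to int8,
# do an integer matmul, then rescale back to float32.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)  # weight matrix
x = rng.standard_normal(512).astype(np.float32)         # activation vector

# Per-tensor symmetric scales mapping each tensor onto [-127, 127].
w_scale = np.abs(W).max() / 127.0
x_scale = np.abs(x).max() / 127.0
W_q = np.round(W / w_scale).astype(np.int8)
x_q = np.round(x / x_scale).astype(np.int8)

# Accumulate in int32 to avoid int8 overflow, then undo both scales.
y = (W_q.astype(np.int32) @ x_q.astype(np.int32)).astype(np.float32) \
    * (w_scale * x_scale)

print("max abs error vs float32:", np.abs(y - W @ x).max())
```

The same idea is what lets int8 kernels move a quarter of the bytes of float32 weights, and memory bandwidth is usually the bottleneck for LLM inference on small devices.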

beingminimal commented Aug 21 '24 03:08

So what you're telling me is that it's got a 32-bit ARM CPU on it with 2 cores. I doubt there's much advantage offloading to that. Plus having to use a proprietary SDK and drag and drop a special executable and reboot the thing for a program to run. It'd be simpler and platform-agnostic to just plug a second Raspberry Pi into your Raspberry Pi over the ethernet and we'll give you software that lets you cluster llamafile instances. Wouldn't you rather have that instead?

jart commented Aug 21 '24 03:08

> So what you're telling me is that it's got a 32-bit ARM CPU on it with 2 cores. I doubt there's much advantage offloading to that. Plus having to use a proprietary SDK and drag and drop a special executable and reboot the thing for a program to run. It'd be simpler and platform-agnostic to just plug a second Raspberry Pi into your Raspberry Pi over the ethernet and we'll give you software that lets you cluster llamafile instances. Wouldn't you rather have that instead?

OK, got it. Thanks for the clarity.

beingminimal commented Aug 21 '24 04:08

> So what you're telling me is that it's got a 32-bit ARM CPU on it with 2 cores. I doubt there's much advantage offloading to that. Plus having to use a proprietary SDK and drag and drop a special executable and reboot the thing for a program to run. It'd be simpler and platform-agnostic to just plug a second Raspberry Pi into your Raspberry Pi over the ethernet and we'll give you software that lets you cluster llamafile instances. Wouldn't you rather have that instead?

OK, that sounds awesome! I'm not the OP, but I'm VERY interested in that!

devlux76 commented Aug 24 '24 05:08

@cjpais So was support for the RPi 5 AI Kit released, or was it dropped and the issue closed?

beingminimal commented Mar 15 '25 02:03

Dropped. If llama.cpp adds support, there's a chance we will too. Until then, no chance.

cjpais commented Mar 15 '25 04:03

> Dropped. If llama.cpp adds support, there's a chance we will too. Until then, no chance.

OK, thanks for the reply.

beingminimal commented Mar 15 '25 04:03