web-llm
Support for LiquidAI/LFM2-1.2B
Hi! It would be amazing to have support for LiquidAI's models; they deliver very good response quality relative to their hardware requirements.
I wonder if we can just convert the weights from LFM2.
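Converting the weights alone probably isn't enough: as far as I know, mlc-llm has no LFM2 implementation yet (its hybrid short-convolution + attention blocks would need to be added first, and a WebGPU model library compiled). If that landed, registering a converted build in WebLLM would go through its custom `AppConfig`. A minimal sketch, where the `model_id` and both URLs are hypothetical placeholders:

```typescript
import { CreateMLCEngine, prebuiltAppConfig } from "@mlc-ai/web-llm";

const appConfig = {
  ...prebuiltAppConfig,
  model_list: [
    ...prebuiltAppConfig.model_list,
    {
      // Hypothetical: MLC-converted LFM2 weights (e.g. from `mlc_llm convert_weight`)
      model: "https://huggingface.co/my-org/LFM2-1.2B-q4f16_1-MLC",
      model_id: "LFM2-1.2B-q4f16_1-MLC",
      // Hypothetical: model library compiled for WebGPU with `mlc_llm compile`
      model_lib: "https://my-host.example/LFM2-1.2B-q4f16_1-webgpu.wasm",
    },
  ],
};

// Load the custom model and run a quick OpenAI-style chat completion.
const engine = await CreateMLCEngine("LFM2-1.2B-q4f16_1-MLC", { appConfig });
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(reply.choices[0].message.content);
```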
LFM2-8B-A1B is my favorite. It is a Mixture-of-Experts (MoE) model with 8.3B total parameters and 1.5B active parameters per token. The answer quality is good, similar to qwen3:8b, but much faster. It runs fast even on a CPU; I wonder how it would perform with WebLLM on a smartphone.
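For the smartphone question, WebLLM already exposes runtime statistics, so once a build existed you could measure on-device speed directly. A rough sketch, reusing the same hypothetical model registration as above:

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Same hypothetical LFM2 build as in the previous sketch.
const appConfig = {
  model_list: [
    {
      model: "https://huggingface.co/my-org/LFM2-1.2B-q4f16_1-MLC",
      model_id: "LFM2-1.2B-q4f16_1-MLC",
      model_lib: "https://my-host.example/LFM2-1.2B-q4f16_1-webgpu.wasm",
    },
  ],
};

const engine = await CreateMLCEngine("LFM2-1.2B-q4f16_1-MLC", { appConfig });

// Stream a reply so the device does a realistic decode workload.
let text = "";
const stream = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Explain WebGPU in two sentences." }],
  stream: true,
});
for await (const chunk of stream) {
  text += chunk.choices[0]?.delta.content ?? "";
}

// runtimeStatsText() reports prefill and decode tokens per second,
// which is the number that matters for phone performance.
console.log(await engine.runtimeStatsText());
```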