Certain ONNX models ignore the system prompt

RonanKMcGovern opened this issue 9 months ago • 8 comments

System Info

Here's a model that follows the system prompt:

  • HuggingFaceTB/SmolLM2-1.7B-Instruct

Here are two that do not:

  • onnx-community/Llama-3.2-3B-Instruct-onnx-web-gqa
  • onnx-community/Qwen2.5-Coder-1.5B-Instruct

Is this intentional or accidental?

Environment/Platform

  • [x] Website/web-app
  • [ ] Browser extension
  • [ ] Server-side (e.g., Node.js, Deno, Bun)
  • [ ] Desktop app (e.g., Electron)
  • [ ] Other (e.g., VSCode extension)

Description

I'm running these models in q4f16 on WebGPU.

Reproduction

I'm following the SmolLM example provided in the examples, but swapping in the model.
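
For reference, a minimal sketch of the setup (the system/user messages here are just illustrative, not the exact ones I used):

  import { pipeline } from "@huggingface/transformers";

  // Same settings as the SmolLM example: q4f16 weights on WebGPU.
  const generator = await pipeline(
    "text-generation",
    "onnx-community/Qwen2.5-Coder-1.5B-Instruct",
    { device: "webgpu", dtype: "q4f16" },
  );

  // The system prompt goes in as the first chat message.
  const messages = [
    { role: "system", content: "You only ever answer in French." }, // illustrative prompt
    { role: "user", content: "What is the capital of Ireland?" },
  ];

  const output = await generator(messages, { max_new_tokens: 128 });
  console.log(output[0].generated_text.at(-1).content);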

RonanKMcGovern avatar Jan 29 '25 13:01 RonanKMcGovern

Where/How do you set your system prompt?

d4ndr4d3 avatar Jan 31 '25 17:01 d4ndr4d3

Could you provide more information about the problem you are facing? Is the model producing incorrect results?

xenova avatar Feb 08 '25 11:02 xenova

It has no awareness of the system prompt.

It could possibly be due to quantising.

SmolLM was quantized with some calibration samples and doesn't seem to have the issue.

RonanKMcGovern avatar Feb 08 '25 18:02 RonanKMcGovern

Can you please provide an example of the input/output that you are seeing? It may be that the model itself doesn't support a system role (which you can check by looking at the chat template in the tokenizer_config.json file).
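
For example, something like this (an untested sketch; swap in your own messages) will print the rendered prompt so you can see whether the system message survives templating:

  import { AutoTokenizer } from "@huggingface/transformers";

  const tokenizer = await AutoTokenizer.from_pretrained(
    "onnx-community/Llama-3.2-3B-Instruct-onnx-web-gqa",
  );

  // Render the chat template as plain text (instead of token ids) so the
  // system message, or its absence, is visible in the formatted prompt.
  const prompt = tokenizer.apply_chat_template(
    [
      { role: "system", content: "You only ever answer in French." }, // illustrative
      { role: "user", content: "Hello!" },
    ],
    { tokenize: false, add_generation_prompt: true },
  );
  console.log(prompt);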

xenova avatar Feb 08 '25 19:02 xenova

Sure, here is a full repo: https://github.com/TrelisResearch/llama-system-prompt-issue

BTW, yes, good point on checking the tokeniser. Indeed, the system prompt is in there.

RonanKMcGovern avatar Feb 09 '25 16:02 RonanKMcGovern

This may just be a limitation of the model itself. Are you able to get good performance with the Python library? It may be good to use that as a benchmark for the model's capabilities.

xenova avatar Feb 09 '25 16:02 xenova

Yeah, the models themselves work fine with transformers and follow instructions.

RonanKMcGovern avatar Feb 10 '25 09:02 RonanKMcGovern

Hey, folks.

I'm seeing the same issue here.

With onnxruntime via Python the model follows the system prompt, but with Android's onnxruntime it does not. Both run the same model.

My hypothesis is that the attention mask is not being set properly in the Android version.
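
On the web side, a rough way to sanity-check that hypothesis (an Android check would need the equivalent calls in that runtime's API) is to inspect the tensors the tokenizer produces before they reach the model:

  import { AutoTokenizer } from "@huggingface/transformers";

  const tokenizer = await AutoTokenizer.from_pretrained(
    "onnx-community/Qwen2.5-Coder-1.5B-Instruct",
  );

  // Tokenize a prompt and inspect the attention mask: every real
  // (non-padding) token should be marked with a 1.
  const { input_ids, attention_mask } = tokenizer("Hello!");
  console.log(input_ids.data, attention_mask.data);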

drlima avatar Feb 20 '25 18:02 drlima

Closing, as this is related to the model and not to the transformers.js library.

xenova avatar Oct 13 '25 04:10 xenova