
PlanTL-GOB-ES/roberta-base-bne-sqac and other Spanish models

Open iagovar opened this issue 2 years ago • 6 comments

Model description

Hi, I tried to export PlanTL-GOB-ES/roberta-base-bne-sqac to ONNX with the conversion script in the repo. Although the model works with transformers + PyTorch in Python, I always get this error when using it in JS:

RangeError: Invalid array length
    at Function._call (/home/iagovar/MEGA/MEGAsync/web/mywebsite.com/repo/calendar/node_modules/@xenova/transformers/src/tokenizers.js:2528:78)
    at Function.closure [as tokenizer] (/home/iagovar/MEGA/MEGAsync/web/mywebsite.com/repo/calendar/node_modules/@xenova/transformers/src/utils/core.js:62:28)
    at Function._call (/home/iagovar/MEGA/MEGAsync/web/mywebsite.com/repo/calendar/node_modules/@xenova/transformers/src/pipelines.js:356:27)
    at closure (/home/iagovar/MEGA/MEGAsync/web/mywebsite.com/repo/calendar/node_modules/@xenova/transformers/src/utils/core.js:62:28)
    at main (/home/iagovar/MEGA/MEGAsync/web/mywebsite.com/repo/calendar/sandbox/ai.js:35:22)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {stack: 'RangeError: Invalid array length
    at Funct…ions (node:internal/process/task_queues:95:5)', message: 'Invalid array length'}

In theory, given the lists you provided, this model should not be a problem.
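As background on the error itself (a minimal illustration, not transformers.js code): JavaScript throws "RangeError: Invalid array length" whenever an Array is constructed with a length that is negative or exceeds 2^32 - 1. A hypothetical tokenizer helper whose computed length underflows (for example, a padding count that goes negative for long inputs) reproduces the same message:

```javascript
// Hypothetical padding helper: the pad count goes negative when the
// input is longer than maxLength, and new Array(negative) throws
// "RangeError: Invalid array length" -- the same error seen above.
function padTo(tokens, maxLength) {
  const padCount = maxLength - tokens.length; // negative if input is too long
  return tokens.concat(new Array(padCount).fill(0));
}

// Works when the input fits:
console.log(padTo([1, 2], 4)); // [1, 2, 0, 0]

// Throws when the input exceeds maxLength:
try {
  padTo([1, 2, 3, 4, 5], 4);
} catch (e) {
  console.log(e.name, e.message); // RangeError Invalid array length
}
```

This is only a sketch of the failure mode; whether the tokenizer in question fails for this exact reason would need to be confirmed in tokenizers.js.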

Prerequisites

  • [X] The model is supported in Transformers (i.e., listed here)
  • [X] The model can be exported to ONNX with Optimum (i.e., listed here)

Additional information

No response

Your contribution

I don't think so; I'm a very junior coder.

iagovar avatar Dec 20 '23 17:12 iagovar

Hi there 👋 Can you share the code you used that produces this error? Also, have you possibly uploaded the models to the HF Hub? If so, I can do some testing myself.

xenova avatar Dec 20 '23 19:12 xenova

Hi Xenova, first of all, thanks for your work! Without you it would be much harder.

Here's the pastebin of my code, just a test: https://pastebin.com/1GBxePEP

Here's the HF repo with the model converted to onnx using the script you guys provided: https://huggingface.co/iagovar/roberta-base-bne-sqac-onnx

iagovar avatar Dec 21 '23 13:12 iagovar

Thanks for the additional context. Unfortunately, I am unable to reproduce the issue. Which version of transformers.js are you using? Running the following code:

import {pipeline} from '@xenova/transformers';

const answerer = await pipeline('question-answering', 'roberta-base-bne-sqac', { quantized: false })

const question = "¿Dónde se celebra el concierto?";
const context = "El concierto de juanes se celebrará en el palacio de la ópera, a las 8 de la tarde";
const output = await answerer(question, context);
console.log(output);

produces:

{ answer: ' palacio de la ópera', score: 0.5542509822824329 }

Also, could you update the repo structure of this model so that the files in this folder are in the main directory? https://huggingface.co/iagovar/roberta-base-bne-sqac-onnx/tree/main/roberta-base-bne-sqac

xenova avatar Dec 21 '23 22:12 xenova

Hmmm, could the Node version be a factor? I'm using 18+. Also, I'm using let { pipeline, env } = await import('@xenova/transformers'); since I'm using CommonJS in the application I want to embed this in. I wonder if that might have something to do with it.
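For context, dynamic import() is the standard way to load an ESM-only package such as @xenova/transformers from a CommonJS file, so the pattern itself should be fine. A minimal sketch of the shape, demonstrated here with a built-in module (the same pattern applies to await import('@xenova/transformers')):

```javascript
// CommonJS file (.cjs, or "type": "commonjs" in package.json):
// ESM-only packages cannot be require()'d, but dynamic import() works.
// node:path stands in for '@xenova/transformers' in this sketch.
async function main() {
  const { join } = await import('node:path'); // dynamic import inside CJS
  return join('a', 'b');
}

main().then(console.log); // 'a/b' on POSIX, 'a\b' on Windows
```

So on its own, loading the library this way should not cause a tokenizer RangeError.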

And yes, I will update the repo. I uploaded it in a hurry through the web interface, sorry.

iagovar avatar Dec 22 '23 12:12 iagovar

Odd indeed, as neither of those should be an issue. Would you mind putting together a minimal repository/application for me to inspect in closer detail?

xenova avatar Dec 23 '23 15:12 xenova

Hi Xenova, a bit late, but here it is. I was using Transformers.js 2.12.0, according to its package.json.

I did a little test script; for better readability, here's the pastebin: https://pastebin.com/VxibUaW9

This script uses two different QA models. Both fail. Here's the output: https://pastebin.com/0VPTJBfZ

The same input works perfectly fine in Python: https://pastebin.com/mCF3vUX0


I noticed that if I cut the cleanContext variable to about half its size, the first model does make the inference, while the second still fails.

The problem is that I can't really do this: the intention is to use QA models to precisely extract information from messy dumps of code while avoiding paid APIs (I will loop through thousands of records from a database).

My intuition is that some data structure overflows in some way. I can't really say; I'm just speculating, because it works fine in Python.
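If the failure really is tied to context length, one hedged workaround sketch (assuming the tokenizer chokes only on long inputs; the chunking helper below is hypothetical, not part of transformers.js) is to split the context into overlapping word windows, run the QA pipeline per chunk, and keep the highest-scoring answer:

```javascript
// Hypothetical workaround: split an oversized context into overlapping
// word windows so each chunk stays under the model's limit. The overlap
// (windowWords - strideWords words) reduces the chance of cutting the
// answer span in half at a chunk boundary.
function chunkContext(text, windowWords = 300, strideWords = 200) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  for (let start = 0; start < words.length; start += strideWords) {
    chunks.push(words.slice(start, start + windowWords).join(' '));
    if (start + windowWords >= words.length) break; // last window reached
  }
  return chunks;
}

console.log(chunkContext('a b c d e f g h', 4, 3));
// [ 'a b c d', 'd e f g', 'g h' ]
```

The caller would then loop over the chunks, call the pipeline on each, and take the answer with the highest score. This is only a mitigation, not a fix for the underlying RangeError.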

I converted the models to ONNX using the provided script. Here are the files, in case it helps: https://files.fm/u/db4mfn8p94

iagovar avatar Jan 12 '24 12:01 iagovar