Wagtail
Could you check the rq and redis versions? Your error log is very hard to read, but it looks like a job queue problem.
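A quick way to print the installed versions, plus the server version (the last line assumes a redis server running locally on the default port):

```python
# Print the installed rq and redis-py package versions
from importlib.metadata import version

print("rq:", version("rq"))
print("redis-py:", version("redis"))

# Query the running redis server's version (assumes localhost:6379)
import redis
print("redis-server:", redis.Redis().info()["redis_version"])
```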
It seems like the XLM-RoBERTa model is hardcoded into the codebase. Have a look at spaCy for individual models: https://explosion.ai/blog/ud-benchmarks-v3-2
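For comparison, a minimal sketch of the spaCy approach, where each language gets its own installable pipeline instead of one shared multilingual backbone (the model name here is just the standard English package):

```python
# spaCy loads per-language pipelines by name, e.g. after
# `python -m spacy download en_core_web_sm`
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Trankit and spaCy package their models differently.")
print([(token.text, token.pos_) for token in doc])
```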
There are no lemmas in the training data, so there can't be a lemmatizer?! Can't I use the other parts of the pipeline? When I run ``` from trankit import...
Have a look at the training documentation (https://trankit.readthedocs.io/en/latest/training.html); you could try to train a customized pipeline.
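A rough sketch of what that could look like, following the structure shown in the linked training docs; the exact config keys and the placeholder paths below should be double-checked against the documentation:

```python
# Sketch: train a customized trankit pipeline from your own CoNLL-U files.
import trankit

trainer = trankit.TPipeline(
    training_config={
        'task': 'posdep',                     # e.g. train the tagger/parser component
        'save_dir': './my_custom_pipeline',   # where the trained model is saved
        'train_conllu_fp': './train.conllu',  # training data in CoNLL-U format
        'dev_conllu_fp': './dev.conllu',      # development data in CoNLL-U format
    }
)
trainer.train()
```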
Hey Ph.D. Oleg Polivin, you can decide whether to freeze the pretrained xlm-r-base or xlm-r-large model and fine-tune only the adapters for your new tasks. The pretrained model and the...
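In plain PyTorch terms, freezing the backbone could look roughly like this (a sketch using the Hugging Face XLM-R weights; the adapters/task heads you add on top stay trainable because their parameters are created afterwards):

```python
# Sketch: freeze every pretrained XLM-R weight so only newly added
# modules (adapters, task heads) receive gradients during fine-tuning.
from transformers import XLMRobertaModel

backbone = XLMRobertaModel.from_pretrained("xlm-roberta-base")  # or xlm-roberta-large

for param in backbone.parameters():
    param.requires_grad = False
```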
@bre7 a datepicker without jQuery is a whole new project, so we should first focus on supporting Bootstrap 5 with jQuery as an extra dependency.
What would a custom converter look like? Is there a good starting point for one?
Okay, thanks, I had a typo and missed the "google" in the model path "google/flan-t5-base". It would be lovely to add it to the usage docs, like: `# load (supports...
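For reference, a minimal loading snippet with the full hub path, assuming the standard transformers API is what the usage section would show:

```python
# Load the model with its full hub path ("google/flan-t5-base",
# not just "flan-t5-base").
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```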
@akesh1235 you probably mean [flan-t5-large-grammar-synthesis](https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis). This model is only trained on the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset. You can add more datasets for the English-centric Flan-T5:

* https://paperswithcode.com/datasets?task=grammatical-error-correction
* https://huggingface.co/datasets?other=grammatical-error-correction

Also, you can...
How much RAM did nano-gpt use before the change? Have you tried the smallest nano-gpt setting?