Justin Zhao
v0.5.3 is now released.
Largely a duplicate of #2158
@anneholler FYI
I'm not necessarily opposed to this change, but this would be a rather significant backwards compatibility layer that we'd need to maintain for a long time, so definitely want to...
I think this is a great idea -- similarly, I filed #1934 requesting the same. Including this in 0.6 SGTM.
Hi, you can use `auto_transformer`, e.g.: ```python import pandas as pd import yaml from ludwig.api import LudwigModel config = """ input_features: - name: text type: text encoder: auto_transformer pretrained_model_name_or_path: 'roberta-large'...
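The quoted comment is truncated by the feed. Below is a minimal sketch of the complete recipe it describes; the `input_features` block mirrors the quoted config, while the `label` output feature, the toy dataframe, and the training call are illustrative assumptions added here for completeness, not part of the original comment.

```python
# Minimal sketch, assuming a category output feature and a toy dataset.
import pandas as pd
import yaml

from ludwig.api import LudwigModel

config = yaml.safe_load(
    """
input_features:
  - name: text
    type: text
    encoder: auto_transformer
    pretrained_model_name_or_path: 'roberta-large'

output_features:
  - name: label
    type: category
"""
)

# Hypothetical toy dataframe, only to make the example self-contained.
df = pd.DataFrame(
    {
        "text": ["great movie", "terrible plot", "loved it", "not my taste"],
        "label": ["positive", "negative", "positive", "negative"],
    }
)

# Train a model whose text feature is encoded by the pretrained transformer.
model = LudwigModel(config)
train_stats, _, _ = model.train(dataset=df)
```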
> Is the specified encoder (for example, roberta-large) the model used for training after encoding the text sequence **_or_** the encoder used to obtain the sequence embeddings, which is later...
Potentially related: #1655
Hi @Jeffwan, I'm not able to reproduce the issue you are seeing. Here's my output from your repro commands: [rotten_tomatoes_output.txt](https://github.com/ludwig-ai/ludwig/files/9420171/rotten_tomatoes_output.txt) One hypothesis is that the config that gets generated from...
@Jeffwan Not yet, thanks for the ping. I'll plan to look at this tomorrow.