
Results: 12 cbert_aug issues

**Main changes in this PR**
- Added Python requirements
- Added Docker image with required CUDA dependencies
- Added instructions to README

Hello, can cbert be used with other language models? For example, CamemBERT or FlauBERT for French?
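
In case it helps, a minimal sketch of loading a French masked language model through Hugging Face's Auto classes; the model names come from the Hugging Face hub, and whether the rest of the cbert_aug pipeline works with them unchanged is exactly what this issue asks:

```python
# Sketch only (not part of cbert_aug): load a French MLM via the Auto classes.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "camembert-base"  # "flaubert/flaubert_base_cased" is another hub option
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```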

Hi, when I run your code with `python cbert_finetune.py` I get the following problem:
Traceback (most recent call last):
  File "cbert_finetune.py", line 168, in <module>
    main()
  File "cbert_finetune.py", line 151, in main...

forward() got an unexpected keyword argument 'masked_lm_labels'. Do you know why that is?
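
For anyone hitting the same error: recent transformers releases removed the `masked_lm_labels` keyword from `BertForMaskedLM.forward()` in favour of `labels`. A minimal sketch of the newer call, with an illustrative model name and input:

```python
# Sketch only: recent transformers releases expect `labels` instead of the
# removed `masked_lm_labels` argument on BertForMaskedLM.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = inputs["input_ids"].clone()      # illustrative: score every position
outputs = model(**inputs, labels=labels)  # old code: masked_lm_labels=labels
loss = outputs.loss
```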

@1024er I want to ask a question about cbert_augdata.py: how do I choose the best `temperature_value` for text augmentation? I hope you can answer this. Thanks a lot.
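
For context, the temperature in this kind of augmentation conventionally scales the MLM logits before replacement tokens are sampled: low values stay close to the most likely token, high values give more diverse but noisier substitutions. A standalone illustration of temperature-scaled sampling, not the repo's exact code (shapes and values are made up):

```python
# Illustration of temperature-scaled sampling from MLM logits.
# Higher temperature -> flatter distribution -> more diverse replacements;
# temperature close to 0 approaches argmax.
import torch

def sample_with_temperature(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    """Sample one token id per position from temperature-scaled logits."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)

logits = torch.randn(5, 30522)               # 5 masked positions, BERT-sized vocab
print(sample_with_temperature(logits, 1.0))
print(sample_with_temperature(logits, 2.0))  # noisier picks
```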

Hello, when I run cbert_augdata.py, I get an error related to chainer. Maybe my chainer version is too old. I tried Chainer 7.7, but I still get an error related...

Traceback (most recent call last):
  File "cbert_augdata.py", line 187, in <module>
    main()
  File "cbert_augdata.py", line 147, in main
    shutil.copy(origin_train_path, save_train_path)
  File "/data/Anaconda3/envs/lyc/lib/python3.6/shutil.py", line 245, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/data/Anaconda3/envs/lyc/lib/python3.6/shutil.py",...

The SST-2 dataset included in the repo contains 6,228 training samples, 692 validation samples, and 1,821 test samples. But the official SST-2 dataset (which can be accessed via `torchtext`) contains...

Hugging Face's BertForMaskedLM requires -100 for masked tokens where no loss is to be computed/backpropagated (https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L881). If set to -1, loss.backward() returns the error described here: https://github.com/pytorch/pytorch/issues/1204 and log...
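
A hedged sketch of the labelling scheme described above; the tensor values are illustrative, and 103 is assumed to be the [MASK] id of bert-base-uncased:

```python
# Sketch of building MLM labels the way recent transformers expects:
# -100 marks positions ignored by the loss; masked positions keep the
# original token id as the prediction target.
import torch

input_ids = torch.tensor([[101, 2023, 2003, 103, 102]])     # [MASK] at position 3
original_ids = torch.tensor([[101, 2023, 2003, 2307, 102]])  # unmasked sentence

labels = torch.full_like(input_ids, -100)   # ignore every position by default
mask_positions = input_ids == 103           # assumed [MASK] id for bert-base-uncased
labels[mask_positions] = original_ids[mask_positions]
# labels is now [[-100, -100, -100, 2307, -100]]: only the masked token
# contributes to the loss; using -1 instead breaks loss.backward().
```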

Not supported by the latest transformers library; see https://github.com/huggingface/transformers/issues/2082. If warmup is required, we can use `scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=WARMUP_STEPS, num_training_steps = -1)` instead of `scheduler = WarmupLinearSchedule(optimizer,...
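
For completeness, a hedged sketch of the replacement scheduler wired into a dummy loop; WARMUP_STEPS, TOTAL_STEPS, and the stand-in model are placeholders, not values from this repo:

```python
# Sketch: swapping the removed WarmupLinearSchedule for
# get_linear_schedule_with_warmup in recent transformers releases.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in for the fine-tuned BERT model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

WARMUP_STEPS = 100   # placeholder
TOTAL_STEPS = 1000   # placeholder

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=WARMUP_STEPS,
    num_training_steps=TOTAL_STEPS,
)

for step in range(TOTAL_STEPS):
    optimizer.step()   # a real loop would compute a loss and backprop first
    scheduler.step()   # linear warmup, then linear decay of the learning rate
```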