TTS
[Bug] xtts demo not saving checkpoints, thus can't run inference
Describe the bug
Hi, I ran the demo yesterday on Colab with no issues. Today I tried again under the same conditions, but this time the best-model checkpoints just aren't being saved, and they aren't being recorded anywhere.
To Reproduce
Just run the Colab normally, following the instructions.
Expected behavior
No response
Logs
2023-12-12 17:44:51.739884: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-12 17:44:51.739947: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-12 17:44:51.739989: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-12 17:44:52.978077: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Running on local URL: http://0.0.0.0:5003
Running on public URL: https://931000ac23620825be.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
>> DVAE weights restored from: /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/XTTS_v2.0_original_model_files/dvae.pth
| > Found 283 files in /content/drive/MyDrive/Hexagram/xtts_test/emo/dataset
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
> Training Environment:
| > Backend: Torch
| > Mixed precision: False
| > Precision: float32
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 8
| > Num. of Torch Threads: 1
| > Torch seed: 1
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir=/content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> Model has 518442047 parameters
> EPOCH: 0/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> Sampling by language: dict_keys(['en'])
> TRAINING (2023-12-12 17:46:37)
--> TIME: 2023-12-12 17:46:50 -- STEP: 0/71 -- GLOBAL_STEP: 0
| > loss_text_ce: 0.018399003893136978 (0.018399003893136978)
| > loss_mel_ce: 4.352884769439697 (4.352884769439697)
| > loss: 4.371284008026123 (4.371284008026123)
| > grad_norm: 0 (0)
| > current_lr: 5e-06
| > step_time: 1.165 (1.1649720668792725)
| > loader_time: 11.6082 (11.608171463012695)
--> TIME: 2023-12-12 17:47:23 -- STEP: 50/71 -- GLOBAL_STEP: 50
| > loss_text_ce: 0.02171478606760502 (0.022096751555800438)
| > loss_mel_ce: 2.6930222511291504 (3.0739004755020143)
| > loss: 2.7147369384765625 (3.0959972238540647)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3295 (0.3330649137496948)
| > loader_time: 0.0097 (0.013716611862182617)
> Filtering invalid eval samples!!
> Total eval samples after filtering: 49
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.09351348876953125 (+0)
| > avg_loss_text_ce: 0.02092768841733535 (+0)
| > avg_loss_mel_ce: 2.629672129948934 (+0)
| > avg_loss: 2.6505998174349465 (+0)
> EPOCH: 1/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:48:01)
--> TIME: 2023-12-12 17:48:23 -- STEP: 29/71 -- GLOBAL_STEP: 100
| > loss_text_ce: 0.022052044048905373 (0.021474947831754028)
| > loss_mel_ce: 2.5578114986419678 (2.534214406177916)
| > loss: 2.5798635482788086 (2.5556893348693848)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.367 (0.35631815318403565)
| > loader_time: 0.009 (0.010482763421946558)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.09097927808761597 (-0.002534210681915283)
| > avg_loss_text_ce: 0.02052511600777507 (-0.00040257240956028187)
| > avg_loss_mel_ce: 2.565500199794769 (-0.06417193015416522)
| > avg_loss: 2.5860253175099692 (-0.06457449992497732)
> EPOCH: 2/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:48:58)
--> TIME: 2023-12-12 17:49:06 -- STEP: 8/71 -- GLOBAL_STEP: 150
| > loss_text_ce: 0.021225377917289734 (0.02238078275695443)
| > loss_mel_ce: 2.4822535514831543 (2.544594407081604)
| > loss: 2.50347900390625 (2.566975176334381)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3563 (0.38175302743911743)
| > loader_time: 0.008 (0.009030967950820923)
--> TIME: 2023-12-12 17:49:43 -- STEP: 58/71 -- GLOBAL_STEP: 200
| > loss_text_ce: 0.022638365626335144 (0.022184390446235394)
| > loss_mel_ce: 2.3525304794311523 (2.437411641252452)
| > loss: 2.375168800354004 (2.459596029643354)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3568 (0.3823215591496435)
| > loader_time: 0.0081 (0.008994587536515863)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.08928944667180379 (-0.001689831415812179)
| > avg_loss_text_ce: 0.02045490018402537 (-7.021582374969887e-05)
| > avg_loss_mel_ce: 2.5211500922838845 (-0.04435010751088431)
| > avg_loss: 2.541605015595754 (-0.04442030191421509)
> EPOCH: 3/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:49:55)
--> TIME: 2023-12-12 17:50:25 -- STEP: 37/71 -- GLOBAL_STEP: 250
| > loss_text_ce: 0.020332181826233864 (0.02113649906036822)
| > loss_mel_ce: 1.8743358850479126 (2.3317640826508796)
| > loss: 1.8946681022644043 (2.3529005759471167)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3379 (0.37295331826081146)
| > loader_time: 0.0082 (0.008913181923531197)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.09068983793258667 (+0.0014003912607828822)
| > avg_loss_text_ce: 0.0204750030922393 (+2.0102908213932152e-05)
| > avg_loss_mel_ce: 2.5089246233304343 (-0.012225468953450225)
| > avg_loss: 2.529399593671163 (-0.012205421924591064)
> EPOCH: 4/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:50:52)
--> TIME: 2023-12-12 17:51:05 -- STEP: 16/71 -- GLOBAL_STEP: 300
| > loss_text_ce: 0.019207967445254326 (0.02070502028800547)
| > loss_mel_ce: 2.0052595138549805 (2.263710305094719)
| > loss: 2.0244674682617188 (2.284415304660797)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.329 (0.3692135661840439)
| > loader_time: 0.0089 (0.008511707186698914)
--> TIME: 2023-12-12 17:51:42 -- STEP: 66/71 -- GLOBAL_STEP: 350
| > loss_text_ce: 0.020322196185588837 (0.021516976680493717)
| > loss_mel_ce: 2.071322441101074 (2.198253160173242)
| > loss: 2.091644525527954 (2.2197701371077327)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.4131 (0.3744211774883848)
| > loader_time: 0.0076 (0.00866986404765736)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.09137902657190959 (+0.0006891886393229213)
| > avg_loss_text_ce: 0.02040953344355027 (-6.54696486890316e-05)
| > avg_loss_mel_ce: 2.5105610688527427 (+0.0016364455223083496)
| > avg_loss: 2.5309706330299377 (+0.001571039358774673)
> EPOCH: 5/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:51:49)
--> TIME: 2023-12-12 17:52:23 -- STEP: 45/71 -- GLOBAL_STEP: 400
| > loss_text_ce: 0.015302835032343864 (0.021413246169686317)
| > loss_mel_ce: 2.448807716369629 (2.1535098923577203)
| > loss: 2.4641106128692627 (2.174923133850098)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.4952 (0.3692637019687229)
| > loader_time: 0.0092 (0.009017451604207357)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.08962543805440266 (-0.0017535885175069266)
| > avg_loss_text_ce: 0.020435296464711428 (+2.5763021161157723e-05)
| > avg_loss_mel_ce: 2.5211901466051736 (+0.010629077752430938)
| > avg_loss: 2.541625459988912 (+0.01065482695897435)
> EPOCH: 6/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:52:44)
--> TIME: 2023-12-12 17:53:03 -- STEP: 24/71 -- GLOBAL_STEP: 450
| > loss_text_ce: 0.021666206419467926 (0.021072693557168048)
| > loss_mel_ce: 2.113767147064209 (2.0653036634127298)
| > loss: 2.1354334354400635 (2.086376359065374)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3649 (0.37011505166689557)
| > loader_time: 0.0085 (0.00866664449373881)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.09099972248077393 (+0.0013742844263712611)
| > avg_loss_text_ce: 0.02036033229281505 (-7.496417189637936e-05)
| > avg_loss_mel_ce: 2.5216694871584573 (+0.0004793405532836914)
| > avg_loss: 2.5420298178990683 (+0.00040435791015625)
> EPOCH: 7/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:53:41)
--> TIME: 2023-12-12 17:53:44 -- STEP: 3/71 -- GLOBAL_STEP: 500
| > loss_text_ce: 0.02116883173584938 (0.02059762179851532)
| > loss_mel_ce: 1.8480390310287476 (1.9163380066553752)
| > loss: 1.8692078590393066 (1.9369356234868367)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.4097 (0.40784867604573566)
| > loader_time: 0.0092 (0.00968782107035319)
--> TIME: 2023-12-12 17:54:21 -- STEP: 53/71 -- GLOBAL_STEP: 550
| > loss_text_ce: 0.021277613937854767 (0.02116718796907731)
| > loss_mel_ce: 2.1292479038238525 (1.9866992774999361)
| > loss: 2.1505255699157715 (2.007866465820457)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3498 (0.37555610908652254)
| > loader_time: 0.0084 (0.009038628272290499)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.08700668811798096 (-0.003993034362792969)
| > avg_loss_text_ce: 0.020313929611196123 (-4.640268161892544e-05)
| > avg_loss_mel_ce: 2.548041800657908 (+0.026372313499450684)
| > avg_loss: 2.5683557589848838 (+0.02632594108581543)
> EPOCH: 8/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:54:37)
--> TIME: 2023-12-12 17:55:02 -- STEP: 32/71 -- GLOBAL_STEP: 600
| > loss_text_ce: 0.020227529108524323 (0.021138300595339384)
| > loss_mel_ce: 1.7593525648117065 (1.95420765504241)
| > loss: 1.7795801162719727 (1.9753459803760054)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3255 (0.3742501512169838)
| > loader_time: 0.0094 (0.008944958448410036)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.08889577786127727 (+0.00188908974329631)
| > avg_loss_text_ce: 0.020309025421738625 (-4.90418945749832e-06)
| > avg_loss_mel_ce: 2.6121634244918823 (+0.06412162383397435)
| > avg_loss: 2.6324724356333413 (+0.06411667664845755)
> EPOCH: 9/10
--> /content/drive/MyDrive/Hexagram/xtts_test/emo/run/training/GPT_XTTS_FT-December-12-2023_05+46PM-0000000
> TRAINING (2023-12-12 17:55:33)
--> TIME: 2023-12-12 17:55:43 -- STEP: 11/71 -- GLOBAL_STEP: 650
| > loss_text_ce: 0.02275952324271202 (0.020174486901272427)
| > loss_mel_ce: 1.6004739999771118 (1.8832634904167869)
| > loss: 1.6232335567474365 (1.9034379612315784)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3979 (0.39226111498746)
| > loader_time: 0.0078 (0.00890627774325284)
--> TIME: 2023-12-12 17:56:20 -- STEP: 61/71 -- GLOBAL_STEP: 700
| > loss_text_ce: 0.020576462149620056 (0.020974190233916532)
| > loss_mel_ce: 2.1084694862365723 (1.8571273733357914)
| > loss: 2.1290459632873535 (1.8781015579817726)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.3559 (0.38042967436743563)
| > loader_time: 0.0077 (0.008889374185781014)
> EVALUATION
--> EVAL PERFORMANCE
| > avg_loader_time: 0.08895562092463176 (+5.984306335449219e-05)
| > avg_loss_text_ce: 0.020284986589103937 (-2.4038832634687424e-05)
| > avg_loss_mel_ce: 2.6419233083724976 (+0.029759883880615234)
| > avg_loss: 2.6622082789738974 (+0.029735843340556123)
Model training done!
Environment
XTTS_FT.ipynb
Additional context
No response
I guess this is because there was a new Trainer release that starts saving the best model only after 10k steps. You could set `save_best_after` or `save_step` to a lower value.
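For reference, a minimal sketch of what lowering those thresholds might look like, assuming the `GPTTrainerConfig` used by the XTTS fine-tuning recipe (the field names come from the Coqui Trainer's `TrainerConfig`; the values are illustrative, and the exact place to set them in the Colab/demo script may differ):

```python
# Hypothetical fragment of the XTTS fine-tuning config, with the
# checkpoint thresholds lowered so a ~710-step run actually saves.
config = GPTTrainerConfig(
    # ... other recipe settings unchanged ...
    save_step=500,         # write a periodic checkpoint every 500 global steps
    save_best_after=500,   # allow best_model.pth to be saved after step 500
                           # (a 10k-step default is never reached in this
                           #  10-epoch run, which ends around step 710)
    save_n_checkpoints=2,  # keep only the most recent checkpoints
)
```

With the default `save_best_after` of 10000, a short run like the one in the logs finishes before the Trainer is ever allowed to write a best model, which would explain why no checkpoint appears.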
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.
I can't download the fine-tuning dataset and models. Do you have them? Can you share them? Thanks, buddy.