Sangchun Ha

Results 35 comments of Sangchun Ha

Also, because `log_softmax(log_softmax(x)) = log_softmax(x)`, the result is the same.
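In case a quick check is useful, here is a minimal sketch (my addition, not from the original thread) verifying that `log_softmax` is idempotent in PyTorch:

```python
import torch
import torch.nn.functional as F

# log_softmax(x) = x - logsumexp(x), and logsumexp(log_softmax(x)) = 0,
# so applying log_softmax a second time leaves the values unchanged.
x = torch.randn(4, 10)
once = F.log_softmax(x, dim=-1)
twice = F.log_softmax(once, dim=-1)
print(torch.allclose(once, twice))  # True
```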

Thanks for suggesting these great ideas. Could you open a PR with the test code?

@OleguerCanal We did it that way because accumulating the WER over the whole set makes the overall trend easier to see. Is there a particular reason you want point-wise WER?
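To make the distinction concrete, here is an illustrative sketch (not the openspeech implementation) of accumulated corpus-level WER versus point-wise per-utterance WER:

```python
# Accumulated WER pools edit distances and reference lengths over all utterances,
# while point-wise WER averages per-utterance ratios, which weights short and
# long utterances equally.

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance via dynamic programming."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1]

pairs = [("the cat sat".split(), "the cat".split()),
         ("hello".split(), "hallo".split())]

# Accumulated (corpus-level) WER: total errors / total reference words.
total_errors = sum(edit_distance(r, h) for r, h in pairs)
total_words = sum(len(r) for r, h in pairs)
print("accumulated WER:", total_errors / total_words)

# Point-wise WER: mean of per-utterance error rates.
print("mean point-wise WER:", sum(edit_distance(r, h) / len(r) for r, h in pairs) / len(pairs))
```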

@jun-danieloh @sooftware It doesn't look like a wandb problem, but how about switching the logger to tensorboard and trying again? [[link]](https://github.com/openspeech-team/openspeech/blob/main/openspeech/dataclass/configurations.py#L199-L200)
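If it helps, a hedged sketch of switching the logger from the command line; I'm assuming the linked `logger` field is exposed under the `trainer` config group, so adjust the path and the other overrides to your setup:

```shell
# Hedged sketch: switch the logger to tensorboard via a Hydra override.
# Assumption: the linked `logger` field lives under the trainer config group;
# keep whatever dataset/model/criterion overrides you already use.
python ./openspeech_cli/hydra_train.py \
    dataset=ksponspeech \
    tokenizer=kspon_character \
    trainer=gpu \
    trainer.logger=tensorboard
```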

This is a bottleneck: [[link]](https://github.com/openspeech-team/openspeech/blob/main/openspeech/data/sampler.py#L88-L89). When I used `RandomSampler` instead, it ran immediately. The relevant config field is here: [[link]](https://github.com/openspeech-team/openspeech/blob/main/openspeech/dataclass/configurations.py#L212-L215)

```python
sampler: str = field(
    default="else",
    metadata={"help": "smart: batching with similar sequence length. else: random batch"},
)
```

@tand22 Would you like to try changing the sampler from `smart` to `else`? In the current code, the default is `smart`, so you will have to modify it.
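If editing the source is inconvenient, a hedged alternative is a Hydra override on the command line; this assumes the linked `sampler` field is exposed under the `trainer` config group (if it is not, change the dataclass default as shown in the previous comment):

```shell
# Hedged sketch: switch to the random batch sampler without editing the source.
# Assumption: the `sampler` field is overridable under the trainer config group.
python ./openspeech_cli/hydra_train.py \
    dataset=ksponspeech \
    tokenizer=kspon_character \
    trainer=gpu \
    trainer.sampler=else
```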

@tand22 You can reduce the batch size, but a 2080 Ti may still be a bit short on memory. [[link]](https://github.com/openspeech-team/openspeech/blob/main/openspeech/dataclass/configurations.py#L190-L192)
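A hedged sketch of lowering the batch size from the command line, assuming the linked `batch_size` field is exposed under the `trainer` config group:

```shell
# Hedged sketch: lower the batch size via a Hydra override on top of your usual command.
# Assumption: the linked `batch_size` field lives under the trainer config group.
python ./openspeech_cli/hydra_train.py \
    dataset=ksponspeech \
    tokenizer=kspon_character \
    trainer=gpu \
    trainer.batch_size=16
```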

@rkskekzzz If you have a saved checkpoint, you can simply resume training with that option! https://github.com/openspeech-team/openspeech/blob/main/openspeech/utils.py#L325-L339
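For reference, a minimal sketch of what resuming looks like in plain PyTorch Lightning (which openspeech builds on); the actual openspeech option is the one in the linked `utils.py` lines, so the argument name below is an assumption:

```python
import pytorch_lightning as pl

# Minimal sketch, assuming the PyTorch Lightning 1.x API that openspeech used at the time.
# `model` and `datamodule` are hypothetical placeholders for your LightningModule / DataModule.
trainer = pl.Trainer(resume_from_checkpoint="outputs/last.ckpt")
# trainer.fit(model, datamodule=datamodule)
```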

I don't think the training is proceeding correctly; could you share the loss graph?

I'm really sorry for the late reply. I trained the ContextNet model with CTC and confirmed that training works well.

```shell
python ./openspeech_cli/hydra_train.py \
    dataset=ksponspeech \
    tokenizer=kspon_character \
    ...
```