Vedant Roy

Results: 96 comments of Vedant Roy

**Edit:** Figured out how torchmetrics works; the stuff below is irrelevant. @hanlint I'm confused about how this interface works. For example: I want to log 2 different losses from my...

@hanlint This is the thing I want to log for my validation batch https://github.com/vedantroy/improved-ddpm-pytorch/blob/d2d6954f19b7b850bb45aff815f1329df3f2a5f4/diffusion/diffusion.py#L293 (Irrelevant, figured it out; I need to implement some custom metrics)
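For anyone landing here with the same question, a custom metric along the lines mentioned above can be built by subclassing `torchmetrics.Metric`. A minimal sketch (the class name `MeanLoss` and its state names are illustrative, not anything from this repo):

```python
import torch
from torchmetrics import Metric

class MeanLoss(Metric):
    """Running mean of a scalar loss across validation batches."""

    def __init__(self):
        super().__init__()
        # States are synced across processes using the given reduction.
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("count", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, loss: torch.Tensor) -> None:
        self.total += loss.detach().float().sum()
        self.count += float(loss.numel())

    def compute(self) -> torch.Tensor:
        return self.total / self.count
```

One instance per quantity (e.g. one for the MSE term and one for the VB term) keeps the two losses separate in the logs.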

Fwiw, the specific thing I was confused about in the documentation was that (for some reason) I assumed the validate method was probably going to be something along the lines...

Also, it seems like EMA is normally set with a rate (e.g. EMA rate = 0.999). Is there a way to set the EMA rate like that? It would make...
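For reference, the rate-based formulation mentioned above is the usual shadow-weights update; here is a minimal sketch in plain PyTorch (the function name and signature are mine, not Composer's API):

```python
import torch

@torch.no_grad()
def ema_update(ema_model: torch.nn.Module, model: torch.nn.Module, rate: float = 0.999) -> None:
    # shadow = rate * shadow + (1 - rate) * live, applied parameter-wise.
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(rate).add_(p, alpha=1.0 - rate)
```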

@coryMosaicML, any update on this?

> Thanks @vedantroy for reporting these, we'll take a look at each carefully.
>
> For #5 above, you should see something in the logs (e.g.):
>
> ```
> ...
> ```

@hanlint My trainer has this:

```python
def loss(self, out, micro_batch):
    mse_loss, vb_loss = self.diffusion.training_losses(
        out.model_out, x_0=out.x_0, x_t=out.x_t, t=out.t, noise=out.noise
    )
    return mse_loss + vb_loss
```

I want to log both...
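If it helps anyone else, one way to keep the two terms separate is to track each with its own aggregator; here is a sketch using `torchmetrics.MeanMetric` (the tracker names and the placeholder loss values are illustrative):

```python
import torch
from torchmetrics import MeanMetric

mse_tracker = MeanMetric()
vb_tracker = MeanMetric()

# Inside the loss computation, update each tracker with its component:
mse_loss, vb_loss = torch.tensor(0.12), torch.tensor(0.03)  # placeholders
mse_tracker.update(mse_loss)
vb_tracker.update(vb_loss)

# At the end of the eval loop, read out and reset the running means:
print("mse:", mse_tracker.compute().item(), "vb:", vb_tracker.compute().item())
mse_tracker.reset()
vb_tracker.reset()
```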

@abhi-mosaic, I turned on `ProgressBarLogger`, but I did not find any difference in the console output.

> Thanks for bringing this up! Feedback is super helpful.
>
> > (Bug) Nothing is printed indicating that composer is restarting the forward method when grad_accum="auto" is set to...
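For context, the general shape of the behavior being discussed (catch a CUDA OOM, retry the step with more gradient-accumulation microbatches, and print a log line when it happens) is sketched below. This is a generic illustration of the pattern, not Composer's actual implementation, and all names are hypothetical:

```python
import logging
import torch

log = logging.getLogger(__name__)

def run_step_with_auto_grad_accum(run_microbatches, batch, grad_accum: int = 1, max_grad_accum: int = 64):
    """Retry `run_microbatches` with doubled grad_accum whenever CUDA runs out of memory."""
    while grad_accum <= max_grad_accum:
        try:
            return run_microbatches(batch, grad_accum)
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()
            grad_accum *= 2
            # The missing print: make the restart visible to the user.
            log.warning("CUDA OOM caught; restarting forward with grad_accum=%d", grad_accum)
    raise RuntimeError(f"Batch did not fit even with grad_accum={max_grad_accum}")
```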

I don't care about logging things at a micro-batch level. I was saying that the parameter name `batch` in the `loss` method should be changed to `micro_batch` to be...