Bowen Zheng
> This has been addressed already in https://bugzilla.redhat.com/show_bug.cgi?id=2026299 .
>
> Using a newer release from https://github.com/billziss-gh/winfsp/releases, in the Windows guest, open regedit, add HKEY_LOCAL_MACHINE\SOFTWARE\VirtIO-FS\CaseInsensitive as a DWORD with value...
Are there any new ideas? Following the design of the gym wrappers (e.g., `AtariPreprocessing.noop_max`), maybe we can manually apply several steps of random actions after the fake reset in `AutoResetWrapper`?
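A minimal sketch of that idea, assuming the Gymnasium 5-tuple step API; the wrapper name `RandomActionsOnReset` and the `num_random_steps` parameter are hypothetical, not part of any released wrapper:

```python
import gymnasium as gym

class RandomActionsOnReset(gym.Wrapper):
    """Hypothetical wrapper: after every reset, take a few random actions
    so episodes do not always start from the same initial state."""

    def __init__(self, env, num_random_steps=8):
        super().__init__(env)
        self.num_random_steps = num_random_steps

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        for _ in range(self.num_random_steps):
            obs, _, terminated, truncated, info = self.env.step(
                self.env.action_space.sample()
            )
            if terminated or truncated:  # fall back to a plain reset
                obs, info = self.env.reset(**kwargs)
        return obs, info
```

As with `noop_max`, sampling the number of random steps per episode (rather than fixing it) would add further diversity to the initial states.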
> Brax PPO now supports a param `num_resets_per_eval` if you want to randomize your init states multiple times during training:
>
> https://github.com/google/brax/blob/main/brax/training/agents/ppo/train.py#L90
>
> We generally don't use this...
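For reference, an untested usage sketch; only `num_resets_per_eval` is confirmed by the linked source, and the surrounding arguments follow the usual brax `ppo.train` signature but may differ across versions:

```python
from brax import envs
from brax.training.agents.ppo import train as ppo

# Sketch: re-randomize initial states several times during training via
# num_resets_per_eval (see the linked train.py for the authoritative API).
env = envs.get_environment("ant")
make_policy, params, metrics = ppo.train(
    environment=env,
    num_timesteps=1_000_000,
    episode_length=1000,
    num_resets_per_eval=10,
)
```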
Have you tested the PSP-R18 model? Based on the code in this repo, I got mIoU = 75.53 and 75.37 in two independent runs with DIST KD, which is lower than the mIoU in...
I think the calculation of `other` is still incorrect: it neglects that the output logits can be negative. [cwl2.py#L146](https://github.com/rikonaka/adversarial-attacks-pytorch/blob/628b82958d7be65c4105d14e6cf226414d2cf62c/torchattacks/attacks/cwl2.py#L146)
> > I think the calculation of `other` is still incorrect: it neglects that the output logits can be negative. [cwl2.py#L146](https://github.com/rikonaka/adversarial-attacks-pytorch/blob/628b82958d7be65c4105d14e6cf226414d2cf62c/torchattacks/attacks/cwl2.py#L146)
>
> Thank you very much for your advice,...
> > Thanks for the quick response. I think you misunderstand the issue. A quick fix of [cwl2.py#L146](https://github.com/rikonaka/adversarial-attacks-pytorch/blob/628b82958d7be65c4105d14e6cf226414d2cf62c/torchattacks/attacks/cwl2.py#L146) would be like: `other = torch.max((1 -...`
> > However, there is no such guarantee that the output logits must be non-negative in PyTorch, for arbitrary models under any training method.
>
> 😵💫 The same, there...
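Since the quoted patch is truncated above, here is a self-contained reconstruction of the idea, with illustrative tensors: instead of zeroing the true-class logit, push it far below any real logit, so the max over the remaining classes stays correct even when every logit is negative.

```python
import torch

# Illustrative values: all logits negative, true class = index 1.
outputs = torch.tensor([[-5.0, -1.0, -3.0]])      # model logits
one_hot_labels = torch.tensor([[0.0, 1.0, 0.0]])  # one-hot true labels

# Buggy version: zeroing the true-class logit makes 0 the max
# whenever every other logit is negative.
other_buggy = torch.max((1 - one_hot_labels) * outputs, dim=1)[0]
print(other_buggy)  # tensor([0.])  -- wrong, 0 is not a real logit

# Fixed version: mask the true class with a large negative offset instead.
other_fixed = torch.max(
    (1 - one_hot_labels) * outputs - one_hot_labels * 1e4, dim=1
)[0]
print(other_fixed)  # tensor([-3.])  -- the largest non-true-class logit
```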
> I remember that .mean(1) is equal to reduction='batch_mean'?

Here is the source code of `F.kl_div`: https://github.com/pytorch/pytorch/blob/defa0d3a2d230e5d731d5c443c1b9beda2e7fd93/torch/nn/functional.py#L2949-L2958

And the problem here is that the `kd_loss` is subsequently averaged by...
> So batch_mean equals `.mean(0)`?

No. `"batchmean"` means `.sum()/batch_size`, i.e., `.sum(1).mean()`.
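A small check of this equivalence, with illustrative tensors (assuming a 2D `(batch, classes)` input):

```python
import torch
import torch.nn.functional as F

# For a (batch, classes) input, reduction="batchmean" is the total sum
# divided by the batch size, i.e. .sum(1).mean() -- not the per-element
# .mean() and not .mean(0).
log_p = F.log_softmax(torch.randn(4, 10), dim=1)  # student log-probs
q = F.softmax(torch.randn(4, 10), dim=1)          # teacher probs

kl_batchmean = F.kl_div(log_p, q, reduction="batchmean")
kl_manual = F.kl_div(log_p, q, reduction="none").sum(1).mean()

assert torch.allclose(kl_batchmean, kl_manual)
```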