RF loss
fape_kwargs = {
    'masks_1d'       : masks_1d,
    'pred_in'        : pred_in,
    'indep'          : indep,
    'mask_crds'      : fape_mask,
    'fi_dev'         : self.fi_dev,
    'atom_frames'    : atom_frames,
    'sm_Ls'          : sm_Ls,
    'conf'           : self.conf,
    'diffusion_mask' : ~masks_1d['was_noised_in_3d'],
}
# Some FAPE losses coming out of this function will often be NaN, because
# some FAPE terms are undefined, e.g., when there is no motif (unconditional case)
fape_loss_dict, rmsd_dict = compute_fape_losses(**fape_kwargs)
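Since some entries of the returned loss dict can legitimately be NaN, any downstream reduction presumably has to drop them rather than propagate them. A minimal sketch of such a NaN-safe reduction (the helper name and the use of NumPy are illustrative assumptions, not the repo's actual code):

```python
import numpy as np

def nan_safe_mean(losses):
    """Average a dict of scalar losses, skipping NaN entries.

    NaNs arise when a term is undefined (e.g., no motif in the
    unconditional case); they must be dropped, not averaged in.
    Hypothetical helper -- the repo's real reduction may differ.
    """
    vals = np.array(list(losses.values()), dtype=float)
    finite = vals[np.isfinite(vals)]
    return float(finite.mean()) if finite.size else 0.0

fape_loss_dict = {'intra_fape': 0.8, 'inter_fape': float('nan')}
print(nan_safe_mean(fape_loss_dict))  # only the finite term contributes
```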
Dear developers, thank you for your excellent work! While studying the RF series, I ran into a few questions I would like to ask.

During training, before computing the loss via self.calc_loss(indep, ...), the code runs something like:

diffuser_out = diffuser.forward_marginal(indep.xyz, ...)
indep.xyz = diffuser_out.xyz.to(gpu)

At this point, hasn't indep already been noised away from the original structure? Since FAPE is supposed to compare the clean (ground-truth) structure against the model's prediction, why is this (now potentially noised) indep still used in the FAPE computation?

Also, in the training loop:

for loader_out in train_loader:
    indep, rfi, ... = loader_out

is the indep obtained here already noised? If so, why is it still used for the FAPE computation? I would greatly appreciate your insights. Thank you very much, and wishing you all the best in both your work and life!
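For context on why FAPE needs clean coordinates: FAPE (as defined in AlphaFold2) expresses every point in every residue's local frame for both the prediction and the ground truth, then compares the clamped distances. A minimal NumPy sketch, assuming frames are given as rotation matrices R and translations t (the function signature here is illustrative, not the repo's API):

```python
import numpy as np

def fape(R_pred, t_pred, x_pred, R_true, t_true, x_true,
         clamp=10.0, scale=10.0):
    """Frame Aligned Point Error, simplified.

    Each point x_j is expressed in each local frame i via
    T_i^{-1} x_j = R_i^T (x_j - t_i), for predicted and true
    frames/points; distances are clamped and averaged.
    """
    # (frames, points, 3) tensors of frame-local coordinates
    loc_pred = np.einsum('iba,ijb->ija', R_pred, x_pred[None] - t_pred[:, None])
    loc_true = np.einsum('iba,ijb->ija', R_true, x_true[None] - t_true[:, None])
    d = np.linalg.norm(loc_pred - loc_true, axis=-1)  # (frames, points)
    return float(np.minimum(d, clamp).mean() / scale)
```

Because the true frames and points enter the formula directly, feeding in noised coordinates as "ground truth" would supervise the model toward the noised structure, which is exactly the concern raised above.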
As far as I can tell, FAPE loss is not used in RFDiffusion. Instead, they use a Frame Loss which is described in the supplement.
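To illustrate what supervising frames directly (rather than frame-aligned points) can look like, here is a generic sketch of a per-residue frame error. This is NOT the exact formulation from the RFDiffusion supplement, only an assumed illustration of penalizing translation and rotation discrepancies between predicted and true frames:

```python
import numpy as np

def frame_loss(R_pred, t_pred, R_true, t_true, w_rot=1.0, w_trans=1.0):
    """Generic frame discrepancy: squared translation distance plus a
    Frobenius-norm rotation difference, averaged over residues.
    Hypothetical sketch -- see the RFDiffusion supplement for the real loss.
    """
    trans_err = ((t_pred - t_true) ** 2).sum(-1)         # (residues,)
    rot_err = ((R_pred - R_true) ** 2).sum((-1, -2))     # (residues,)
    return float((w_trans * trans_err + w_rot * rot_err).mean())
```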