zhangrentu
Thank you very much for your response. I added the dtype (torch.complex64) and performed the calculations with PyTorch, but I found that the results do not converge. Do you have...
Thanks. The following is the new version, with the added dtype constraint (torch.complex64):

```python
import torch
import theseus as th

def y_model(x, a, b, c):
    return a * torch.exp((-1j * b...
```
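A likely source of the non-convergence is that nonlinear least-squares solvers such as the ones in Theseus expect real-valued residuals. A standard workaround is to stack the real and imaginary parts of the complex error into one real vector, which preserves the objective because |z|² = Re(z)² + Im(z)². The sketch below illustrates the split with NumPy for brevity; the same idea applies to torch tensors. The model form is a guess (the snippet in the thread is truncated), and `real_residual` is a hypothetical helper name.

```python
import numpy as np

# Hypothetical complex model in the spirit of the thread's y_model
# (the exact expression in the thread is cut off):
#   y(x) = a * exp(-1j * b * x) + c
def y_model(x, a, b, c):
    return a * np.exp(-1j * b * x) + c

# Gauss-Newton / Levenberg-Marquardt expect REAL residuals.
# Stacking real and imaginary parts keeps the least-squares
# objective unchanged: |z|^2 = Re(z)^2 + Im(z)^2.
def real_residual(y_pred, y_obs):
    diff = y_pred - y_obs
    return np.concatenate([diff.real, diff.imag])

x = np.linspace(0.0, 1.0, 5)
y_obs = y_model(x, 2.0, 3.0, 0.5)
r = real_residual(y_model(x, 2.0, 3.0, 0.5), y_obs)
print(r.dtype, r.shape)  # float64 (10,): a purely real residual vector
```

With this split, the solver only ever sees real numbers, so the underlying linear algebra (Cholesky, etc.) is well-defined.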
While attempting parallel computation with torch.nn.DataParallel, the following error was encountered. A model composed of a th.TheseusLayer and the rest of the network structure is placed on the same...
Thank you very much for your response. However, when we use TheseusLayer in the model, it still returns NoneType. Additionally, the optim_vars and aux_vars data are automatically placed on cuda:0, but...
We tried changing the error function to its real part, for example real(exp(ia)) = cos(a), but after the modification the error still did not converge, or the matrix is non-positive...
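The identity itself is correct, and the short check below verifies it. It also shows one plausible reason (my reading, not confirmed in the thread) why a cos(a)-shaped residual can stall: its derivative -sin(a) vanishes at a = 0, π, ..., so the Gauss-Newton Jacobian, and hence the normal matrix built from it, can be zero or singular exactly where the iterate sits.

```python
import cmath
import math

# The identity used above: Re(exp(i*a)) = cos(a).
for a in (0.0, 0.7, math.pi / 3, 2.5):
    assert math.isclose(cmath.exp(1j * a).real, math.cos(a))

# d/da cos(a) = -sin(a) is zero at multiples of pi, so a residual built
# only from the real part carries no first-order information there.
a = math.pi
grad = -math.sin(a)
print(grad)  # ~0: the Gauss-Newton step degenerates at this point
```

Dropping the imaginary part also discards half the phase information, which is another reason the stacked real/imaginary residual is usually preferred over a pure cosine residual.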
Yes, I called it from a branch. The main flow of the parallel computation is as follows: the data is placed on the primary GPU (cuda:0), and the model is distributed to...
> @zhangrentu Regarding the convergence, in the script I shared above one thing I did was to increase the damping to a really large value (I used 100.0), and the...
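The effect of the large damping value mentioned above can be sketched numerically. A Levenberg-Marquardt step solves (JᵀJ + λI) dx = -Jᵀr; if J is rank-deficient or badly scaled, JᵀJ is singular or indefinite to the factorization, and the damping term shifts every eigenvalue up by λ. The matrices below are made up for illustration; only the λ = 100.0 value comes from the thread.

```python
import numpy as np

# Rank-deficient Jacobian: the second column is 2x the first.
J = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
H = J.T @ J
print(np.linalg.eigvalsh(H))          # smallest eigenvalue is 0: singular

lam = 100.0                            # the large damping from the thread
H_damped = H + lam * np.eye(2)
print(np.linalg.eigvalsh(H_damped))   # every eigenvalue shifted up by lam: SPD

r = np.array([1.0, -1.0, 0.5])
dx = np.linalg.solve(H_damped, -J.T @ r)  # now solvable (e.g. via Cholesky)
print(dx)
```

Large λ makes each step small and gradient-like (robust but slow); once the iterates are near a solution, λ can usually be reduced again for faster, Gauss-Newton-like convergence.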
> Ah, I see. We have never tested this inside a DataParallel model, so I don't have a lot of insight yet. Could you share a short repro script? I...
I'm sorry for providing an inappropriate example, and thank you again for your prompt responses. I currently have two main questions: 1. When the system I am solving encounters an indefinite...
> > > @zhangrentu Regarding the convergence, in the script I shared above one thing I did was to increase the damping to a really large value (I used 100.0),...