Casey Jao
Upstream [issue](https://github.com/pytorch/pytorch/issues/81186) and [proposed patch](https://github.com/pytorch/pytorch/pull/81188). The snippet above seems to work with the patch applied:

```
tensor([2.], requires_grad=True)
tensor([4.])
tensor([2.], requires_grad=True)
tensor([4.])
```
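The snippet itself appears earlier in the thread and isn't reproduced here; purely as a hypothetical reconstruction consistent with the output above (a leaf tensor `[2.]` whose gradient is `[4.]` points at `y = x ** 2`, printed once before and once after some round trip), it might look roughly like this. The pickle round trip is a guess at the transport mechanism, not the actual reproduction:

```python
import pickle
import torch

def check(x):
    # y = x ** 2, so dy/dx = 2 * x; for x = [2.] the gradient is [4.]
    y = (x ** 2).sum()
    y.backward()
    print(x)
    print(x.grad)

x = torch.tensor([2.0], requires_grad=True)
x_roundtrip = pickle.loads(pickle.dumps(x))  # stand-in for the serialization round trip

check(x)            # tensor([2.], requires_grad=True) / tensor([4.])
check(x_roundtrip)  # the copy should behave identically
```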
`__name__` and `workflow_function_string` were [previously used](https://github.com/AgnostiqHQ/covalent/blob/develop/covalent_dispatcher/_db/dispatchdb.py#L137-L166) by the UI, so their absence could introduce regressions. They are actually already stored in the DB, just not retrieved by `result_from`. The...
@santoshkumarradha @FyzHsn should this be fixed before release?
@mshkanth is your team independently retrieving `lattice.__name__` and `lattice.workflow_function_string` in your data layer?
I've also seen this with the Quantum Gravity tutorial (all executors).
The loss is flat because the [gradients computed](https://github.com/PennyLaneAI/pennylane/blob/e7d52079499508ebf204e4d8678be2fe2f20983e/pennylane/optimize/gradient_descent.py#L59) in `optimizer.step_and_cost()` are empty. PennyLane uses [autograd](https://github.com/PennyLaneAI/pennylane/blob/e7d52079499508ebf204e4d8678be2fe2f20983e/pennylane/_grad.py#L1) to construct the gradient function, and somehow that fails. A possibly related discussion: https://github.com/dask/distributed/issues/2581#issuecomment-478151764
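A minimal way to see that symptom outside the tutorial (the device and circuit below are illustrative, not the tutorial's): when PennyLane finds no trainable arguments, `qml.grad` returns an empty gradient, and `GradientDescentOptimizer.step_and_cost` then has nothing to update, so the reported cost stays constant. Exact warnings and defaults vary by PennyLane version.

```python
import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(angles):
    qml.RX(angles[0], wires=0)
    return qml.expval(qml.PauliZ(0))

trainable = pnp.array([0.3], requires_grad=True)
frozen = pnp.array([0.3], requires_grad=False)

print(qml.grad(cost)(trainable))  # a real gradient, roughly [-0.2955]
print(qml.grad(cost)(frozen))     # () plus a "no trainable parameters" warning

# With an empty gradient the optimizer cannot move the parameters,
# so the loss printed each epoch never changes.
opt = qml.GradientDescentOptimizer(stepsize=0.4)
params, loss = opt.step_and_cost(cost, frozen)
print(params, loss)               # unchanged parameters, same cost every step
```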
The key to this case lies in the following line: **`cost_function, init_angles = initialize_parameters(p=p, qubits=qubits, prob=prob, seed=s)`**. When the `initialize_parameters` electron is executed using Dask, the returned `init_angles` is merely a `numpy.ndarray` even though `initialize_parameters` constructs...
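A quick way to check this from the notebook (illustrative values; `np.asarray` below just stands in for whatever in the transport layer strips the tensor subclass): PennyLane decides what to differentiate from the `requires_grad` attribute, which a plain `numpy.ndarray` does not carry, so once `init_angles` comes back as a bare array the cost function effectively has no trainable parameters.

```python
import numpy as np
from pennylane import numpy as pnp

init_angles = pnp.array([0.3, 0.7], requires_grad=True)  # what initialize_parameters builds
round_tripped = np.asarray(init_angles)                   # what the workflow receives back

print(type(init_angles).__name__, init_angles.requires_grad)   # tensor True
print(type(round_tripped).__name__,
      getattr(round_tripped, "requires_grad", "<missing>"))    # ndarray <missing>

# Hypothetical workaround until the (de)serialization is fixed: re-wrap the
# returned array so the optimizer treats it as trainable again.
init_angles = pnp.array(round_tripped, requires_grad=True)
print(type(init_angles).__name__, init_angles.requires_grad)   # tensor True
```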
I wonder which of the other training problems (not necessarily using Dask) can also be traced to serialization/deserialization errors.
@santoshkumarradha
The MNIST bug (which uses torch) has a different explanation. Let's discuss in a separate issue.