PaddleFL
Calculate metrics in mpc
Good afternoon!

How can I calculate the loss in this example?

I tried the following:

- In the training process I added
  `loss = exe.run(feed=sample, fetch_list=[avg_loss])`
  and tried to decrypt the result with `load_decrypt_data(loss_file, (1,), decrypt_file_loss)`; the result is shown in the screenshot (image not available).
- I also tried to calculate the loss during inference (testing) by adding
  `prediction, loss = exe.run(program=infer_program, feed=sample, fetch_list=[softmax, avg_loss])`
  and I get the error:
  `NotFoundError: Input variable(mean_0.tmp_0) cannot be found in scope for operator 'Fetch'. Confirm that you have used the fetch Variable format instead of the string literal('mean_0.tmp_0') in fetch_list parameter when using executor.run method. In other words, the format of executor.run(fetch_list=[fetch_var]) (fetch_var is a Variable) is recommended. [Hint: fetch_var should not be null.] at (/paddle/paddle/fluid/operators/controlflow/fetch_op.cc:82) [operator < fetch > error]`

What is the right way?

Best wishes!
This example uses `sigmoid_cross_entropy_with_logits` (for efficiency, we do not calculate the loss in the current version).

`sigmoid_cross_entropy_with_logits`:
- forward: `out = sigmoid(x)`
- backward: `dx = sigmoid(x) - label`
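In plain (non-MPC) NumPy terms, that shortcut can be sketched as follows. This is an illustrative sketch only, not the PaddleFL implementation: the function names and the `bce_loss` helper are hypothetical, and `bce_loss` shows the loss value that the efficient forward/backward pair skips computing.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_cross_entropy_with_logits_forward(x):
    # As described above, the forward pass only produces sigmoid(x);
    # the loss value itself is never materialized.
    return sigmoid(x)

def sigmoid_cross_entropy_with_logits_backward(x, label):
    # Gradient of the (uncomputed) loss w.r.t. the logits: sigmoid(x) - label.
    return sigmoid(x) - label

def bce_loss(x, label):
    # The value the shortcut skips: numerically stable binary cross-entropy
    # with logits, mean of max(x, 0) - x*label + log(1 + exp(-|x|)).
    return np.mean(np.maximum(x, 0) - x * label + np.log1p(np.exp(-np.abs(x))))
```

The design point is that training only needs the gradient, and `sigmoid(x) - label` can be evaluated directly without ever forming the loss scalar — which is why there is no `avg_loss` variable to fetch in this example.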