added log_poisson_loss
Hello, can you please check the code and comment on it? Thank you
Close #16743
Hi @MuhammadSaeedBatikh, can you please give me some feedback? Thank you.
If you are working on an open task, please edit the PR description to link to the issue you've created.
For more information, please check the ToDo List Issues Guide.
Thank you :hugs:
Hi @rullo16, thanks a lot for your PR. Apologies for not getting back to you earlier.
Everything looks good. Just some minor modifications.
Since TensorFlow supports float16 for log_poisson_loss, there seems to be a problem with this particular dtype conversion for the paddle backend:
```
E AssertionError: the results from backend paddle and ground truth framework tensorflow do not match
E [2.7182817 2.7182817 1.3686724]!=[2.7182817 2.7182817 1.3706255]
E
E
E Falsifying example: test_log_poisson_loss(
E     dtype_and_targets=(['float16'], [array([0., 0., 2.], dtype=float16)]),
E     dtype_and_log_input=(['float32'], [array([1., 1., 1.], dtype=float32)]),
E     compute_full_loss=True,
E     fn_name='log_poisson_loss',
E     test_flags=FunctionTestFlags(
E         num_positional_args=0,
E         with_out=False,
E         instance_method=False,
E         test_gradients=None,
E         test_compile=None,
E         as_variable=[False],
E         native_arrays=[False],
E         container=[False],
E     ),
E     ground_truth_backend='tensorflow',
E     backend_fw=<module 'ivy.functional.backends.paddle' from '/opt/project/ivy/functional/backends/paddle/__init__.py'>,
E     on_device='cpu',
E )
```
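For context, the TensorFlow docs define tf.nn.log_poisson_loss(targets, log_input, compute_full_loss) as exp(log_input) - log_input * targets, plus a Stirling-approximation term targets * log(targets) - targets + 0.5 * log(2 * pi * targets) when compute_full_loss=True. Here is a minimal NumPy sketch of that formula (the masking of the Stirling term is simplified relative to TF's implementation, and the float16 casts are illustrative, not ivy's backend code) showing why the third element is the sensitive one:

```python
import numpy as np

def log_poisson_loss_ref(targets, log_input, compute_full_loss=False):
    # Base term, per the TF docs: exp(c) - z * c.
    result = np.exp(log_input) - log_input * targets
    if compute_full_loss:
        # Stirling-approximation term, applied only where targets > 1
        # (a simplification of TF's exact masking).
        safe_t = np.where(targets > 1.0, targets, np.ones_like(targets))
        stirling = (safe_t * np.log(safe_t) - safe_t
                    + 0.5 * np.log(2.0 * np.pi * safe_t))
        result = result + np.where(targets > 1.0, stirling, 0.0)
    return result

targets = np.array([0.0, 0.0, 2.0])
log_input = np.array([1.0, 1.0, 1.0])

# float32 reference: third element is
# (e - 2) + (2*ln 2 - 2 + 0.5*ln(4*pi)) ~= 1.3701
print(log_poisson_loss_ref(targets.astype(np.float32),
                           log_input.astype(np.float32),
                           compute_full_loss=True))

# With float16 intermediates, every op rounds to ~3 significant decimal
# digits, so the third element drifts by a few 1e-3 -- the same order
# of magnitude as the paddle/tensorflow mismatch in the failure above.
print(log_poisson_loss_ref(targets.astype(np.float16),
                           log_input.astype(np.float16),
                           compute_full_loss=True))
```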
It seems like ivy.exp is returning slightly different results when running on the paddle backend for float16 (probably because of type promotion?). Since the difference is only 1.3706 - 1.3687 ≈ 0.002, we can just decrease the absolute tolerance value in the tests.
Please add atol_=1e-2 to the helpers.test_function call, after compute_full_loss=compute_full_loss.
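The call would then look roughly like this (keyword names other than atol_ are illustrative placeholders for the test's existing arguments, not verified against the current test):

```python
helpers.test_function(
    input_dtypes=input_dtype,   # keep the test's existing arguments as-is;
    test_flags=test_flags,      # these names are placeholders
    fw=backend_fw,
    fn_name="log_poisson_loss",
    log_input=log_input[0],
    targets=targets[0],
    compute_full_loss=compute_full_loss,
    atol_=1e-2,  # absorbs the ~2e-3 float16 drift on the paddle backend
)
```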
Also, for the sake of completeness, could you please add a test function in test_tensorflow/test_nn.py so the TensorFlow frontend stays consistent; a sketch follows below.
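Something along these lines could serve as a starting point (sketched from the usual pattern of the neighbouring frontend tests; the strategies and helper keywords, e.g. the tolerance argument, are assumptions to check against the existing tests in that file):

```python
from hypothesis import strategies as st

import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test


@handle_frontend_test(
    fn_tree="tensorflow.nn.log_poisson_loss",
    dtype_and_x=helpers.dtype_and_values(
        available_dtypes=helpers.get_dtypes("float"),
        num_arrays=2,
        shared_dtype=True,
    ),
    compute_full_loss=st.booleans(),
)
def test_tensorflow_log_poisson_loss(
    *,
    dtype_and_x,
    compute_full_loss,
    frontend,
    test_flags,
    fn_tree,
    on_device,
):
    input_dtype, x = dtype_and_x
    helpers.test_frontend_function(
        input_dtypes=input_dtype,
        frontend=frontend,
        test_flags=test_flags,
        fn_tree=fn_tree,
        on_device=on_device,
        targets=x[0],
        log_input=x[1],
        compute_full_loss=compute_full_loss,
        atol=1e-2,  # same float16 tolerance as the ivy-level test
    )
```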
P.S. Make sure that your first comment starts with "Close #issue_num". Thanks.
Hi @MuhammadSaeedBatikh, thanks for the feedback, everything should be fine now 👍
Hi @rullo16, lgtm.