rm np_config.enable_numpy_behavior()
Description
Fixes #1092, while trying to keep the original printing behavior.
I just ran this code: removing the NumPy behaviour doesn't break the prints themselves, it breaks the line above it, avg_loss = round(epoch_loss_avg.result(), 3), in the file dsp_aware_pruning/keras/__init__.py (line 234). So this has to do with querying EagerTensors (which are not evaluated until their value is actually needed). The error I get is:
AttributeError: EagerTensor object has no attribute 'astype'.
If you are looking for numpy-related methods, please run the following:
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()
So if we remove this import we need to find a way to print the loss during training.
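For context, a minimal repro sketch of the failing pattern; it assumes epoch_loss_avg is a tf.keras.metrics.Mean, which is an assumption on my part since the thread only shows the single failing line:

```python
import tensorflow as tf

# Assumption: epoch_loss_avg is a tf.keras.metrics.Mean tracking the epoch loss;
# only the failing line below is quoted from dsp_aware_pruning/keras/__init__.py.
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_loss_avg.update_state([0.2, 0.4])

try:
    # result() returns an EagerTensor. With np_config.enable_numpy_behavior()
    # removed, this is the line reported above to fail when its value is queried.
    avg_loss = round(epoch_loss_avg.result(), 3)
    print(avg_loss)
except (AttributeError, TypeError) as err:
    print(f"rounding the EagerTensor failed: {err}")
```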
Using tf.cast(x, dtype) instead of x.astype(dtype) would solve that.
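For illustration, a generic sketch of the difference between the two calls (not taken from the project's code):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])

# NumPy-style method; on an EagerTensor this only works after
# np_config.enable_numpy_behavior() has been called:
#     y = x.astype('float16')

# Plain TensorFlow equivalent that needs no NumPy-behaviour patching:
y = tf.cast(x, tf.float16)
print(y.dtype)  # <dtype: 'float16'>
```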
That's a likely solution, but in the code we don't actually call x.astype. It is called somewhere internally in TF when trying to get the value of the loss tensor (epoch_loss_avg.result()), so we need to see what the alternative is for getting the loss value into this function. Also, do we know if this buggy behaviour was introduced by an update in NumPy / TF / QKeras?
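One possible alternative, shown here only as a hedged sketch (again assuming epoch_loss_avg is a tf.keras.metrics.Mean), is to pull a plain Python float out of the tensor before rounding:

```python
import tensorflow as tf

epoch_loss_avg = tf.keras.metrics.Mean()
epoch_loss_avg.update_state([0.1234, 0.5678])

# .numpy() / float() read the EagerTensor's value without going through the
# NumPy-behaviour methods such as .astype, so enable_numpy_behavior() is not needed.
avg_loss = round(float(epoch_loss_avg.result().numpy()), 3)
print(avg_loss)
```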
Turns out the changes in this PR have already been merged into main, and DSP-aware pruning is actually broken on the main branch. My guess is that one of @calad0i's previous PRs accidentally included the change that removed enable_numpy_behavior().
The proposed solution with tf.print is on the right track, but one also needs to remove the round() call (a rough sketch follows below). I also found a minor inconsistency in the function docstring, and to avoid opening a PR against a PR, the two fixes are now in PR #1396.
I suggest we close this PR, since the core change (removing enable_numpy_behavior()) is already in main, and merge #1396 instead.
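For reference, a rough sketch of the tf.print direction mentioned above; this is an illustration only and not taken from PR #1396:

```python
import tensorflow as tf

epoch_loss_avg = tf.keras.metrics.Mean()
epoch_loss_avg.update_state([0.25, 0.75])

# tf.print handles EagerTensors directly, so neither round() nor any
# .astype-style conversion is needed just to log the running loss.
tf.print("avg loss:", epoch_loss_avg.result())
```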
Closed in favor of #1396