
nan values and division by zero warnings on stochastic solvers

Open · selmanozleyen opened this issue 3 years ago · 3 comments

Description

The following output showed up while I was playing with the code samples from the docs of the stochastic sub-module. I think the output of the code snippet below makes it clear where the problem is, but I didn't want to open a PR directly since I am really new to these topics.

To Reproduce

Below is the same code sample from the docs, except that n_source here is significantly higher.

import numpy as np
import ot
n_source = 70000
n_target = 100
reg = 1
numItermax = 100000
lr = 0.1
batch_size = 3
log = True

a = ot.utils.unif(n_source)
b = ot.utils.unif(n_target)

rng = np.random.RandomState(0)
X_source = rng.randn(n_source, 2)
Y_target = rng.randn(n_target, 2)
M = ot.dist(X_source, Y_target)

method = "ASGD"
asgd_pi, log_asgd = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method, numItermax, log=log)
print(log_asgd['alpha'], log_asgd['beta'])
print(asgd_pi)
/home/selman/anaconda3/envs/sd/lib/python3.9/site-packages/ot/stochastic.py:85: RuntimeWarning: overflow encountered in exp
  exp_beta = np.exp(-r / reg) * b
/home/selman/anaconda3/envs/sd/lib/python3.9/site-packages/ot/stochastic.py:86: RuntimeWarning: invalid value encountered in true_divide
  khi = exp_beta / (np.sum(exp_beta))
[nan nan nan ... nan nan nan] [nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan]
[[nan nan nan ... nan nan nan]
 [nan nan nan ... nan nan nan]
 [nan nan nan ... nan nan nan]
 ...
 [nan nan nan ... nan nan nan]
 [nan nan nan ... nan nan nan]
 [nan nan nan ... nan nan nan]]
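
My guess at what is going on, reproduced in isolation (with hypothetical toy values chosen just to trigger the same two warnings): once beta grows much larger than the costs, -r / reg becomes a big positive number, np.exp overflows to inf, and the normalization turns into inf / inf:

import numpy as np

# toy values standing in for the update in ot/stochastic.py (my guess, not the real trace):
# beta has grown much larger than M[i, :], so -r / reg is a large positive number
b = np.full(3, 1. / 3)
beta = np.array([800., 810., 900.])
M_i = np.zeros(3)
reg = 1.

r = M_i - beta
exp_beta = np.exp(-r / reg) * b    # RuntimeWarning: overflow encountered in exp -> inf
khi = exp_beta / np.sum(exp_beta)  # inf / inf -> RuntimeWarning: invalid value -> nan
print(khi)                         # [nan nan nan]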

Additional Context

Right now I use the stochastic.py file separately in my own project because of this problem. I added a small value to the divisions and it seems to work fine, but I am not sure if it is an appropriate approach. For example:

khi = exp_beta / (np.sum(exp_beta) + 1e-8)

Environment:

Linux-5.10.42-1-MANJARO-x86_64-with-glibc2.33
Python 3.9.5 (default, Jun  4 2021, 12:28:51) 
[GCC 7.5.0]
NumPy 1.20.2
SciPy 1.6.2
POT 0.7.0

selmanozleyen · Jun 23 '21 19:06

Adding a small value is classic and it will work in practice, but I agree that we do not handle this well ;).

I have to look into it, but I'm sure we can find a numerically stable way to do this update (using logsumexp?).
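
For instance, something along these lines (just a rough sketch with scipy.special.logsumexp, not what the library does today):

import numpy as np
from scipy.special import logsumexp

def stable_khi(r, b, reg):
    # khi_j is proportional to b_j * exp(-r_j / reg); normalize in log-space
    log_khi = np.log(b) - r / reg
    log_khi -= logsumexp(log_khi)      # subtract the log of the normalizer
    return np.exp(log_khi)

b = np.full(3, 1. / 3)
r = np.array([0., -810., -900.])       # values for which the naive np.exp(-r / reg) overflows
print(stable_khi(r, b, reg=1.))        # finite probabilities, no warnings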

rflamary · Jun 24 '21 06:06

Hi, thanks for the response. Btw, I love the work you guys are doing, just wanted to say that. Anyway, considering this:

r = M[i, :] - beta
exp_beta = cp.exp(-r / reg) * b
khi = exp_beta / (cp.sum(exp_beta))

can't we write this instead:

softmax(log(b) - r / reg)
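
A quick sanity check (a sketch using scipy.special.softmax) that this matches the current computation on values where nothing overflows:

import numpy as np
from scipy.special import softmax

b = np.full(3, 1. / 3)
r = np.array([0.5, -1., 2.])                 # small values, both forms stay finite
reg = 1.

exp_beta = np.exp(-r / reg) * b              # current computation
khi_naive = exp_beta / np.sum(exp_beta)
khi_softmax = softmax(np.log(b) - r / reg)   # proposed stabilized form
print(np.allclose(khi_naive, khi_softmax))   # True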

selmanozleyen · Jun 26 '21 03:06

Yes, this is equivalent, but it comes with more complex computations (matrix product vs. stabilized log+exp). We plan to implement this one by default because it will indeed provide more numerical stability, but note that it will not help with convergence speed if you have a small regularization term.

rflamary · Jun 28 '21 06:06