# [WIP] Gaussian initialization for sinkhorn
## Types of changes
This is a first attempt at initializing empirical Sinkhorn with a Gaussian warmstart, but the computational gain is not very clear in the tests I ran locally.
The following code
```python
import numpy as np
import ot

n = 2000
rng = np.random.RandomState(0)
x = rng.randn(n, 2)
x2 = 10 * rng.randn(n // 2, 2)
x2[:, 0] += 2

ot.tic()
G, log = ot.empirical_sinkhorn(x, x2, 1, method='sinkhorn_log', warmstart=None,
                               verbose=False, isLazy=False, stopThr=1e-5, log=True)
ot.toc()
print("Err=", log['err'][-1], "niter=", log['niter'])

ot.tic()
G2, log2 = ot.empirical_sinkhorn(x, x2, 1, method='sinkhorn_log', warmstart='gaussian',
                                 verbose=False, isLazy=False, stopThr=1e-5, log=True)
ot.toc()
print("Err=", log2['err'][-1], "niter=", log2['niter'])
```
gives the following output:

```
Elapsed time : 3.0527355670928955 s
Err= 9.441113655553818e-06 niter= 140
Elapsed time : 2.391462564468384 s
Err= 9.89690596643644e-06 niter= 110
```
This is quite far from the computational gains reported in the paper; I will investigate further.
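For context, below is a minimal sketch of what a Gaussian warmstart of the dual potentials can look like, assuming the paper in question is Thornton and Cuturi, *Rethinking Initialization of the Sinkhorn Algorithm* (2022): fit Gaussians to the two samples, take the closed-form map between them, and evaluate the corresponding Kantorovich duals at the sample points. The helper `gaussian_warmstart` and its exact formulas are illustrative assumptions, not necessarily what this PR implements.

```python
# Illustrative sketch only (assumption, not necessarily this PR's code):
# Gaussian warmstart of the Sinkhorn dual potentials for the squared
# Euclidean cost c(x, y) = ||x - y||^2.
import numpy as np
from scipy.linalg import sqrtm


def gaussian_warmstart(xs, xt):
    """Hypothetical helper: dual potentials (f, g) at the sample points."""
    ms, mt = xs.mean(axis=0), xt.mean(axis=0)
    Cs = np.cov(xs, rowvar=False)
    Ct = np.cov(xt, rowvar=False)

    # Closed-form Monge map between N(ms, Cs) and N(mt, Ct):
    # T(x) = A (x - ms) + mt
    Cs12 = np.real(sqrtm(Cs))
    Cs12inv = np.linalg.inv(Cs12)
    A = Cs12inv @ np.real(sqrtm(Cs12 @ Ct @ Cs12)) @ Cs12inv

    # Brenier potential phi (grad phi = T), written as
    # phi(x) = .5 x^T A x + b^T x + c0, and its convex conjugate phi*
    b = mt - A @ ms
    c0 = 0.5 * (ms @ A @ ms)
    phi = 0.5 * np.einsum('ij,jk,ik->i', xs, A, xs) + xs @ b + c0
    yb = xt - b
    Ainv = np.linalg.inv(A)
    phi_star = 0.5 * np.einsum('ij,jk,ik->i', yb, Ainv, yb) - c0

    # Kantorovich duals for c = ||x - y||^2:
    # f(x) = ||x||^2 - 2 phi(x), g(y) = ||y||^2 - 2 phi*(y)
    f = (xs ** 2).sum(axis=1) - 2 * phi
    g = (xt ** 2).sum(axis=1) - 2 * phi_star
    return f, g
```

Dividing `f` and `g` by the regularization would give initial log-domain scalings; the exact scaling convention expected by the solver's `warmstart` argument may differ.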
## Motivation and context / Related issue
## How has this been tested (if it applies)
## PR checklist
- [ ] I have read the CONTRIBUTING document.
- [ ] The documentation is up-to-date with the changes I made (check build artifacts).
- [ ] All tests passed, and additional code has been covered with new tests.
- [ ] I have added the PR and Issue fix to the RELEASES.md file.