
LinAlgError: SVD did not converge

Open Chuang1118 opened this issue 1 year ago • 2 comments

Describe the bug

Hello Author,

I am following your tutorial

https://github.com/theislab/ncem_tutorials/blob/main/tutorials/type_coupling_visium.ipynb

In total, I have three 10x Visium slides processed with the same preprocessing pipeline. For two of the samples I get the error message below:

---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)
Cell In[26], line 1
----> 1 ncem_ip.get_sender_receiver_effects()

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/ncem/interpretation/interpreter.py:1957, in InterpreterDeconvolution.get_sender_receiver_effects(self, params_type, significance_threshold)
   1955 # get inverse fisher information matrix
   1956 print('calculating inv fim.')
-> 1957 fim_inv = get_fim_inv(x_design, y)
   1959 is_sign, pvalues, qvalues = wald_test(
   1960     params=params, fisher_inv=fim_inv, significance_threshold=significance_threshold
   1961 )
   1962 interaction_shape = np.int(self.n_features_0**2)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/ncem/utils/wald_test.py:10, in get_fim_inv(x, y)
      6 fim = np.expand_dims(np.matmul(x.T, x), axis=0) / np.expand_dims(var, axis=[1, 2])
      8 fim = np.nan_to_num(fim)
---> 10 fim_inv = np.array([
     11     np.linalg.pinv(fim[i, :, :])
     12     for i in range(fim.shape[0])
     13 ])
     14 return fim_inv

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/ncem/utils/wald_test.py:11, in <listcomp>(.0)
      6 fim = np.expand_dims(np.matmul(x.T, x), axis=0) / np.expand_dims(var, axis=[1, 2])
      8 fim = np.nan_to_num(fim)
     10 fim_inv = np.array([
---> 11     np.linalg.pinv(fim[i, :, :])
     12     for i in range(fim.shape[0])
     13 ])
     14 return fim_inv

File <__array_function__ internals>:180, in pinv(*args, **kwargs)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:1990, in pinv(a, rcond, hermitian)
   1988     return wrap(res)
   1989 a = a.conjugate()
-> 1990 u, s, vt = svd(a, full_matrices=False, hermitian=hermitian)
   1992 # discard small singular values
   1993 cutoff = rcond[..., newaxis] * amax(s, axis=-1, keepdims=True)

File <__array_function__ internals>:180, in svd(*args, **kwargs)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:1648, in svd(a, full_matrices, compute_uv, hermitian)
   1645         gufunc = _umath_linalg.svd_n_s
   1647 signature = 'D->DdD' if isComplexType(t) else 'd->ddd'
-> 1648 u, s, vh = gufunc(a, signature=signature, extobj=extobj)
   1649 u = u.astype(result_t, copy=False)
   1650 s = s.astype(_realType(result_t), copy=False)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:97, in _raise_linalgerror_svd_nonconvergence(err, flag)
     96 def _raise_linalgerror_svd_nonconvergence(err, flag):
---> 97     raise LinAlgError("SVD did not converge")

LinAlgError: SVD did not converge
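For context, the failing step (as far as I can tell from the traceback) builds one Fisher information block per gene and pseudo-inverts it with np.linalg.pinv. A minimal standalone sketch of that computation, where x and the variance term var are my own placeholders rather than the exact objects the library passes, would be:

import numpy as np

# Sketch of the failing step, pieced together from the traceback above
# (in the real get_fim_inv, `var` is derived from y in code not shown here).
def fim_inv_sketch(x, var):
    # x: design matrix (n_obs, n_params); var: per-gene variance (n_genes,)
    fim = np.expand_dims(np.matmul(x.T, x), axis=0) / np.expand_dims(var, axis=[1, 2])
    fim = np.nan_to_num(fim)  # nan -> 0, +/-inf -> ~1.8e308 (finite but huge)
    # this pinv call is where "SVD did not converge" is raised
    return np.array([np.linalg.pinv(fim[i, :, :]) for i in range(fim.shape[0])])

The LinAlgError is raised inside np.linalg.pinv when LAPACK's SVD routine fails to converge on one of these per-gene blocks.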

Any help on how to fix this issue would be appreciated.

Best, Chuang

To Reproduce

ncem_ip.get_sender_receiver_effects()


Chuang1118 · Mar 27, 2023

Hello Author,

To give more information, I have uploaded the fim matrix, built exactly as in your code at
https://github.com/theislab/ncem/blob/1ed3ccead38f70fcdbcc6640971ddfa877ff62cd/ncem/utils/wald_test.py#L8

To build the fim matrix, I followed the traceback above.

Additional context

(fim < 0).sum()
0
np.isnan(fim).sum()
0
fim.shape
(2000, 306, 306)
type(fim[0, 0, 0])
numpy.float64
grep MemTotal /proc/meminfo
MemTotal:       263710348 kB   # ~251.5 GB
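Two additional checks that might be worth running on this fim array (just a sketch, not from the ncem code): np.nan_to_num turns inf into very large finite numbers rather than removing them, so extreme magnitudes would not show up in the nan count above:

import numpy as np

fim = np.load('fim_bug.npy')  # the uploaded matrix, shape (2000, 306, 306)

print(np.isinf(fim).sum())               # should be 0 after nan_to_num
print(np.abs(fim).max())                 # values near 1.8e308 would be former infs
print(np.unique(np.where(np.abs(fim) > 1e300)[0]))  # gene indices of suspect blocks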

To Reproduce

1. Download the file: https://filesender.renater.fr/?s=download&token=d1c0b219-b040-4919-a59d-439dc546e1e8

2. Rename it:

mv fim_bug.npy.txt fim_bug.npy

3. Load it:

with open('fim_bug.npy', 'rb') as f:
    fim = np.load(f)

4. Run:

fim_inv = np.array([
    np.linalg.pinv(fim[i, :, :])
    for i in range(fim.shape[0])
])

LinAlgError: SVD did not converge

---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)
Cell In[56], line 1
----> 1 fim_inv = np.array([
      2         np.linalg.pinv(fim[i, :, :])
      3         for i in range(fim.shape[0])
      4     ])

Cell In[56], line 2, in <listcomp>(.0)
      1 fim_inv = np.array([
----> 2         np.linalg.pinv(fim[i, :, :])
      3         for i in range(fim.shape[0])
      4     ])

File <__array_function__ internals>:180, in pinv(*args, **kwargs)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:1990, in pinv(a, rcond, hermitian)
   1988     return wrap(res)
   1989 a = a.conjugate()
-> 1990 u, s, vt = svd(a, full_matrices=False, hermitian=hermitian)
   1992 # discard small singular values
   1993 cutoff = rcond[..., newaxis] * amax(s, axis=-1, keepdims=True)

File <__array_function__ internals>:180, in svd(*args, **kwargs)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:1648, in svd(a, full_matrices, compute_uv, hermitian)
   1645         gufunc = _umath_linalg.svd_n_s
   1647 signature = 'D->DdD' if isComplexType(t) else 'd->ddd'
-> 1648 u, s, vh = gufunc(a, signature=signature, extobj=extobj)
   1649 u = u.astype(result_t, copy=False)
   1650 s = s.astype(_realType(result_t), copy=False)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:97, in _raise_linalgerror_svd_nonconvergence(err, flag)
     96 def _raise_linalgerror_svd_nonconvergence(err, flag):
---> 97     raise LinAlgError("SVD did not converge")

LinAlgError: SVD did not converge
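Not a real fix, but a rough workaround sketch I may try in the meantime (untested against the rest of the ncem pipeline, and not the library's own solution): pseudo-invert the blocks one by one and, when SVD fails on a block, retry with a small ridge term added to its diagonal:

import numpy as np

def robust_pinv_stack(fim, ridge=1e-8):
    """Pseudo-invert each (p, p) block; if SVD fails on a block, retry with a
    small ridge term on the diagonal. Untested workaround sketch, not the ncem fix."""
    out = np.empty_like(fim)
    eye = np.eye(fim.shape[1])
    for i in range(fim.shape[0]):
        try:
            out[i] = np.linalg.pinv(fim[i])
        except np.linalg.LinAlgError:
            scale = np.abs(fim[i]).max() or 1.0
            out[i] = np.linalg.pinv(fim[i] + ridge * scale * eye)
    return out

fim_inv = robust_pinv_stack(fim)

Whether replacing the failing blocks this way still gives statistically meaningful Wald tests is a separate question, of course.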

Chuang1118 · Mar 29, 2023

I experienced the same problem. Did you find a workaround?

junyho486 · Apr 03, 2023