Parthe Pandit
```
import numpy as np
from sigpy.prox import L2Reg
import unittest

def g_in(r_in_minus, gamma_in_minus):
    eta_in_plus = (1 + gamma_in_minus)
    xhat_in_plus = r_in_minus * gamma_in_minus / eta_in_plus
    return xhat_in_plus, eta_in_plus

class ...
```
I created a pull request
pykeops was successfully installed on my machine (with CUDA) and tested with the command `python -c "import pykeops; pykeops.test_numpy_bindings(); pykeops.test_torch_bindings()"`.
I installed Falkon using the command `pip install git+https://github.com/falkonml/falkon.git`, as instructed [here](https://falkonml.github.io/falkon/install.html).
When I install using `python setup.py develop`, the following log is printed:
```
No CUDA runtime is found, using CUDA_HOME='/home/$USER/.conda/envs/Falkon_ML'
running develop
running egg_info
writing falkon.egg-info/PKG-INFO
writing dependency_links to falkon.egg-info/dependency_links.txt
...
```
I'm working on a SLURM cluster. CUDA is installed, but not in this location: `usr/local/bin/`. `torch.cuda.is_available()` returns `True`.
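Since the build log above falls back to the `CUDA_HOME` inside the conda environment, a quick diagnostic like the following (a sketch, not part of the original thread) can show which CUDA toolkit the Python environment actually sees:

```
# Diagnostic sketch (not from the original thread): compare the CUDA that
# PyTorch was built with against the toolkit the source build will pick up.
import os
import shutil
import torch

print("torch.cuda.is_available():", torch.cuda.is_available())
print("torch.version.cuda:", torch.version.cuda)           # CUDA version PyTorch ships with
print("CUDA_HOME env var:", os.environ.get("CUDA_HOME"))   # what a source build falls back to
print("nvcc on PATH:", shutil.which("nvcc"))                # CUDA compiler location, if any
```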
Here is my install script for falkon:
```
yes | conda create -n Falkon_ML python=3.10 ipython
conda activate Falkon_ML
yes | conda install -c nvidia/label/cuda-11.3.1 cuda-toolkit
yes | conda install ...
```
We have tried setting the following options, yet the seg-fault persists:
- `never_store_kernel=True`
- `chol_force_kernel=True`
- `no_single_kernel=False`
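For reference, a minimal sketch of how such flags are usually passed through `falkon.FalkonOptions` (the flag names are copied from the comment above and should be checked against the installed falkon version):

```
# Sketch only: option names are taken from the report above; verify them
# against your falkon release, since option names can differ between versions.
import falkon

opts = falkon.FalkonOptions(
    never_store_kernel=True,
    no_single_kernel=False,
    # chol_force_kernel=True,  # reported above; may not exist in every release
)
# The options object is then handed to the estimator, e.g.
# model = falkon.Falkon(kernel=..., penalty=..., M=..., options=opts)
```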
Here is a minimal working example that reproduces the error raised by @ahabedsoltan:
```
import falkon, torch

n, N, M, d, bw = 200_000, 1000, 64_000, 1, 1.
...
```
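The snippet above is truncated in the thread. Purely as an illustration (this is not the original reproduction), a script of that shape might look like the following, assuming `n` training points of dimension `d`, `M` Nyström centres, `N` test points, and a Gaussian kernel of bandwidth `bw`:

```
# Hypothetical sketch only: the original reproduction is truncated above,
# and the roles assigned to n, N, M, d and bw here are assumptions.
import torch
import falkon
from falkon.kernels import GaussianKernel

n, N, M, d, bw = 200_000, 1000, 64_000, 1, 1.0

X = torch.randn(n, d)        # assumed: training inputs
Y = torch.randn(n, 1)        # assumed: training targets
X_test = torch.randn(N, d)   # assumed: test inputs

kernel = GaussianKernel(sigma=bw)
model = falkon.Falkon(kernel=kernel, penalty=1e-6, M=M,  # penalty value is arbitrary
                      options=falkon.FalkonOptions())
model.fit(X, Y)
print(model.predict(X_test).shape)
```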
Reinstalling falkon as follows solved the issue. @Giodiro, thanks for the quick bug-fix!
```
pip uninstall falkon
pip install --no-build-isolation git+https://github.com/FalkonML/falkon.git
```