pySCENIC
distributed.worker - WARNING - Could not find data:
I am running pySCENIC following the protocol described in the paper "A scalable SCENIC workflow for single-cell gene regulatory network analysis". I generated loom files from the PBMC dataset downloaded from 10x Genomics, as suggested in the paper. However, when I run the pyscenic grn command shown below, an error occurs.
Here is the command and the resulting error output:
```
(scenic_protocol) lij@shpc-1392-instance-hgaxepHO:~$ pyscenic grn --num_workers 20 --output adj.tsv --method grnboost2 PBMC10k_filtered.loom hs_hgnc_tfs.txt
2023-06-05 10:02:26,215 - pyscenic.cli.pyscenic - INFO - Loading expression matrix.
2023-06-05 10:02:30,653 - pyscenic.cli.pyscenic - INFO - Inferring regulatory networks.
/home/lij/miniconda3/envs/scenic_protocol/lib/python3.10/site-packages/distributed/node.py:182: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 39847 instead
warnings.warn(
preparing dask client
parsing input
creating dask graph
2023-06-05 10:09:22,886 - distributed.worker - WARNING - Could not find data: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:40481']} on workers: [] (who_has: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:40481']})
2023-06-05 10:09:25,385 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:34745 failed to acquire keys: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ('tcp://127.0.0.1:40481',)}
2023-06-05 10:09:48,372 - distributed.worker - WARNING - Could not find data: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:44933']} on workers: [] (who_has: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:44933']})
2023-06-05 10:09:50,752 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:41941 failed to acquire keys: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ('tcp://127.0.0.1:44933',)}
2023-06-05 10:11:22,756 - distributed.worker - WARNING - Could not find data: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:43283', 'tcp://127.0.0.1:33025']} on workers: [] (who_has: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:43283', 'tcp://127.0.0.1:33025']})
2023-06-05 10:11:22,763 - distributed.worker - WARNING - Could not find data: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:43283', 'tcp://127.0.0.1:33025']} on workers: [] (who_has: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:43283', 'tcp://127.0.0.1:33025']})
2023-06-05 10:11:25,204 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:35385 failed to acquire keys: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ('tcp://127.0.0.1:43283', 'tcp://127.0.0.1:33025')}
2023-06-05 10:11:25,204 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:41555 failed to acquire keys: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ('tcp://127.0.0.1:43283', 'tcp://127.0.0.1:33025')}
2023-06-05 10:11:57,216 - distributed.worker - WARNING - Could not find data: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:37661', 'tcp://127.0.0.1:41941', 'tcp://127.0.0.1:35717']} on workers: [] (who_has: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:37661', 'tcp://127.0.0.1:41941', 'tcp://127.0.0.1:35717']})
2023-06-05 10:11:57,274 - distributed.worker - WARNING - Could not find data: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:37661', 'tcp://127.0.0.1:41941', 'tcp://127.0.0.1:35717']} on workers: [] (who_has: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:37661', 'tcp://127.0.0.1:41941', 'tcp://127.0.0.1:35717']})
2023-06-05 10:11:59,471 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:37643 failed to acquire keys: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ('tcp://127.0.0.1:37661', 'tcp://127.0.0.1:41941', 'tcp://127.0.0.1:35717')}
2023-06-05 10:11:59,473 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:39775 failed to acquire keys: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ('tcp://127.0.0.1:37661', 'tcp://127.0.0.1:41941', 'tcp://127.0.0.1:35717')}
2023-06-05 10:13:01,197 - distributed.worker - WARNING - Could not find data: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:44933', 'tcp://127.0.0.1:40481', 'tcp://127.0.0.1:34745', 'tcp://127.0.0.1:39775', 'tcp://127.0.0.1:33025']} on workers: [] (who_has: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ['tcp://127.0.0.1:44933', 'tcp://127.0.0.1:40481', 'tcp://127.0.0.1:34745', 'tcp://127.0.0.1:39775', 'tcp://127.0.0.1:33025']})
2023-06-05 10:13:03,690 - distributed.scheduler - WARNING - Worker tcp://127.0.0.1:43555 failed to acquire keys: {'ndarray-bfe6126093e0117e52fb084d3f577fc9': ('tcp://127.0.0.1:44933', 'tcp://127.0.0.1:40481', 'tcp://127.0.0.1:34745', 'tcp://127.0.0.1:39775', 'tcp://127.0.0.1:33025')}
```
Someone else might be running something on port 8787. Or you might have, e.g., RStudio Server running on the same server (8787 is the default port for RStudio Server).
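As a first check (a standard Linux diagnostic, not something from this thread), you can see which process is bound to port 8787:

```bash
# Show the process listening on TCP port 8787; sudo may be needed to
# see processes owned by other users. `ss -ltnp` is an alternative
# if lsof is not installed.
sudo lsof -i :8787
```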
I encountered a similar issue: I got similar warning messages after the "Port 8787 is already in use" message. Do you know why this happens when port 8787 is occupied?
Ha, so we're all 西柚云 users here, huh 😅
I solved this over the past couple of days, in three steps:
1. sudo lsof -i :8787 — first check whether port 8787 is occupied by RStudio.
2. vim /etc/rstudio/rserver.conf — change www-port: find that line, comment out the original, and add www-port=<your replacement port> below it, e.g. 10941.
3. sudo rstudio-server restart — this frees up port 8787.
If it still says port 8787 is occupied, close the SSH session and reconnect, or restart the instance. Steps 2 and 3 are sketched below.
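A minimal sketch of steps 2 and 3, assuming RStudio Server is what holds the port (the config path and the example port 10941 come from the reply above):

```bash
# Edit the RStudio Server config: comment out any existing www-port
# line and set a free port instead (10941 is just the example above).
sudo vim /etc/rstudio/rserver.conf   # add: www-port=10941

# Restart RStudio Server so it releases port 8787.
sudo rstudio-server restart

# Verify that port 8787 is now free before re-running pyscenic grn.
sudo lsof -i :8787
```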
Also, on a remote SSH server I suggest using the arboreto_with_multiprocessing script to run the GRN step; it avoids many errors later on. See the link below for details: https://pyscenic.readthedocs.io/en/latest/faq.html
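For reference, the linked FAQ shows how to run the GRN step with the arboreto_with_multiprocessing.py script that ships with pySCENIC; it uses multiprocessing instead of dask, so the port and worker warnings above do not apply. A sketch, reusing the file names from the original command:

```bash
# GRN inference without dask (per the pySCENIC FAQ); input files are
# the loom matrix and TF list from the original pyscenic grn command.
arboreto_with_multiprocessing.py \
    PBMC10k_filtered.loom \
    hs_hgnc_tfs.txt \
    --method grnboost2 \
    --output adj.tsv \
    --num_workers 20
```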
😀😀😀
Regarding the "vim /etc/rstudio/rserver.conf" step: I can't find a "www-port" line in that file.
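For what it's worth (this is not answered in the thread): a default rserver.conf often contains no www-port line at all, in which case RStudio Server simply uses 8787. You should be able to add the line yourself, for example:

```bash
# If /etc/rstudio/rserver.conf has no www-port entry, RStudio Server
# defaults to port 8787; appending the setting moves it elsewhere
# (10941 is just the example port from the reply above).
echo 'www-port=10941' | sudo tee -a /etc/rstudio/rserver.conf
sudo rstudio-server restart
```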