
Error in grn (tornado.application) [BUG]

Open mengfeiwangdmu opened this issue 1 year ago • 2 comments

Dear pySCENIC developers, thank you for your excellent work. This is Mengfei Wang from Dalian Medical University. When I tried to run the grn step, I got several error messages from tornado.application. I checked earlier issues; some people said it was caused by the versions of Python, dask, or other packages, but I am not sure, and as a new Python user I have no idea what to do. Could you please help me? Thank you very much.

 pyscenic grn --num_workers 20 --output adj.sample.tsv --method grnboost2 $input_loom $tfs

The run log:

2022-08-24 12:53:33,952 - pyscenic.cli.pyscenic - INFO - Loading expression matrix.

2022-08-24 12:53:53,562 - pyscenic.cli.pyscenic - INFO - Inferring regulatory networks.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.
preparing dask client
parsing input
creating dask graph
20 partitions
computing dask graph
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fb515f31830>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33160 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f28c61c5dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33162 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f1ca4178dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33166 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fc5c6502b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33168 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f7d61157b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33174 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f4b18f40ef0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33178 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fe068112dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33180 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f7fb1613b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33182 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fd3c83dbb90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33184 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f9baef50c20>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33188 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f3292f23b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33190 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fdd79e13b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33192 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f0c94c02dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33196 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f9fb1834b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33208 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f8506f67dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33216 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f43545e5a70>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33218 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f3280411b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33220 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f230c17ddd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33224 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f75110fcb90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:44864 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fb515f31830>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33160 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f1ca4178dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33166 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fc5c6502b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33168 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f7d61157b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33174 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fe068112dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33180 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f7fb1613b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33182 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fd3c83dbb90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33184 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f9baef50c20>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33188 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f3292f23b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33190 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7fdd79e13b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33192 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f0c94c02dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33196 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f9fb1834b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33208 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f8506f67dd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33216 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f43545e5a70>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33218 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f3280411b90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33220 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f230c17ddd0>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:33224 remote=tcp://127.0.0.1:41759> already closed.
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x7f75110fcb90>
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 905, in _run
    return self.callback()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/worker.py", line 1008, in <lambda>
    lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/batched.py", line 137, in send
    raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Worker->Scheduler local=tcp://127.0.0.1:44864 remote=tcp://127.0.0.1:41759> already closed.
distributed.scheduler - ERROR - Error transitioning 'str_-a68523d3c0915cd35360a5cb008716dc' from 'erred' to 'memory'
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/scheduler.py", line 2275, in _transition
    assert not args and not kwargs, (args, kwargs, start_finish)
AssertionError: ((), {'worker': 'tcp://127.0.0.1:34855', 'nbytes': 85, 'typename': 'numpy.str_'}, ('erred', 'memory'))
distributed.utils - ERROR - ((), {'worker': 'tcp://127.0.0.1:34855', 'nbytes': 85, 'typename': 'numpy.str_'}, ('erred', 'memory'))
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/utils.py", line 693, in log_errors
    yield
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/scheduler.py", line 4453, in add_worker
    typename=types[key],
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/scheduler.py", line 2275, in _transition
    assert not args and not kwargs, (args, kwargs, start_finish)
AssertionError: ((), {'worker': 'tcp://127.0.0.1:34855', 'nbytes': 85, 'typename': 'numpy.str_'}, ('erred', 'memory'))
distributed.core - ERROR - Exception while handling op register-worker
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/core.py", line 525, in handle_comm
    result = await result
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/scheduler.py", line 4453, in add_worker
    typename=types[key],
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/scheduler.py", line 2275, in _transition
    assert not args and not kwargs, (args, kwargs, start_finish)
AssertionError: ((), {'worker': 'tcp://127.0.0.1:34855', 'nbytes': 85, 'typename': 'numpy.str_'}, ('erred', 'memory'))
not shutting down client, client was created externally
finished
distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998092651367 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998474121094 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.9999988555908206 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998092651367 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.9999982833862306 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998092651367 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.9999982833862306 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.9999982833862306 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998092651367 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.9999982833862306 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998474121094 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.9999982833862306 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.999998092651367 seconds, killing
distributed.nanny - WARNING - Worker process still alive after 3.9999977111816407 seconds, killing
Traceback (most recent call last):
  File "/home/shpc_100710/miniconda3/bin/pyscenic", line 8, in <module>
    sys.exit(main())
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/pyscenic/cli/pyscenic.py", line 677, in main
    args.func(args)
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/pyscenic/cli/pyscenic.py", line 100, in find_adjacencies_command
    expression_data=ex_mtx, tf_names=tf_names, verbose=True, client_or_address=client, seed=args.seed
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/arboreto/algo.py", line 41, in grnboost2
    early_stop_window_length=early_stop_window_length, limit=limit, seed=seed, verbose=verbose)
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/arboreto/algo.py", line 135, in diy
    .compute(graph, sync=True) \
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/client.py", line 3209, in compute
    result = self.gather(futures)
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/client.py", line 2152, in gather
    asynchronous=asynchronous,
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/utils.py", line 310, in sync
    self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/utils.py", line 376, in sync
    raise exc.with_traceback(tb)
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/utils.py", line 349, in f
    result = yield future
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/tornado/gen.py", line 762, in run
    value = future.result()
  File "/home/shpc_100710/miniconda3/lib/python3.7/site-packages/distributed/client.py", line 2009, in _gather
    raise exception.with_traceback(traceback)
distributed.scheduler.KilledWorker: ('str_-a68523d3c0915cd35360a5cb008716dc', <WorkerState 'tcp://127.0.0.1:40309', name: 18, status: closed, memory: 0, processing: 8218>)

Supplementary information:

  • pySCENIC version: 0.12.0
  • Installation method: Pip
  • Run environment: CLI
  • OS: Linux version 5.4.0-124-generic (buildd@lcy02-amd64-089) (gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)) #140-Ubuntu SMP Thu Aug 4 02:23:37 UTC 2022

pip freeze:

aiohttp==3.8.1
aiosignal==1.2.0
alabaster==0.7.12
anndata @ file:///home/conda/feedstock_root/build_artifacts/anndata_1657187088889/work
anyio @ file:///home/conda/feedstock_root/build_artifacts/anyio_1660053721269/work/dist
arboreto==0.1.6
argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1640817743617/work
argon2-cffi-bindings @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi-bindings_1649500320262/work
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1618968359944/work
async-timeout==4.0.2
asynctest==0.13.0
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1659291887007/work
Babel @ file:///home/conda/feedstock_root/build_artifacts/babel_1655419414885/work
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1618230623929/work
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1649463573192/work
bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1656355450470/work
bokeh==2.4.3
boltons==21.0.0
brotlipy @ file:///home/conda/feedstock_root/build_artifacts/brotlipy_1648854164153/work
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
certifi @ file:///opt/conda/conda-bld/certifi_1655968806487/work/certifi
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1636046052501/work
chardet @ file:///home/conda/feedstock_root/build_artifacts/chardet_1649184113124/work
charset-normalizer==2.1.1
click==8.1.3
cloudpickle==2.1.0
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1655412516417/work
conda==4.14.0
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1649385049221/work
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography_1652967085355/work
ctxcore==0.2.0
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
cytoolz==0.11.0
dask==2022.2.0
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1649586340600/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work
dill==0.3.5.1
distributed==2022.2.0
docutils @ file:///home/conda/feedstock_root/build_artifacts/docutils_1657104278180/work
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1658852325129/work
fastjsonschema @ file:///home/conda/feedstock_root/build_artifacts/python-fastjsonschema_1658064924516/work/dist
flit_core @ file:///home/conda/feedstock_root/build_artifacts/flit-core_1645629044586/work/source/flit_core
fonttools @ file:///home/conda/feedstock_root/build_artifacts/fonttools_1651017735934/work
frozendict==2.3.4
frozenlist==1.3.1
fsspec==2022.7.1
h5py @ file:///home/conda/feedstock_root/build_artifacts/h5py_1624405626125/work
HeapDict==1.0.1
idna @ file:///home/linux1/recipes/ci/idna_1610986105248/work
igraph @ file:///home/conda/feedstock_root/build_artifacts/python-igraph_1641852719823/work
imagesize @ file:///home/conda/feedstock_root/build_artifacts/imagesize_1656939531508/work
importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1653252814274/work
importlib-resources @ file:///home/conda/feedstock_root/build_artifacts/importlib_resources_1658604161399/work
interlap==0.2.7
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1657295047882/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1651240553635/work
ipython-genutils==0.2.0
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1659959867326/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1654302431367/work
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1633637554808/work
json5 @ file:///home/conda/feedstock_root/build_artifacts/json5_1600692310011/work
jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-meta_1659525086692/work
jupyter-client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1633454794268/work
jupyter-server @ file:///home/conda/feedstock_root/build_artifacts/jupyter_server_1657107521771/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1658332345782/work
jupyterlab @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_1658420081373/work
jupyterlab-pygments @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_pygments_1649936611996/work
jupyterlab-server @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_server_1657063151834/work
kiwisolver @ file:///home/conda/feedstock_root/build_artifacts/kiwisolver_1648854392523/work
leidenalg==0.8.8
llvmlite==0.36.0
locket==1.0.0
loompy==3.0.7
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1648737551960/work
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-suite_1651609498426/work
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1631080358261/work
mistune @ file:///home/conda/feedstock_root/build_artifacts/mistune_1635844677043/work
msgpack==1.0.4
multidict==6.0.2
multiprocessing-on-dill==3.5.0a4
munkres==1.1.4
natsort @ file:///home/conda/feedstock_root/build_artifacts/natsort_1643636597628/work
nbclassic @ file:///home/conda/feedstock_root/build_artifacts/nbclassic_1657631862903/work
nbclient @ file:///home/conda/feedstock_root/build_artifacts/nbclient_1656688109017/work
nbconvert @ file:///home/conda/feedstock_root/build_artifacts/nbconvert-meta_1649676641343/work
nbformat @ file:///home/conda/feedstock_root/build_artifacts/nbformat_1651607001005/work
nest-asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1648959695634/work
networkx @ file:///home/conda/feedstock_root/build_artifacts/networkx_1646497321764/work
notebook @ file:///home/conda/feedstock_root/build_artifacts/notebook_1654636967533/work
notebook-shim @ file:///home/conda/feedstock_root/build_artifacts/notebook-shim_1646330736330/work
numba @ file:///home/conda/feedstock_root/build_artifacts/numba_1623568548143/work
numexpr==2.8.3
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1649806299270/work
numpy-groupies==0.9.19
olefile @ file:///home/conda/feedstock_root/build_artifacts/olefile_1602866521163/work
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1637239678211/work
pandas==1.3.5
pandocfilters @ file:///home/conda/feedstock_root/build_artifacts/pandocfilters_1631603243851/work
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
partd==1.3.0
patsy @ file:///home/conda/feedstock_root/build_artifacts/patsy_1632667180946/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1602535608087/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow @ file:///home/conda/feedstock_root/build_artifacts/pillow_1636558793805/work
pkgutil_resolve_name @ file:///home/conda/feedstock_root/build_artifacts/pkgutil-resolve-name_1633981968097/work
prometheus-client @ file:///home/conda/feedstock_root/build_artifacts/prometheus_client_1649447152425/work
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1656332401605/work
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1653089169272/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
pyarrow==9.0.0
pycosat @ file:///home/conda/feedstock_root/build_artifacts/pycosat_1649384814992/work
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1650904496387/work
pynndescent @ file:///home/conda/feedstock_root/build_artifacts/pynndescent_1652648933546/work
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1652235407899/work
pyrsistent @ file:///home/conda/feedstock_root/build_artifacts/pyrsistent_1649013358450/work
pyscenic==0.12.0
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1648857264451/work
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1647961439546/work
PyYAML==6.0
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1652965483789/work
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
ruamel-yaml-conda @ file:///home/conda/feedstock_root/build_artifacts/ruamel_yaml_1636009153751/work
scanpy @ file:///home/conda/feedstock_root/build_artifacts/scanpy_1649193201077/work
scikit-learn @ file:///home/conda/feedstock_root/build_artifacts/scikit-learn_1640464152916/work
scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1637806658031/work
seaborn @ file:///home/conda/feedstock_root/build_artifacts/seaborn-split_1629095986539/work
Send2Trash @ file:///home/conda/feedstock_root/build_artifacts/send2trash_1628511208346/work
session-info @ file:///home/conda/feedstock_root/build_artifacts/session-info_1649179682763/work
six @ file:///tmp/build/80754af9/six_1623709665295/work
sniffio @ file:///home/conda/feedstock_root/build_artifacts/sniffio_1648819180181/work
snowballstemmer @ file:///home/conda/feedstock_root/build_artifacts/snowballstemmer_1637143057757/work
sortedcontainers==2.4.0
soupsieve @ file:///home/conda/feedstock_root/build_artifacts/soupsieve_1658207591808/work
Sphinx @ file:///home/conda/feedstock_root/build_artifacts/sphinx_1659457306093/work
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp @ file:///home/conda/feedstock_root/build_artifacts/sphinxcontrib-htmlhelp_1621704829796/work
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml @ file:///home/conda/feedstock_root/build_artifacts/sphinxcontrib-serializinghtml_1649380998999/work
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1655315839047/work
statsmodels @ file:///home/conda/feedstock_root/build_artifacts/statsmodels_1644535599043/work
stdlib-list @ file:///home/conda/feedstock_root/build_artifacts/stdlib-list_1602639452997/work
tblib==1.7.0
terminado @ file:///home/conda/feedstock_root/build_artifacts/terminado_1652790603075/work
texttable @ file:///home/conda/feedstock_root/build_artifacts/texttable_1626204417032/work
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
tinycss2 @ file:///home/conda/feedstock_root/build_artifacts/tinycss2_1637612658783/work
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1648827244717/work
tqdm @ file:///tmp/build/80754af9/tqdm_1625563689033/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1655411388954/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1656706066251/work
umap-learn @ file:///home/conda/feedstock_root/build_artifacts/umap-learn_1649908149820/work
unicodedata2 @ file:///home/conda/feedstock_root/build_artifacts/unicodedata2_1649111917568/work
urllib3 @ file:///tmp/build/80754af9/urllib3_1625084269274/work
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1600965781394/work
webencodings==0.5.1
websocket-client @ file:///home/conda/feedstock_root/build_artifacts/websocket-client_1655796432389/work
yarl==1.8.1
zict==2.2.0
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1659400682470/work

mengfeiwangdmu commented on Aug 24 '22 08:08

You can try a more recent version of Python than 3.7; it uses spawn rather than fork for multiprocessing.
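A minimal sketch of that suggestion, assuming conda is available; the environment name and Python version below are placeholders, not prescribed values:

 # create a fresh environment with a newer Python and reinstall pySCENIC there
 conda create -n pyscenic_env python=3.10 -y
 conda activate pyscenic_env
 pip install pyscenic
 # then rerun the grn step inside this environment
 pyscenic grn --num_workers 20 --output adj.sample.tsv --method grnboost2 $input_loom $tfs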

ghuls commented on Sep 02 '22 12:09

> You can try a more recent version of Python than 3.7; it uses spawn rather than fork for multiprocessing.

Unfortunately, I got the same error with Python 3.8.

chenc327 commented on Oct 14 '22 01:10

Did you use the latest git version of pySCENIC?
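For reference, installing the development version directly from GitHub would look roughly like this (the aertslab/pySCENIC repository location is assumed here):

 # install pySCENIC from the latest git source
 pip install --upgrade git+https://github.com/aertslab/pySCENIC.git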

ghuls commented on Oct 14 '22 13:10