manoaman
Hi @william-silversmith

> I'm not totally sure, but I think that error means you have the info file in the main directory. The skeleton info file should be in the...
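For later readers, a sketch of the layout this implies, assuming the standard precomputed conventions (the directory name `skeletons` is just whatever the top-level info's `skeletons` key points at):

```
layer/
├── info          # volume info; its "skeletons" key names the subdirectory
└── skeletons/
    ├── info      # skeleton info ("@type": "neuroglancer_skeletons", ...)
    └── <segid>   # one binary skeleton fragment per segment id
```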
Do you know if the vertex_type will get lost during the conversion from an SWC to a precomputed format? This part of the information seems to be empty when I...
Hi Will, I can provide the info files I'm using for the precomputed dataset. My doubts are in the construction of the info files for both the skeleton and the fake one for...
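In case it helps, a minimal sketch of what the "fake" segmentation info usually looks like, assuming the standard precomputed fields; every numeric value here is a placeholder, not something taken from this dataset:

```
{
  "type": "segmentation",
  "data_type": "uint64",
  "num_channels": 1,
  "skeletons": "skeletons",
  "scales": [
    {
      "key": "1_1_1",
      "resolution": [1000, 1000, 1000],
      "size": [1024, 1024, 1024],
      "chunk_sizes": [[64, 64, 64]],
      "encoding": "raw",
      "voxel_offset": [0, 0, 0]
    }
  ]
}
```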
This is the code I use to convert SWC to precomputed format.

```
def swc_to_precomputed(base_dir, tgt_dir, transform_matrix=[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]):
    if not...
```
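As a cross-check against a hand-rolled converter, CloudVolume itself can parse SWC. A minimal sketch, assuming an existing layer whose info already has a `skeletons` key; the paths and segment id here are hypothetical:

```python
from cloudvolume import CloudVolume, Skeleton

def swc_to_precomputed_sketch(swc_path, cloudpath, segid):
    # Skeleton.from_swc parses vertices, edges, radii, and vertex_types
    with open(swc_path, "rt") as f:
        skel = Skeleton.from_swc(f.read())
    skel.id = segid
    cv = CloudVolume(cloudpath)
    cv.skeleton.upload(skel)
```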
Hi Will, Thank you for pointing out the missing fragment. I also noticed it by commenting out that section of the code during format conversion and updating the generated precomputed file...
It looks like `{'id': 'vertex_types', 'data_type': 'uint8', 'num_components': 1}` gets removed from `skel.extra_attributes`. Should I allow `uint8`?
Hi Will, So by setting and keeping the uint8 vertex_types attribute in `skel.extra_attributes`, I was able to save a skeleton with CloudVolume.

```
[{'id': 'radius', 'data_type': 'float32', 'num_components': 1},
 {'id':...
```
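For reference, the skeleton info would then carry the same attribute list under `vertex_attributes`, per the precomputed skeleton format; the transform shown here is just the identity placeholder:

```
{
  "@type": "neuroglancer_skeletons",
  "transform": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
  "vertex_attributes": [
    {"id": "radius", "data_type": "float32", "num_components": 1},
    {"id": "vertex_types", "data_type": "uint8", "num_components": 1}
  ]
}
```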
Hi Will, Notable parameters used in the xfer task:

```
igneous image xfer --mip 0 --chunk-size 128,128,64 --fill-missing --sharded
```

Utilizing 36 CPU cores. Indeed, the disk has been unstable at...
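In case it's useful for debugging, a rough library-level sketch of that invocation, assuming igneous' Python API; `create_transfer_tasks` is the unsharded task creator (the `--sharded` behavior is left to the CLI here), and the paths are placeholders:

```python
from taskqueue import LocalTaskQueue
import igneous.task_creation as tc

src = "file:///data/src"    # hypothetical
dest = "file:///data/dest"  # hypothetical

tq = LocalTaskQueue(parallel=36)  # one worker per core, as above
tasks = tc.create_transfer_tasks(
    src, dest,
    mip=0,
    chunk_size=(128, 128, 64),
    fill_missing=True,
)
tq.insert(tasks)
tq.execute()
```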
I'm using a compute node which has 36 CPU cores and 1 TB of memory. I understand the default is 3.5 GB. Should I set it much smaller than the default value? Maybe 1 GB?...
I reset the queue and am running it once again. Looking at the htop summary, total memory usage is fluctuating between 130 GB and 150 GB at the moment, gradually increasing.
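(As a rough sanity check: 36 workers × the 3.5 GB default memory target ≈ 126 GB in aggregate, so 130~150 GB total is about what one would expect; dropping the target to 1 GB would put the ceiling nearer 36 GB.)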