
No data written to local folder on Windows 11

chourroutm opened this issue 5 months ago · 6 comments

Hi, I have put together a script that writes chunks of 256^3 voxels into a precomputed segmentation, but no data is actually written to disk (the disk is not full), not even the JSON info file. I am wondering whether this is related to using a Windows 11 workstation (although the reporter in https://github.com/seung-lab/cloud-volume/issues/618 did get it working).

This is the script I have:

import tifffile as tiff
from tqdm.notebook import tqdm
from cloudvolume import CloudVolume
from cloudvolume.lib import mkdir
import pathlib
import numpy as np

image_files = pathlib.Path("annotated_data").glob("data_labeled_chunk_*.tif")

first_image = tiff.imread(r"annotated_data\data_labeled_chunk_44_31_13.tif")
img_shape = first_image.shape
dtype = first_image.dtype

print(f"Dataset shape: {img_shape}")
print(f"Dataset dtype: {dtype}")

output_dir = "./data_ngprec/"
output_dir = pathlib.Path(output_dir)
mkdir(output_dir)

output_dir = output_dir.absolute().as_uri()

print(output_dir)

# Create a CloudVolume object for the Neuroglancer precomputed format
info = CloudVolume.create_new_info(
    num_channels = 1,
    layer_type = 'segmentation', # 'image' or 'segmentation'
    data_type = 'uint8', # can pick any popular uint
    encoding = 'raw', # see: https://github.com/seung-lab/cloud-volume/wiki/Compression-Choices
    resolution = [ 7720, 7720, 7720 ], # X,Y,Z values in nanometers
    voxel_offset = [ 0, 0, 0 ], # X,Y,Z values in voxels
    chunk_size = [ 256, 256, 256 ], # storage chunk size X,Y,Z in voxels
    volume_size = [18709, 18709, 21517], # total X,Y,Z size in voxels
)
vol = CloudVolume(
    output_dir,
    info=info,
    progress=False,
    cache=False
)

vol.commit_info()

print("CloudVolume info:")
print(vol.info)

# Write data to the Neuroglancer precomputed format
for chunk_filename in tqdm(image_files, desc="Converting to Neuroglancer format"):
    chunk_data = tiff.imread(chunk_filename).astype(np.uint8)
    ids = list(map(int, chunk_filename.stem.split("_labeled_chunk_")[1].split("_")))  # e.g. "data_labeled_chunk_44_31_13" -> [44, 31, 13]
    print(ids)
    chunk_data = chunk_data[..., np.newaxis]
    # Compute (start, stop) voxel bounds along each axis from the chunk indices
    bounds = [(id_ * shape, id_ * shape + shape) for id_, shape in zip(ids, chunk_data.shape)]

    # Create slices for the first three dimensions
    slices = [slice(start, stop) for start, stop in bounds]

    # Assign the chunk data to the volume
    vol[slices[2], slices[1], slices[0], 1] = chunk_data

This is the output, which confirms the chunk files were found:

Dataset shape: (256, 256, 256)
Dataset dtype: uint8
file:///d:/Matthieu/data_ngprec
CloudVolume info:
{'num_channels': 1, 'type': 'segmentation', 'data_type': 'uint8', 'scales': [{'encoding': 'raw', 'chunk_sizes': [[256, 256, 256]], 'key': '7720_7720_7720', 'resolution': [7720, 7720, 7720], 'voxel_offset': [0, 0, 0], 'size': [18709, 18709, 21517]}]}
Converting to Neuroglancer format: 11/? [00:01<00:00,  6.47it/s]
[45, 36, 10]
[45, 36, 9]
[45, 37, 10]
[44, 31, 13]
[44, 32, 12]
[44, 32, 13]
[45, 32, 10]
[45, 32, 11]
[45, 32, 12]
[45, 33, 10]
[45, 33, 9]
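
To try to narrow this down, here is a stripped-down sketch that skips the TIFF handling entirely and only creates the layer and writes the info file. It compares the file:// URI produced by as_uri() with a plain relative file:// path (I am not sure the relative form is even supported, it is just a guess to compare against, and the two output directories are throwaway names):

import pathlib
from cloudvolume import CloudVolume

# Same layer parameters as in my script above
info = CloudVolume.create_new_info(
    num_channels = 1,
    layer_type = 'segmentation',
    data_type = 'uint8',
    encoding = 'raw',
    resolution = [ 7720, 7720, 7720 ],
    voxel_offset = [ 0, 0, 0 ],
    chunk_size = [ 256, 256, 256 ],
    volume_size = [18709, 18709, 21517],
)

# Two throwaway test directories, addressed in two different ways
for cloudpath in [
    pathlib.Path("./data_ngprec_uri/").absolute().as_uri(),  # file:///d:/... form, as in my script
    "file://./data_ngprec_relative/",                        # relative form, just an experiment
]:
    vol = CloudVolume(cloudpath, info=info, progress=False)
    vol.commit_info()

# Check whether either variant actually produced an info file on disk
for d in ["data_ngprec_uri", "data_ngprec_relative"]:
    print(d, "info exists:", pathlib.Path(d, "info").exists())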

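And in case my indexing is also part of the problem, this is the sanity check I would add right before the assignment in the main loop (it reuses vol from the script above plus one real chunk file, so it is not standalone):

import pathlib
import numpy as np
import tifffile as tiff

chunk_path = pathlib.Path("annotated_data/data_labeled_chunk_44_31_13.tif")
chunk_data = tiff.imread(chunk_path).astype(np.uint8)[..., np.newaxis]

# Same filename parsing and bounds computation as in the main loop
ids = list(map(int, chunk_path.stem.split("_labeled_chunk_")[1].split("_")))
bounds = [(i * s, i * s + s) for i, s in zip(ids, chunk_data.shape)]
slices = [slice(start, stop) for start, stop in bounds]

print("volume bounds:", vol.bounds)  # full extent of the layer, taken from the info
print("write region (as passed to vol[...]):", slices[2], slices[1], slices[0])
print("chunk shape:", chunk_data.shape)
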
chourroutm · Sep 19 '24, 17:09