pyvips
Save image to tif
Dear Professor: I am trying to save images to TIFF, but an error occurs, as shown below, and I don't know how to fix the save. I would greatly appreciate it if you could help me. This is my code:
img = pyvips.Image.new_from_file('E:\\testpicture_jupyter\\ometif_def6.tif', access='sequential')
patch_size = 512
n_across = img.width // patch_size
n_down = img.height // patch_size
x_max = n_across - 1
y_max = n_down - 1
print(x_max, y_max)

for y in range(0, n_down):
    print("row {} ...".format(y))
    for x in range(0, n_across):
        patch = img.crop(x * patch_size, y * patch_size,
                         patch_size, patch_size)

image = pyvips.Image.arrayjoin(patch, across=img.width)
image.tiffsave("huge.tif")
And the error that occurs is:
(wsi2.py:1468): GLib-GObject-WARNING **: 19:19:47.663: value "34102784" of type 'gint' is invalid or out of range for property 'width' of type 'gint'
Hi @SikangSHU,
You have:
image = pyvips.Image.arrayjoin(patch, across=img.width)
Try n_across instead.
Why are you cropping and joining the image? I don't think you need to do this.
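For reference, the corrected version of that loop might look something like the sketch below, assuming the patches are collected into a list before joining (the filename and the all_patches name are illustrative):

import pyvips

img = pyvips.Image.new_from_file("input.tif")
patch_size = 512
n_across = img.width // patch_size
n_down = img.height // patch_size

# collect every patch, then join them back into one image, n_across tiles across
all_patches = []
for y in range(n_down):
    for x in range(n_across):
        all_patches.append(img.crop(x * patch_size, y * patch_size,
                                    patch_size, patch_size))

image = pyvips.Image.arrayjoin(all_patches, across=n_across)
image.tiffsave("huge.tif")

Note that arrayjoin needs every patch at once, so cropping and rejoining the whole slide like this is only worthwhile if the patches are actually processed in between.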
Thank you so much for your patient answer. As I am working on a task and am not familiar with pyvips, I am making some attempts.
My task is to unmix a WSI (containing 3 stains, each with a different colour signature) into 3 images (each image representing one stain). As the WSI is so big, I don't know how to do this with numpy, so I turned to pyvips. However, I have met some obstacles when using pyvips. My plan for the task is as follows.
Firstly, I crop small patches (512 x 512) from the large slide (53298 x 66607).
After splitting the WSI into patches, I apply colour deconvolution to each patch (512 x 512) to create 3 individual images, each of which is 512 x 512.
Finally, I merge the individual patches (512 x 512) for each of the 3 stains back into 3 WSI-sized images (53298 x 66607), one per stain.
My code for this is as follows. I think there may be mistakes in the two places I marked (@@@@@). Can you help me correct them? Thank you very much!
import pyvips
import numpy as np
from numpy import linalg
from skimage.color.colorconv import _prepare_colorarray

img = pyvips.Image.new_from_file('E:\\testpicture_jupyter\\ometif_def6.tif', access='sequential')
patch_size = 512
n_across = img.width // patch_size
n_down = img.height // patch_size
x_max = n_across - 1
y_max = n_down - 1
print(x_max, y_max)

for y in range(0, n_down):
    print("row {} ...".format(y))
    for x in range(0, n_across):
        patch = img.crop(x * patch_size, y * patch_size,
                         patch_size, patch_size)
        img1 = _prepare_colorarray(patch, force_copy=True)   # @@@@@
        np.maximum(img1, 1E-6, out=img1)
        X = np.array([[0.468, 0.721, 0.511], [0, 0.141, 0.99], [0.767, 0.576, 0.284]])
        X_inv = linalg.inv(X)
        y = np.log(img1)
        C = y @ X_inv
        null = np.zeros_like(C[:, :, 0])
        C_a = np.stack((C[:, :, 0], null, null), axis=-1)
        C_b = np.stack((null, C[:, :, 1], null), axis=-1)
        C_c = np.stack((null, null, C[:, :, 2]), axis=-1)
        conv_matrix = X
        log_rgb_a = C_a @ conv_matrix
        rgb1 = np.exp(log_rgb_a)
        log_rgb_b = C_b @ conv_matrix
        rgb2 = np.exp(log_rgb_b)
        log_rgb_c = C_c @ conv_matrix
        rgb3 = np.exp(log_rgb_c)
        rgb1[rgb1 < 0] = 0
        rgb1[rgb1 > 1] = 1
        rgb2[rgb2 < 0] = 0
        rgb2[rgb2 > 1] = 1

image_CD8 = pyvips.Image.arrayjoin(rgb1, n_across=img.width)
image_PanCK = pyvips.Image.arrayjoin(rgb2, n_across=img.width)
image_Hema = pyvips.Image.arrayjoin(rgb3, n_across=img.width)
image_CD8.tiffsave("CD8.tif")   # @@@@@
image_PanCK.tiffsave("PanCK.tif")
image_Hema.tiffsave("Hema.tif")
I want to do stain unmixing for the WSI. The picture below is a part of a WSI. Thank you!
I made you a demo program:
#!/usr/bin/python3

import sys
import pyvips
import numpy as np

def process_tile(patch):
    """pass a pyvips image through numpy
    """
    # make a numpy array from the pyvips image
    array = np.ndarray(buffer=patch.write_to_memory(),
                       dtype=np.uint8,
                       shape=[patch.height, patch.width, patch.bands])

    # do something with the numpy array
    array = 255 - array

    # back to pyvips
    height, width, bands = array.shape
    linear = array.reshape(width * height * bands)
    new_patch = pyvips.Image.new_from_memory(linear.data,
                                             width, height, bands,
                                             'uchar')

    return new_patch

patch_size = 512

img = pyvips.Image.new_from_file(sys.argv[1])

# WSI images are RGBA, with the A always 255 ... just take the first three
# bands (RGB)
img = img[0:3]

all_patches = []
for top in range(0, img.height, patch_size):
    for left in range(0, img.width, patch_size):
        width = min(patch_size, img.width - left)
        height = min(patch_size, img.height - top)
        patch = img.crop(left, top, width, height)

        print(f"processing {left}, {top} ...")
        patch = process_tile(patch)

        all_patches.append(patch)

n_across = 1 + img.width // patch_size
new_image = pyvips.Image.arrayjoin(all_patches, across=n_across)
new_image.write_to_file(sys.argv[2])
You'll need to drop your own numpy into the processing step.
However, this will run out of memory for large images, since it must have all the inputs to arrayjoin in memory at once. You will probably need to reimplement the colour convolution in pyvips if you need large image support. A pyvips implementation of colour convolution would be much faster, and you could skip the tiling stage.
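As an illustration of what dropping in that numpy might look like, the stain maths from the earlier attempt could be wrapped up roughly like this (a sketch only; unmix_tile and its stain_index argument are made-up names, and a call to it would replace the array = 255 - array line inside process_tile):

import numpy as np
from numpy import linalg

# stain matrix from the question; its inverse maps optical density to stain space
stain = np.array([[0.468, 0.721, 0.511],
                  [0.0,   0.141, 0.99],
                  [0.767, 0.576, 0.284]])
stain_inv = linalg.inv(stain)

def unmix_tile(array, stain_index):
    """Return a uint8 RGB tile containing only one of the three stains."""
    rgb = np.maximum(array.astype(np.float64) / 255.0, 1e-6)
    density = np.log(rgb) @ stain_inv      # RGB to stain space
    keep = np.zeros_like(density)
    keep[:, :, stain_index] = density[:, :, stain_index]
    out = np.exp(keep @ stain)             # back to RGB, roughly 0-1
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)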
Thank you for your help!
I'd like to try implementing colour deconvolution in libvips. Do you have a sample image I could experiment with?
Yes, but it's too big. The WSI is about 10 GB. How can I send it to you?
The format of the image is 'tif'.
Just a section would be fine. As long as it shows all three stains.
OK. The stain vectors for the three stains are [[0.468, 0.721, 0.511], [0, 0.141, 0.99], [0.767, 0.576, 0.284]].
This is sort-of working:
#!/usr/bin/python3

import sys
import pyvips

image = pyvips.Image.new_from_file(sys.argv[1])

# scale to 0-1
image /= 256

# set of stains we deconvolve with
stain_vectors = pyvips.Image.new_from_array([
    [0.468, 0.000, 0.767],
    [0.721, 0.141, 0.576],
    [0.511, 0.990, 0.284]
])

# to stain space
stain_space = image.log().recomb(stain_vectors ** -1)
n_stains = stain_space.bands

# split by stain
stains = []
for i in range(n_stains):
    mask = pyvips.Image.new_from_array(
        [[0] * n_stains] * i +
        [[0] * i + [1] + [0] * (n_stains - i - 1)] +
        [[0] * n_stains] * (n_stains - i - 1))
    stains.append(stain_space.recomb(mask))

# from stain space back to a set of separated RGB images
images = [x.recomb(stain_vectors).exp() for x in stains]

for i in range(len(images)):
    filename = f"stain-{i}.tif"
    print(f"writing {filename} ...")
    (256 * images[i]).cast("uchar").write_to_file(filename)
This makes the three separated stain images (output images not shown here), though the result doesn't look quite right.
Dear Professor, thank you! I ran the code on a 10 GB TIFF file and it completed successfully. But the 3 TIFF files produced by the colour deconvolution seem to be too big when I open them in QuPath. How can I modify your code to write them in a pyramidal file format?
Just save as a pyramidal tiff. Try:
filename = f"stain-{i}.tif[tile,compression=jpeg,pyramid]"
Slightly improved version (now uses scrgb space):
#!/usr/bin/python3

import sys
import pyvips

image = pyvips.Image.new_from_file(sys.argv[1])

# transform to linear 0-1
image = image.colourspace("scrgb")

# set of stains we deconvolve with
stain_vectors = pyvips.Image.new_from_array([
    [0.468, 0.000, 0.767],
    [0.721, 0.141, 0.576],
    [0.511, 0.990, 0.284]
])

# to stain space
stain_space = image.log().recomb(stain_vectors ** -1)
n_stains = stain_space.bands

# split by stain
stains = [stain_space.recomb(pyvips.Image.new_from_array(
              [[0] * n_stains] * i +
              [[0] * i + [1] + [0] * (n_stains - i - 1)] +
              [[0] * n_stains] * (n_stains - i - 1)))
          for i in range(n_stains)]

# from stain space back to a set of separated scrgb images
images = [x.recomb(stain_vectors).exp() for x in stains]

# back to srgb
images = [x.colourspace("srgb") for x in images]

for i in range(len(images)):
    filename = f"stain-{i}.tif"
    print(f"writing {filename} ...")
    images[i].write_to_file(filename)
Thank you so much for your patient answers! I have learnt a lot from the code you gave, and it runs successfully on my computer. While I think the method is right, the stain unmixing result does not look good. Some colours (CD8, black; PanCK, yellow) appear in places that should not have those colours. For example, almost the whole PanCK unmixing result is yellow, while only a few places are yellow in the original image. The result of my stain unmixing is as follows.
- original picture
- CD8, black
- PanCK
- Hematoxylin
This is the result I get with numpy on a section. I think this may be the ideal result. I don't know why the results from pyvips don't look as good.
Dear Professor: I think you may be doing stain unmixing on one channel three times, while there are three channels. (Or maybe my guess is wrong.)
I agree, it's not working well and I'm not sure why. I might take another look later today if I have time.
Dear Professor:
I think the inverse process may be wrong:
(stain_vectors ** -1)
This is my debugging process, and now_a1 and last_a1 are different:
import numpy as np
from numpy import linalg

a1 = np.array([
    [0.468, 0.023, 0.767],
    [0.721, 0.221, 0.576],
    [0.511, 0.975, 0.284]
])
now_a1 = a1 ** -1
last_a1 = linalg.inv(a1)
print(now_a1)
print(last_a1)
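The difference is that a1 ** -1 takes the reciprocal of each entry, while linalg.inv(a1) computes the true matrix inverse that the deconvolution needs. A minimal check with the same matrix:

import numpy as np
from numpy import linalg

a1 = np.array([[0.468, 0.023, 0.767],
               [0.721, 0.221, 0.576],
               [0.511, 0.975, 0.284]])

elementwise = a1 ** -1       # reciprocal of each entry, not an inverse
matrix_inv = linalg.inv(a1)  # true matrix inverse

print(np.allclose(a1 @ elementwise, np.eye(3)))  # False
print(np.allclose(a1 @ matrix_inv, np.eye(3)))   # True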
You're right, there's something funny about the pyvips matrix invert. It works if you use the numpy invert:
#!/usr/bin/python3

import sys
import pyvips
import numpy as np
from numpy import linalg

stain = [
    [0.468, 0.023, 0.767],
    [0.721, 0.141, 0.576],
    [0.511, 0.990, 0.284]
]
stain_inverse = linalg.inv(stain).tolist()

image = pyvips.Image.new_from_file(sys.argv[1])

# transform to linear 0-1
image = image.colourspace("scrgb")

# to stain space
stain_space = image.log().recomb(stain_inverse)
n_stains = stain_space.bands

# split by stain
stains = [stain_space.recomb(pyvips.Image.new_from_array(
              [[0] * n_stains] * i +
              [[0] * i + [1] + [0] * (n_stains - i - 1)] +
              [[0] * n_stains] * (n_stains - i - 1)))
          for i in range(n_stains)]

# from stain space back to a set of separated scrgb images
images = [x.recomb(stain).exp() for x in stains]

# back to srgb
images = [x.colourspace("srgb") for x in images]

for i in range(len(images)):
    filename = f"stain-{i}.png"
    print(f"writing {filename} ...")
    images[i].write_to_file(filename)
I see:
$ ./colourdeconv.py ~/pics/stain_sample.png
writing stain-0.png ...
writing stain-1.png ...
writing stain-2.png ...
Generating the separated stain images (not shown here), which now look correct. I'll check the pyvips inverter.
Hello! I need to add a fluorescent colour to the image, so I'd like to convert the three-channel RGB image (TIFF file) to one channel. More specifically, I need to add a fluorescent colour to the first image (TIFF file) so that its effect is the same as that in the second image. I would appreciate it very much if you could give me some suggestions!
These are OME-TIFFs, right? I would make two one-band images and attach some XML metadata saying that the first is a stain and the second is a fluorescence image.
You probably have an image like this already -- have a look at the metadata on that to see what you need.
Yes, these are OME-TIFFs. I'd like to get three one-band images and then turn each of them into a one-band fluorescence image. When I use 'vipsheader -a' to look at the metadata of the fluorescence image that I want, I only get some properties and no colour information. How can I look at that metadata? (I found that I can't attach two TIFF files on GitHub, so I am sending two pictures instead. The first is the grey image of a stain with three RGB bands, and the second is the one-band fluorescence image that I want.)
You need the image-description item. Try e.g.:
$ vipsheader -f image-description LuCa-7color_Scan1.ome.tiff
<?xml version="1.0" encoding="UTF-8"?><!-- Warning: this comment is an OME-XML metadata block, which contains crucial dimensional parameters and other important metadata. Please edit cautiously (if at all), and back up the original data before doing so. For more information, see the OME-TIFF web site: https://docs.openmicroscopy.org/latest/ome-
...
(learn) C:\Users\91765>vipsheader -f image-description E:\testpicture_jupyter\channel\1.tif
(vipsheader:3340): VIPS-WARNING **: 22:20:58.807: Unknown field with tag 33560 (0x8318) encountered
(vipsheader:3340): VIPS-WARNING **: 22:20:58.817: Unknown field with tag 33560 (0x8318) encountered
(vipsheader:3340): VIPS-WARNING **: 22:20:58.818: Unknown field with tag 33560 (0x8318) encountered
(vipsheader:3340): VIPS-WARNING **: 22:20:58.820: Unknown field with tag 33560 (0x8318) encountered
(vipsheader:3340): VIPS-WARNING **: 22:20:58.822: Unknown field with tag 33560 (0x8318) encountered
(vipsheader:3340): VIPS-WARNING **: 22:20:58.824: Unknown field with tag 33560 (0x8318) encountered
(vipsheader:3340): VIPS-WARNING **: 22:20:58.828: Unknown field with tag 33560 (0x8318) encountered
(vipsheader:3340): VIPS-WARNING **: 22:20:58.833: Unknown field with tag 33560 (0x8318) encountered
Right, but you need to set it to some XML to say what kind of images you have. Take a look at an OME-TIFF image that has the properties you need, or read the OME-TIFF documentation.
OK. Thank you for your patient answers. This is the information.
You'll need to design some XML based on that, then set that as the image description of the TIFF you write. Eg.:
final_image = ...
final_image.set_type(pyvips.GValue.gstr_type, "image-description",
f"""<?xml version="1.0" encoding="UTF-8"?>
...
""")
final_image.write_to_file("my-ome-image.tiff")
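A rough idea of what such XML could look like, heavily simplified, with every value (sizes, image and channel names, pixel type) a placeholder to be replaced by the values from your own reference image or the OME-TIFF documentation:

final_image.set_type(pyvips.GValue.gstr_type, "image-description",
    f"""<?xml version="1.0" encoding="UTF-8"?>
<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">
  <Image ID="Image:0" Name="stain-0">
    <Pixels ID="Pixels:0" DimensionOrder="XYCZT" Type="uint16"
            SizeX="{final_image.width}" SizeY="{final_image.height}"
            SizeC="1" SizeZ="1" SizeT="1">
      <Channel ID="Channel:0:0" Name="CD8" SamplesPerPixel="1"/>
      <TiffData/>
    </Pixels>
  </Image>
</OME>""")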
Have you found image.sc? It's a good place to ask OME-TIFF questions:
https://forum.image.sc
The QuPath developers all post there.
OK, I know about that forum. I am very sincerely grateful to you for your help!
Hi, I'd like to get a pyramidal TIFF whose colourspace is 'grey16' (as shown in the figure, <pyvips.Image 66607x53298 ushort, 1 bands, grey16>). When I write my code like the following, the output image 'gray1_1D' becomes '<pyvips.Image 66607x53298 uchar, 1 bands, b-w>'. How can I modify my code so that the output image is a pyramidal TIFF whose colourspace is 'grey16'? Thank you!
mask1 = pyvips.Image.new_from_array(
[[1, 0, 0],
[0, 0, 0],
[0, 0, 0]]
)
gray1 = stain_space_exp.recomb(mask1)
gray1_1D = gray1[0]
filename = f"gray1_1D.tif[tile,compression=jpeg,pyramid]"
print(f"writing {filename} ...")
(gray1_1D * 255).cast("int").colourspace("grey16").write_to_file(filename)
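One possible adjustment, sketched here under the assumption that gray1_1D holds values in roughly the 0-1 range as in the earlier scripts: scale to the 16-bit range, cast to ushort, retag the band as grey16, and use a compression that supports 16-bit data (JPEG is 8-bit only):

# scale the 0-1 band to the 16-bit range and cast to unsigned 16-bit
gray16 = (gray1_1D * 65535).cast("ushort")

# retag the single band as grey16 rather than b-w
gray16 = gray16.copy(interpretation="grey16")

# write a tiled pyramid with a 16-bit-safe compression
gray16.tiffsave("gray1_1D.tif", tile=True, pyramid=True, compression="deflate")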