Initializing Mat object with a 3-dimensional NDArray
I have a 250x250 image that is loaded into an NDArray with three channels, so the dimensions of the NDArray are (250, 250, 3). It looks like the Mat object only takes in a 2D NDArray. Is there any way I can initialize a Mat object with this NDArray, or can the Mat constructor be extended to allow 3-dimensional NDArrays?
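For concreteness, a minimal sketch of the failing call (the zero-filled array just stands in for the real image data):

var shape = new NumSharp.Shape(250, 250, 3);             // H x W x C image buffer
var image = NumSharp.np.zeros(shape, NumSharp.np.ubyte);
using var mat = new SharpCV.Mat(image);                  // throws: ndim == 3 is not handled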
Do you have a specific code snippet that doesn't work? It's supposed to work.
Hi, the code that I can run normally is as follows:
The graph that loads the image:
tf_with(tf.variable_scope("LoadImage"), delegate
{
    // Raw image bytes come in through this placeholder.
    decodeJpeg = tf.placeholder(tf.@byte, name: "DecodeJpeg");
    var cast = tf.cast(decodeJpeg, tf.float32);
    var dims_expander = tf.expand_dims(cast, 0);
    var resize = tf.constant(new int[] { img_h, img_w });
    var bilinear = tf.image.resize_bilinear(dims_expander, resize);
    // Normalize: (pixel - mean) / std.
    var sub = tf.subtract(bilinear, new float[] { img_mean });
    normalized = tf.divide(sub, new float[] { img_std }, name: "normalized");
});
The image-loading function:
private (NDArray, NDArray) GetNextBatch(Session sess, string[] x, NDArray y, int start, int end)
{
    NDArray x_batch = np.zeros(end - start, img_h, img_w, n_channels);
    int n = 0;
    for (int i = start; i < end; i++)
    {
        NDArray img4;
        if (n_channels == 1) { img4 = cv2.imread(x[i], IMREAD_COLOR.IMREAD_GRAYSCALE); }
        else { img4 = cv2.imread(x[i], IMREAD_COLOR.IMREAD_UNCHANGED); }
        // Run the normalization graph on the raw image.
        x_batch[n] = sess.run(normalized, (decodeJpeg, img4));
        n++;
    }
    var slice = new Slice(start, end);
    var y_batch = y[slice];
    return (x_batch, y_batch);
}
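For context, a hypothetical call site for this batching helper; x_train, y_train, and the batch size are illustrative names, not part of the snippet above:

int batchSize = 32;
for (int start = 0; start < x_train.Length; start += batchSize)
{
    int end = Math.Min(start + batchSize, x_train.Length);
    var (x_batch, y_batch) = GetNextBatch(sess, x_train, y_train, start, end);
    // ... feed x_batch / y_batch into the training step ...
}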
I use SharpCV by SciSharp. Can you test it this way?
@pepure, I am able to load the image into an NDArray using something similar to the example you provided. I am not doing any sort of normalization on the image when it is loaded, since I do that at a later point. Here's the code that I'm having issues with:
Load Image
private void GetJPEGDecodingTensors()
{
    tf_with(tf.name_scope("loadImage"), delegate
    {
        tsrLoadImageInput = tf.placeholder(tf.@string, name: "DecodeJPGInput");
        tsrLoadImage = tf.image.decode_jpeg(tsrLoadImageInput, channels: 3);
    });
}
private NDArray LoadImage(byte[] pImage)
{
    NDArray ndaImage = msesSession.run(tsrLoadImage, (tsrLoadImageInput, new Tensor(pImage, TF_DataType.TF_STRING)));
    // Reverse the channel order (RGB <-> BGR) via a chain of transposes and a flip.
    ndaImage = ndaImage.transpose(new[] { 2, 0, 1 }).flipud.T.transpose(new[] { 1, 0, 2 });
    return ndaImage;
}
Resize Image
private NDArray ResizeImage(NDArray pndaImage, int plngWidth, int plngHeight)
{
    return cv2.resize(pndaImage, (plngWidth, plngHeight), interpolation: InterpolationFlags.INTER_AREA);
}
ResizeImage() will throw an exception: when pndaImage is converted to a Mat object, pndaImage.ndim is 3, so the Mat constructor throws a NotImplementedException:
public unsafe Mat(NDArray nd)
{
    switch (nd.ndim)
    {
        case 2:
            cv2_native_api.core_Mat_new8(nd.shape[0], nd.shape[1], FromType(nd.dtype), new IntPtr(nd.Unsafe.Storage.Address), new IntPtr(0), out _handle);
            break;
        default:
            throw new NotImplementedException("Not supported");
    }
}
It looks like when an image with three channels is loaded into a Mat object, the ndim value for the Mat object is 2, since OpenCV treats the channels as part of the element type rather than as an extra dimension. I'm wondering if that is the reason for the logic behind the switch case in the Mat constructor.
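In other words, OpenCV stores an H x W x 3 image as a two-dimensional matrix whose elements are 3-channel values, so the channel count belongs in the MatType rather than in the dimension count. A small illustrative helper (not part of SharpCV) that expresses this mapping:

// Hypothetical helper: split an NDArray shape into OpenCV's view of it.
// A (250, 250, 3) array is still a "2-D" Mat with 3-channel elements.
private static (int rows, int cols, int channels) ToMatLayout(NDArray nd)
{
    switch (nd.ndim)
    {
        case 2: return (nd.shape[0], nd.shape[1], 1);
        case 3: return (nd.shape[0], nd.shape[1], nd.shape[2]);
        default: throw new ArgumentException("Expected a 2-D or 3-D array.");
    }
}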
Also, the FromType(Type type) function only ever returns single-channel types. I believe this function and the Mat constructor that takes an NDArray can be updated along the following lines to fix this issue:
FromType()
public MatType FromType(Type type, int channels)
{
    // The channel count is folded into the MatType; the default branch
    // assumes 3 channels, matching the constructor call below.
    switch (Type.GetTypeCode(type))
    {
        case TypeCode.Int32:
            switch (channels)
            {
                case 1: return MatType.CV_32SC1;
                default: return MatType.CV_32SC3;
            }
        case TypeCode.Single:
            switch (channels)
            {
                case 1: return MatType.CV_32FC1;
                default: return MatType.CV_32FC3;
            }
        default:
            switch (channels)
            {
                case 1: return MatType.CV_8UC1;
                default: return MatType.CV_8UC3;
            }
    }
}
Mat Constructor
public unsafe Mat(NDArray nd)
{
    switch (nd.ndim)
    {
        case 2:
            // Single-channel: rows x cols.
            cv2_native_api.core_Mat_new8(nd.shape[0], nd.shape[1], FromType(nd.dtype, 1), new IntPtr(nd.Unsafe.Storage.Address), new IntPtr(0), out _handle);
            break;
        case 3:
            // Three-channel: rows x cols, with the channels folded into the MatType.
            cv2_native_api.core_Mat_new8(nd.shape[0], nd.shape[1], FromType(nd.dtype, 3), new IntPtr(nd.Unsafe.Storage.Address), new IntPtr(0), out _handle);
            break;
        default:
            throw new NotImplementedException("Not supported");
    }
}
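With both changes in place, a 3-channel array would map to a 3-channel MatType instead of hitting the default branch. A sketch, assuming the patched constructor above is applied:

var nd = np.zeros(new Shape(250, 250, 3), np.ubyte); // H x W x C
using var mat = new Mat(nd);                          // would map to MatType.CV_8UC3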
Simple repro:
var shape = new NumSharp.Shape(100, 100, 3);
var testArray = NumSharp.np.zeros(shape, NumSharp.np.ubyte);
using var mat = new SharpCV.Mat(testArray);
throws:
System.NotImplementedException: Not supported
at SharpCV.Mat..ctor(NDArray nd)
Any solution for this? I'm facing the same problem.
Same question here. Is there any way to convert a 3-dimensional NDArray (400 x 400 x 3, for example) to the SharpCV.Mat format?
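Until the constructor supports 3-D arrays, one possible workaround is to keep the decode/resize path entirely inside SharpCV, so the 3-channel data lives in a Mat from the start and never round-trips through a 3-D NDArray. A sketch, assuming the usual SharpCV binding import; the file path and target size are illustrative:

using SharpCV;
using static SharpCV.Binding;

// Read the image directly into a Mat (the 3 channels stay inside the Mat type) ...
var img = cv2.imread("image.jpg", IMREAD_COLOR.IMREAD_UNCHANGED);
// ... and resize it without ever constructing a Mat from a 3-D NDArray.
var resized = cv2.resize(img, (400, 400), interpolation: InterpolationFlags.INTER_AREA);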