Stephan Sturges
> @stephansturges could you share your captured left/right images of this "repetitive texture" case? I would like to try my algorithms to see if they can alleviate the "blue" area...
> @stephansturges Thank you for sharing the stereo images. One more thing we need is the camera to camera (stereo: S,K,D,R,T,S_rect,R_rect,P_rect matrix) calibration parameters from your unit. Thank you very...
> @stephansturges Hi, just to confirm: are the images you shared stereoDepth.rectifiedLeft/rectifiedRight, or left/right before rectification? Please advise. Thank you. These images are not rectified, as far as I can...
> you could export them from your unit using the depthai package, see example code [here](https://docs.luxonis.com/projects/api/en/latest/samples/calibration/calibration_reader/#calibration-reader) _These stereo images are NOT rectified. Please find the calibration parameters for this camera below:_...
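For reference, the calibration matrices listed above (K, R, T, P_rect, etc.) relate to each other as in a standard rectified stereo setup. A minimal sketch of that relationship, with made-up intrinsics and baseline standing in for the unit's real calibration values:

```python
import numpy as np

# Hypothetical calibration values (NOT from the actual unit):
# K: camera intrinsics, R/T: right-camera pose relative to the left camera.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # ideal rectified pair: identity rotation
T = np.array([-0.075, 0.0, 0.0])   # 7.5 cm horizontal baseline, in metres

# Rectified projection matrices: P_rect = K @ [R | t].
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([R, T.reshape(3, 1)])

# The baseline can be recovered from P_rect of the right camera.
baseline = -P_right[0, 3] / P_right[0, 0]
print(baseline)  # 0.075
```

With the real exported matrices in hand, the same relation lets you sanity-check that the P_rect values are consistent with K and T.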
> @stephansturges > > Great! If possible, could we capture another set of stereo rectified images at 1280x720 resolution? Preferably with similar repetitive texture in the scene. Thanks. I've updated...
@ynjiun Thanks for running this test! I'm having a hard time understanding the format and scale of the .npy files, but from what I can see it looks like you...
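One way to get a first read on the format and scale of exported disparity `.npy` files is to load them with numpy, mask invalid pixels, and normalize for display. A rough sketch (the file name, focal length, and baseline below are placeholders, not values from the actual unit):

```python
import numpy as np

# In practice: disparity = np.load("disparity.npy")
# Here a synthetic map stands in so the sketch is self-contained.
disparity = np.random.default_rng(0).uniform(
    0.0, 96.0, size=(720, 1280)).astype(np.float32)

# Normalize valid disparities to 0-255 for visualization;
# invalid pixels are often stored as 0 or negative values.
valid = disparity > 0
vis = np.zeros(disparity.shape, dtype=np.uint8)
vis[valid] = (255.0 * disparity[valid] / disparity[valid].max()).astype(np.uint8)

# Metric depth follows from depth = fx * baseline / disparity;
# fx and baseline are placeholders here, use the unit's calibration.
fx, baseline = 800.0, 0.075
depth = np.where(valid, fx * baseline / np.maximum(disparity, 1e-6), 0.0)
```

Inspecting `disparity.dtype`, `disparity.min()`, and `disparity.max()` directly also helps pin down whether the values are raw pixel disparities or already scaled.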
@ynjiun Thanks for the explanation of the output, I will set up a better visualization. `The algorithm actually uses a transformer to match features extracted from a deep learning model, thus...
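The quoted description (transformer-based matching of deep features) can be illustrated with a toy cross-attention matcher over one epipolar line. This is only a sketch of the general idea, not @ynjiun's actual algorithm; the feature maps here are random stand-ins for CNN/transformer features:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy features for one epipolar line: W positions, C channels each.
rng = np.random.default_rng(1)
W, C = 16, 8
feat_left = rng.standard_normal((W, C))
feat_right = rng.standard_normal((W, C))

# Cross-attention: each left-pixel feature attends over all right-pixel
# features; the attention row acts as a soft match distribution.
attn = softmax(feat_left @ feat_right.T / np.sqrt(C), axis=1)

# Soft-argmax over matched positions gives a sub-pixel correspondence;
# disparity is the left column index minus the matched right position.
match_pos = attn @ np.arange(W)
disparity = np.arange(W) - match_pos
```

The appeal for repetitive textures is that attention scores features globally along the line instead of comparing small local windows, which is where block-matching methods tend to produce the "blue" invalid regions.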
@ynjiun is your approach based on https://github.com/mli0603/stereo-transformer ? I'd be curious to try it on an actual UAV...
@ynjiun > Interesting. So you use semantic segmentation for identifying "safe landing zone"? Curious: how do you generate the ground truth? manual labeling? or simulation? All the data is synthetic,...
@ynjiun Sure, my email address is my name @gmail .com ;)