Stephen Mather
> Hi @pierotofy @smathermather please tell me what licence you want me to put on IRdrone. [GNU Affero General Public License](https://www.gnu.org/licenses/agpl-3.0.en.html) is the most restrictive compatible license (same one we...
> ✌️ done + you have my formal agreement that you can do whatever you want with this code 🤗 Excellent. Pull request with repo license added. Let me know...
Found [DJI's Mavic 3M Image Processing Guide](https://dl.djicdn.com/downloads/DJI_Mavic_3_Enterprise/20230829/Mavic_3M_Image_Processing_Guide_EN.pdf) [Mavic_3M_Image_Processing_Guide_EN.pdf](https://github.com/OpenDroneMap/ODM/files/15278085/Mavic_3M_Image_Processing_Guide_EN.pdf) Haven't yet dug into DJI's post-processing, which may be exposed via the API.
> Found [DJI's Mavic 3M Image Processing Guide](https://dl.djicdn.com/downloads/DJI_Mavic_3_Enterprise/20230829/Mavic_3M_Image_Processing_Guide_EN.pdf) > [Mavic_3M_Image_Processing_Guide_EN.pdf](https://github.com/OpenDroneMap/ODM/files/15278085/Mavic_3M_Image_Processing_Guide_EN.pdf) Looking more deeply at the pull request, my hunch is that Piero is already reading these parameters out of the...
Ok, digging deeper and still well out of my depth, I think the missing step may be applying the Dewarp HMatrix, which is also in the EXIF:
From what little I can discern of the process, I assume we'd apply a cv2.warpPerspective after cv2.initUndistortRectifyMap, or would we somehow bundle those transformations together?
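For concreteness, here's a rough sketch of what I'm imagining (not tested against real Mavic 3M data; the intrinsics, distortion coefficients, homography, and filename below are placeholders standing in for whatever the drone-dji DewarpData / DewarpHMatrix XMP tags actually carry):

```python
import cv2
import numpy as np

# Placeholder intrinsics/distortion -- in practice these would be parsed from
# the image's drone-dji:DewarpData XMP tag (fx, fy, cx, cy, k1, k2, p1, p2, k3).
K = np.array([[3700.0,    0.0, 2640.0],
              [   0.0, 3700.0, 1978.0],
              [   0.0,    0.0,    1.0]])
dist = np.array([-0.08, 0.03, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

# Placeholder for the Dewarp HMatrix read from the EXIF/XMP.
H = np.eye(3)

img = cv2.imread("frame.tif")   # hypothetical input
h, w = img.shape[:2]

# Two-pass reading: undistort first, then warp with the homography.
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (w, h), cv2.CV_32FC1)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
dewarped = cv2.warpPerspective(undistorted, H, (w, h))

# Bundled reading: initUndistortRectifyMap's third argument takes a 3x3
# transform applied in normalized coordinates, so a pixel-space H can be
# folded in as inv(K) @ H @ K, yielding a single remap.
R = np.linalg.inv(K) @ H @ K
map1, map2 = cv2.initUndistortRectifyMap(K, dist, R, K, (w, h), cv2.CV_32FC1)
dewarped_once = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```

If that reading of the third argument is right, the bundled form would also answer the one-pass-or-two question: a single resampling instead of interpolating twice.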
Example dataset here: https://drive.google.com/file/d/1Gr424P_ZaVM50iK3Ydicu2SZr9xgjLwz/view?usp=sharing

Ongoing discussion here: https://community.opendronemap.org/t/processing-crashes-while-undistorting-panoramic-images-with-no-extra-logs/18683/

In short, it appears that with 8K 360 data, undistort becomes the memory bottleneck rather than our usual texturing step. @elunty --...
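For rough scale (my back-of-envelope numbers, assuming ~7680×3840 frames; actual usage depends on how many workers run in parallel and what intermediates ODM holds):

```python
# Back-of-envelope memory for undistorting one 8K 360 frame (assumed 7680x3840).
w, h = 7680, 3840
src = w * h * 3 * 1           # 8-bit RGB source, bytes
dst = src                     # undistorted output, same size
maps = 2 * w * h * 4          # two CV_32FC1 remap tables
total_mb = (src + dst + maps) / 1024**2
print(f"~{total_mb:.0f} MB per frame per worker")   # ~394 MB
```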
I get crashes at different places each time with 32 GB and 64 GB, but I'm not sure how your virtual memory is configured on Windows, so no extra insights from my...
What sizes do you want / need tested?
Cool. I've got it running on a 1900-image dataset. Seems like a decent size (larger than most test datasets) without taking overly long to run (should be done tomorrow afternoon)....