
Thoughts about importing

Manaland2020 opened this issue 5 years ago · 5 comments

Hello, I don't know where else to post this, so I may use this issue board here, since it's also an issue that these features are missing. I figured that importing with high block numbers works fine, but takes a lot of time.

I have two different thoughts on how to speed up or split the process a little bit, and while I am not a programmer, maybe it helps you along the way - this plugin is pure diamonds!!

Examination: I captured an area with a size of ~191 MB. I figured that I can roughly estimate the block count from the file size. Most 60 MB mesh imports had roughly ~1200 blocks in use. When I imported the 191 MB one with 3000 blocks on import, it was not enough, as seen in this picture: https://i.imgur.com/zZiG2N6.jpg -- But we can see that it's importing step by step, like a printer would print out a file, correct?

Ideas for import options:

a) Skip numbers which are already in: It took very, very long for 3000 blocks to process. Would it help to tell the importer on reimport to skip a fixed number of blocks? Like in the example above, I already have my 3000 and just need a couple more, so the importer could ignore the first 3000 blocks and start at number 3001.

b) With that in mind, it would be interesting if it's possible to tell the importer plugin to check for existing block numbers and skip those. I have a benchmark test in mind here: what's faster -- importing 4000 blocks at once, or 100 blocks 40 times in little chunks, skipping existing parts?

c) Reverse order of import: I figured that my 3000 blocks are not enough, as examined here: https://i.imgur.com/zZiG2N6.jpg But we can also see that I probably just need, let's say, another 1000 to finish it. Instead of reimporting the whole thing with 4000 blocks in my settings, it would be much easier to import 1000 blocks in reverse order and then quickly delete the duplicated pieces.
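Ideas (a) and (c) amount to a simple slice over the list of extracted draw calls. A minimal sketch, assuming the importer has such a list; the names `select_draw_calls`, `skip_blocks`, `max_blocks`, and `reverse` are hypothetical, not the plugin's actual API:

```python
def select_draw_calls(draw_calls, skip_blocks=0, max_blocks=None, reverse=False):
    """Pick which captured draw calls to import.

    skip_blocks: ignore the first N blocks (idea a)
    reverse:     import back to front (idea c)
    """
    calls = list(reversed(draw_calls)) if reverse else list(draw_calls)
    calls = calls[skip_blocks:]  # skip blocks already imported
    if max_blocks is not None:
        calls = calls[:max_blocks]
    return calls

# Example: 3000 blocks already imported, grab the next 1000 of 4200
remaining = select_draw_calls(range(4200), skip_blocks=3000, max_blocks=1000)
```

The catch (as noted further down in the thread) is that both import steps would have to agree on the same selection strategy for the block numbers to line up.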

Conclusion: I think there might be some ways to speed up the import process, but I lack the programming knowledge to be of more help. Maybe these ideas help anyway. Godspeed!

Manaland2020 avatar Apr 03 '20 06:04 Manaland2020

But we can see that it's importing step by step, like a printer would print out a file, correct?

I'd say it is roughly front to back, but I don't know in which order it handles the LoDs. To be precise, I am not deciding this order: the import order is exactly the order in which Google originally draws the little blocks on screen during a frame.

I appreciate your insights. I did not spend enough time importing huge captures myself to figure out the most limiting parts of the workflow, since this was originally just a proof of concept.

A bit of background about what's happening during import:

  1. A separate Python instance opens the rdc file, figures out which draw calls are related to the 3D map, and extracts for each of them the vertex positions, UVs, indices, the texture, and a few constants (the "uniforms", in GPU vocabulary). This is done by google_maps_rd.py, which saves all of this in a directory named thecapture-rAnDoM.

  2. Execution comes back to Blender, which loads the aforementioned files to build many individual objects, in google_maps.py.

The rationale for this split is that I could not import the RenderDoc Python module from within Blender's Python context.
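The two-step split above can be sketched roughly as follows. This is an illustrative outline, not the plugin's actual code: the `extract_capture` helper and the `extractor` parameter are invented here, and in the real plugin the child process must be an interpreter that can import the renderdoc module, which Blender's own `sys.executable` would not necessarily be:

```python
import subprocess
import sys
import tempfile

def extract_capture(rdc_path, extractor="google_maps_rd.py"):
    """Step 1: run the extraction in a separate Python instance.

    The renderdoc module cannot be imported inside Blender's own
    interpreter, so the heavy lifting happens in a child process
    that writes intermediate files into a temporary directory.
    """
    out_dir = tempfile.mkdtemp(prefix="thecapture-")
    subprocess.run(
        [sys.executable, extractor, rdc_path, out_dir],
        check=True,  # raise CalledProcessError if extraction fails
    )
    return out_dir  # step 2 (google_maps.py) then loads meshes from here
```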

A few notes now:

  • This is overall a dirty process: creating so many files, not reusing them when loading the same capture twice, etc. Plus there are several substeps that I think could strongly benefit from optimization.
  • I don't know which part of the process is the most limiting one. It is not hard to benchmark, and that could introduce you to the code: just put some start_time = time.time() before some code and print(time.time() - start_time) after it to measure timings (don't forget to import time at the very beginning of each file in which you want to do this).
  • It makes your ideas a bit more complicated to implement, because one has to ensure that both steps follow the same strategy (say, "start only at the 3000th draw call"). Also, the temporary files are poorly named, so there would be a risk of mixing them up from one execution to the next if nothing is done about it. That being said, nothing truly hard, it just takes time. ;)
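The timing suggestion from the second bullet looks like this in practice (a minimal sketch; `expensive_step` is a placeholder for whichever import substep you want to measure):

```python
import time

def expensive_step(n=100_000):
    """Placeholder for one import substep, e.g. loading mesh files."""
    return sum(i * i for i in range(n))

start_time = time.time()
result = expensive_step()
elapsed = time.time() - start_time
print(f"expensive_step took {elapsed:.3f} s")
```

Wrapping each substep (extraction, mesh building, texture loading) this way quickly shows which one dominates the total import time.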

eliemichel avatar Apr 03 '20 23:04 eliemichel

Thank you for this in-depth explanation, which gives a good view of what actually happens when the capture key is pressed. I totally forgot that RenderDoc itself is probably not saving any specific data from the map like size, position, etc., just the pure 3D data that is on screen.

When I wrote down the ideas, I was at the beginning of the whole capture and import process. What I have figured out since then are three important factors that helped me a lot with the import process.

A) "Patience & Netflix". I never expected at the beginning that a 350 MB file would take an hour to import, until I saw that it was producing 6k meshes and textures at the same time. The good news is that it actually works like a charm, as long as we give the plugin the time to just import.

B) We can actually estimate "roughly" how long the import process takes and how many blocks may be produced based on file size alone. 60 MB is roughly 700 meshes, 120 MB roughly 1400, captured at the highest quality zoom level on Google Maps. A good workflow here is to first do a small capture of the region as a reference, then go bigger.

C) I wanted a bigger area with most of the detail in the middle, and thought it was a good idea to capture at two different zooms. That's a bad idea, since both scale and position between the two captures are completely different, and that's probably something hard to fix from the plugin side of things (?). In the end, I ended up with just "one" big terrain at high quality, which of course produced a much larger file but also gave a much better result.
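The rough scaling in point B is a simple rule of three (~700 meshes per 60 MB, taken from the numbers above; actual captures will vary):

```python
MESHES_PER_MB = 700 / 60  # empirical ratio from a 60 MB capture

def estimate_mesh_count(capture_size_mb):
    """Rough guess of how many meshes an rdc capture will produce."""
    return round(capture_size_mb * MESHES_PER_MB)

print(estimate_mesh_count(120))  # 1400, matching the observation above
print(estimate_mesh_count(191))  # the ~191 MB capture from this thread
```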

I hope this info may help others in terms of general usage and troubleshooting as well. Feel free to use it!

Manaland2020 avatar Apr 04 '20 08:04 Manaland2020

Quick idea! (Note: I am not sure how much control the plugin has or how much it could be extended with features, so beware of the following non-programmer Star Trek technobabble <3 ):

Could we cheat the import and positioning system based on file sizes?

Idea: If we have two captures with overlapping blocks, it may happen that captured textures or single meshes exist twice, sharing the same file size.

With that in mind, some processing steps could happen now, even if they take a long time:

  1. When importing a second capture into a scene where a capture already exists, the plugin could first look into the first capture's folder for duplicated textures and meshes (based on file size). If it finds one, it could use its coordinates as reference data, compare them to where the duplicate is, and calculate the offset between the two positions.

Example: Mesh "X" is at 0,0,0; the duplicated mesh would be at 5,-5,1. If we move the duplicate by -5,5,-1, we have it at 0,0,0.

  2. Then, the plugin could look for matching file sizes on texture/mesh files and skip everything on import that already exists. After that, all newly imported meshes could use the offset data (in the example, -5,5,-1 for all) to move them into the fitting position. In my little colorful wonderland-nut brain, I now see two captures that are lined up, without duplicated meshes overlapping.

I already feel that it would not work exactly like this, since it's probably hard to offset all meshes based on position data from existing meshes. It's just a quick thought on how the positions of two captures could be lined up.
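The two steps above can be sketched as follows, using the file size as the duplicate key and a single translation offset. All names here are hypothetical, and matching by size alone would likely need a stronger key in practice (e.g. a content hash), but it shows the geometry of the idea:

```python
def find_offset(existing, incoming):
    """Find the translation between two captures from one shared block.

    existing/incoming: dicts mapping file size -> (x, y, z) position.
    Returns the offset to apply to the incoming capture, or None if
    the captures share no block.
    """
    for size, new_pos in incoming.items():
        old_pos = existing.get(size)
        if old_pos is not None:
            return tuple(o - n for o, n in zip(old_pos, new_pos))
    return None

def merge_captures(existing, incoming):
    """Skip duplicate blocks and move new ones by the computed offset."""
    offset = find_offset(existing, incoming) or (0.0, 0.0, 0.0)
    merged = dict(existing)
    for size, pos in incoming.items():
        if size not in existing:  # step 2: skip blocks that already exist
            merged[size] = tuple(p + o for p, o in zip(pos, offset))
    return merged

# Mesh "X" from the example: at (0,0,0) in the first capture and at
# (5,-5,1) in the second, so the computed offset is (-5, 5, -1).
first = {1234: (0.0, 0.0, 0.0)}
second = {1234: (5.0, -5.0, 1.0), 5678: (6.0, -4.0, 1.0)}
print(merge_captures(first, second))
```

A single shared block fixes translation only; if the two captures also differ in scale or rotation (as with two zoom levels, per point C above), one matched pair would not be enough.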

Manaland2020 avatar Apr 04 '20 08:04 Manaland2020

I think it could work, if I eventually have some time. ^^

eliemichel avatar Apr 05 '20 17:04 eliemichel

(moved to https://github.com/eliemichel/MapsModelsImporter/issues/85 by @eliemichel)

tesfay996 avatar Sep 21 '20 09:09 tesfay996