ddb-importer
Cache paths already verified during a munch
I've filed this as a bug because I think it is unintentional, but it could be considered a feature/improvement request as well.
Describe the bug
When performing a munch for items, spells, or monsters, thousands of repeat createDirectory calls are made with the same path.
On a local filesystem this is not that impactful, but when calling forgeCreateDirectory and sending the requests to the Forge API, these thousands of unnecessary requests can negatively impact server performance, trigger rate limiting, and make the munch take longer, or even cause the browser to become unresponsive as the calls stack up and take increasingly long to respond.
A relatively simple fix for this could be to cache the paths already created and, if a path has already been created in this run of the munch, not call the new folder create again. This could also be specific to the forgeCreateDirectory method.
Something like this, for example:
async newFolder(path) {
  // this._newFolderRequests maps each path to its pending request promise
  if (this._newFolderRequests[path] === undefined) {
    this._newFolderRequests[path] = FilePicker.newFolder(path);
  }
  return this._newFolderRequests[path];
}
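Note that caching the promise rather than the awaited result also means concurrent callers for the same path share a single in-flight request instead of each issuing their own, which matters because the munch processes many items concurrently.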
To Reproduce
- Perform any munch; an item munch works better than a spell munch to illustrate the point.
- Use the browser dev tools network tab with a filter for new-folder, and see how many calls point to the same path.
Expected behavior
- Create new-folder the first time a new path is required, but if that path has already been created in this importer run, it is not necessary to call that endpoint again.
Screenshots
In my case it goes up to around 4,400 calls for item munches and takes a while. I have heard that monster munches take much longer (I confirmed that back when I was subscribed on Patreon, though I am not currently subscribed, so I cannot grab a screenshot of the monster munch).
The main problem is that the new-folder requests are all for the same few paths, usually variations on the base path ddb-images, which already exist and should not need to be created again.
Environment:
- Browser: any, but generally only an issue on Forge (or where a distributed filesystem or server is involved)
- Foundry Version [e.g. 10.291]
- 5e System Version [e.g. 2.1.5]
- Module Version [e.g. 3.4.18]
- Does this behaviour occur with other modules disabled: Yes
Additional context
The caching fix could potentially be done for browse
calls for specific paths during a munch and their results, to prevent repetitive browse
calls to the same path, though this may become memory intensive. It may improve munch times especially on The Forge
I already do substantial caching here, so this might just be from the forge uploader - I'll make an additional tweak which might be causing additional calls for Forge only, so if you can test in the next release and let me know.
Thanks! I'll definitely have a look and test. Just let me know which version it's included in.
The change is in version 3.4.19, thanks
Awesome, thanks @MrPrimate
Heya
I've had a look at this and can confirm that DirectoryPicker.createDirectory() gets hit in the items.map(async (item) => {...}) callback before CONFIG.DDBI.KNOWN.CHECKED_DIRS.add(directoryPath); from FileHelper.generateCurrentFiles() is reached.
This happens both locally and on Forge, but it is not a problem on a local filesystem because the Foundry FilePicker can handle the many folder creation calls elegantly enough. On a distributed filesystem (like on Forge, where the folders are created with an API call to the server) it can be problematic and can trigger rate limiting.
As mentioned above, this happens because the getDDBItemImages and getDDBEquipmentIcons functions use:
const itemMap = items.map(async (item) => {...});
This pattern appears in a couple of different places, and it is sensible to use, but the .map() function does not serially iterate through the contents of the items array, waiting for each to complete before moving on to the next. That means that for each of the items, we may reach createDirectory before CONFIG.DDBI.KNOWN.CHECKED_DIRS.add(directoryPath); is reached.
It becomes much more likely when createDirectory makes a network call.
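To illustrate, here is a simplified sketch of the pattern described above; getDirectoryPath is a hypothetical helper, createDirectory stands in for DirectoryPicker.createDirectory, and CHECKED_DIRS is assumed to be a Set:
items.map(async (item) => {
  const directoryPath = getDirectoryPath(item); // illustrative helper
  if (!CONFIG.DDBI.KNOWN.CHECKED_DIRS.has(directoryPath)) {
    // every callback can reach this line before any callback has completed
    // the add below, so each one fires its own folder creation call
    await createDirectory(directoryPath);
    CONFIG.DDBI.KNOWN.CHECKED_DIRS.add(directoryPath);
  }
  // ... continue processing the item's images ...
});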
One possible solution here could be to change the .map() function to a for...of loop, iterating through each of the items in series, for example:
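A minimal sketch of that change, where processItem is a hypothetical stand-in for the body of the original async callback:
const itemMap = [];
for (const item of items) {
  // each iteration runs to completion, so any createDirectory call and the
  // CHECKED_DIRS.add that follows it finish before the next item starts
  itemMap.push(await processItem(item));
}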
It might also be possible to do more than one loop through the items/entities: a first loop collecting common paths and ensuring that they exist, and then a second loop creating the items themselves, as sketched below.
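A rough sketch of that two-pass shape, using hypothetical directoryPathFor, ensureDirectory, and processItem helpers:
// first pass: create each distinct directory exactly once
const uniquePaths = new Set(items.map((item) => directoryPathFor(item)));
for (const path of uniquePaths) {
  await ensureDirectory(path);
}
// second pass: the directories now exist, so concurrent processing is safe
const itemMap = await Promise.all(items.map((item) => processItem(item)));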
As some extra information, it does appear to happen only when using deep file paths and downloading/storing the DDB images.
Okay, I'm knee-deep in some v11 work right now, and I will revisit after that, as I need to make sure any changes don't impact performance in environments other than the Forge.
Okay, good luck! And thanks for checking out this issue :)