feat(server): library refresh go brrr
This PR significantly improves library scanning performance. Wherever suitable, work is now done in batches, and many looped database interactions have been replaced with single SQL queries.
User testimonial: "@etnoy what on earth have you done. I tried your PR and it finished the scan for 1M assets in 37 seconds, down from 728s on main. It takes 188s just to finish queuing on main." -- @mertalev
Changes made
- When crawling a library: don't call stat() on files during import; instead, set a placeholder date (9999-12-31) that metadata extraction will later overwrite. All dates after year 9000 are ignored in the timeline.
- When importing new library files: crawl 10k files per batch, then insert each batch with a single SQL query instead of adding files individually (see the sketch below).
- When checking asset filenames against import paths and exclusion patterns: do this directly in SQL, which is much faster.
- When a library is deleted: hide all its assets first. Otherwise, they linger until the asset deletion job catches up, which can take days for libraries with >1M assets.
Plus several minor cleanups and performance enhancements.
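For illustration, the batched import described above could take roughly the shape below. This is a hypothetical sketch, not Immich's actual code: `importNewAssets`, the `AssetBulkRepository` interface, and the field names are assumptions.

```typescript
// Minimal repository surface assumed by this sketch (hypothetical).
interface AssetBulkRepository {
  createAll(assets: Array<Record<string, unknown>>): Promise<void>;
}

const BATCH_SIZE = 10_000;
// Dates after year 9000 are ignored by the timeline, so this placeholder
// stays invisible until metadata extraction writes the real dates.
const PLACEHOLDER_DATE = new Date('9999-12-31T00:00:00.000Z');

async function importNewAssets(
  repo: AssetBulkRepository,
  paths: string[],
  ownerId: string,
  libraryId: string,
): Promise<void> {
  for (let i = 0; i < paths.length; i += BATCH_SIZE) {
    const batch = paths.slice(i, i + BATCH_SIZE).map((originalPath) => ({
      ownerId,
      libraryId,
      originalPath,
      // No stat() call: placeholder dates, overwritten by metadata extraction.
      fileCreatedAt: PLACEHOLDER_DATE,
      fileModifiedAt: PLACEHOLDER_DATE,
    }));
    // One multi-row INSERT per batch instead of one query per file.
    await repo.createAll(batch);
  }
}
```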
Library scanning is now at least an order of magnitude faster.
Benchmark 1: A scan of a 22k-item library where nothing has changed since the last scan used to take 1m22s; it now finishes in under 10 seconds, an improvement of 87 percent!
Benchmark 2: A clean import of a 19k-item library takes 1m40s on main and 7 seconds with this PR. NOTE: this benchmark covers only the library service scan and does not include metadata extraction. Also, some fs calls have been moved from the library service to the metadata service, although this should have only a minor impact on overall scan performance.
Benchmark 3: Importing a library with >5M assets.
- Time to 1M imported (without metadata extraction): 6m50s.
No need to compare to main, you know it's fast!
Benchmark 4: Importing a library of 527,041 files took 1m58s (without metadata extraction) with this PR. No need to compare to main, you know it's fast!
Bonus:
- Greatly improved log messages related to library scans.
This scan imports all new files: (screenshot of log output)
This is an "idle scan", where a refresh finds no changes: (screenshot of log output)
- More e2e tests for handling offline files that come back online, leading to one major bug fix
Future work:
- Crawl for XMP sidecars instead of queuing a sidecar discovery for each asset
Final note: this PR allowed me to hit a milestone of 10M assets in a single Immich instance, likely a world first. It does require `--max-old-space-size=8096`, but that's to be expected anyway.
The update to `fileCreatedAt`, `fileModifiedAt` and `originalFileName` is unnecessary and can be handled in metadata extraction, since that will be queued anyway. This makes the batched update for `isOffline` and `deletedAt` simpler, since there'll be no values that are unique to each asset.
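For illustration, once every asset in the batch shares identical values, the update collapses to a single query along these lines (a hypothetical sketch; the table and column names are assumptions):

```typescript
import { DataSource } from 'typeorm';

// Hypothetical sketch: with no per-asset values left, a single UPDATE
// covers the whole batch instead of one query per asset.
async function markOffline(dataSource: DataSource, assetIds: string[]): Promise<void> {
  await dataSource.query(
    `UPDATE "assets"
        SET "isOffline" = true,
            "deletedAt" = now()
      WHERE "id" = ANY($1::uuid[])`,
    [assetIds],
  );
}
```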
Thanks for your comments @mertalev! I'll first attempt to do the import path and exclusion pattern checks in SQL (sketched below) and then move on to your suggestions.
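For illustration, that SQL-side check could take roughly this shape. This is a hypothetical query, not necessarily how the PR implements it, and it assumes import paths and exclusion globs have already been translated into SQL LIKE patterns:

```typescript
import { DataSource } from 'typeorm';

// Hypothetical sketch: find assets whose paths fall under no import path,
// or that match an exclusion pattern, in one query instead of looping
// over every asset in JavaScript. Patterns are assumed to be pre-translated
// from globs into LIKE patterns (e.g. '/photos/**' -> '/photos/%').
async function findPathViolations(
  dataSource: DataSource,
  libraryId: string,
  importPathPatterns: string[], // e.g. ['/photos/%', '/camera/%']
  exclusionPatterns: string[], // e.g. ['%/@eaDir/%', '%.tmp']
): Promise<{ id: string }[]> {
  return dataSource.query(
    `SELECT "id" FROM "assets"
      WHERE "libraryId" = $1
        AND ("originalPath" NOT LIKE ALL($2::text[])  -- under no import path
             OR "originalPath" LIKE ANY($3::text[]))  -- or explicitly excluded
    `,
    [libraryId, importPathPatterns, exclusionPatterns],
  );
}
```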
Never thought of that; I've implemented your suggestion. I'm also considering changing the initial import code to ignore file mtime, which would let us avoid any file system calls except for the crawl. Metadata extraction will have to do the heavy lifting instead.
Would that mean you queue them for metadata extraction even if they're unchanged? You can test it but I think it'd be more overhead than the stat calls.
Edit: also, if you do this with the source set to `upload`, it would definitely be worse because it would queue a bunch of other things after metadata extraction.
I was referring to new imports, files that are new to Immich. I hoped to improve ingest performance by removing the stat call. After testing, there are two issues:
- `assetRepository.create` requires mtime, which we can only get from `stat()`. We could work around that by setting it to `new Date()`, but ideally it should be `undefined`
- We still check for the existence of a sidecar, and this complicates things
If we can mitigate the two issues above, I can rewrite the library import feature and do that in batches as well!
I don't see why `fileModifiedAt` needs a non-null constraint in the DB. It might just be an oversight that didn't matter because it didn't affect our usage. I think you can change the asset entity and generate a migration to remove that constraint.
For sidecar files, maybe you could add `.xmp` to the glob filter and enable the option to make the files come in sorted order? That way you could make sure they're in the same batch.
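For illustration, the constraint removal suggested above could be a small TypeORM migration along these lines (a hypothetical sketch; the table name, column name, and migration timestamp are assumptions):

```typescript
import { MigrationInterface, QueryRunner } from 'typeorm';

// Hypothetical migration making fileModifiedAt nullable, so newly crawled
// assets can be inserted without a stat() call. Names are illustrative.
export class RemoveFileModifiedAtNotNull1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "assets" ALTER COLUMN "fileModifiedAt" DROP NOT NULL`,
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "assets" ALTER COLUMN "fileModifiedAt" SET NOT NULL`,
    );
  }
}
```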
I might just put `new Date()` in for the moment to keep the PR somewhat constrained.
Regarding sidecars, I have thought about that; the problem right now is that we're batching the crawled files in batches of 10k, and it might be hard to get that working properly. Maybe I'll just queue a sidecar discovery for every imported asset for now.
I think queueing sidecar discovery would introduce a race condition where it could run before, during or after metadata extraction. Since the refresh logic is already so much better, maybe leave the import for another PR so we can think about it more.
Would this matter? Won't sidecar discovery re-queue metadata extraction afterwards if it does discover a sidecar file? I don't recall 100% whether that's the behavior.
I haven't looked closely at the sidecar discovery job either, but I think it's an issue either way since it's possible for jobs dependent on the metadata to behave differently. For example, a sidecar file that changes the orientation of the image won't be respected during thumbnail generation.
Your call on merging this one, @mertalev
If it's ready for final review, I'd like to take a look at it too before it's merged; I can take a look tomorrow morning.
Thanks, I'd appreciate it!
@etnoy I made a few changes, let me know if there's anything that seems off.
I had a look and it's a big improvement. Thanks for helping me clean this up!
Didn't get to this today, sorry. It's on my list for tomorrow 😅
@zackpollard good to go now that the release is done?