
feat(server): library refresh go brrr #14456

Open · wants to merge 26 commits into main
Conversation

@etnoy (Contributor) commented Dec 2, 2024

This PR significantly improves library scanning performance. Wherever suitable, jobs are processed in batches, and many looped database interactions are replaced with single SQL queries.

User testimonials
"@etnoy what on earth have you done. I tried your PR and it finished the scan for 1M assets in 37 seconds down from 728s on main. It takes 188s just to finish queuing on main" -- @mertalev

Changes made

  • When crawling a library: don't call stat() on files during import; instead, set a placeholder date (9999-12-31) that metadata extraction later overwrites. All dates after year 9000 are ignored in the timeline
  • When importing new library files: crawl 10k files per batch, then insert each batch with a single SQL query instead of adding files individually
  • When checking asset filenames against import paths and exclusion patterns: do this directly in SQL, which is much faster
  • When a library is deleted, hide all its assets first. Otherwise, they'll linger until the asset deletion job catches up, which can take days for libraries with >1M assets
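The batched-import idea above can be sketched roughly as follows. This is a minimal illustration, not Immich's actual implementation: the `chunk` helper, the `importPaths` function, and the `bulkInsert` callback are made-up names standing in for the real repository code; only the 10k batch size and the 9999-12-31 placeholder date come from the description.

```typescript
// Sketch of batched library import (illustrative, not the real Immich code).
// Crawled paths are split into fixed-size batches, and each batch becomes a
// single bulk insert instead of one insert per file.

// Placeholder date: overwritten later by metadata extraction; the timeline
// ignores anything after year 9000, so these assets stay hidden until then.
const PLACEHOLDER_DATE = new Date('9999-12-31');
const BATCH_SIZE = 10_000;

function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Hypothetical bulk-insert callback standing in for one multi-row SQL INSERT.
async function importPaths(
  paths: string[],
  bulkInsert: (rows: { path: string; fileModifiedAt: Date }[]) => Promise<void>,
): Promise<number> {
  let imported = 0;
  for (const batch of chunk(paths, BATCH_SIZE)) {
    // No stat() per file: every row gets the placeholder date.
    await bulkInsert(batch.map((path) => ({ path, fileModifiedAt: PLACEHOLDER_DATE })));
    imported += batch.length;
  }
  return imported;
}
```

The win comes from replacing per-file round trips (one stat() plus one INSERT each) with one filesystem crawl and one INSERT per 10k rows.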

Plus several minor cleanups and performance enhancements.

The performance improvements are at least an order of magnitude in library scanning.

Benchmark 1
A library scan with 22k items where nothing has changed since the last scan used to take 1m22s; now it's below 10 seconds, an improvement of 87 percent!

Benchmark 2
A clean library import with 19k items takes 1m40s on main and 7 seconds in this PR.
NOTE: this benchmark covers only the library service scan and does not include metadata extraction. Also, some fs calls have been moved from the library service to the metadata service, though this should have only a minor impact on overall scan performance

Benchmark 3
Importing a library with >5M assets.

  • Time to 1M imported (without metadata extraction): 6m50s.

No need to compare to main, you know it's fast!

Benchmark 4
Importing a library of 527,041 files took 1m58s (without metadata extraction) in this PR.
No need to compare to main, you know it's fast!

Bonus:

  • Greatly improved log messages related to library scans.
    This scan imports all new files:
    (screenshot)

This is an "idle scan", where a refresh finds no changes:
(screenshot)

  • More e2e tests for handling offline files that come back online, leading to one major bug fix

Future work:

  • Crawl for XMP sidecars instead of queuing a sidecar discovery for each asset

Final note:
This PR allowed me to hit a milestone of 10M assets in a single Immich instance, likely a world-first. This does require --max-old-space-size=8096, but that's to be expected at this scale

(screenshot)

@etnoy etnoy force-pushed the feat/inline-offline-check branch from 80aa615 to 8ecde3b Compare December 2, 2024 21:46
@mertalev (Contributor) left a comment


Nice start! I think there are still a lot of untapped potential improvements here.

Resolved review threads (outdated): server/src/services/library.service.ts (×2)
@mertalev (Contributor) commented Dec 4, 2024

The update to fileCreatedAt, fileModifiedAt and originalFileName is unnecessary and can be handled in metadata extraction since this will be queued anyway. This makes the batched update for isOffline and deletedAt simpler since there'll be no values that are unique to each asset.

@etnoy (Author) commented Dec 8, 2024

Thanks for your comments, @mertalev! I'll first attempt to do the import path and exclusion pattern checks in SQL and then move on to your suggestions

@etnoy etnoy force-pushed the feat/inline-offline-check branch 2 times, most recently from d394654 to 8b2a48c Compare December 9, 2024 21:34
@etnoy etnoy force-pushed the feat/inline-offline-check branch 3 times, most recently from 6d69307 to c26f6aa Compare December 10, 2024 16:41
@etnoy etnoy force-pushed the feat/inline-offline-check branch from c26f6aa to a3be620 Compare December 10, 2024 20:39
@etnoy etnoy changed the title feat(server): run all offline checks in a single job feat(server): library refresh go brrr Dec 10, 2024
@etnoy etnoy force-pushed the feat/inline-offline-check branch 5 times, most recently from 775b817 to 69b273d Compare December 12, 2024 20:59
@etnoy (Author) commented Dec 12, 2024

> The update to fileCreatedAt, fileModifiedAt and originalFileName is unnecessary and can be handled in metadata extraction since this will be queued anyway. This makes the batched update for isOffline and deletedAt simpler since there'll be no values that are unique to each asset.

Never thought of that; I've implemented your suggestion. I'm also considering changing the initial import code to ignore file mtime, which would let us avoid all file system calls except for the crawl itself. Metadata extraction will have to do the heavy lifting instead

@mertalev (Contributor) commented Dec 12, 2024

>> The update to fileCreatedAt, fileModifiedAt and originalFileName is unnecessary and can be handled in metadata extraction since this will be queued anyway. This makes the batched update for isOffline and deletedAt simpler since there'll be no values that are unique to each asset.

> Never thought of that, I've implemented your suggestion. I'm also considering changing the initial import code to ignore file mtime, this allows us to not do any file system calls except for the crawl. Metadata extraction will have to do the heavy lifting instead

Would that mean you queue them for metadata extraction even if they're unchanged? You can test it but I think it'd be more overhead than the stat calls.

Edit: also if you do this with the source set to upload, it would definitely be worse because it would queue a bunch of other things after metadata extraction.

@etnoy (Author) commented Dec 12, 2024

>>> The update to fileCreatedAt, fileModifiedAt and originalFileName is unnecessary and can be handled in metadata extraction since this will be queued anyway. This makes the batched update for isOffline and deletedAt simpler since there'll be no values that are unique to each asset.

>> Never thought of that, I've implemented your suggestion. I'm also considering changing the initial import code to ignore file mtime, this allows us to not do any file system calls except for the crawl. Metadata extraction will have to do the heavy lifting instead

> Would that mean you queue them for metadata extraction even if they're unchanged? You can test it but I think it'd be more overhead than the stat calls.

> Edit: also if you do this with the source set to upload, it would definitely be worse because it would queue a bunch of other things after metadata extraction.

I was referring to new imports, files that are new to Immich. I hoped to improve ingest performance by removing the stat call. After testing, there are two issues:

  • assetRepository.create requires mtime, which we can only get from stat. We could work around that by setting it to new Date(), but ideally it should be undefined
  • We still check for the existence of a sidecar, and this complicates things

If we can mitigate the two issues above, I can rewrite the library import feature and do that in batches as well!

@mertalev (Contributor) commented Dec 12, 2024

I don't see why fileModifiedAt needs a non-null constraint in the DB. Might just be an oversight that didn't matter because it didn't affect our usage. I think you can change the asset entity and generate a migration to remove that constraint.

For sidecar files, maybe you could add .xmp to the glob filter and enable the option to make the files come in sorted order? That way you could make sure they're in the same batch.
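The sorted-crawl idea could look roughly like this. A hedged sketch only: the `pairSidecars` helper is invented for illustration, and it assumes a sidecar's path is the asset's path plus a `.xmp` suffix, so in lexicographic order a sidecar immediately follows its asset.

```typescript
// Sketch of pairing XMP sidecars with their assets in a sorted crawl
// (illustrative; the helper name and the sidecar naming rule are assumptions).
// With sorted output, "IMG_001.jpg" is immediately followed by
// "IMG_001.jpg.xmp", so a single pass can pair them inside one batch.

interface CrawledAsset {
  path: string;
  sidecarPath?: string;
}

function pairSidecars(sortedPaths: string[]): CrawledAsset[] {
  const assets: CrawledAsset[] = [];
  for (const path of sortedPaths) {
    const last = assets[assets.length - 1];
    if (last && path.toLowerCase() === `${last.path.toLowerCase()}.xmp`) {
      last.sidecarPath = path; // sidecar directly follows its asset when sorted
    } else if (!path.toLowerCase().endsWith('.xmp')) {
      assets.push({ path });
    }
    // a stray .xmp with no preceding asset is ignored in this sketch
  }
  return assets;
}
```

The batch-boundary concern raised later in the thread still applies: a sidecar can land at the start of the next 10k batch, which a small overlap between consecutive batches could handle.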

@etnoy (Author) commented Dec 12, 2024

> I don't see why fileModifiedAt needs a non-null constraint in the DB. Might just be an oversight that didn't matter because it didn't affect our usage. I think you can change the asset entity and generate a migration to remove that constraint.

> For sidecar files, maybe you could add .xmp to the glob filter and enable the option to make the files come in sorted order? That way you could make sure they're in the same batch.

I might just put new Date() in for the moment to keep the PR somewhat constrained.

Regarding sidecars, I have thought about that; the problem right now is that we're batching the crawled files in batches of 10k, and it might be hard to get that working reliably. Maybe I'll just queue a sidecar discovery for every imported asset for now

@etnoy etnoy marked this pull request as ready for review January 14, 2025 21:43
@etnoy etnoy requested a review from danieldietzler as a code owner January 14, 2025 21:43
@etnoy etnoy requested a review from mertalev January 14, 2025 21:44
@etnoy etnoy force-pushed the feat/inline-offline-check branch from d876ee9 to 4d1dd39 Compare January 14, 2025 21:45
@etnoy etnoy force-pushed the feat/inline-offline-check branch from 4d1dd39 to 49eaafa Compare January 14, 2025 21:53
@etnoy etnoy force-pushed the feat/inline-offline-check branch from c49b406 to 1155de6 Compare January 15, 2025 21:28
@etnoy etnoy force-pushed the feat/inline-offline-check branch from 1155de6 to 9d8a9d9 Compare January 15, 2025 21:50
@etnoy etnoy force-pushed the feat/inline-offline-check branch from e266307 to 0669169 Compare January 16, 2025 22:56
Resolved review thread: server/src/repositories/asset.repository.ts
```typescript
async getLibraryAssetCount(options: AssetSearchOptions = {}): Promise<number | undefined> {
  const { count } = await this.db
    .selectFrom('assets')
    .select(sql`COUNT(*)`.as('count'))
```

(Member review comment) You can use eb.fn.count here.

Resolved review threads: server/src/services/library.service.ts (×3, outdated), server/src/services/metadata.service.ts
@etnoy etnoy force-pushed the feat/inline-offline-check branch 12 times, most recently from 6f7b903 to a4421dd Compare January 21, 2025 22:08
@etnoy etnoy force-pushed the feat/inline-offline-check branch from a4421dd to 0830e9d Compare January 21, 2025 22:36