
Lingering performance issues after dealing with a very complex model

Open okhaliavka opened this issue 2 years ago • 7 comments

Hi!

First of all, thanks for all the hard work put into this awesome library.

I'm in the process of migrating a service that relies heavily on pydantic from v1 to v2. For context, the service deals with document content, using complex, deeply and recursively nested models with heavy use of tagged unions. The models represent various "building blocks" like block, text, image, table, etc. Basically, imagine parsing an HTML document with pydantic and you get a feel for what these models look like.
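To make the shape of these models concrete, here's a minimal sketch of a recursively nested model using pydantic v2 discriminated unions. The `Block`/`Text`/`Image` names and the `kind` discriminator field are illustrative assumptions, not the service's actual models:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field

# "Node" is a tagged (discriminated) union over the block types,
# dispatched on the "kind" field.
Node = Annotated[Union["Block", "Text", "Image"], Field(discriminator="kind")]

class Text(BaseModel):
    kind: Literal["text"] = "text"
    content: str

class Image(BaseModel):
    kind: Literal["image"] = "image"
    src: str

class Block(BaseModel):
    kind: Literal["block"] = "block"
    children: list[Node] = []

Block.model_rebuild()  # resolve the forward references inside Node

doc = Block.model_validate({
    "kind": "block",
    "children": [
        {"kind": "text", "content": "hello"},
        {"kind": "block", "children": [{"kind": "image", "src": "a.png"}]},
    ],
})
```

Real-world documents built this way get arbitrarily wide (many `children`) and deep (blocks inside blocks), which is what stresses validation and serialization.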

I was particularly excited about the performance improvements promised by Pydantic v2. However, after completing the migration, our benchmarks showed unexpected results.

There's a significant performance degradation (compared to v1) and notable variance in benchmarks that serialize and deserialize synthetic data using various models. After looking closer, I noticed that this issue only occurs when running multiple benchmarks sequentially. When executed individually, the performance is much better and more consistent. I also noticed that it depends on the order of benchmarks. Eventually, I came to the conclusion that degradation starts on the first fairly complex benchmark and lingers, affecting subsequent simple benchmarks. I managed to boil down the repro to this (full gist here):

```python
benchmark_simple_model()
benchmark_complex_model()  # with wide and deep synthetic data
benchmark_simple_model()
```

The first benchmark performs well, outpacing v1 with minimal variance. However, the second and third benchmarks exhibit much worse performance compared to v1, with significant variance. What's most interesting is that the third benchmark's performance is more than an order of magnitude worse than the first one, even though they exercise the same simple model with the same small/simple data.

Removing the complex_model benchmark removes this issue - both simple_model benchmarks perform consistently and well.
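For reference, the order-dependence described above can be measured with a small harness like the following sketch. The `bench` helper and the `benchmark_simple_model` stand-in are hypothetical, not the gist's actual code:

```python
import time

def bench(fn, *, warmup=2, reps=20):
    """Return the best-of-reps wall-clock time of fn() in seconds."""
    for _ in range(warmup):
        fn()
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

def benchmark_simple_model():
    # Hypothetical stand-in for validating a small pydantic model.
    sum(range(10_000))

before = bench(benchmark_simple_model)
# ... run benchmark_complex_model() here ...
after = bench(benchmark_simple_model)
ratio = after / before  # a ratio much greater than 1 signals a lingering slowdown
```

Taking the best of several repetitions (rather than the mean) helps separate a persistent regression from ordinary run-to-run noise.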

It seems that processing large and complex data with a highly intricate model leaves a lingering negative performance impact.

I attempted to profile pydantic-core during these benchmarks. The flamegraph revealed that about 80% of the time in the third benchmark is spent in GILPool::drop. In comparison, the first benchmark spends only 3% of its time in GILPool::drop.


All in all, it feels like the second benchmark leaves behind obscene amounts of garbage, somehow jamming up future deallocations. I also feel like this issue might be related to https://github.com/PyO3/pyo3/issues/1056, a feeling reinforced by https://github.com/pydantic/pydantic-core/pull/997#issuecomment-1742776731.

At this point, I'm unsure how to proceed and would appreciate your help. Thanks in advance!

okhaliavka avatar Oct 16 '23 19:10 okhaliavka

FWIW, I tried running the same benchmarks using the pydantic version that runs under mimalloc (pre-#900). The issue is still present but not as severe. Under mimalloc, the third benchmark is only 1.5 times slower than the first one; under the default allocator, it's ~25 times slower.

I'm not sure what exactly this evidence suggests, but thought I'd share it nonetheless.

okhaliavka avatar Oct 16 '23 19:10 okhaliavka

You're probably right that the issue is related to https://github.com/PyO3/pyo3/issues/1056. One thing we've observed is that after the GILPool hits a certain threshold, we get a permanent performance regression, which is probably related to CPU cache sizes; see https://github.com/PyO3/pyo3/issues/3299.
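The "grows but never shrinks" behavior can be illustrated with a toy analogy in Python. This is not PyO3's actual implementation, just a sketch of a pool whose backing storage is permanently enlarged by one burst of work:

```python
# Toy analogy (not PyO3's real code): a pool whose backing storage grows
# to fit a burst of objects but never shrinks, so every later "drop" pass
# still touches the enlarged storage.
class Pool:
    def __init__(self):
        self._slots = []   # backing storage; only ever grows
        self._len = 0      # number of live entries

    def push(self, obj):
        if self._len < len(self._slots):
            self._slots[self._len] = obj   # reuse an existing slot
        else:
            self._slots.append(obj)        # grow capacity permanently
        self._len += 1

    def drain(self):
        # the "drop" pass: clear live entries but keep the capacity
        for i in range(self._len):
            self._slots[i] = None
        self._len = 0

    @property
    def capacity(self):
        return len(self._slots)

pool = Pool()
for _ in range(100_000):   # the "complex benchmark": a huge burst
    pool.push(object())
pool.drain()
# pool.capacity is still 100_000, even though the pool is empty
```

Once the backing storage outgrows the CPU caches, even small subsequent workloads pay for walking it, which matches the lingering slowdown in the third benchmark.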

Are you able to share any "minimal" reproductions of this? It would be extremely helpful for future optimization work. (Also, we're actively working on ripping out the GIL pool upstream!)

davidhewitt avatar Oct 16 '23 19:10 davidhewitt

@davidhewitt probably not the most minimal, but here's the gist I ended up with.

> Also, we're actively working on ripping out the GIL pool upstream!

Nice! Do you have any rough estimates on how soon that work will land and make its way to pydantic?

okhaliavka avatar Oct 16 '23 21:10 okhaliavka

It's hard to commit to a timeline for volunteer contributions; however, I'd love to see this land by the end of the year.

davidhewitt avatar Oct 17 '23 08:10 davidhewitt

Also, thanks for sharing the gist! I get the same reproduction on an M1 Mac, but on my desktop PC there is no slowdown (which again supports the theory that this is related to cache sizes or other platform-specific pains).

davidhewitt avatar Oct 17 '23 17:10 davidhewitt

@davidhewitt what about using `unsafe { py.new_pool() };` at the start of every validation?

samuelcolvin avatar Oct 17 '23 18:10 samuelcolvin

> @davidhewitt what about using `unsafe { py.new_pool() };` at the start of every validation?

That won't help; as far as I know, the problem is that the pool has been permanently expanded. PyO3 could add a new global API to shrink away the pool's empty capacity.

davidhewitt avatar Oct 17 '23 19:10 davidhewitt