RawDataAccessBencher
EF Core - DbContext should have EnableThreadSafetyChecks(false)
As described in the EF Core Advanced Performance Topics documentation, the DbContext could/should also be configured with EnableThreadSafetyChecks(false).
public EntityFrameworkCoreNoChangeTrackingBencher(string connectionString)
    : base(e => e.SalesOrderId, usesChangeTracking: false, usesCaching: false)
{
    var options = new DbContextOptionsBuilder<AWDataContext>()
        .UseSqlServer(connectionString)
        .EnableThreadSafetyChecks(false)   // skip EF Core's concurrent-usage detection (EF Core 6+)
        .Options;
    pooledDbContextFactory = new PooledDbContextFactory<AWDataContext>(options);
}
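For context, the pooled factory hands each operation its own short-lived context, which is what makes disabling the cross-thread check safe in this pattern. A minimal sketch of such a fetch on the same bencher class (the method name and the SalesOrderHeader entity type are assumptions for illustration, not the bencher's actual code; pooledDbContextFactory is the field from the snippet above):

// Hypothetical fetch on the same bencher class: every call gets its own
// context from the pool, so no single DbContext instance is shared across
// threads, which is the precondition for turning the safety checks off.
private List<SalesOrderHeader> FetchAllOrders()
{
    using var context = pooledDbContextFactory.CreateDbContext();
    return context.Set<SalesOrderHeader>()   // entity type assumed for illustration
                  .AsNoTracking()            // matches the no-change-tracking bencher
                  .ToList();
}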
I made a copy of the EntityFrameworkCoreNoChangeTrackingBencher and created a new test where the only change is EnableThreadSafetyChecks(false), shown here as NTSC. Running it on my dev computer gives this result:
Results per framework. Values are given as: 'mean (standard deviation)'
Non-change tracking fetches, set fetches (10 runs), no caching
Entity Framework Core NTSC v6.0.7.0 (v6.0.722.31501) : 70,29ms (0,67ms) Enum: 0,93ms (0,10ms)
Entity Framework Core v6.0.7.0 (v6.0.722.31501)      : 80,37ms (6,19ms) Enum: 1,02ms (0,09ms)
Memory usage, per iteration
Entity Framework Core NTSC v6.0.7.0 (v6.0.722.31501) : 16 673 KB (17 073 744 bytes)
Entity Framework Core v6.0.7.0 (v6.0.722.31501)      : 16 673 KB (17 073 744 bytes)
Non-change tracking individual fetches (100 elements, 10 runs), no caching
Entity Framework Core NTSC v6.0.7.0 (v6.0.722.31501) : 0,16ms (0,00ms) per individual fetch
Entity Framework Core v6.0.7.0 (v6.0.722.31501)      : 0,19ms (0,01ms) per individual fetch
Memory usage, per individual element
Entity Framework Core NTSC v6.0.7.0 (v6.0.722.31501) : 17 KB (17 904 bytes)
Entity Framework Core v6.0.7.0 (v6.0.722.31501)      : 17 KB (17 904 bytes)
That is about 15% lower response time.
Must be something I'm doing wrong. When I run all the tests there is a negligible difference between running with EnableThreadSafetyChecks(false) and without it. Closing the issue.
After reviewing the code I found that I had somehow set EnableThreadSafetyChecks(false) in both tests. When comparing against the original test it is still about 15% faster when the thread safety checks are disabled, so I'm reopening the issue.
Is disabling these checks recommended for production use? Otherwise it's a bit of a weird thing to enable, I think: people won't be doing that in practice if disabling it in production isn't recommended, so it would give a wrong impression.
From Julie Lerman: https://www.youtube.com/watch?v=bXZCdoKpjts&t=1850s
See: https://github.com/dotnet/efcore/issues/23611#issuecomment-778191277 and https://github.com/dotnet/efcore/pull/24125
And from the summary in the EF Core Advanced Performance Topics documentation (a combined sketch of these recommendations follows the quote):
Reducing runtime overhead
However, for high-performance, low-latency applications where every bit of perf is important, the following recommendations can be used to reduce EF Core overhead to a minimum:
- Turn on DbContext pooling; our benchmarks show that this feature can have a decisive impact on high-perf, low-latency applications.
- Use precompiled queries for hot queries.
- Consider disabling thread safety checks by setting EnableThreadSafetyChecks to false in your context configuration (EF Core 6 and above). Using the same DbContext instance concurrently from different threads isn't supported. EF Core has a safety feature which detects this programming bug in many cases (but not all), and immediately throws an informative exception. However, this safety feature adds some runtime overhead.
WARNING: Only disable thread safety checks after thoroughly testing that your application doesn't contain such concurrency bugs.
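Taken together, those three recommendations would look roughly like the sketch below when applied to the AWDataContext from the snippet above. This is a minimal illustration, not the bencher's actual code: the wrapper class name, the FetchOrder method and the SalesOrderHeader entity type are assumptions; only AWDataContext, SalesOrderId and PooledDbContextFactory come from the code earlier in this thread.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;

public sealed class HighPerfOrderReader
{
    private readonly PooledDbContextFactory<AWDataContext> factory;

    // Precompiled query for a hot path: the LINQ expression is translated once
    // and reused on every call instead of being recompiled per query.
    private static readonly Func<AWDataContext, int, SalesOrderHeader> fetchById =
        EF.CompileQuery((AWDataContext ctx, int id) =>
            ctx.Set<SalesOrderHeader>()
               .AsNoTracking()
               .FirstOrDefault(o => o.SalesOrderId == id));

    public HighPerfOrderReader(string connectionString)
    {
        var options = new DbContextOptionsBuilder<AWDataContext>()
            .UseSqlServer(connectionString)
            // Only safe after verifying that no DbContext instance is ever
            // used from more than one thread at a time.
            .EnableThreadSafetyChecks(false)
            .Options;

        // DbContext pooling: contexts are reset and reused instead of rebuilt.
        factory = new PooledDbContextFactory<AWDataContext>(options);
    }

    public SalesOrderHeader FetchOrder(int salesOrderId)
    {
        using var context = factory.CreateDbContext();
        return fetchById(context, salesOrderId);
    }
}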
Still feels like cutting corners to win in a benchmark, similar to how some ORMs cut corners in null checks to win in benchmarks (at the expense of running into crashes with fetches from views).
@FransBouma I personally view using the default configuration as a positive, as it gives you a more accurate picture of what to expect on a project you haven't done any optimizations for, or on other people's projects, while still not doing anything obviously stupid. Posting any kind of benchmark always comes with backlash that usually ignores the context in which it was written and expects scientific-publication levels of correctness. A benchmark from someone who is obviously aware of the possible optimizations and techniques for improving the times of the framework he has the most experience in, and who intentionally doesn't apply them, is what makes this more valuable in my eyes. More valuable than trying to make the compiler emit the most optimized IL from code you would not write on the majority of projects. Usually you need to balance code quality and productivity, and with experience and time your effortless day-to-day code starts to improve towards performance (if you care about it). Shaving 10ms off the materialization of a sales report run once a month is nice, but I think your time could be better spent elsewhere.
Merged in PR #64
@jonnybee So now some frameworks are optimized and some are not; it would probably be worth mentioning this somewhere so there is more context. An even better solution would be to have two versions of the runs, default and optimized. Then people could still submit performance-improving PRs without having to decide what is cutting corners and what is not.
No, that's not the case. All frameworks are optimized to the best of their abilities. However, some optimizations aren't practical for everyday use (like, to 'win' in this benchmark you could just re-use the open connection, which would speed up the results, but that's not a realistic scenario so it's not allowed).