.Net: Add multitargeting to .NET libraries
Adds net8.0 targets and updates various code to take advantage of newer APIs and to fix analyzer warnings. Fixes https://github.com/microsoft/semantic-kernel/issues/4308
What benefit does net8.0 targeting give us if we won't be able to use all the new APIs to maintain compatibility with net standard 2.0?
> What benefit does net8.0 targeting give us if we won't be able to use all the new APIs to maintain compatibility with net standard 2.0?
When running on .NET 8+, apps will use the net8.0 binaries rather than the netstandard2.0 binaries. That means the library can use newer APIs. So, for example, it can use SocketsHttpHandler and its finer-grained control when on .NET 8+. It can use overloads that take a CancellationToken that don't exist in netstandard in order to improve cancellation and responsiveness. It can use built-in collections like PriorityQueue rather than having to ship its own version. It can use the regex source generator. And so on. It also implicitly benefits from new APIs in places the compiler targets them, e.g. interpolated strings will compile with the more efficient implementation introduced in C# 10 / .NET 6. It further has deployment benefits, e.g. System.Text.Json doesn't need to be deployed as part of the app because it's built into .NET 8. Etc.
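As a rough sketch of the pattern being described (the helper name and specific settings below are illustrative, not taken from this PR): a library multitargets via `<TargetFrameworks>net8.0;netstandard2.0</TargetFrameworks>` in its .csproj, and then gates newer-API usage on preprocessor symbols so that the net8.0 binaries light up the richer APIs while the netstandard2.0 binaries keep the portable fallback:

```csharp
using System;
using System.Net.Http;

// Hypothetical factory, for illustration only: pick the best available
// HTTP handler for the target framework the assembly was compiled for.
internal static class HttpHandlerFactory
{
    public static HttpMessageHandler Create()
    {
#if NET8_0_OR_GREATER
        // Only compiled into the net8.0 binaries: SocketsHttpHandler offers
        // finer-grained control, such as connection pooling policy.
        return new SocketsHttpHandler
        {
            PooledConnectionLifetime = TimeSpan.FromMinutes(2),
        };
#else
        // netstandard2.0 fallback: HttpClientHandler is available everywhere.
        return new HttpClientHandler();
#endif
    }
}
```

Consumers on .NET 8+ automatically get the net8.0 code path at restore time; nothing in the calling code changes.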
Depends on https://github.com/microsoft/semantic-kernel/pull/5802
Depends on https://github.com/microsoft/semantic-kernel/pull/5819
Still need to figure out why code coverage is unhappy, but other than that, this should be ready for review @markwallace-microsoft
@dmytrostruk, I think you set up the code coverage checking? Can you make a recommendation for what to do here?
@stephentoub Yes, I can see some changes in the OpenAI connector, so it's worth running dotnet/code-coverage.ps1 locally to generate an HTML report and see potential places that can be covered with unit tests in that assembly. If you want, I can assist with that.
There aren't any meaningful changes. I think the problem is that the library is already razor-thin above the code coverage line (main is sitting at 81% branch coverage for the OpenAI connector, and at < 80% CI fails), so anything that perturbs it even slightly could cause CI to totally fail. There's nothing related to my changes that wasn't covered, so I went and added unrelated tests to boost the coverage back above 80%, but that's not really a sustainable position. I suggest someone take some time to bump the coverage number up above 90% so there's much more wiggle room.
Finally green. @markwallace-microsoft, please review when you get a chance.
> I think the problem is that the library is already razor-thin above the code coverage line (main is sitting at 81% branch coverage for the OpenAI connector, and at < 80% CI fails), so anything that perturbs it even slightly could cause CI to totally fail.
@stephentoub That's true. When I was improving code coverage, I spent some time hitting the 80% threshold by adding a lot of tests, and back then I couldn't afford even more time to push it to 90% or higher. On the other hand, I haven't noticed this problem recurring often, but maybe it's just me.
> There's nothing related to my changes that wasn't covered, so I went and added unrelated tests to boost the coverage back above 80%, but that's not really a sustainable position. I suggest someone take some time to bump the coverage number up above 90% so there's much more wiggle room.
Thank you! Yes, I agree, we should add more tests to give us some headroom before it breaks the CI pipeline.
> On the other hand, I haven't noticed this problem recurring often, but maybe it's just me.
I think it got worse for this particular library when all the various audio, image, etc. services were added, as well as the updated function-calling abstractions, as they don't have as good coverage in the tests.