[Rant] All the "wheel reinvention" going on in `Testing.Platform` is extremely disappointing
Context
I opened another issue some time ago to show some of my frustration with what seemed to be untapped potential in the new Microsoft.Testing.Platform framework here:
- https://github.com/microsoft/testfx/issues/4198
Some of my concerns then were addressed: it turned out I didn't fully comprehend how MTP worked behind the scenes and how one could actually manage their own Program.cs etc. That was a nice find for me.
However... after now having worked more closely with MTP as we start a strong push on our solution to migrate from NET472 to NET9 and add UI and API test projects (higher level test project types, basically), it became increasingly frustrating to see how barebones and limited most of the abstractions around MTP really are under the hood.
This is supposed to be a discussion/rant post based on my recent experience and hopefully it will have enough constructive criticism to steer some of these decisions in what I think would be a better direction.
Reinventing the wheel (and the entire universe while we are at it...)
I was super hopeful when I saw the TestApplicationBuilder class and what the "manual" Program.cs looked like.
"This really looks like they are following the modern pattern established for other application types, nice!"
But as soon as I actually started working with it, I realized how wrong I was. Starting with that IConfiguration abomination.
My first instinct:
"oh this is cool, so they expose
builder.Configurationthe same way other apps do. I'll just attach myAzure AppConfigurationprovider here and..."
"...wait a second.... I can't call
AddAzureAppConfigurationhere..... oh...... WAIT...... this is a differentIConfigurationtype?"
So we have the first wheel reinvented right there. A completely custom IConfiguration, that has like 5% of the capabilities of the original, and of course doesn't work at all with all existing providers.
Related:
- https://github.com/microsoft/testfx/issues/5492
Then I saw there is no builder.Services... weird. Surely I'm missing something.
And that's when everything clicked together: the entire thing is fake. Starting from TestApplicationBuilder down to all of its properties, everything was custom built and doesn't use any of the Microsoft.Extensions.* packages.
Why on earth would you even think about doing something like this! Just use the thing you already have abstracted that is super general purpose and battle-tested! It is absolutely insane to me that you'd recreate your own.
The consequence of this is that of course, that whole Program.cs is now completely useless to me. I can't register my services, I can't register my configuration providers, I can't tap into anything else that is standard Microsoft.Extensions.Hosting. I can't add OpenTelemetry, no hosted services, nothing.
It is a useless, barely functional "full rewrite" of Microsoft.Extensions.Hosting that exposes a few convoluted interfaces for library authors only.
"All right.... I think I get this now. Let's try to make
testconfig.jsonload environment variables now..."
- https://github.com/microsoft/testfx/issues/5491
And suddenly I'm dealing with another config file that was introduced to..... configure things... of course. Is it following the same pattern of appsettings.json? Of course not! It is fully custom!
Now we have a billion mechanisms for providing parameters and configuration settings:
- `launchSettings.json`: doesn't work properly with VS test integration. only applies to development regardless. currently useless
- `runsettings`: deprecated, apparently? fair enough
- `testconfig.json`: a new thing... replacing `runsettings`, but of course, dropping a bunch of capabilities
- `xunit.runner.json`: because why not have one custom made for a specific test framework? After all we are dealing with raw JSON which is highly proprietary and not portable at all..... wait....
- `appsettings.{env}.json`: nah, tests won't use this by default. why would they? it's just a general configuration abstraction that is used in all other project types... surely there is no reason at all to tap into it... right?
- `dotnet` CLI - and of course, you have a custom MSBuild property where you can set cmd arguments as well... very discoverable, super idiomatic, great stuff
"But if you are using the MSBuild nuget package, testconfig.json is automatically copied and renamed to {application}.testconfig.json". Of course..... a random NuGet package, and not a project SDK like all other project types. Great idea guys.
But it doesn't stop there. You want to create an extension? Sure... just declare a random static method and then create a MSBuild property pointing to it in the target project. Surely super intuitive to do, particularly when the extension is not a NuGet package but a shared library in your solution. Very idiomatic, just ask people to add completely random custom properties and values to their csproj! Talk about ease of adding extensions!
Imagine if there was a hosting mechanism where you could..... add things, and configure them! That would be amazing, right? Nah... let's rely on extremely obscure MSBuild targets buried in transient package dependencies to automatically generate a bunch of code and hide that as much as possible from users! Those dumb users, they wouldn't be able to ever configure this themselves, we should hide all of it!
You know... the same way that ASP.NET Core and Worker projects hide all of their hosting and... no.... they don't do that? Oh... whatever, we will do it anyways!
Suggestions
Guys...... what on earth are you doing here? You have such a strong abstraction to build on top of, and you ditch all of it in favor of.... this mess?
Stop hiding host details from engineers
Just expose Program.cs. No more MSBuild shenanigans to "make it look like the old projects". That's completely useless. .NET engineers are used to the new hosting model for years now. They will be able to handle a new Program.cs file with 3 lines of setup, no problem.
All the MSBuild-based magic is insane right now.
Drop all this custom config nonsense
.NET already has an idiomatic configuration subsystem on Microsoft.Extensions.Configuration. All configuration should be through this, period. Drop this custom testconfig.json nonsense, use the standard appsettings.{env}.json, env vars, secrets providers. Tap into the existing IConfiguration interface. Encourage all framework authors to use it.
Libraries for anything else in the .NET ecosystem have extensions for being configured with the hosting model, usually via AddX methods. Those methods then rely on IOptions<T> to propagate configuration settings, which can optionally be bound to one of the many configuration providers (appsettings.json etc.) or just be configured inline.
Just do that! AddXunit(XunitOptions...), AddHotReloadWhatever(HotReloadWhateverOptions...).
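To make that concrete, here's the rough shape I have in mind, using nothing but the standard options APIs (the `XunitOptions` type, its properties, and the `AddXunit` name are all made up for illustration; only the `Microsoft.Extensions.*` calls are real):

```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical options type; not an existing xunit or MTP API.
public sealed class XunitOptions
{
    public bool ParallelizeTestCollections { get; set; } = true;
    public int MaxParallelThreads { get; set; } = Environment.ProcessorCount;
}

public static class XunitServiceCollectionExtensions
{
    // The standard "AddX" shape: inline configuration via a delegate...
    public static IServiceCollection AddXunit(
        this IServiceCollection services, Action<XunitOptions>? configure = null)
    {
        var options = services.AddOptions<XunitOptions>();
        if (configure is not null)
        {
            options.Configure(configure);
        }

        // ...plus whatever framework services the test framework needs, registered here.
        return services;
    }

    // ...and an overload that binds the same options from any IConfiguration
    // section (appsettings.json, env vars, Azure AppConfiguration, ...).
    public static IServiceCollection AddXunit(
        this IServiceCollection services, IConfiguration section)
    {
        services.AddOptions<XunitOptions>().Bind(section);
        return services;
    }
}
```

That's the entire "configuration surface" a framework would need to expose; everything else comes from the host.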
Again, get rid of the insane MSBuild-based magic and all that IExtension nonsense you invented.
Make everything DI-based
I don't want to have to use Xunit.DependencyInjection. I especially don't want to do it when the platform itself already has a hosting mechanism and even has an IServiceProvider.
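Concretely, the experience I'm after looks like this (the `IMyApiClient` contract is made up; today you only get this shape through Xunit.DependencyInjection, which is exactly the dependency I'd like to not need):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

// Illustrative service contract registered in the host's container.
public interface IMyApiClient
{
    Task<HttpResponseMessage> GetHealthAsync();
}

// The test class just declares its dependencies; the platform's
// IServiceProvider should be the thing that supplies them.
public sealed class CheckoutApiTests(IMyApiClient api)
{
    [Fact]
    public async Task Health_endpoint_returns_success()
    {
        var response = await api.GetHealthAsync();
        Assert.True(response.IsSuccessStatusCode);
    }
}
```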
Final Thoughts
This framework is an abomination. I had reservations before (as evidenced by that first issue I linked) but now that I've actually worked with it a bit more they became orders of magnitude worse.
I really really hope you'll reconsider (dramatically) some of the decisions you made thus far here. There is still time, this framework is brand new. Do things right, please.
I'll now go back to our new hodgepodge of a project with multiple workarounds and limitations.
- We use the `.Worker` SDK so that `appsettings.{env}.json` files are copied automatically. Not intuitive at all, but it works...
- We force MTP runner via the `UseMicrosoftTestingPlatformRunner` MSBuild property (this has no reason to exist)
- We now maintain a custom envvar provider extension to be able to define environment variables (including `DOTNET_ENVIRONMENT`) when running the tests. Of course `launchSettings.json` doesn't work, why would it? That would be too easy
- We depend on `Xunit.DependencyInjection`, which itself uses the old callback-based host builder, because of limitations with xunit. If we want DI with another framework, well... we are screwed: this is xunit-specific, after all. Imagine if there was a central place where DI could be set up regardless of the underlying test framework... wouldn't that be nice?
- We configure xunit settings, MTP settings, Azure settings, Playwright settings, and domain-specific settings, all using different mechanisms
Thanks so much for your feedback. I read it, and will respond to the important points.
So we have the first wheel reinvented right there. A completely custom IConfiguration, that has like 5% of the capabilities of the original, and of course doesn't work at all with all existing providers.
One important design decision of MTP is that the core platform shouldn't have any dependencies. This was explained in more detail by @Evangelink in https://github.com/microsoft/testfx/issues/4198#issuecomment-2508620910.
And that's when everything clicked together: the entire thing is fake. Starting from TestApplicationBuilder down to all of its properties, everything was custom built and doesn't use any of the Microsoft.Extensions.* packages.
That's the same answer, the core platform is expected to not have any dependencies. But that generally shouldn't be so limiting for you. If it is, we will need more concrete cases of what you are trying to do and what's missing (similar to #5492 where it's more clear to us what feature is missing).
The consequence of this is that of course, that whole Program.cs is now completely useless to me. I can't register my services, I can't register my configuration providers, I can't tap into anything else that is standard Microsoft.Extensions.Hosting. I can't add OpenTelemetry, no hosted services, nothing.
Please have a read on Microsoft.Testing.Platform extensibility, which explains how you can extend the platform with your own stuff. If something is still not clear in that regard, please ask us back.
And suddenly I'm dealing with another config file that was introduced to..... configure things... of course. Is it following the same pattern of appsettings.json? Of course not! It is fully custom!
On that part, we don't have control over xunit, so xunit.runner.json is not in our hands. For RunSettings, that was the VSTest way of doing stuff, which has partial support via VSTestBridge (used only by MSTest, NUnit, and Expecto at the time of writing). For launchSettings, this is already supported today if you run with dotnet run (MTP apps are typical console apps after all, nothing special), and will be supported in dotnet test in .NET 10. Then testconfig.json is the MTP main way of configuration. It serves a very different purpose compared to launchSettings for example. launchSettings is read before the process starts, so it can correctly set the environment variables from the very beginning before starting the process. appsettings.*.json is irrelevant for MTP.
and of course, you have a custom MSBuild property where you can set cmd arguments as well... very discoverable, super idiomatic, great stuff
There are scenarios where setting command-line arguments via an MSBuild property is important. For example, if you run dotnet test on a solution where you want to ignore a specific exit code only for a specific project. We have no other way to support this scenario very well apart from an MSBuild property. In the new dotnet test in .NET 10, we will be supporting RunArguments as well, which is a more standard property that has existed in the .NET SDK for a long time. What we will be doing is simply that TestingPlatformCommandLineArguments gets added to RunArguments. See here.
"But if you are using the MSBuild nuget package, testconfig.json is automatically copied and renamed to {application}.testconfig.json". Of course..... a random NuGet package, and not a project SDK like all other project types. Great idea guys.
We were trying to keep the Microsoft.Testing.Platform package as minimal as possible, with anything "optional" kept separately (e.g., TrxReport is optional, so it's its own package, MSBuild stuff are optional, so they are in their own package, etc). Using a NuGet-provided MSBuild SDK has its own problems, so I don't think it's a good fit here. And having a custom MSBuild SDK as part of the .NET SDK itself puts us in the same release cycle as .NET SDK, which is also not so good.
But it doesn't stop there. You want to create an extension? Sure... just declare a random static method and then create a MSBuild property pointing to it in the target project. Surely super intuitive to do, particularly when the extension is not a NuGet package but a shared library in your solution. Very idiomatic, just ask people to add completely random custom properties and values to their csproj! Talk about ease of adding extensions!
What the MSBuild item does is that it simply calls the hook's AddExtensions in a generated file. You can always disable the auto-generated file and have the code on your own.
Just expose Program.cs. No more MSBuild shenanigans to "make it look like the old projects". That's completely useless. .NET engineers are used to the new hosting model for years now. They will be able to handle a new Program.cs file with 3 lines of setup, no problem.
At least I would love to do that, but we still want to consider the migration story from VSTest and try to make it easier. You still have the option to disable the auto-generated entry point. It's only that it's enabled by default.
We force MTP runner via the UseMicrosoftTestingPlatformRunner MSBuild property (this has no reason to exist)
UseMicrosoftTestingPlatformRunner is not our MSBuild property, it's xunit's (however, we have a similar one for MSTest). The problem it tries to solve is simple: Support both VSTest and MTP, while keeping VSTest the default.
Of course launchSettings.json doesn't work, why would it? That would be too easy
As mentioned earlier, this is something we will change in dotnet test in .NET 10.
Overall, I understand why you find it confusing, but most of it really boils down to the fact that the core platform shouldn't have dependencies.
Please let us know if you have further questions.
Thanks so much for your feedback. I read it...
Thank you for taking the time to go through it @Youssef1313 . I really do appreciate it.
One important design decision of MTP is that the core platform shouldn't have any dependencies. This was explained in more detail by @Evangelink in #4198 (comment). ... That's the same answer, the core platform is expected to not have any dependencies.
Yes, I know from that previous discussion why you guys went with this approach. It is not good though (of course, my opinion here).
You should very strongly reconsider this constraint and just use the common abstractions instead. I can't stress this point enough. If there is anything else I could do to highlight that please let me know.
But that generally shouldn't be so limiting for you. If it is, we will need more concrete cases of what you are trying to do and what's missing (similar to #5492 where it's more clear to us what feature is missing).
When I say it becomes useless, it's not in the sense that "I can't create custom extensions" etc (after all, we did create one...), but in the sense that it just doesn't work at all for application concerns: that entire hosting model is only there to serve tool extensions and nothing else.
I can't configure my configuration providers (that I need for my test app), I can't configure my services (that I need to inject), I can't configure OpenTelemetry using OpenTelemetry.Extensions.Hosting because your hosting is incompatible, etc.
It is extremely limiting, to the point where we needed to add Xunit.DependencyInjection to be able to get some of those benefits back (although in a very convoluted, test framework-specific way as I mentioned). That entire package should be completely unnecessary if this was done "properly".
I want an open hosting model, where the framework's needs are defined (AddMsTest/AddXunit etc), but where I can also define my own needs.
A good example of how this works is isolated Azure Functions, which evolved over time to finally get a proper hosting experience similar to the generic host using HostApplicationBuilder, even though it has very complex and particular hosting requirements.
The consequence of this is that of course, that whole Program.cs is now completely useless to me. I can't register my services, I can't register my configuration providers, I can't tap into anything else that is standard Microsoft.Extensions.Hosting. I can't add OpenTelemetry, no hosted services, nothing.
Please have a read on Microsoft.Testing.Platform extensibility, which explains how you can extend the platform with your own stuff. If something is still not clear in that regard, please ask us back.
Again, I think you misunderstood my point here. I know what your entry point exposes, and I know I can use it. I'm just saying it is not good for application level concerns at all.
How do I add appsettings.{env}.json support in your pipeline? How do I connect to Azure AppConfiguration? How do I enable dependency injection for my test classes and inject a custom service into them?
None of this is possible out of the box, and attempting to implement any of that would mean recreating massive amounts of code from existing Microsoft.Extensions.* libraries. It completely defeats the point.
And suddenly I'm dealing with another config file that was introduced to..... configure things... of course. Is it following the same pattern of appsettings.json? Of course not! It is fully custom!
On that part, we don't have control over xunit, so `xunit.runner.json` is not in our hands.
Sure, I know that. But you have contacts and connections with each test library author. If you provided a robust configuration mechanism (...Microsoft.Extensions.Configuration....) there would be zero reason for a test framework author to ever come up with their own custom json/xml files: they would just tap into the existing, well-known idiomatic configuration system.
What you are doing here is similar to having each random ASP.NET Core library author come up with weird nonstandard ways to configure their libraries. Imagine we had a mvcSettings.json file, a blazorOptions.xml file, an automapper.yaml, a dapperConfiguration.ini etc, each one 100% custom for their specific needs. That does NOT happen because the ecosystem is standardized: everyone knows they are supposed to create an AddX method that takes in a configuration lambda that can be bound to the underlying IConfiguration mechanism. Anyone not doing that is doing it wrong.
It is the same thing here.
If MTP was standardized on a similar hosting model, all configuration options could be done via AddX methods and no library would ever need to define custom files/formats/environment variables/msbuild properties/anything of that sort.
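The consumer side of that standardized world would then look something like this (the `AddXunit` overload is the hypothetical one I sketched above; the builder and configuration APIs are the real `Microsoft.Extensions.Hosting` ones):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Framework settings come from an ordinary configuration section instead of
// xunit.runner.json / testconfig.json / custom environment variables.
builder.Services.AddXunit(builder.Configuration.GetSection("Xunit"));

// appsettings.json, appsettings.{env}.json, environment variables and user
// secrets are already wired up by CreateApplicationBuilder; Azure
// AppConfiguration or anything else is just one more provider appended to
// builder.Configuration.

// In the world I'm describing, running the host is what runs the tests.
await builder.Build().RunAsync();
```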
Again..... please think about what you are doing here for the future. This should be easy to standardize, since the standard has already been defined for every other project type. You just need to take that and apply it to test projects.
For launchSettings, this is already supported today if you run with `dotnet run` (MTP apps are typical console apps after all, nothing special), and will be supported in `dotnet test` in .NET 10.
Yeah, I mentioned it for completeness but I knew it was going to be "fixed" in .NET 10. That's a step in the right direction for sure, if only for the sake of consistency.
Then `testconfig.json` is the MTP main way of configuration. It serves a very different purpose compared to launchSettings for example. `launchSettings` is read before the process starts, so it can correctly set the environment variables from the very beginning before starting the process. `appsettings.*.json` is irrelevant for MTP.
You are missing the point though. There is no reason why you couldn't just use the same appsettings.*.json mechanism for all settings and eliminate the need for testconfig.json. All of the things it does should be doable via the standard configuration, which you'd load at the very beginning of the process (again, like with all other application types) and then pass along those settings to the frameworks/extensions that need them.
You don't need to have everything depend on environment variables either. Things should depend on the configuration abstraction, IOptions<T>, which can be populated from multiple different sources including environment variables.
You are creating a brand new architecture/framework, it is the perfect time to modernize how you configure things using the new mechanisms that were invented since then.
And for environment variables, you said it yourself that .NET 10 is fixing the launchsettings.json compatibility issue, so it will just work for tests as well.
The reason we needed to customize testconfig.json to allow setting envvars as well was precisely because things in the test realm keep depending directly on them. You have the opportunity here to finally change that and have everything depend on IOptions instead which then opens it up for the consumer to use any configuration source they want: appsettings, secrets, appconfig, envvars, cmdline arguments, etc.
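To spell out what "depend on the abstraction" means in practice, here is the standard provider composition (the `TestReports:ResultsDirectory` key is made up; an extension reading it never knows, or cares, which source supplied the value):

```csharp
using System;
using Microsoft.Extensions.Configuration;

var environment = Environment.GetEnvironmentVariable("DOTNET_ENVIRONMENT") ?? "Production";

// All of these feed the same IConfiguration; IOptions<T> instances bound to
// it can be populated from any combination of sources.
IConfiguration configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddJsonFile($"appsettings.{environment}.json", optional: true)
    .AddEnvironmentVariables()
    .AddCommandLine(args)
    .Build();

// An extension reads the abstraction, not a specific environment variable:
var resultsDirectory = configuration["TestReports:ResultsDirectory"];
```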
and of course, you have a custom MSBuild property where you can set cmd arguments as well... very discoverable, super idiomatic, great stuff
There are scenarios where setting command-line arguments via an MSBuild property is important. For example, if you run `dotnet test` on a solution where you want to ignore a specific exit code only for a specific project. We have no other way to support this scenario very well apart from an MSBuild property.
Why would that be hard to set up at all? If you expose it as a setting on AddXWhatever, developers would be able to configure the exit code mapping on a per-project basis. This would be equivalent to configuring health checks, or ProblemDetails, on an ASP.NET Core project.
I fail to see why this configuration can't be parameterized and set at startup time at all. Perhaps I'm missing something obvious, in which case I'd love for you to elaborate.
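Just to show what I mean by "parameterized and set at startup", the exit-code case could be an ordinary option (every name below is hypothetical):

```csharp
using System.Collections.Generic;

// Hypothetical runner options type.
public sealed class TestRunnerOptions
{
    // Exit codes that should be treated as success for this particular project.
    public ISet<int> IgnoredExitCodes { get; } = new HashSet<int>();
}

// Per project, in that project's Program.cs, instead of an MSBuild property:
//   builder.Services.Configure<TestRunnerOptions>(o => o.IgnoredExitCodes.Add(8));
// or bound from a configuration section:
//   builder.Services.Configure<TestRunnerOptions>(builder.Configuration.GetSection("TestRunner"));
```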
In the new `dotnet test` in .NET 10, we will be supporting `RunArguments` as well, which is a more standard property that has existed in the .NET SDK for a long time. What we will be doing is simply that `TestingPlatformCommandLineArguments` gets added to `RunArguments`. See here.
I hope you realize how convoluted that is. Once again you create test-specific things when an equivalent already exists natively, and now you have to reconcile.
If RunArguments existed, you should've just used that in the first place. This feels incredibly obvious to me but again, I don't have your internal context.
"But if you are using the MSBuild nuget package, testconfig.json is automatically copied and renamed to {application}.testconfig.json". Of course..... a random NuGet package, and not a project SDK like all other project types. Great idea guys.
We were trying to keep the Microsoft.Testing.Platform package as minimal as possible, with anything "optional" kept separately (e.g., TrxReport is optional, so it's its own package, MSBuild stuff are optional, so they are in their own package, etc). Using a NuGet-provided MSBuild SDK has its own problems, so I don't think it's a good fit here. And having a custom MSBuild SDK as part of the .NET SDK itself puts us in the same release cycle as .NET SDK, which is also not so good.
I 100% get the intent of keeping optional extensions (such as Trx, Coverage, etc) in separate packages. I'm not disputing that at all and I think you should keep that approach for sure.
What I think is a massive mistake here is not to take a dependency on the core abstractions from Microsoft.Extensions.*. Configuration, DI, hosting, logging, metrics: all the core concepts have already been abstracted there over several years, and have been standardized across all application types.
Even OpenTelemetry has full support for those abstractions, and that framework is built around a static SDK library with zero dependencies as well. I mentioned this earlier: these are other teams from Microsoft, all you need to do is talk amongst yourselves and share the knowledge on how it was built.
Apply the same thing that OTEL did: have your "zero dependencies" core thing, but have an Extensions.Hosting package of some kind that bridges your thing with "the modern development world of .NET" and brings in all the benefits that the standard hosting provides.
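For reference, this is roughly what that split already looks like from the consumer side in OpenTelemetry: the same SDK can be driven "raw" with zero hosting involvement, or through the OpenTelemetry.Extensions.Hosting bridge when a generic host is available (the source name below is illustrative; treat the exact API details as approximate):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using OpenTelemetry;
using OpenTelemetry.Trace;

// Option 1: "raw" usage, no hosting and no DI, works anywhere.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyCompany.MyTests")
    .Build();

// Option 2: the same thing through the hosting bridge, where it composes
// with configuration, DI, logging, and everything else the host provides.
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing.AddSource("MyCompany.MyTests"));
await builder.Build().RunAsync();
```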
What you are doing currently is that you are baking in a poor version of those things into your library. This will never scale properly and you are going to keep getting infinite requests to make whatever you created there be on par with the Microsoft.Extensions.* libraries. "I want DI support", "I want to be able to add Json configs, azure configs", "I want to add OpenTelemetry to my tests". Are you going to completely reimplement all of Microsoft.Extensions inside of MTP?!
But it doesn't stop there. You want to create an extension? Sure... just declare a random static method and then create a MSBuild property pointing to it in the target project. Surely super intuitive to do, particularly when the extension is not a NuGet package but a shared library in your solution. Very idiomatic, just ask people to add completely random custom properties and values to their csproj! Talk about ease of adding extensions!
What the MSBuild item does is that it simply calls the hook's `AddExtensions` in a generated file. You can always disable the auto-generated file and have the code on your own.
Sure. Again, I know that. I'm just arguing it should be the default and the other "form" shouldn't even need to exist in the first place.
Instead, you guys went the opposite route and made the convoluted thing the default, and the super idiomatic, standard thing, the optional path.
With that said... I'll look into not using that MSBuild-driven extension point and move to the manual Program.cs for sure.
Just expose Program.cs. No more MSBuild shenanigans to "make it look like the old projects". That's completely useless. .NET engineers are used to the new hosting model for years now. They will be able to handle a new Program.cs file with 3 lines of setup, no problem.
At least I would love to do that, but we still want to consider the migration story from VSTest and try to make it easier. You still have the option to disable the auto-generated entry point. It's only that it's enabled by default.
I fail to see why you think it would be that challenging to enforce the Program.cs to be there... how many lines does it even need? 5 at most?
Compare that to any other project type and it's not even close. .NET engineers are used to the hosting model by now. This shouldn't be this big of a barrier you guys are making it out to be.
This constraint you imposed on yourselves is introducing all sorts of convoluted, archaic, nonstandard concepts that shouldn't need to exist at all (like that IExtension interface I mentioned).
We force MTP runner via the UseMicrosoftTestingPlatformRunner MSBuild property (this has no reason to exist)
`UseMicrosoftTestingPlatformRunner` is not our MSBuild property, it's xunit's (however, we have a similar one for MSTest). The problem it tries to solve is simple: Support both VSTest and MTP, while keeping VSTest the default.
Just don't make VSTest the default... have MTP not work with it.
Again, you have such a great opportunity here to restart things from scratch, and you are clinging to these "compatibility concerns" that don't even need to exist. MTP is brand new. People creating new projects now would use it directly, and those who want to migrate, perform the migration.
It's not like VSTest is deprecated or anything. You can still "target" it just fine.
Overall, I understand why you find it confusing, but most of it really boils down to the fact that the core platform shouldn't have dependencies.
This is a terrible mistake. MTP will never properly scale if you don't tap into the well-known hosting abstractions. Those hosting abstractions are NOT going anywhere anytime soon either. There is no real alternative here.
Once again thanks for at least listening to my rants. I'm coming at this as someone who has been working with .NET for 15 years, creating unit test projects since day 1, and I see such a gigantic opportunity with this that I am basically forced to chime in. If you can make MTP work with the existing hosting abstractions, you will have a winner in your hands. Otherwise, I just don't see this ever working "well", it's always going to be filled with hacks, duplicated solutions, framework-specific solutions, etc.
@julealgon thanks for the detailed, rough but honest, feedback!
I totally understand your point of view, which I would 100% share if I hadn't spent time working in the testing space at Microsoft. We have many teams that need testing at different levels (e.g. the runtime needs to test really low-level APIs where most of Microsoft.Extensions.* would not work...). We have also had many experiences with VSTest where having a dependency is biting us in the future because at Microsoft you cannot easily deprecate or remove something - back compat forever.
Let me take some examples:
- let's say we depend upon `Microsoft.Extensions.Configuration` v10 and the team working on this tool is working on v11 that contains a breaking change; then suddenly they can no longer test their library.
- now, let's say MTP depends upon `Microsoft.Extensions.Configuration` v10 and your project depends upon v8; we will impact the runtime of your app through our test environment, meaning you are not truly testing what you will deploy/ship. One solution here is to target the oldest version possible, which comes with the limitation of hitting potential breaking changes (see point 1) and could cause users not to be happy because they cannot use feature x or y.
Just don't make VSTest the default... have MTP not work with it.
If you manage to convince our leadership, consider that @Youssef1313 and I would work night and weekend and drop everything VSTest on the shortest notice ever.
Now we have a billion mechanisms for providing parameters and configuration settings:
I agree that's painful, sadly, and although you are pointing the finger at us, there is no global config file in .NET that we can extend (`runtimeconfig.json` is proprietary to the runtime, `appsettings.json` to ASP.NET...) so yeah, we had to add our own config file. I again understand your view and frustration; if you manage to get some approval from other teams so we can reuse and extend their config file then we are happy to consider evolutions/replacements.
If you can make MTP work with the existing hosting abstractions, you will have a winner in your hands.
During my spare time, I have done some experiments with writing a wrapper from Microsoft.Extensions to our core platform. This is easy and works well for everything that is supported by the core platform interfaces, and obviously doesn't work well for the rest.
The only other approach I could see is to have 2 versions of the platform (one with the deps and one without), which isn't that hard, but it gets harder for extensions (including test frameworks), which would also need to be duplicated.
I'll try to find some key devs/PMs from Microsoft.Extensions.* and runtime willing to discuss/brainstorm this topic so we can hopefully mitigate some of these issues.
@julealgon I am also happy to hear your suggestions
I totally understand your point of view, which I would 100% share if I hadn't spent time working in the testing space at Microsoft. We have many teams that need testing at different levels (e.g. the runtime needs to test really low-level APIs where most of `Microsoft.Extensions.*` would not work...). We have also had many experiences with VSTest where having a dependency is biting us in the future because at Microsoft you cannot easily deprecate or remove something - back compat forever. Let me take some examples:
- let's say we depend upon `Microsoft.Extensions.Configuration` v10 and the team working on this tool is working on v11 that contains a breaking change; then suddenly they can no longer test their library.
- now, let's say MTP depends upon `Microsoft.Extensions.Configuration` v10 and your project depends upon v8; we will impact the runtime of your app through our test environment, meaning you are not truly testing what you will deploy/ship. One solution here is to target the oldest version possible, which comes with the limitation of hitting potential breaking changes (see point 1) and could cause users not to be happy because they cannot use feature x or y.
Man.... I get these, but we are punishing the entire ecosystem because of these special cases. There needs to be some other solution to this besides "let's not depend on anything" IMHO. Think about it, how many engineers are testing projects that are potential dependencies of the test framework itself? Surely that's less than 0.001% of all test projects out there.
On 1:
For example.... say I was the owner of Microsoft.Extensions.Configuration and needed to test my changes using MTP that also depends on Microsoft.Extensions.Configuration... Wouldn't it be possible to just generate a "test version" of Microsoft.Extensions.Configuration with a different assembly name that would work independently of the "real" Microsoft.Extensions.Configuration dependency? Surely if you had both DLLs in place, and they were actually 2 distinct assemblies, you'd be able to use a global alias to alias your custom version and even keep the exact same namespaces in place.
There must be something like this that would enable MTP to use one version, and the actual tests test another version.
Even if that's a bit clunky, it only impacts a very small subset of the real world test projects and the added annoyance is not even comparable with the massive benefits it would bring everyone else.
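Roughly, the aliasing approach I'm picturing is this (the alias name and version are made up; I believe the `Aliases` metadata works on both ProjectReference and PackageReference, but treat the csproj details as an assumption on my part):

```csharp
// In the platform's csproj, its own copy of the package gets an alias, e.g.:
//   <PackageReference Include="Microsoft.Extensions.Configuration"
//                     Version="8.0.0" Aliases="MECForTests" />
extern alias MECForTests;

// The platform consumes the aliased copy explicitly, leaving the default
// (global) alias free for whichever version the code under test references.
using PlatformConfiguration = MECForTests::Microsoft.Extensions.Configuration;

internal static class PlatformConfigLoader
{
    public static PlatformConfiguration.IConfigurationRoot Load()
        => new PlatformConfiguration.ConfigurationBuilder().Build();
}
```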
On 2:
This should honestly be even easier. If a consumer wants to test using Microsoft.Extensions.* v8 to match their real apps v8 usage, they just use a version of MTP that targets those versions. If MTP releases a new version that now relies on v9 or v10 for newer features, then the users make an informed decision:
- they are fine with testing against a different version of this shared library (super low risk here, these are incredibly stable libraries), so they just upgrade MTP without necessarily upgrading the dependencies on their real apps
- they want to make sure to use the same versions across, so they update the dependencies on their app and update MTP so both match
- they can't currently update the deps on their app, so they just avoid the MTP update and stick with the version that targeted v8
To me, these are all extremely viable options. And they are in a sense quite similar to, say, how ASP.NET Core works. If you have libraries targeting Microsoft.Extensions.* v8 and then decide to move to ASP.NET Core v9, you are also basically forced to upgrade those dependencies. Same thing would happen here.
All you'd need to do is potentially support multiple versions of MTP at any given time (again... something that is quite normal), until the frameworks are considered out of support.
For example, MTP right now would have a version depending on Microsoft.Extensions.* v8 (since NET8 is LTS), and optionally another version targeting v9 libraries, but that's really only necessary if MTP itself needs something out of those newer versions (or if vulnerabilities are found etc). Then once STS for NET9 ends, you drop support for anything referencing v9 extensions and publish a new version targeting v10 extensions.
This feels honestly very doable to me, but again I don't have all the context you guys do.
Just don't make VSTest the default... have MTP not work with it.
If you manage to convince our leadership, consider that @Youssef1313 and I would work night and weekend and drop everything VSTest on the shortest notice ever.
If there is any way I could do that, I would. If there are any channels beside github that I could voice my concern on this, please let me know. Hell... give me the PM's email or setup a call or whatever, I'm in.
People usually don't treat test libraries the same way they do "real" libraries, but this is such a key moment in the test framework timeline, where we have this insane opportunity to finally unify things. This needs to be ingrained in the mind of whoever is leading this.
Now we have a billion mechanisms for providing parameters and configuration settings:
I agree that's painful, sadly, and although you are pointing the finger at us, there is no global config file in .NET that we can extend (`runtimeconfig.json` is proprietary to the runtime, `appsettings.json` to ASP.NET...) so yeah, we had to add our own config file.
No no no, this is incorrect. appsettings.json started in ASP.NET Core, but it is not exclusive to it anymore, and it's been several years since that's been the case. If you take a look at HostApplicationBuilder itself, which is the new, modern general form of the old generic host, it registers all of the appsettings.*.json and secrets just as WebApplicationBuilder does for ASP.NET: this has been standardized now. We use appsettings.json on our workers, and even on our Azure Functions.
This is why I was so happy to see that TestApplicationBuilder, only to become super disappointed when I learned it was really not following the same idea but reinventing every sub-component of the hosting model.
This is from one of the console apps we just migrated to .NET9:
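(For readers who haven't seen the pattern, the shape of such a Program.cs is roughly the following; the hosted service is an illustrative placeholder, not our actual code:)

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var builder = Host.CreateApplicationBuilder(args);

// appsettings.json, appsettings.{DOTNET_ENVIRONMENT}.json, environment
// variables and user secrets are all registered by CreateApplicationBuilder
// itself; extra providers (e.g. Azure AppConfiguration) just get appended to
// builder.Configuration.

builder.Services.AddHostedService<HelloWorker>();

await builder.Build().RunAsync();

// Illustrative background service, only here to keep the sketch self-contained.
internal sealed class HelloWorker(ILogger<HelloWorker> logger) : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        logger.LogInformation("Environment: {Env}",
            Environment.GetEnvironmentVariable("DOTNET_ENVIRONMENT"));
        return Task.CompletedTask;
    }
}
```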
We use this mechanism today in many different app types:
- ASP.NET Core apps
- legacy ASP.NET WebForms apps (yep... even there, although we don't run the host on those, we use all the other capabilities of it)
- Windows Service apps (using the Worker SDK, which even handles these files automatically for us by copying them etc)
- console apps (also using the Worker SDK)
- Azure Functions
- etc
What you are saying there was true at one point, but it is not anymore. Most of the Microsoft.Extensions.* libraries started targeting ASP.NET Core use cases, but were quickly standardized to also work outside of it. This is even more true now that the newer property-based API was created (HostApplicationBuilder).
If this argument was used to justify not tapping into appsettings.*.json before, it needs to be reconsidered.
The only other approach I could see is to have 2 versions of the platform (one with the deps and one without) which isn't that hard but it gets harder for extensions (including test frameworks) needing to be also duplicated.
If you really want to have a super low level, zero dependencies "version" of MTP, then again my suggestion would be to check what the OpenTelemetry dotnet team did. They have a "raw" version, that models the OTEL SDK as it is "documented" by the general spec, and then they have OpenTelemetry.Extensions.Hosting which completely bridges that raw low level thing with the modern .NET hosting model, including full support for IConfiguration, DI, logging, etc.
Having just one version of MTP that is heavily compromised due to special cases is really not ideal at all. As I mentioned earlier, it punishes 99% of the population for 1% special cases that would need some workarounds. And those percentages are likely bloated... the number of people who would have problems running a test project that depends on Microsoft.Extensions.* is probably lower than that.
And maybe, even in that scenario, they could just use old test projects? Honestly, even that would be a better way to handle this than butchering the new platform.
I'll try to find some key devs/PMs from `Microsoft.Extensions.*` and runtime willing to discuss/brainstorm this topic so we can hopefully mitigate some of these issues.
Please... this is such a great opportunity. @davidfowl and others have worked a lot on the hosting standardization effort, I'm sure he could share his two cents here. It needs to be top priority somehow, or we will never get to a point where writing any type of test project is really seamless.
Thanks @Evangelink .
Again a big thank you for all the valuable comments.
Very useful comments👍🏻
If you really want to have a super low level, zero dependencies "version" of MTP, then again my suggestion would be to check what the OpenTelemetry dotnet team did. They have a "raw" version, that models the OTEL SDK as it is "documented" by the general spec, and then they have OpenTelemetry.Extensions.Hosting which completely bridges that raw low level thing with the modern .NET hosting model, including full support for IConfiguration, DI, logging, etc.
Looking at open telemetry code, and reading their docs, this is what I see. They provide OpenTelemetry.Api package that people can depend on when authoring instrumentation, and that implements the "raw" spec. As recommended by them here, in the following excerpt:
Libraries providing SDK plugins such as exporters, resource detectors, and/or samplers should take a dependency on the OpenTelemetry SDK package. Library authors providing instrumentation should take a dependency on OpenTelemetry.Api or OpenTelemetry.Api.ProviderBuilderExtensions package. OpenTelemetry.Api.ProviderBuilderExtensions exposes interfaces for accessing the IServiceCollection which is a requirement for supporting the .NET Options pattern.
Where "OpenTelemetry SDK package" refers to OpenTelemetry nuget package, and "OpenTelemetry.Api" refers to the nuget of the same name.
Looking at the dependencies of OpenTelemetry.Api, it is indeed pure and only depends on the runtime part that allows it to instrument stuff. This would be the low level, zero dependencies package that you refer to.
They also recommend that exporters etc, depend on the OpenTelemetry package, which does depend on the Microsoft.Extensions.* dependencies.
The glue between the two layers is the OpenTelemetry.Api.ProviderBuilderExtensions package (and the fact that IServiceCollection is part of runtime).
There we can see for example for Tracing, that OpenTelemetry.Api provides abstractions for creating a tracer (IDeferredTracerProviderBuilder, TracerProviderBuilder) which are used by the higher level packages to do composition via dependency injection and configuration via Configure callback.
The .Api package also provides TraceProvider.Default, which can create tracers and do tracing, but without all the additional features of reporting etc.
Looking at AddConsoleReporter for example, which is not part of the raw package, we can see that the extension for the builder is implemented here
And the overloads that actually configure it are used like this: https://github.com/OneUptime/oneuptime/blob/master/Examples/otel-dotnet/Program.cs#L61
The types mentioned there are also not part of the .Api package https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry/Metrics/Reader/MetricReaderOptions.cs they come from the open telemetry SDK.
This is the ConfigureServices used there, https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api.ProviderBuilderExtensions/Trace/OpenTelemetryDependencyInjectionTracerProviderBuilderExtensions.cs
I cannot find any "configuration" per se in the .Api package to make the scenario complete, but I can imagine that we simply use our own objects (as we do now) for the configuration, require users of the low level package to provide them, and otherwise we use defaults.
So for MTP to follow the same model, we would have to:
- split Microsoft.Testing.Platform into a Microsoft.Testing.Platform."api" package, that holds the real real core.
- take the current MTP and change how we construct it (probably).
- rename the types like IConfiguration to avoid confusion
- agree that we want to use the Microsoft.Extensions.* as dependencies for all additional services like any non-core extensions.
- extension authors (like xunit, or mstest) would have to choose if they want to depend on the full version that uses microsoft.extensions.* or not, and possibly provide 2 versions one for low-level only and one for full.
- use the oldest supported version of the microsoft.extensions* and make sure that is enough for our purposes, with user being able to update by installing newer packages (but this exposes us to breaking changes)
- write wrappers around our configuration map from microsoft.extensions* in places where it needs to propagate to the real core
- more?
I understand the zero dependency thing; I'm assuming it's because if your test platform has a dependency that your system-under-test also has, then you might run into conflicts. It might upgrade your SUT dependency, which then isn't a true reflection of your app.
But I also get the huge potential you could get by using the Microsoft.Extensions.* packages.
To avoid transitive nuget dependency and potential conflicts, could you use Microsoft Extension packages, reference them as development dependencies (so it's not seen as referencing those packages when packaged up), and then do some clever msbuild logic where those packages' DLLs are placed within the MTP packages themselves under the /lib path, meaning they can still reference extensions code, it shouldn't conflict with users (because you essentially have your own sandboxed version if I'm thinking correctly?), and then you've opened up an entire massive ecosystem of possibilities!
I may be being naive or may not have considered something, but wouldn't that work?
The caveat would be that users wouldn't be able to manually update the backing version of the microsoft extensions packages, because they'll be whatever version was used at the time of packing MTP, but I think that's a small price to pay, and unlikely to affect most people tbh since they're mostly abstractions anyway. And MTP can just regularly keep them up to date if they don't then conflict with user's SUTs.
If we place Microsoft.Extensions.* dlls under lib, NuGet will be considering them to be copied to output directory. So I think this will still have the same issue.
If we place Microsoft.Extensions.* dlls under `lib`, NuGet will be considering them to be copied to output directory. So I think this will still have the same issue.
Damn! I'm not an msbuild expert so is this a hard limitation?
Is there no way to provide them their own output path? e.g. $(OutputDirectory)/MTP/*.dll
Or can they be compiled directly into the app?
I think at the end of the day, only a single version of the dll will be passed to Csc. We could hack it so that the user dll is the one that's always present in some way, but then we risk binary incompatibilities between the version MTP compiles against and the version the user compiles against.
You can load two different versions of the same assembly though if they're publicly signed can't you?
I've never used this, but NuGet supports a /ref location which isn't copied to the output directory, but IS used by the compiler?
https://github.com/NuGet/Home/discussions/11097#discussioncomment-1082474
You can load two different versions of the same assembly though if they're publicly signed can't you?
Yes, but I think that will bring other issues. I think we will need to use AppDomains for that (under .NET Framework), and AssemblyLoadContexts (under .NET Core). This will likely be much more problematic, IMO. We wanted to explicitly not have any kind of dynamic assembly loading etc, and leave everything flow really as a typical console app.
@Youssef1313 don't you think the compatibility risk argument is a bit overblown as well? Microsoft.Extensions.* packages, particularly the .Abstractions ones, have been extremely backwards compatible for several years now, with only very few, incredibly specific breaking changes.
If you just gave control to the user to pick their packages and maintained a couple of versions of MTP yourself (targeting STS and LTS versions) I feel like that would be more than enough to cover all scenarios.
For the extremely rare case where an incompatibility is found, the user can just opt to use the older version of MTP until they can themselves migrate to latest Microsoft.Extensions.* and regain compatibility.
We are not talking about random external libraries here, in which case I would agree the potential risk would be much higher.
@thomhurst @Youssef1313 The ref solution is imho worse than the alternative of using the oldest reasonably usable version of those extensions, and letting the user upgrade if needed. At least that will correctly resolve via nuget, and we won't miss those dependencies on runtime.
Reference only dependency would (IMHO) be useful only in cases where we want to provide 1 dll with some parts that are dependent on a library that we don't want to ship, but that will run only in context where that dependency will be present.
You can see this going wrong in VSTest translation layer where we require newtonsoft.json for netstandard2.0 shipment, but don't depend on it, effectively making it a ref dependency, and failing on runtime.
I understand not wanting to do dynamic assembly loading. But sounds like that could potentially solve this in the cleanest way, by isolating one specific version of a dependency into only the context of MTP?
@julealgon let's say we would start using microsoft.extensions and used it to compose our application and configuration. What are the real benefits that we are getting, and do we need to avoid mixing details of how the test runner is configured with the tested app?
e.g.:
The consequence of this is that of course, that whole Program.cs is now completely useless to me. I can't register my services, I can't register my configuration providers, I can't tap into anything else that is standard Microsoft.Extensions.Hosting. I can't add OpenTelemetry, no hosted services, nothing.
I am assuming that here you want to add services related to testing and configuration related to testing, and the tested app would have a completely separate ioc container, and config?
Or am I misunderstanding you? What I am wondering about is how do we achieve separation between the tests and the tested app, when they both use the same infrastructure. To avoid leaking the details of how tests are configured, and what services they use. For example: If we will put our test config into the standard config files, and the tested app will use automatic config loading, then it will load test config as well, and might not perform the same way as normally.
I understand not wanting to do dynamic assembly loading. But sounds like that could potentially solve this in the cleanest way, by isolating one specific version of a dependency into only the context of MTP?
I think this would be hard to achieve and hard to debug.
Test frameworks are the ones that determine the final versions to be used, so assuming mstest would want to use the same configuration mechanism, it would depend on microsoft.extensions* and would have to have a version that is the same as MTP (easy to achieve as dependency, harder to do on runtime), and then the user test code would have to be isolated from the framework, so each callback to test would have to run in a "test running" assembly load context.
We would have to ship the distinct version of microsoft.extensions as separate copy and put it in bin in non-standard way, to give option to install 2 different versions, or ship them renamed. Load them in non-standard way to achieve the isolation.
We do quite the same in vstest already with appdomains and custom assembly loaders, and it is just trouble and also adds slowness because of all the marshalling in between appdomains.
Ah fair enough. It does sound like a pain.
The middle ground I guess would be creating separate packages for MTP extensions that could map from M.E.* to M.T.P.
That would also be a way to dog-food test the extensibility and customisation of the current MTP interfaces/extensions
@julealgon let's say we would start using microsoft.extensions and used it to compose our application and configuration. What are the real benefits that we are getting, and do we need to avoid mixing details of how the test runner is configured with the tested app?
I personally don't see any need to avoid mixing test runner and app config, or try to hide the test setup in any way.
Setting up the runner could be a single AddX or UseX or ConfigureX call in the builder configuration, similar to how Azure functions work (to give an example which I think is similarly basic in comparison to a test host).
You can see how Azure Functions is looking into integrating with IHostApplicationBuilder here which is the modern, property-based host building API:
- https://github.com/Azure/azure-functions-dotnet-worker/issues/2438
https://github.com/Azure/azure-functions-dotnet-worker/blob/15c0a8063f47cc8103b847eeee045edf8ff61d65/extensions/Worker.Extensions.Http.AspNetCore/src/FunctionsWebApplicationBuilder.cs#L15
Their previous IHostBuilder-based implementation was super simple to use. This is from one of our own Azure Function projects:
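(Not our literal file, but the documented shape of that IHostBuilder-based setup, with the registrations trimmed down to illustrative placeholders:)

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    // All the Functions-specific hosting concerns live behind this one call.
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // Application concerns are registered the same way as in any other host:
        services.AddHttpClient();
        // services.AddSingleton<IMyDomainService, MyDomainService>();  // illustrative
        // services.AddOpenTelemetry()...                               // etc.
    })
    .Build();

await host.RunAsync();
```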
You can see basically all the Azure Function-specific stuff is handled by this ConfigureFunctionsWorkerDefaults method:
- https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.hosting.workerhostbuilderextensions.configurefunctionsworkerdefaults?view=azure-dotnet
Every other line we have in our hosting setup code is application things we need: registered services, configuration providers, logging, open telemetry, etc.
Benefits should be fairly evident: you get a central, native point that mimics all other well-known application types, where you can configure all details of everything you need for that particular application, including everything that the hosting model exposes:
- configuration providers (default ones, plus custom ones)
- services using DI
- metrics (for telemetry, something which I really wanted to explore for test projects in combination with OTEL)
- logs (potentially super useful for test projects as well)
- hosted services (usually under "services")
- anything custom specific to the application type host builder (e.g. ASPNETCore -> middleware setup)
I don't want to have to add a third-party package, such as Xunit.DependencyInjection, to get some of these benefits in a test-framework-specific way. I want that native, central starting point that is self-evident and orthogonal to how other apps are setup.
The consequence of this is that of course, that whole Program.cs is now completely useless to me. I can't register my services, I can't register my configuration providers, I can't tap into anything else that is standard Microsoft.Extensions.Hosting. I can't add OpenTelemetry, no hosted services, nothing.
I am assuming that here you want to add services related to testing and configuration related to testing, and the tested app would have a completely separate ioc container, and config?
Yes. This is for "the test project". Unit tests usually don't have to deal with dependencies and more elaborate infrastructure, so injection there is not that useful (though it could still be, in some cases). Even on a pure unit testing project we would love to be able to seamlessly integrate OpenTelemetry though, and that could be done using the standard hosting abstractions trivially.
However, other, higher level testing projects, such as integration testing, UI testing, API testing, etc, all benefit hugely from having that native injection support built into them.
One obvious case for us is our API testing projects, where we want to inject our MyApiClient backed by a kiota proxy and using IHttpClientFactory behind the scenes with an AddHttpClient call in the container. We then configure this client to call into either the dev, beta, staging or prod endpoint, leveraging IConfiguration + appsettings.{env}.json and Azure AppConfig.
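As a rough illustration of the wiring I mean (type and section names are ours/illustrative; `AddHttpClient` and the binder calls are the standard APIs):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Illustrative settings bound from appsettings.{env}.json / Azure AppConfig.
public sealed class ApiTestSettings
{
    public Uri BaseAddress { get; set; } = new("https://localhost");
}

// Illustrative typed client that test classes would receive via DI.
public sealed class MyApiClient(HttpClient http)
{
    public Task<HttpResponseMessage> GetHealthAsync() => http.GetAsync("/health");
}

public static class ApiTestServiceCollectionExtensions
{
    public static IServiceCollection AddApiTestClient(
        this IServiceCollection services, IConfiguration configuration)
    {
        var settings = configuration.GetSection("ApiTest").Get<ApiTestSettings>() ?? new();

        // IHttpClientFactory-backed typed client, pointed at dev/beta/staging/prod
        // depending on which environment's configuration got loaded.
        services.AddHttpClient<MyApiClient>(client => client.BaseAddress = settings.BaseAddress);
        return services;
    }
}
```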
All of these abstractions work perfectly in our "normal" web apps, but to get anything similar to that experience today we are forced to depend on Xunit.DependencyInjection. We did just that a few days ago when setting up a new API testing project, but I had to "fight" the current framework to get it all working properly.
MTP has the opportunity to finally eliminate that barrier and provide the hosting abstraction itself so consumers can add everything they need.
Or am I misunderstanding you? What I am wondering about is how do we achieve separation between the tests and the tested app, when they both use the same infrastructure. To avoid leaking the details of how tests are configured, and what services they use. For example: If we will put our test config into the standard config files, and the tested app will use automatic config loading, then it will load test config as well, and might not perform the same way as normally.
As I mentioned earlier, from my perspective, there should be no need for separation. MTP would leverage the same IServiceProvider it exposes for consumers to instantiate test classes, for example.
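As a sketch of the idea (not an actual MTP API), something along these lines:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Sketch only: a test framework could resolve a test class's constructor
// dependencies straight from the shared IServiceProvider.
internal static class TestClassFactory
{
    public static object Create(IServiceProvider services, Type testClassType)
        => ActivatorUtilities.CreateInstance(services, testClassType);
}
```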
You don't need to "hide" how the test framework utilizes the hosting classes; you can just encapsulate it, either in the `TestApplicationBuilder` itself or behind a single call (or very few calls), similar to how Azure Functions does it.
In this environment, switching to a different test framework could be as simple as a single `AddMSTest` or `AddXunit` call in Program.cs, and that call would encapsulate everything needed to override or implement MTP internals for that specific framework. The caller doesn't need to know any of the details behind it.
@Evangelink and @Youssef1313 , I'm curious to hear if you guys discussed the issues presented here more thoroughly within the team and could report back with some updates?
This is such an important part of the framework to me that I didn't want this topic to die down. Not only would this change massively benefit end users, but I'm pretty sure test library authors would also appreciate the standardization and reliance on more flexible and solid abstractions.
I want to highlight @davidfowl here again as he worked on the existing IHostApplicationBuilder abstractions for other app types. Perhaps he could provide some insightful recommendations.
Thanks for the feedback!
> You should very strongly reconsider this constraint and just use the common abstractions instead. I can't stress this point enough. If there is anything else I could do to highlight that please let me know.
The main reason, as explained by my teammates, is the dependency issue. Our focus is testing—not building fast web servers or common host systems—and we want the platform to run everywhere .NET can run, in any form supported today and in the future by the .NET runtime. This includes devices, AOT scenarios (where trimming and reflection can be problematic), and future runtime features. The goal is to achieve this with a single model, without shortcuts or custom solutions. The core library should depend only on the Base Class Library (BCL). When additional dependencies are needed, we attach them through “bridge” libraries. For example: https://github.com/microsoft/testfx/pull/6777/files#diff-febe34d06990db46629fa2897cd7c30d14e3ae4cebdd68ba9c21e522161929ea, where we added a dependency on Microsoft.Extensions.AI to allow extensions to use the IChatClient.
What if users have an incompatible version in their dependency chain? In that case, they cannot use AI functionality, but they can always run their tests—the base minimal functionality is guaranteed.
We discussed this extensively. To respect these pillars (https://learn.microsoft.com/en-us/dotnet/core/testing/microsoft-testing-platform-intro?tabs=dotnetcli#microsofttestingplatform-pillars), taking dependencies “today” without knowing the future implications is risky.
> Man.... I get these, but we are punishing the entire ecosystem because of these special cases.
Why is this considered a “special case”? The goal of the testing platform is to test .NET code. If we cannot do that, it seems to me we are failing our purpose. I agree that, for example, a test adapter might involve certain “special cases” or scenarios that we cannot test due to the adapter’s inherent principles. However, as the core platform, we cannot afford to exclude legitimate .NET applications or code because of limitations in our core design.
@MarcoRossignoli
> Thanks for the feedback!
>
> You should very strongly reconsider this constraint and just use the common abstractions instead. I can't stress this point enough. If there is anything else I could do to highlight that please let me know.
>
> The main reason, as explained by my teammates, is the dependency issue. Our focus is testing—not building fast web servers or common host systems—and we want the platform to run everywhere .NET can run, in any form supported today and in the future by the .NET runtime. This includes devices, AOT scenarios (where trimming and reflection can be problematic), and future runtime features. The goal is to achieve this with a single model, without shortcuts or custom solutions. The core library should depend only on the Base Class Library (BCL). When additional dependencies are needed, we attach them through “bridge” libraries. For example: https://github.com/microsoft/testfx/pull/6777/files#diff-febe34d06990db46629fa2897cd7c30d14e3ae4cebdd68ba9c21e522161929ea, where we added a dependency on Microsoft.Extensions.AI to allow extensions to use the IChatClient.
That's totally fine by me. This is similar to the example I provided with OpenTelemetry, where you have a "base" implementation that doesn't depend on anything (the `OpenTelemetry` package), and then a separate `OpenTelemetry.Extensions.Hosting` package which provides the glue to interface with all the standard hosting abstractions.
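Concretely, the hosting-glue side of that split is just the standard `OpenTelemetry.Extensions.Hosting` usage; the "MyTests" source name below is only an example:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using OpenTelemetry;
using OpenTelemetry.Trace;

var builder = Host.CreateApplicationBuilder(args);

// The OpenTelemetry.Extensions.Hosting "glue" package plugs the vendor-neutral
// SDK into the standard hosting and DI model with a single call.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing.AddSource("MyTests"));

using var host = builder.Build();
host.Run();
```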
Surely that approach would be 100% feasible for you guys? Why not have a `Microsoft.Testing.Platform.Hosting` package that gives full power to people who don't need to test the extension packages themselves (the vast majority), and a fully-fledged `TestApplication.CreateBuilder()` which relies on `Microsoft.Extensions.Hosting.Abstractions` to provide all the same facilities that any other project has? Then, people can just opt in to using this, similar to how they can opt in to using `OpenTelemetry.Extensions.Hosting`.
> What if users have an incompatible version in their dependency chain? In that case, they cannot use AI functionality, but they can always run their tests—the base minimal functionality is guaranteed.
Sounds good. Again, just do the same thing for hosting abstractions!
> Man.... I get these, but we are punishing the entire ecosystem because of these special cases.
>
> Why is this considered a “special case”? The goal of the testing platform is to test .NET code. If we cannot do that, it seems to me we are failing our purpose.
I think I provided enough examples and suggestions on how to achieve this without removing the capability to "test anything": the Hosting module would be an add-on, sitting on top of the lower-level testing API. If people have those specific needs where they cannot have the dependency, they can just use the lower-level API (basically what exists today). For everyone else, they can use the higher-level API and get all the benefits of the standardized hosting model.
You can also compare this to a "raw CLI console app" vs one with the generic host. The generic host model is just an additional abstraction layer that is completely optional, useful for those who want the "managed" experience with support for logging, DI, telemetry, configuration, etc. If you want a raw CLI, you just don't use it.
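A minimal illustration of that optional layer (nothing MTP-specific here; `Greeter` is a made-up service just to show DI and logging working):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Opting in to the generic host layers config, DI and logging on top of what
// would otherwise be a plain Console.WriteLine-style app.
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddSingleton<Greeter>();

using var host = builder.Build();
host.Services.GetRequiredService<Greeter>().SayHello();

// Made-up example service, just to show constructor injection working.
internal sealed class Greeter(ILogger<Greeter> logger)
{
    public void SayHello() => logger.LogInformation("Hello from the hosted app!");
}
```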
> Surely that approach would be 100% feasible for you guys?
Please see this comment again: https://github.com/microsoft/testfx/issues/5497#issuecomment-2824983662
In short, OpenTelemetry provides a package that allows instrumentations to be written independently of the rest of the tooling, but the rest of the tooling depends on the common libraries that you mention.
So to instrument an app and collect telemetry, you need the common libraries.
> In short, OpenTelemetry provides a package that allows instrumentations to be written independently of the rest of the tooling, but the rest of the tooling depends on the common libraries that you mention.
>
> So to instrument an app and collect telemetry, you need the common libraries.
@nohwnd I don't think that's the case. The "base" OTEL packages give you the static SDK, which allows you to host your own `TracerProvider`, `MeterProvider`, etc. instances and to create listeners and exporters for those without any additional dependencies. This is the "no-dependencies" scenario I refer to in comparison to MTP.
Yes, the .Instrumentation... packages do depend on the higher-level abstraction, but they are not required to get instrumentation going: they just help out by giving you prebuilt implementations. Additionally, you can use the instrumentations with the raw SDK API as well, without ever going through the Hosting abstractions (even though, like you said, they are dependencies in the package).
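For example, tracing with nothing but the base SDK (plus the console exporter package just so the spans go somewhere visible) looks like this:

```csharp
using System.Diagnostics;
using OpenTelemetry;
using OpenTelemetry.Trace;

// Self-hosted TracerProvider: no Microsoft.Extensions.* packages involved.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyTests")
    .AddConsoleExporter() // from the OpenTelemetry.Exporter.Console package
    .Build();

// Instrumentation is plain System.Diagnostics: start activities and tag them.
var source = new ActivitySource("MyTests");
using (var activity = source.StartActivity("SampleTest"))
{
    activity?.SetTag("outcome", "passed");
}
```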
In any case, you went a lot deeper than I anticipated in that comment. I was bringing up OTEL as an example of an approach where you have a package with a "base" implementation, and a separate package with an integration with Microsoft.Extensions.Hosting. How each of those is implemented and what each would contain to me is an implementation detail up to the MTP team, of course. I have no reservations on what you proposed there.
@julealgon I did some experimentation in my spare time and I think we could provide something. The main "problem" I have seen in reusing the common hosting is that it's sadly quite web-oriented (we could decide to throw or ignore the property related to links).
It would definitely help with some cases, but I haven't yet found how to make it high-value enough at the moment. For example, assuming we have the core + extended core, we would not be able to offer OTel with just the extended part; we would still need to modify the core, which means that at this stage we can directly provide extensions like I did in #6511.
> @julealgon I did some experimentation in my spare time and I think we could provide something. The main "problem" I have seen in reusing the common hosting is that it's sadly quite web-oriented (we could decide to throw or ignore the property related to links).
@Evangelink I'm curious to hear more on that, since the hosting abstractions have been used in worker services, Azure Functions, CLI apps, etc. for some time now.
What "property related to links" are you referring to?
> It would definitely help with some cases, but I haven't yet found how to make it high-value enough at the moment. For example, assuming we have the core + extended core, we would not be able to offer OTel with just the extended part; we would still need to modify the core, which means that at this stage we can directly provide extensions like I did in #6511.
See, I think that entire approach is a big mistake. Instead of building on the existing hosting-based OTEL support, you guys are once again replicating existing abstractions there with things such as `IPlatformActivity` and other custom wrappers.
I mean... it is definitely nice to see that there is native OTEL support incoming, but I strongly dislike the design/architecture of this current implementation.
OTEL in my opinion should be one of those "high level concerns" that is built on top of the suggested Microsoft.Testing.Platform.Hosting package: if you found a way to just expose a TestApplication.CreateBuilder using IHostApplicationBuilder behind the scenes, there is nothing to "add" to support OTEL besides just starting Activity objects in the proper places (this part actually could very well be a separate extension, but it would use standard classes like Activity).
If you do still want to isolate the "core" testing capabilities from System.Diagnostics.* dependencies, you could offer a "Diagnostics" extension (say, Microsoft.Testing.Platform.Diagnostics) that included decorator implementations of various core interfaces and added Activity.Start... calls via that mechanism. Then, if someone wants to be able to use MTP to test System.Diagnostics.* classes themselves, they can avoid that dependency and do it (while losing the ability to interface with OTEL since there would be no Activity/Meter instrumentation).
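To make that concrete, here is a rough sketch of the decorator idea; `ITestExecutor` below is a made-up stand-in for whichever core interface would be wrapped, not an actual platform type:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

// Made-up stand-in for whichever core MTP interface would be wrapped.
public interface ITestExecutor
{
    Task ExecuteAsync(string testId);
}

// A "Diagnostics" extension could ship decorators like this, so the core stays
// free of System.Diagnostics.* while opted-in users still get Activity spans.
public sealed class TracingTestExecutor(ITestExecutor inner) : ITestExecutor
{
    private static readonly ActivitySource Source = new("Tests.Execution"); // placeholder name

    public async Task ExecuteAsync(string testId)
    {
        using var activity = Source.StartActivity("test.execute");
        activity?.SetTag("test.id", testId);
        await inner.ExecuteAsync(testId);
    }
}
```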
Consider this a vote for some kind of "extended core" that leverages the Microsoft.Extensions.* dependencies.
I just went through the exercise of developing a test platform extension and ran into the exact same problems that @julealgon did when I realized that none of the abstractions match. It was a shocking revelation. I ended up just creating a WebApplicationBuilder directly inside the ITestFramework implementation, but this just swept the problem into a different corner. In order to get logging to work, I had to write a bunch of bridge classes from, e.g., Microsoft.Extensions.Logging.ILoggerFactory to Microsoft.Testing.Platform.Logging.ILoggerFactory, etc., that added zero value.
But it doesn't stop there. You want to create an extension? Sure... just declare a random static method and then create an MSBuild property pointing to it in the target project. Surely super intuitive to do, particularly when the extension is not a NuGet package but a shared library in your solution. Very idiomatic, just ask people to add completely random custom properties and values to their csproj! Talk about ease of adding extensions!
What the MSBuild item does is simply call the hook's AddExtensions in a generated file. You can always disable the auto-generated file and write that code on your own.
Sure. Again, I know that. I'm just arguing it should be the default and the other "form" shouldn't even need to exist in the first place.
Instead, you guys went the opposite route and made the convoluted thing the default and the idiomatic, standard thing the optional path.
Agreed that getting this to work was convoluted and frustrating.
Anyway - I like the idea of what MTP wants to be, but the current implementation is really frustrating to use.