AspNetCore.Docs
Document what's new in ASP.NET Core for .NET 9 Preview 4
Description
Update the "What's new in ASP.NET Core 9.0" for .NET 9 Preview 4
@adityamandaleeka @mkArtakMSFT Please have folks comment on this issue with the desired content.
FYI: @samsp-msft @JeremyLikness @mikekistler @claudiaregio @JamesNK @SteveSandersonMS @davidfowl
Page URL
https://learn.microsoft.com/aspnet/core/release-notes/aspnetcore-9.0
Content source URL
https://github.com/dotnet/AspNetCore.Docs/blob/main/aspnetcore/release-notes/aspnetcore-9.0.md
Document ID
4e75ad25-2c3f-b28e-6a91-ac79a9c683b6
Article author
@Rick-Anderson @tdykstra @guardrex
Developer exception page improvements
The ASP.NET Core developer exception page is displayed when an app throws an unhandled exception during development. The developer exception page provides detailed information about the exception and request. It's a feature you hate to see but are glad it's there.
The last preview introduced endpoint metadata. While testing the developer exception page, some small quality-of-life improvements were identified. They ship in Preview 4:
- Better text wrapping. Long cookies, query string values and method names no longer add horizontal browser scroll bars.
- Bigger text. This page has a long history (10+ years) and web design has changed over time. The text felt a little small compared to modern designs.
- More consistent table sizes.
Thank you @ElderJames for this contribution.
From @mgravell
New HybridCache library
.NET 9 Preview 4 includes the first release of the new HybridCache API. This API bridges some gaps in the existing IDistributedCache and IMemoryCache APIs, while also adding new capabilities including "stampede" protection (to prevent parallel fetches of the same work) and configurable serialization - all with a simple, clean API.
It is designed to be easy to adopt in new code, or in place of existing caching code.
The best way to illustrate HybridCache is by comparison to existing IDistributedCache code; consider:
public class SomeService(IDistributedCache cache)
{
    public async Task<SomeInformation> GetSomeInformationAsync(string name, int id, CancellationToken token = default)
    {
        var key = $"someinfo:{name}:{id}"; // unique key for this combination
        var bytes = await cache.GetAsync(key, token); // try to get from cache
        SomeInformation info;
        if (bytes is null)
        {
            // cache miss; get the data from the real source
            info = await SomeExpensiveOperationAsync(name, id, token);

            // serialize and cache it
            bytes = SomeSerializer.Serialize(info);
            await cache.SetAsync(key, bytes, token);
        }
        else
        {
            // cache hit; deserialize it
            info = SomeSerializer.Deserialize<SomeInformation>(bytes);
        }
        return info;
    }

    // this is the work we're trying to cache
    private async Task<SomeInformation> SomeExpensiveOperationAsync(string name, int id,
        CancellationToken token = default)
    { /* ... */ }

    // ...
}
That's a lot of work to get right each time, and we had to know about details like serialization. Worse, in the "cache miss" scenario, a busy system can easily end up with multiple concurrent threads all getting a cache miss, all fetching the underlying data, all serializing it, and all sending that data to the cache.
To simplify and improve this with HybridCache, we first need to add the new library Microsoft.Extensions.Caching.Hybrid:
<PackageReference Include="Microsoft.Extensions.Caching.Hybrid" Version="..." />
and register the HybridCache service (much like we're already registering an IDistributedCache implementation):
services.AddHybridCache(); // not shown: optional configuration API
Now we can offload most of our caching concerns to HybridCache:
public class SomeService(HybridCache cache)
{
    public async Task<SomeInformation> GetSomeInformationAsync(string name, int id, CancellationToken token = default)
    {
        return await cache.GetOrCreateAsync(
            $"someinfo:{name}:{id}", // unique key for this combination
            async cancel => await SomeExpensiveOperationAsync(name, id, cancel),
            token: token
        );
    }

    // ...
}
with HybridCache dealing with everything else, including combining concurrent operations. The cancel token here represents the combined cancellation of all concurrent callers - not just the cancellation of the caller we can see (token). In a very high-throughput scenario, we can further optimize this by using the TState pattern, to avoid some overheads from "captured" variables and per-instance callbacks:
public class SomeService(HybridCache cache)
{
    public async Task<SomeInformation> GetSomeInformationAsync(string name, int id, CancellationToken token = default)
    {
        return await cache.GetOrCreateAsync(
            $"someinfo:{name}:{id}", // unique key for this combination
            (name, id), // all of the state we need for the final call, if needed
            static async (state, token) =>
                await SomeExpensiveOperationAsync(state.name, state.id, token),
            token: token
        );
    }

    // ...
}
HybridCache will use your configured IDistributedCache implementation, if any, for the secondary out-of-process caching - for example Redis (more information) - but even without an IDistributedCache, the HybridCache service will still provide in-process caching and "stampede" protection.
A note on object reuse
Because a lot of HybridCache usage will be adapted from existing IDistributedCache code, we need to be mindful that existing code will usually be deserializing on every call - which means that concurrent callers will get separate object instances that cannot interact and are inherently thread-safe. To avoid introducing concurrency bugs into code, HybridCache preserves this behaviour by default, but if your scenario is itself thread-safe (either because the types are fundamentally immutable, or because you're just not mutating them), you can hint to HybridCache that it can safely reuse instances by marking the type (SomeInformation in this case) as sealed and using the [ImmutableObject(true)] annotation, which can significantly reduce the per-call deserialization overheads of CPU and object allocations.
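As a minimal sketch of that opt-in, using the SomeInformation type from the earlier examples ([ImmutableObject(true)] comes from System.ComponentModel; the property names here are illustrative):

```
using System.ComponentModel;

// Hint to HybridCache that instances can safely be reused across
// concurrent callers: the type is sealed and declared immutable.
[ImmutableObject(true)]
public sealed class SomeInformation
{
    // init-only properties: set once at construction, never mutated
    public required string Name { get; init; }
    public required int Id { get; init; }
}
```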
Other HybridCache features
As you might expect for parity with IDistributedCache, HybridCache supports explicit removal by key (cache.RemoveKeyAsync(...)). HybridCache also introduces new optional APIs for IDistributedCache implementations, to avoid byte[] allocations (this feature is implemented by the preview versions of Microsoft.Extensions.Caching.StackExchangeRedis and Microsoft.Extensions.Caching.SqlServer).
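For instance, explicit removal mirrors the key scheme used earlier - a sketch using the RemoveKeyAsync name mentioned above (the exact signature may shift between previews):

```
public class SomeService(HybridCache cache)
{
    // Invalidate the cached entry for one name/id combination.
    public async Task InvalidateAsync(string name, int id, CancellationToken token = default)
        => await cache.RemoveKeyAsync($"someinfo:{name}:{id}", token);
}
```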
Serialization is configured as part of registering the service, with support for type-specific and generalized serializers via the .WithSerializer(...) and .WithSerializerFactory(...) methods, chained from the AddHybridCache(...) call. By default, the library handles string and byte[] internally, and uses System.Text.Json for everything else, but if you want to use protobuf, XML, or anything else: that's easy to do.
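As a sketch of that registration-time configuration, assuming hypothetical ProtobufSerializer<T> and ProtobufSerializerFactory types implementing the library's serializer abstractions (they are not part of the package):

```
services.AddHybridCache()
    // type-specific serializer for one cached type
    .WithSerializer<SomeInformation>(new ProtobufSerializer<SomeInformation>())
    // generalized factory consulted for other types
    .WithSerializerFactory(new ProtobufSerializerFactory());
```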
HybridCache includes support for older .NET runtimes, down to .NET Framework 4.7.2 and .NET Standard 2.0.
Outstanding HybridCache work includes:
- support for "tagging" (similar to how tagging works for "Output Cache"), allowing invalidation of entire categories of data
- backend-assisted cache invalidation, for backends that can provide suitable change notifications
- relocation of the core abstractions to Microsoft.Extensions.Caching.Abstractions
If the HybridCache schtick is running a bit long, I could move most of it to a separate blog post and cross-link from here instead?
@mgravell This issue is tracking the content for the release notes, which includes the aggregated What's New doc and the per-release notes in the dotnet/core repo. The release notes typically capture what the feature is and why it's important and then link to official documentation for all the details. You can, of course, publish related blog post content as well, but official docs and release notes typically shouldn't reference blog posts, because blog posts are generally point-in-time content.
Please feel free to edit the above content as appropriate for what you actually want to be in the release notes. Any additional details can go into a separate issue for adding reference docs about the feature that get linked to from the release notes.
Ability to add static SSR pages to a globally-interactive Blazor Web application
Since .NET 8, the Blazor Web App template has included an option to enable global interactivity, which means that all pages run in either the Server or WebAssembly interactivity mode (or Auto, which combines both). It was not possible to add static SSR pages to those sites, since global interactivity meant that all pages would be interactive.
Many developers requested a way to have static SSR pages on an otherwise globally-interactive site. This was almost possible already, except for a limitation that when the interactive Server/WebAssembly router was active, there was no way to escape from it back into a static SSR context.
As of .NET 9 Preview 4, this is now possible. You can mark any Blazor page component with the new [ExcludeFromInteractiveRouting] attribute, for example:
@page "/weather"
@attribute [ExcludeFromInteractiveRouting]
<h1>The rest of the page</h1>
This causes navigations to the page to exit from interactive routing. That is, inbound navigations will be forced to perform a full-page reload instead of being resolved via SPA-style interactive routing. This means that your top-level App.razor will re-run, allowing you to switch to a different top-level render mode. For example, in your top-level App.razor, you can use the following pattern:
<!DOCTYPE html>
<html>
<head>
    ... other head content here ...
    <HeadOutlet @rendermode="@PageRenderMode" />
</head>
<body>
    <Routes @rendermode="@PageRenderMode" />
    <script src="_framework/blazor.web.js"></script>
</body>
</html>

@code {
    [CascadingParameter]
    private HttpContext HttpContext { get; set; } = default!;

    private IComponentRenderMode? PageRenderMode
        => HttpContext.AcceptsInteractiveRouting() ? InteractiveServer : null;
}
When set up like this, all pages will default to the InteractiveServer render mode, retaining global interactivity, except for pages annotated with [ExcludeFromInteractiveRouting], which will render as static SSR only. Of course, you can replace InteractiveServer with InteractiveWebAssembly or InteractiveAuto to specify a different default global mode.
The new HttpContext.AcceptsInteractiveRouting extension method is simply a helper that makes it easy to detect whether [ExcludeFromInteractiveRouting] is applied to the current page. If you prefer, you can read endpoint metadata manually using HttpContext.GetEndpoint()?.Metadata instead.
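For example, the PageRenderMode property shown earlier could be written against endpoint metadata directly - a sketch that assumes the attribute is surfaced in the endpoint's metadata collection as ExcludeFromInteractiveRoutingAttribute:

```
@code {
    [CascadingParameter]
    private HttpContext HttpContext { get; set; } = default!;

    // Fall back to static SSR when the attribute is present in the
    // current endpoint's metadata; otherwise stay globally interactive.
    private IComponentRenderMode? PageRenderMode
        => HttpContext.GetEndpoint()?.Metadata
               .GetMetadata<ExcludeFromInteractiveRoutingAttribute>() is null
           ? InteractiveServer
           : null;
}
```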
When to consider doing this
This is useful only if you have certain pages that can't work with interactive Server or WebAssembly rendering. For example, those pages might include code that depends on reading/writing HTTP cookies, and hence can only work in a request/response cycle. Forcing those pages to use static SSR mode will force them into this traditional request/response cycle instead of interactive SPA-style rendering.
For pages that do work with interactive SPA-style rendering, you shouldn't force them to use static SSR, as that would simply be less efficient and less responsive for the end user.
Introducing built-in support for OpenAPI document generation
The OpenAPI specification is a standard for describing HTTP APIs. The standard allows developers to define the shape of APIs that can be plugged into client generators, server generators, testing tools, documentation, and more. As of .NET 9 Preview 4, ASP.NET Core provides built-in support for generating OpenAPI documents representing controller-based or minimal APIs via the Microsoft.AspNetCore.OpenApi package.
To take advantage of this feature, install the Microsoft.AspNetCore.OpenApi package in your web project of choice.
# NOTE: This version needs to be updated before the release notes are published
dotnet add package Microsoft.AspNetCore.OpenApi --version 9.0.0-preview.4
In your application's Program.cs:
- Call AddOpenApi to register the required dependencies into your application's DI container.
- Call MapOpenApi to register the required OpenAPI endpoints in your application's routes.
var builder = WebApplication.CreateBuilder();
builder.Services.AddOpenApi();
var app = builder.Build();
app.MapOpenApi();
app.MapGet("/hello/{name}", (string name) => $"Hello {name}!");
app.Run();
Run your application and navigate to http://localhost:5000/openapi/v1.json to view the generated OpenAPI document.
You can also generate OpenAPI documents at build time using the Microsoft.Extensions.ApiDescription.Server MSBuild package. Add the required dependency to your project:
# NOTE: This version needs to be updated before the release notes are published
dotnet add package Microsoft.Extensions.ApiDescription.Server --version 9.0.0-preview.4
In your application's project file, add the following:
<PropertyGroup>
<OpenApiDocumentsDirectory>$(MSBuildProjectDirectory)</OpenApiDocumentsDirectory>
<OpenApiGenerateDocuments>true</OpenApiGenerateDocuments>
</PropertyGroup>
Then, run dotnet build and inspect the generated JSON file in your project directory.
ASP.NET Core's built-in OpenAPI document generation provides support for various customizations and options, including document and operation transformers and the ability to manage multiple OpenAPI documents for the same application.
To learn more about the available APIs, read the new docs on OpenAPI. To learn more about upcoming features in this space, follow the tracking issue.
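As a sketch of a document transformer, one of the customizations mentioned above (hedged: this reflects the transformer API as it was stabilizing; the exact surface in Preview 4 may differ, so check the linked docs):

```
var builder = WebApplication.CreateBuilder();

builder.Services.AddOpenApi(options =>
{
    // Rewrite top-level document info before the document is served.
    options.AddDocumentTransformer((document, context, cancellationToken) =>
    {
        document.Info.Title = "My Hello API"; // hypothetical title
        return Task.CompletedTask;
    });
});

var app = builder.Build();
app.MapOpenApi();
app.Run();
```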
Note: The fix is in the globally installed ANCM module, which comes from the hosting bundle.
Fix for 503s during app recycling in IIS. By default, there is now a 1-second delay between when IIS is notified of a recycle/shutdown and when ANCM tells the managed server to start shutting down. The delay is configurable via the ANCM_shutdownDelay environment variable or by setting the shutdownDelay handler setting; both values are in milliseconds. The delay is mainly to reduce the likelihood of a race where IIS hasn't started queuing requests to go to the new app before ANCM starts rejecting new requests that come into the old app. Slower machines, or machines with heavier CPU usage, may want to adjust this value to reduce the likelihood of 503s.
Example of setting shutdownDelay:
<aspNetCore processPath="dotnet" arguments="myapp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
  <handlerSettings>
    <!-- Milliseconds to delay shutdown by. This doesn't mean incoming requests
         will be delayed by this amount; the old app instance will start shutting
         down after this timeout occurs. -->
    <handlerSetting name="shutdownDelay" value="5000" />
  </handlerSettings>
</aspNetCore>
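The environment-variable form can be set machine-wide instead - a sketch for an elevated Windows command prompt (hedged: IIS worker processes only pick up machine-level environment variables after a restart, and the restart commands shown are the usual IIS service pair):

```
rem Set the ANCM shutdown delay to 5 seconds (value is in milliseconds)
setx /M ANCM_shutdownDelay 5000

rem Restart IIS so worker processes see the new variable
net stop was /y
net start w3svc
```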
@tdykstra Is this issue ok to close? Or are there still pending items?
No pending items.