GenerateBundle Task fails writing to singlefilehost.exe
Build Information
Build: https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_build/results?buildId=1146095
Build error leg or test failing: Microsoft.NET.Publish.Tests.dll.6.WorkItemExecution
Pull request: https://github.com/dotnet/sdk/pull/50648
Error Message
Fill the error message using the step-by-step known issues guidance.
{
"ErrorMessage": "",
"ErrorPattern": "The process cannot access the file '.*singlefilehost\\.exe' because it is being used by another process",
"BuildRetry": false,
"ExcludeConsoleLog": false
}
Known issue validation
Build: :mag_right: https://dev.azure.com/dnceng-public/public/_build/results?buildId=1146095
Error message validated: [The process cannot access the file '.*singlefilehost\.exe' because it is being used by another process]
Result validation: :white_check_mark: Known issue matched with the provided build.
Validation performed at: 9/12/2025 3:27:55 PM UTC
Report
Summary
| 24-Hour Hit Count | 7-Day Hit Count | 1-Month Hit Count |
|---|---|---|
| 0 | 0 | 58 |
@dotnet/illink @agocke We are seeing failures where the GenerateBundle task fails because it cannot access the singlefilehost.exe file. We added retry logic because we thought the issues might be caused by Defender. I thought that helped, but the issue is now happening pretty consistently, so it may be a product issue.
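The retry we added is essentially a retry-on-sharing-violation loop. A minimal sketch of that shape follows; it is not the actual GenerateBundle/HostModel code, and the method name, constants, and delay are made up for illustration:

```csharp
// Minimal sketch of a retry-on-sharing-violation loop -- not the actual
// GenerateBundle/HostModel code; WriteBundleWithRetry, NumberOfRetries, and
// MsDelay are made-up names.
using System.IO;
using System.Threading;

static void WriteBundleWithRetry(string singleFileHostPath)
{
    const int NumberOfRetries = 5;
    const int MsDelay = 500;

    for (int i = 0; ; i++)
    {
        try
        {
            using FileStream fs = new FileStream(
                singleFileHostPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None);
            // ... write the bundle contents into the host ...
            return;
        }
        catch (IOException) when (i < NumberOfRetries)
        {
            // Something (Defender, or an unflushed memory mapping) still has
            // the file open; back off and try again.
            Thread.Sleep(MsDelay);
        }
    }
}
```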
Interestingly, it seems to fail only on macOS and Linux when bundling a Windows app. I can't remember whether it was failing on Windows before we added the retry logic.
Can someone investigate this?
I'm also wondering whether we should merge this codeflow PR which is blocked by the failure. That's where I've seen the failure most consistently recently, so it's possible there's a code change coming in that made the problem worse.
@jtschuster can you take a look?
Ping @jtschuster, any progress on this? I just noticed it in https://github.com/dotnet/sdk/pull/50876
I'm going to load balance a bit. @sbomer can you look instead?
Still investigating. I am able to repro locally on a Mac, but only on the first run of the test; subsequent runs use the same path for the tmp project and don't repro. It's also interesting that it's failing on a read and not a write.
@jtschuster Any update on this? This is impacting a bunch of our CI runs.
Still investigating exactly what's going on, but my hunch is that there's an async memory-map flush that keeps the apphost mapped from CreateAppHost. In particular, it looks like addressing this TODO might fix the issue: https://github.com/dotnet/runtime/blob/b8c39cc4d102518f8608df70e6e3cb7a5c10d48d/src/installer/managed/Microsoft.NET.HostModel/AppHost/HostWriter.cs#L172
ResourceUpdater.Update() maps the destination apphost back into memory, and if there's async flushing after Dispose(), we could run into issues.
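Roughly the lifecycle under suspicion, as a hypothetical sketch (this is not the actual HostWriter/ResourceUpdater code; the method name and structure are illustrative):

```csharp
// Hypothetical sketch of the mapping lifecycle under suspicion -- not the
// actual HostWriter/ResourceUpdater implementation.
using System.IO;
using System.IO.MemoryMappedFiles;

static void RewriteAppHost(string appHostPath)
{
    using FileStream stream = new FileStream(
        appHostPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None);
    using MemoryMappedFile map = MemoryMappedFile.CreateFromFile(
        stream, mapName: null, capacity: 0,
        MemoryMappedFileAccess.ReadWrite, HandleInheritability.None, leaveOpen: true);
    using MemoryMappedViewAccessor accessor = map.CreateViewAccessor();

    // ... patch headers/resources through the mapped view ...

    // Flush before Dispose(); the concern is that without an explicit flush the
    // OS can still be writing dirty pages back asynchronously after the handles
    // are closed, which would explain the "in use by another process" error when
    // GenerateBundle reopens the file.
    accessor.Flush();
}
```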
As a workaround, it looks like adding --disable-build-servers to the PublishCommand.Execute() call makes the issue stop reproing for both me and @sbomer. I'm not sure why, though.
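In a test, that looks roughly like the following (assuming the sdk test infrastructure's PublishCommand, which forwards extra arguments to `dotnet publish`; `testAsset` is a placeholder):

```csharp
// Hypothetical test usage -- assumes the sdk test infra's PublishCommand,
// which forwards extra arguments to `dotnet publish`.
var publishCommand = new PublishCommand(testAsset);
publishCommand
    .Execute("--disable-build-servers")
    .Should()
    .Pass();
```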
Thank you @jtschuster !
Going to leave this open for Build Analysis until the runtime code flows
Hmm maybe I shouldn't have used that wording in the commit
This is also impacting the main branch, which is .NET 11 (https://github.com/dotnet/sdk/pull/51074). Do we need to port it there?
It looks like the commit is held up on the code flow to dotnet/dotnet: https://github.com/dotnet/dotnet/pull/2627
It did make it to 10.0.1xx though: https://github.com/dotnet/dotnet/blob/release/10.0.1xx/src/runtime/src/installer/managed/Microsoft.NET.HostModel/ResourceUpdater.cs#L334-L345
I am experiencing this on .NET 10 RC 1, specifically on Linux publishing a Windows application.
EDIT: RC 2 still has the same issue, even with the retry logic.
@RDI-Blake Thanks for the heads-up. It looks like the fix just missed RC2, but it should be ready for the full release. If you're able to use one of the 10.0.100 daily builds, do you still hit the failure?