
When will windows support be available as announced at CES 2024?

Open johnnynunez opened this issue 1 year ago • 15 comments

When will windows support be available as announced at CES 2024?

johnnynunez avatar Jan 12 '24 10:01 johnnynunez

Yeah really. I only see 1 PR open, and the windows project hasn't been filled out with anything. Does that mean it's soon or is there a whole bunch left to do?

It's crazy that we've been waiting 8+ months with no PyTorch support. I swapped from a 1070 Ti, which ran PyTorch fine, to AMD expecting to be able to use it... Probably not going with AMD in the future.

phanomgames avatar Jan 15 '24 18:01 phanomgames

It takes time! Just look at how long it took to add RDNA support to the ROCm platform. MIOpen wasn't supported there either in the beginning, as the optimized kernels are often hand-written assembly for a specific architecture. No one can tell today; it's ready when it's ready!

Guess we will see full Windows support in late 2024, as Microsoft is pushing forward with AI and will probably require PCs to meet certain minimum inference performance requirements.

I like how they take a dual approach with the acquired Xilinx tech (now called XDNA 1 / XDNA 2) while staying committed to AI on consumer GPUs (RDNA). This will allow some training on the GPUs for us, as the XDNA accelerators are not good at that.

Fully working PyTorch on ROCm on RDNA2+ will be cool once it arrives... Even on Linux, PyTorch on ROCm on RDNA was hit-and-miss for me.

Spacefish avatar Jan 23 '24 01:01 Spacefish

> It takes time! Just look at how long it took to add RDNA support to the ROCm platform. MIOpen wasn't supported there either in the beginning, as the optimized kernels are often hand-written assembly for a specific architecture. No one can tell today; it's ready when it's ready!
>
> Guess we will see full Windows support in late 2024, as Microsoft is pushing forward with AI and will probably require PCs to meet certain minimum inference performance requirements.
>
> I like how they take a dual approach with the acquired Xilinx tech (now called XDNA 1 / XDNA 2) while staying committed to AI on consumer GPUs (RDNA). This will allow some training on the GPUs for us, as the XDNA accelerators are not good at that.
>
> Fully working PyTorch on ROCm on RDNA2+ will be cool once it arrives... Even on Linux, PyTorch on ROCm on RDNA was hit-and-miss for me.

The support is already there; the point is that in the new release they ship the compiled exe, which is what I am referring to. 40 PRs from @apwojcik have been merged into main.

johnnynunez avatar Jan 23 '24 08:01 johnnynunez

Any news?

Burane avatar Mar 08 '24 09:03 Burane

Looking forward to the release. PyTorch on Windows would help me not spend so much time dual booting 👍 Is there any way to compile/build my own builds while we wait for a release?

supernovae avatar Mar 24 '24 17:03 supernovae

> Is there any way to compile/build my own builds while we wait for a release?

@apwojcik Can you help @supernovae with this? Or is it still too complicated?

atamazov avatar Apr 11 '24 17:04 atamazov
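
For anyone who wants to attempt this in the meantime, a rough sketch of scripting the configure-and-build steps is below. It extrapolates from MIOpen's documented Linux CMake flow; the Windows SDK path and the viability of the HIP backend on native Windows are assumptions here, not a verified recipe.

```python
# Untested sketch: drive a from-source MIOpen build with CMake.
# The ROCm/HIP SDK install path is an assumption, not a documented location.
import subprocess
from pathlib import Path

src = Path("MIOpen")          # cloned from https://github.com/ROCm/MIOpen
build = src / "build"
build.mkdir(exist_ok=True)

# MIOPEN_BACKEND=HIP selects the HIP backend (per the project's build docs);
# CMAKE_PREFIX_PATH must point at an existing ROCm/HIP SDK install.
subprocess.run(
    ["cmake", "-DMIOPEN_BACKEND=HIP",
     "-DCMAKE_PREFIX_PATH=C:/Program Files/AMD/ROCm", ".."],
    cwd=build, check=True,
)
subprocess.run(["cmake", "--build", ".", "--config", "Release"],
               cwd=build, check=True)
```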

> > Is there any way to compile/build my own builds while we wait for a release?
>
> @apwojcik Can you help @supernovae with this? Or is it still too complicated?

Still no binaries for ROCm 6 on Windows. I guess they are trying to fix the RDNA compilers, etc., given the problems with tinygrad.

johnnynunez avatar Apr 12 '24 10:04 johnnynunez

Are there any updates on the matter? Maybe something on the current state of the project?

Picus303 avatar Jun 17 '24 13:06 Picus303

> Are there any updates on the matter? Maybe something on the current state of the project?

AMD doesn't care about Windows support; there are like two people working on it, and they won't even update the project roadmap. Just get an NVIDIA GPU.

phanomgames avatar Jun 19 '24 11:06 phanomgames

Meanwhile, almost literally the same minute: https://community.amd.com/t5/ai/new-amd-rocm-6-1-software-for-radeon-release-offers-more-choices/ba-p/688840 (and yes, that should include MIOpen too)

mirh avatar Jun 19 '24 15:06 mirh

> Meanwhile, almost literally the same minute: https://community.amd.com/t5/ai/new-amd-rocm-6-1-software-for-radeon-release-offers-more-choices/ba-p/688840 (and yes, that should include MIOpen too)

From the announcement: "With ROCm 6.1.3, we are making it even easier to develop for AI with Beta-level support for Windows® Subsystem for Linux®, also known as WSL 2."

johnnynunez avatar Jun 19 '24 15:06 johnnynunez
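
As a quick way to check whether a WSL2 setup like the one announced actually works end to end, a minimal sanity check against a ROCm build of PyTorch might look like the sketch below (ROCm builds reuse the torch.cuda namespace, so no HIP-specific calls are needed):

```python
# Sanity check: does this PyTorch build see a ROCm GPU under WSL2?
import torch

print("PyTorch:", torch.__version__)
print("HIP runtime:", torch.version.hip)        # None on CUDA/CPU-only builds
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```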

Ok, so, like... every single Windows issue is closed by now, and I can just presume it should work if built from source? But the biggest blocker for an official release is probably the fact that ROCm on WSL2 is still experimental, and for the time being it still depends on a very specific runtime version. So I guess that has to be handled first, before committing.

mirh avatar Sep 21 '24 00:09 mirh

I bought a 7900 XTX more than a year ago with the promise of AI, and it still sucks on AMD on Windows.

Burane avatar Sep 24 '24 13:09 Burane

> I bought a 7900 XTX more than a year ago with the promise of AI, and it still sucks on AMD on Windows.

RDNA4 comes with RT cores and matrix cores, and ROCm will support it: https://wccftech.com/amd-expanding-rocm-radeon-gpus-apus-design-centers-serbia-next-gen-udna-architecture/amp/

johnnynunez avatar Sep 24 '24 14:09 johnnynunez

Not sure how that has anything to do with anything. As I said, this is probably already possible if you can put up with compiling stuff from source.

mirh avatar Sep 24 '24 16:09 mirh

It's not. @mirh, there are people who've built it for Windows, and there are tons of performance and bug issues. Basically AMD has awful Windows support; they've told us nothing about getting native support and basically just put in WSL 2 support, which is a joke for how long it's taken. I've simply swapped over to NVIDIA and have no issues with PyTorch on Windows.

phanomgames avatar Oct 07 '24 11:10 phanomgames

Oh, I see: https://github.com/ROCm/MIOpen/pull/3263 and https://github.com/ROCm/ROCm/issues/3571. So, anyway, as for getting an official WSL2 release: as I had guessed, it's waiting on the mainlining of overall ROCm support (which does seem close?).

mirh avatar Oct 07 '24 16:10 mirh

Hi all, sorry for the lack of official response on this. While this issue is in the MIOpen repo, it seems many of your comments are seeking information in the context of PyTorch support. We have support for PyTorch on Windows systems via WSL2. We do not have native PyTorch or MIOpen support on Windows at the moment, but we are aware that this is a need for many users and are working on it. Unfortunately, there is no public information we can share about the timeline until an official announcement is made. For now, WSL2 should be used for PyTorch applications on Windows systems.

As there is no further information we can share on this matter at the moment, I'm closing this issue for now. Feel free to comment if further guidance is required.

schung-amd avatar Apr 29 '25 18:04 schung-amd
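
For anyone following the WSL2 route: on ROCm builds of PyTorch, the torch.backends.cudnn flags are routed to MIOpen, so a short script like the sketch below (which assumes a working ROCm/WSL2 install with a visible GPU) is an indirect way to confirm MIOpen is actually handling convolutions:

```python
# Indirect MIOpen check: on ROCm builds, torch.backends.cudnn maps to MIOpen.
import torch

print("cudnn/MIOpen available:", torch.backends.cudnn.is_available())

# benchmark=True asks the backend to search for the fastest convolution
# algorithm (MIOpen on ROCm, cuDNN on NVIDIA hardware).
torch.backends.cudnn.benchmark = True

x = torch.randn(1, 3, 224, 224, device="cuda")
conv = torch.nn.Conv2d(3, 8, 3).to("cuda")
print(conv(x).shape)  # this convolution dispatches through MIOpen on ROCm
```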

Native Windows support can even wait, eventually... But I think a lot of people were grasping at straws trying to learn the requirements for WSL2, because even in the original announcement it was just ONE driver with ONE specific PyTorch build for just ONE chip.

And like, I get it if that was just your "official beta support" while you polish things (I don't know, to be sure you don't screw with corporate)... but my understanding was that even enthusiasts willing to compile everything from source were still left in the dirt.

mirh avatar Apr 29 '25 23:04 mirh

@mirh Yes, unfortunately our WSL requirements are quite strict. A WSL-compatible Adrenalin driver version is required (the latest being 25.3.1 at the time of writing; 24.10.1 and 24.12.1 are previous driver versions with WSL support), and the supported device list is narrow. The compatibility matrices at https://rocm.docs.amd.com/projects/radeon/en/latest/docs/compatibility/wsl/wsl_compatibility.html list the officially supported combinations of ROCm version, Adrenalin driver, and software versions. If there's a lack of clarity in the compatibility matrices, I can pass that on to our docs team.

schung-amd avatar Apr 30 '25 14:04 schung-amd
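
As a footnote for WSL2 users trying to match that matrix, one rough way to read back the installed ROCm version from inside the guest is sketched below; the version-file path is an assumption based on typical Linux ROCm installs, so treat it as a starting point rather than a documented interface:

```python
# Sketch: report the ROCm version inside the WSL2 guest for comparison
# against the compatibility matrix. The path /opt/rocm/.info/version is
# an assumption from common Linux ROCm layouts.
from pathlib import Path

version_file = Path("/opt/rocm/.info/version")
if version_file.exists():
    print("ROCm version:", version_file.read_text().strip())
else:
    print("No ROCm install found at /opt/rocm")
```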