MIOpen
When will windows support be available as announced at CES 2024?
Yeah really. I only see 1 PR open, and the windows project hasn't been filled out with anything. Does that mean it's soon or is there a whole bunch left to do?
It's crazy that we've been waiting 8+ months with no PyTorch support. I swapped from a 1070 Ti, which ran PyTorch fine, to AMD expecting to be able to use it... Probably not going with AMD in the future.
It takes time! Just look at how long it took to add RDNA support to the ROCm platform. MIOpen wasn't supported there in the beginning either, as the optimized kernels are often hand-written assembly for a specific architecture. No one can tell today; it's ready when it's ready!
My guess is we'll see full Windows support in late 2024, as Microsoft is pushing forward with AI and will probably require PCs to meet certain minimum inference performance requirements.
I like how they take a dual approach with the acquired Xilinx tech (now called XDNA 1 / XDNA 2) while staying committed to AI on consumer GPUs (RDNA). This will allow some training on the GPUs for us, as the XDNA accelerators are not good at that.
Fully working PyTorch on ROCm on RDNA2+ will be cool once it arrives... Even on Linux, PyTorch on ROCm on RDNA was hit-and-miss for me.
The support is already there; the point is that in the new release they ship the compiled exe, which is what I am referring to. 40 PRs from @apwojcik have been merged to main.
Any news?
Looking forward to the release. PyTorch on Windows would help me not spend so much time dual booting 👍 Is there any way to compile/build my own builds while we wait for a release?
Is there any way to compile/build my own builds while we wait for a release?
@apwojcik Can you help @supernovae with this? Or is it still too complicated?
Still no binaries for ROCm 6 on Windows. I guess they are trying to fix the RDNA compilers etc., given the problems with tinygrad.
Are there any updates on the matter? Maybe something on the current state of the project?
AMD doesn't care about Windows support; there are like 2 people working on it, and they won't even update the project roadmap. Just get an NVIDIA GPU.
Meanwhile, almost literally the same minute https://community.amd.com/t5/ai/new-amd-rocm-6-1-software-for-radeon-release-offers-more-choices/ba-p/688840 (and yes that should include miopen too)
With ROCm 6.1.3, we are making it even easier to develop for AI with Beta-level support for Windows® Subsystem for Linux®, also known as WSL 2.
Ok, so, like... every single Windows issue is closed by now, and I can just presume it should work if built from source? But the biggest blocker for an official release is probably that ROCm on WSL2 is still experimental, and for the time being it still depends on a very specific runtime version. So I guess that has to be handled first, before committing.
I bought a 7900 XTX more than a year ago with the promise of AI, and it still sucks on AMD on Windows.
RDNA4 comes with RT cores and matrix cores, and ROCm will support it: https://wccftech.com/amd-expanding-rocm-radeon-gpus-apus-design-centers-serbia-next-gen-udna-architecture/amp/
Not sure how that has anything to do with anything. As I said, this is probably already possible if you can put up with compiling stuff from source.
It's not. There are people who've built it for Windows, and there are tons of performance/bug issues, @mirh. Basically AMD has awful Windows support; they've told us nothing about getting native support and basically just put in WSL2 support, which is a joke given how long it's taken. I've simply swapped over to NVIDIA and have no issues with PyTorch on Windows.
Oh, I see. https://github.com/ROCm/MIOpen/pull/3263 https://github.com/ROCm/ROCm/issues/3571 So anyway, as for getting an official WSL2 release: as I had guessed, it's waiting on the mainlining of overall ROCm support (which does seem close?).
Hi all, sorry for the lack of official response on this. While this issue is in the MIOpen repo, it seems many of your comments are seeking information in the context of PyTorch support. We have support for PyTorch on Windows systems via WSL2. We do not have native PyTorch or MIOpen support on Windows at the moment, but we are aware that this is a need for many users and are working on it. Unfortunately there is no public information we can share about the timeline for this until an official announcement is made. For now, WSL2 should be used for PyTorch applications on Windows systems.
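For anyone trying the WSL2 route, a quick sanity check is whether the PyTorch you installed is actually a ROCm build: ROCm wheels set `torch.version.hip`, while CUDA/CPU wheels leave it as `None`. A small sketch (the helper name `rocm_pytorch_status` is mine, not part of any AMD or PyTorch tooling):

```python
import importlib.util

def rocm_pytorch_status() -> str:
    """Report whether PyTorch is importable and, if so, whether it is a ROCm build."""
    # Avoid an ImportError if torch isn't installed at all.
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    # torch.version.hip is populated on ROCm builds of PyTorch, None otherwise.
    hip = getattr(torch.version, "hip", None)
    if hip:
        return f"ROCm build (HIP {hip}), GPU visible: {torch.cuda.is_available()}"
    return "non-ROCm build of torch"

print(rocm_pytorch_status())
```

On a correctly configured WSL2 setup this should report a HIP version with the GPU visible; a "non-ROCm build" result usually means the wheel came from the default PyPI index rather than a ROCm index.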
As there is no further information we can share on this matter at the moment, I'm closing this issue for now. Feel free to comment if further guidance is required.
Native Windows can even wait, eventually... But I think a lot of people were grasping at straws to learn the requirements for WSL2, because even in the original announcement it was just ONE driver with ONE specific PyTorch build for just ONE chip.
And like, I get it if that was just your "official beta support" while you polish things (I don't know, to be sure you don't screw with corporate)... but my understanding was that even enthusiasts wishing to compile everything from source were still left in the dirt.
@mirh Yes, unfortunately our WSL requirements are quite strict. A WSL-compatible Adrenalin driver version is required (the latest being 25.3.1 at the time of writing; 24.10.1 and 24.12.1 are previous driver versions that had WSL support), and the supported device list is narrow. The compatibility matrices at https://rocm.docs.amd.com/projects/radeon/en/latest/docs/compatibility/wsl/wsl_compatibility.html list the officially supported combinations of ROCm version, Adrenalin driver, and software versions. If there's a lack of clarity in the compatibility matrices, I can pass that on to our docs team.