
[PROPOSAL]: Add a note at the top of the guide that only Linux is supported currently.

Open georgeunidev opened this issue 2 years ago • 3 comments

Proposal

Users will spend time following the instructions on native Windows, because the guide doesn't call out that only Linux is currently supported.

Self-service

  • [ ] I'd be willing to do some initial work on this proposal myself.

georgeunidev avatar Nov 12 '22 16:11 georgeunidev

Oh is that why I can't seem to install it on windows? Cuz it's not supported? That would have been nice to know before I spent an hour trying to figure this out. FTR I'm not savvy with any of this, following directions and stumbling through the dark.

In case there's interest, my error on windows:

```
Processing c:\colossalai
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'error'
  error: subprocess-exited-with-error

  python setup.py egg_info did not run successfully.
  exit code: 1

  [24 lines of output]
  Traceback (most recent call last):
    File "<string>", line 2, in <module>
    File "<pip-setuptools-caller>", line 34, in <module>
    File "C:\colossalai\setup.py", line 120, in <module>
      build_cuda_ext = check_cuda_availability(CUDA_HOME) and check_cuda_torch_binary_vs_bare_metal(CUDA_HOME)
    File "C:\colossalai\setup.py", line 63, in check_cuda_availability
      _, bare_metal_major, _ = get_cuda_bare_metal_version(cuda_dir)
    File "C:\colossalai\setup.py", line 16, in get_cuda_bare_metal_version
      raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True)
  TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

  torch.__version__ = 1.13.0+cpu

  Warning: Torch did not find available GPUs on this system. If your intention is to cross-compile, this is not an error. By default, Colossal-AI will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2), Volta (compute capability 7.0), Turing (compute capability 7.5), and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0). If you wish to cross-compile for a single specific architecture, export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

Encountered error while generating package metadata.

See above for output.
```
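For what it's worth, the `TypeError` above happens because `CUDA_HOME` is `None` on a CPU-only install (note `torch.__version__ = 1.13.0+cpu`), and `setup.py` concatenates it with a string. A minimal sketch of the failing helper, with a hypothetical guard added (the function name matches the traceback; the guard and its message are an illustration, not the project's actual code):

```python
import subprocess

def get_cuda_bare_metal_version(cuda_dir):
    # Mirrors the helper at setup.py line 16. On systems without a CUDA
    # toolkit (e.g. a CPU-only torch build on Windows), cuda_dir is None,
    # so `cuda_dir + "/bin/nvcc"` raises:
    #   TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
    # A guard like this (hypothetical) would fail with a clearer message:
    if cuda_dir is None:
        raise RuntimeError(
            "CUDA toolkit not found (CUDA_HOME is None); "
            "Colossal-AI currently requires a Linux system with CUDA."
        )
    return subprocess.check_output(
        [cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True
    )
```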

myndxero avatar Nov 14 '22 11:11 myndxero

Hi @georgeunidev and @myndxero, thank you for your feedback. Colossal-AI currently only supports Linux. Windows support is in our plans, but it is not a high priority, as most projects are Linux-based.

@georgeunidev Would you please submit a PR to point this out in the repo README? We very much welcome contributions from the open source community, thanks.
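For reference, a note along these lines near the top of the README would address the proposal (suggested wording only, not the project's actual text):

```markdown
> **Note**: Colossal-AI currently only supports Linux. Installation on
> native Windows is not supported and will fail during `setup.py`.
```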

binmakeswell avatar Nov 15 '22 08:11 binmakeswell

> Hi @georgeunidev and @myndxero, thank you for your feedback. Colossal-AI currently only supports Linux. Windows support is in our plans, but it is not a high priority, as most projects are Linux-based.
>
> @georgeunidev Would you please submit a PR to point this out in the repo README? We very much welcome contributions from the open source community, thanks.

I think that's really important, because most people probably only have a Windows system, like me. It's such an excellent project; most people should not be turned away by this issue.

yier2333 avatar Nov 16 '22 03:11 yier2333

Updated. Thanks.

binmakeswell avatar Apr 13 '23 10:04 binmakeswell