IN PROGRESS: add initial support for CMake and the MSVC compiler

Open alexanderguzhva opened this issue 1 year ago • 11 comments

Summary: in progress. Some unit tests have started to finish successfully on AWS machines, both Linux and Windows.

Use the AIT_USE_CMAKE_COMPILATION=1 environment variable to enable the CMake path; a minimal sketch follows.
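For illustration, here is a minimal sketch of enabling the flag from Python before compiling a model. Hedged: compile_model and detect_target are the standard AITemplate entry points, but the graph, output directory, and model name here are hypothetical.

import os

# Enable the in-progress CMake-based compilation path; AITemplate reads
# this flag from the environment at compile time.
os.environ["AIT_USE_CMAKE_COMPILATION"] = "1"

from aitemplate.compiler import compile_model
from aitemplate.testing import detect_target

# ... build an AITemplate graph that produces `output_tensor` ...
# module = compile_model(output_tensor, detect_target(), "./tmp", "my_model")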

Linux

  • an AWS g4dn.xlarge with a 24 GB disk is sufficient
  • works with the default CUDA drivers from the 12.1.1 toolkit
  • no particular issues

Windows, MSVC

  • AWS g4dn.xlarge with a 48 GB disk. Install the MSVC Community Edition (not the Build Tools; otherwise CMake won't find the CUDA compiler)
  • works with the default CUDA drivers from the 12.1.1 toolkit
  • run from the x64 Native Tools command window
  • MSVC does not recognize the inline PTX assembly marked as 'asm volatile' that CUTLASS 3.0 uses, so use CUTLASS 2.x instead

Windows, GNU

TODO.

Also TODO:

  • CMake for ROCm
  • CMake for Windows, non-MSVC

Differential Revision: D44608330

alexanderguzhva avatar Apr 06 '23 22:04 alexanderguzhva

This pull request was exported from Phabricator. Differential Revision: D44608330

facebook-github-bot avatar Apr 06 '23 22:04 facebook-github-bot

The PR is at a fairly early stage; there are a lot of things to fix and change :)

alexanderguzhva avatar Apr 06 '23 22:04 alexanderguzhva

  • Fix CMake being located in a directory that contains spaces, such as 'C:\Program Files\CMake'
  • Fix NVCC being unable to handle C++20
  • Fix the library unloading function on Windows MSVC (see the sketch below)
  • Fix empty constant.bin embedding on Windows MSVC
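
For context on the unloading fix, here is a hedged sketch of the kind of per-platform close logic involved. This is illustrative rather than the actual patch; `dll._handle` is the raw OS handle that ctypes keeps on every loaded library.

import ctypes
import sys

def close_shared_library(dll: ctypes.CDLL) -> None:
    # Windows cannot unload via dlclose: release the module handle
    # through kernel32's FreeLibrary instead.
    if sys.platform == "win32":
        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        kernel32.FreeLibrary.argtypes = [ctypes.c_void_p]
        kernel32.FreeLibrary(dll._handle)
    else:
        # Assumption: dlclose is resolvable from the already-loaded
        # libc/libdl symbols of the current process.
        libdl = ctypes.CDLL(None)
        libdl.dlclose.argtypes = [ctypes.c_void_p]
        libdl.dlclose(ctypes.c_void_p(dll._handle))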

alexanderguzhva avatar Apr 22 '23 22:04 alexanderguzhva

I have fixed compilation of the Stable Diffusion demo on my local branch:

X:\meta\AITemplate\examples\05_stable_diffusion>python scripts\demo.py

INFO:aitemplate.backend.build_cache_base:Build cache disabled
[06:06:22] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\CLIPTextModel\model_container.cu:67: Device Runtime Version: 12010; Driver Version: 12010
[06:06:22] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\CLIPTextModel\model_container.cu:81: Hardware accelerator device properties:
  Device:
     ASCII string identifying device: NVIDIA GeForce RTX 3090
     Major compute capability: 8
     Minor compute capability: 6
     UUID: GPU-3ee1c284-3569-1ed4-8be7-979740312cbf
     Unique identifier for a group of devices on the same multi-GPU board: 0
     PCI bus ID of the device: 2
     PCI device ID of the device: 0
     PCI domain ID of the device: 0
  Memory limits:
     Constant memory available on device in bytes: 65536
     Global memory available on device in bytes: 25769279488
     Size of L2 cache in bytes: 6291456
     Shared memory available per block in bytes: 49152
     Shared memory available per multiprocessor in bytes: 102400
[06:06:22] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\CLIPTextModel\model_container.cu:85: Init AITemplate Runtime with 1 concurrency
[06:06:25] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\UNet2DConditionModel\model_container.cu:67: Device Runtime Version: 12010; Driver Version: 12010
[06:06:25] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\UNet2DConditionModel\model_container.cu:81: Hardware accelerator device properties:
  (same device properties as the CLIPTextModel block above)
[06:06:25] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\UNet2DConditionModel\model_container.cu:85: Init AITemplate Runtime with 1 concurrency
[06:06:25] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\AutoencoderKL\model_container.cu:67: Device Runtime Version: 12010; Driver Version: 12010
[06:06:25] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\AutoencoderKL\model_container.cu:81: Hardware accelerator device properties:
  (same device properties as the CLIPTextModel block above)
[06:06:25] X:\meta\AITemplate\examples\05_stable_diffusion\tmp\AutoencoderKL\model_container.cu:85: Init AITemplate Runtime with 1 concurrency

CLIP works

tensor([[[-0.3887,  0.0229, -0.0522,  ..., -0.4902, -0.3064,  0.0674],
         [-0.3738, -1.4619, -0.3401,  ...,  0.9512,  0.1881, -1.1045],
         [-0.5186, -1.4736, -0.2878,  ...,  1.0498,  0.0699, -1.0342],
         ...,
         [ 0.4956, -0.9927, -0.6763,  ...,  1.6074, -1.0830, -0.1902],
         [ 0.4954, -0.9849, -0.6709,  ...,  1.6504, -1.1074, -0.1786],
         [ 0.4902, -0.8467, -0.5015,  ...,  1.6191, -1.0361, -0.2173]],

        [[-0.3887,  0.0229, -0.0522,  ..., -0.4902, -0.3064,  0.0674],
         [ 0.0278, -1.3291,  0.3137,  ..., -0.5273,  0.9863,  0.6665],
         [-0.2030,  0.4800,  1.5127,  ...,  0.1174,  1.0078, -0.1033],
         ...,
         [ 0.8833, -0.6074,  1.6621,  ..., -0.0296, -0.0363, -1.2656],
         [ 0.9160, -0.6055,  1.6094,  ..., -0.0311, -0.0511, -1.2725],
         [ 0.8296, -0.5845,  1.6670,  ...,  0.0148, -0.0023, -1.2568]]],
       device='cuda:0')

UNet does not work yet: OSError: exception: access violation reading 0x00000000000000BE

X:\meta\AITemplate\examples\05_stable_diffusion\scripts\src\pipeline_stable_diffusion_ait.py:376
in __call__

  373 │   │   │   │   latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)
  374 │   │   │
  375 │   │   │   # predict the noise residual
❱ 376 │   │   │   noise_pred = self.unet_inference(
  377 │   │   │   │   latent_model_input, t, encoder_hidden_states=text_embeddings
  378 │   │   │   )
  379

X:\meta\AITemplate\examples\05_stable_diffusion\scripts\src\pipeline_stable_diffusion_ait.py:138
in unet_inference

  135 │   │   │   shape = exe_module.get_output_maximum_shape(i)
  136 │   │   │   shape[0] = self.batch * 2
  137 │   │   │   ys.append(torch.empty(shape).cuda().half())
❱ 138 │   │   exe_module.run_with_tensors(inputs, ys, graph_mode=False)
  139 │   │   noise_pred = ys[0].permute((0, 3, 1, 2)).float()
  140 │   │   return noise_pred
  141

C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\aitemplate\compiler\model.py:550 in run_with_tensors

   547 │   │   │   outputs,
   548 │   │   │   name="outputs",
   549 │   │   )
❱  550 │   │   outputs_ait = self.run(
   551 │   │   │   _convert_tensor_args(inputs),
   552 │   │   │   _convert_tensor_args(outputs),
   553 │   │   │   stream_ptr=stream_ptr,

C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\aitemplate\compiler\model.py:453 in run

   450 │   │   the maximum shape. The output memory blobs that are passed in to Run()
   451 │   │   should be interpreted and possibly truncated according to these sizes.
   452 │   │   """
❱  453 │   │   return self._run_impl(
   454 │   │   │   inputs, outputs, stream_ptr, sync, graph_mode, outputs_on_host=False
   455 │   │   )
   456

C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\aitemplate\compiler\model.py:392 in _run_impl

   389 │   │   )
   390 │   │
   391 │   │   if not outputs_on_host:
❱  392 │   │   │   self.DLL.AITemplateModelContainerRun(
   393 │   │   │   │   self.handle,
   394 │   │   │   │   c_inputs,
   395 │   │   │   │   ctypes.c_size_t(len(inputs)),

C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\aitemplate\compiler\model.py:194 in _wrapped_func

   191 │   │   │   method = getattr(self.DLL, name)
   192 │   │   │
   193 │   │   │   def _wrapped_func(*args):
❱  194 │   │   │   │   err = method(*args)
   195 │   │   │   │   if err:
   196 │   │   │   │   │   raise RuntimeError(f"Error in function: {method.__name__}")
   197

VAE (after swapping UNet back to the original) does not work yet either: we reach exe_module.run_with_tensors(inputs, ys, graph_mode=False) in vae_inference, and then demo.py exits with no error.
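
One way to localize faults like the access violation above is to validate the tensors and force synchronous execution so the failure surfaces at the offending call. This is a hedged sketch: it assumes run_with_tensors forwards a sync flag to run, as the traceback above suggests.

# Hypothetical debugging wrapper around an AITemplate module: check that
# everything handed to the runtime is a contiguous CUDA tensor, then run
# synchronously so an access violation points at the failing launch.
def checked_run(exe_module, inputs, ys):
    for name, t in inputs.items():
        assert t.is_cuda and t.is_contiguous(), f"bad input: {name}"
    for i, t in enumerate(ys):
        assert t.is_cuda and t.is_contiguous(), f"bad output: {i}"
    exe_module.run_with_tensors(inputs, ys, sync=True, graph_mode=False)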

I will continue looking into this, and hopefully get UNet + VAE working soon™

hlky avatar May 01 '23 05:05 hlky

Hi @alexanderguzhva!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

facebook-github-bot avatar Jun 02 '23 07:06 facebook-github-bot