
torch compile tensorrt error

Open fumin opened this issue 1 year ago • 4 comments

🐛 Describe the bug

When compiling segment_anything with torch_tensorrt, I got the error "We don't have an op for aten::floor_divide but it isn't a special case."

My reproducible code is:

import torch
import torch_tensorrt
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
# frame is an input image (e.g. an HxWx3 array) captured earlier in the script.
input_signature = ([torch_tensorrt.Input(shape=frame.shape, dtype=torch.half)])
enabled_precisions = {torch.half}
sam.image_encoder = torch_tensorrt.compile(sam.image_encoder, input_signature=input_signature, enabled_precisions=enabled_precisions)

The error message is:

Traceback (most recent call last):
  File "realtime.py", line 144, in <module>
    main()
  File "realtime.py", line 116, in main
    sam = newSam(frame)
  File "realtime.py", line 86, in newSam
    sam.image_encoder = torch_tensorrt.compile(sam.image_encoder, input_signature=input_signature, enabled_precisions=enabled_precisions)
  File "/home/topunion/.local/lib/python3.8/site-packages/torch_tensorrt/_compile.py", line 133, in compile
    return torch_tensorrt.ts.compile(
  File "/home/topunion/.local/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py", line 139, in compile
    compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::floor_divide but it isn't a special case.  Argument types: int, int, 

Candidates:
	aten::floor_divide(Tensor self, Tensor other) -> Tensor
	aten::floor_divide.Scalar(Tensor self, Scalar other) -> Tensor
	aten::floor_divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)

Versions

Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31

Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 530.41.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 113
Model name: AMD Ryzen 5 3600 6-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 3600.0000
CPU min MHz: 2200.0000
BogoMIPS: 7200.08
Virtualization: AMD-V
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 3 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] torch==2.0.1+cu118
[pip3] torch-tensorrt==1.4.0
[pip3] torchaudio==2.0.2+cu118
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect

fumin — Jun 19 '23

I ran into the same problem; it seems the attention module in ImageEncoderViT is the source of the trouble.

Youngluc — Jun 27 '23

This is because torch-trt doesn't support the % op. You need to change % to int(a - b * torch.floor(torch.div(a, b))); the computation is the same, but it can be converted by torch-trt. The function window_partition in segment_anything/modeling/image_encoder.py contains a % op.
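
For example, a minimal sketch of that rewrite (the helper name is just for illustration, not part of segment_anything):

import torch

def floor_div_free_mod(a: int, b: int) -> int:
    # a % b rewritten as a - b * floor(a / b); torch-trt can convert these
    # tensor ops even though it cannot convert the integer % op.
    return int(a - b * torch.floor(torch.div(a, b)))

# Quick sanity check against the builtin operator:
assert floor_div_free_mod(10, 3) == 10 % 3
assert floor_div_free_mod(1024, 14) == 1024 % 14

In window_partition, each % in the padding computation would then be replaced by a call to this helper.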

Mythos-Rudy — Oct 12 '23

Hi @Mythos-Rudy, have you been able to compile the encoder part to TensorRT?

adithya-Avataar — Oct 18 '23

> This is because torch-trt doesn't support the % op. You need to change % to int(a - b * torch.floor(torch.div(a, b))); the computation is the same, but it can be converted by torch-trt. The function window_partition in segment_anything/modeling/image_encoder.py contains a % op.

This was helpful for me, thank you.

I use the code below to replace the % op:

def TakeRemainder(x: int, y: int) -> int:
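    # Equivalent to x % y for non-negative x and positive y; note that int()
    # truncates toward zero, so this differs from % when x is negative.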
    return x - y * int(x / y)

demuxin — May 15 '24