Autograd warning spam [control_flow_graph.cpp:get_store_forwarding_data@341] stmt 130 loaded in stmt 131 before storing.
The code below causes the warnings shown below; I haven't been able to work around the problem, and I'm unsure whether it actually matters.
Taichi nightly (commit f74d75d9)
```python
import taichi as ti
from taichi.math import vec2, vec3, mat3
import torch


@ti.kernel
def project_to_image_kernel(
        position: ti.types.ndarray(vec3, ndim=1),
        T_image_camera: ti.types.ndarray(ndim=2),  # (3, 3)
        points: ti.types.ndarray(vec2, ndim=1)):
    for idx in range(position.shape[0]):
        image_camera = mat3([T_image_camera[i, j]
                             for i in ti.static(range(3)) for j in ti.static(range(3))])
        p = position[idx]
        uv = (image_camera @ p) / p.z
        points[idx] = uv.xy


ti.init(arch=ti.cuda, debug=True, log_level=ti.DEBUG)

position = torch.randn(100, 3).requires_grad_(True)
T_image_camera = torch.eye(3).requires_grad_(True)
point = torch.zeros(100, 2).requires_grad_(True)

project_to_image_kernel(position, T_image_camera, point)

point.grad = torch.ones_like(point)
project_to_image_kernel.grad(position, T_image_camera, point)
```
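One variant I can think of (a sketch only; I haven't confirmed it avoids the warning) is to build the matrix element by element instead of through the nested static comprehension, in case the comprehension is what trips the store-forwarding pass:

```python
@ti.kernel
def project_to_image_kernel_v2(
        position: ti.types.ndarray(vec3, ndim=1),
        T_image_camera: ti.types.ndarray(ndim=2),  # (3, 3)
        points: ti.types.ndarray(vec2, ndim=1)):
    for idx in range(position.shape[0]):
        # Element-wise copy instead of the nested static comprehension;
        # an untested guess at sidestepping the store-forwarding warning.
        image_camera = mat3(0.0)
        for i in ti.static(range(3)):
            for j in ti.static(range(3)):
                image_camera[i, j] = T_image_camera[i, j]
        p = position[idx]
        uv = (image_camera @ p) / p.z
        points[idx] = uv.xy
```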
Console output at DEBUG log level:
```
[Taichi] version 1.7.0, llvm 15.0.4, commit f74d75d9, linux, python 3.10.11
[Taichi] Starting on arch=x64
(Snip)
[D 12/27/23 21:37:27.739 178147] [kernel_compilation_manager.cpp:compile_and_cache_kernel@244] Cache kernel 'project_to_image_kernel_c76_0' (key='T06d24e7044ed87004d85c05738e85ea0c89e55c3f8a958abf810a2ae5b36276d')
[D 12/27/23 21:37:27.862 178147] [kernel_compilation_manager.cpp:compile_and_cache_kernel@244] Cache kernel 'project_to_image_kernel_c77_0_reverse_grad' (key='T61b02bfef15d20dd0ec5492f552c7b0f191bc99a18d55f71245d2bbd327aeb8b')
[W 12/27/23 21:37:28.029 178150] [control_flow_graph.cpp:get_store_forwarding_data@341] stmt 130 loaded in stmt 131 before storing.
(10 more lines like this)
[D 12/27/23 21:37:28.040 178150] [control_flow_graph.cpp:determine_ad_stack_size@1471] Unable to determine the necessary size for autodiff stacks [$123]. Use configured size (CompileConfig::default_ad_stack_size) 32 instead.
```
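The last line says the autodiff stack size could not be inferred. If that fallback ever matters, the configured size can presumably be raised at init time; the kwarg below mirrors the CompileConfig::default_ad_stack_size field named in the log (I haven't verified whether it affects the warnings):

```python
# Replace the ti.init(...) call in the repro; default_ad_stack_size
# mirrors the CompileConfig field referenced in the log message.
ti.init(arch=ti.cuda, debug=True, log_level=ti.DEBUG,
        default_ad_stack_size=64)
```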
Update: despite these warnings, it still seems to produce a correct gradient (though whatever I'm doing in the "larger" version of this code makes the gradient kernel take a long time to compile!).
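For reference, here is a sketch of how the gradient can be checked against pure PyTorch, appended to the repro script above (project_reference is just a helper name I made up; it recomputes the kernel's math):

```python
import torch

def project_reference(position, T_image_camera):
    # Same math as the kernel: uv = (K @ p) / p.z, keeping x and y.
    proj = position @ T_image_camera.T   # each row is K @ p
    return proj[:, :2] / position[:, 2:3]

pos_ref = position.detach().clone().requires_grad_(True)
K_ref = T_image_camera.detach().clone().requires_grad_(True)
project_reference(pos_ref, K_ref).backward(torch.ones(100, 2))

# The Taichi grad kernel above should have accumulated into
# position.grad and T_image_camera.grad; compare with the reference.
print(torch.allclose(position.grad, pos_ref.grad, atol=1e-5))
print(torch.allclose(T_image_camera.grad, K_ref.grad, atol=1e-5))
```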
Did you ever figure out what was causing these warnings and the long compile time?