taichi
Productive, portable, and performant GPU programming in Python.
This is a single-buffer implementation of parallel scan (prefix sum). References: https://developer.download.nvidia.com/compute/cuda/1.1-Beta/x86_website/projects/scan/doc/sc and https://github.com/NVIDIA/cuda-samples/blob/master/Samples/2_Concepts_and_Techniques/shfl_sc This will be useful for certain Taichi-based implementations such as PBD (position-based dynamics).
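For readers unfamiliar with the algorithm, here is a minimal sketch of an inclusive prefix scan in plain Python (illustrative only, not the Taichi implementation referenced above). It follows the Hillis-Steele scheme, where the stride doubles each pass, so a GPU version finishes in O(log n) parallel passes:

```python
def inclusive_scan(a):
    """Hillis-Steele inclusive prefix sum (sequential model).

    Each pass adds the element `stride` positions back, and the
    stride doubles every pass.  On a GPU each pass would be one
    parallel step over the buffer, with a barrier between passes.
    """
    out = list(a)
    stride = 1
    while stride < len(out):
        # Read from a snapshot so updates within a pass don't
        # interfere (a single-buffer GPU version synchronizes instead).
        prev = list(out)
        for i in range(stride, len(out)):
            out[i] = prev[i] + prev[i - stride]
        stride *= 2
    return out
```

For example, `inclusive_scan([1, 2, 3, 4])` yields `[1, 3, 6, 10]`.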
1. Change the indicator label from `error-prone` to `full-ci`. 2. Always run the full CI tests when the destination branch is an rc branch.
The test case for forward-mode autodiff works after #5555 was fixed. Related issue: #5708
**Concisely describe the proposed feature** The current `clear_gradients` clears both the adjoint and the dual. **Describe the solution you'd like (if any)** Support clearing the adjoint and the dual separately.
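As a rough sketch of what the requested behavior could look like (all names here are illustrative, not Taichi's actual API), keyword flags on the clearing call would let callers reset either gradient buffer independently:

```python
class GradStore:
    """Illustrative container pairing a value with its adjoint
    (reverse-mode) and dual (forward-mode) gradient buffers.
    This is a hypothetical model, not Taichi's implementation."""

    def __init__(self, n):
        self.adjoint = [0.0] * n
        self.dual = [0.0] * n

    def clear_gradients(self, adjoint=True, dual=True):
        # Keyword flags let callers clear either gradient buffer
        # independently instead of always wiping both.
        if adjoint:
            self.adjoint = [0.0] * len(self.adjoint)
        if dual:
            self.dual = [0.0] * len(self.dual)
```

With such a signature, `clear_gradients(dual=False)` would reset only the adjoint while leaving forward-mode duals intact.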
**Describe the bug** When using the `print` function in a `ti.kernel`, the printed output appears in no particular order (random order). **To Reproduce** ```py # sample code here import taichi as ti arch=ti.cuda ti.init(arch=arch)...
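Out-of-order output is typical of any parallel execution model: the iterations of a parallelized kernel complete in a scheduler-dependent order. A small plain-Python sketch (threads standing in for GPU threads) shows the same effect, so this is an assumption about the cause rather than a confirmed diagnosis:

```python
import threading

def parallel_log(n):
    """Each worker appends its id to a shared list.  The completion
    order depends on scheduling, so the sequence is not guaranteed
    to be 0..n-1 -- the same effect makes prints from a parallel
    kernel appear out of order.  The *set* of outputs is complete
    even though the order is not."""
    results = []
    lock = threading.Lock()

    def worker(i):
        with lock:
            results.append(i)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Sorting the collected results recovers the full range, which is why tests of parallel output should assert on the set of values rather than their order.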
**Version** `taichi-nightly 1.0.5.post20220810` **Describe the bug** ```py import taichi as ti @ti.func def test_static_loop(): for i in ti.static(range(5)): x = 0.1 if x == 0.0: print("condition statement") break print("after condition")...
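For context on why `break` inside `ti.static(range(...))` is subtle: a static loop is unrolled at compile time, so the loop itself disappears and the body is emitted once per iteration. A rough plain-Python model of that unrolling (illustrative only, not Taichi's compiler) makes the issue concrete: a `break` must be resolvable while unrolling, because at runtime there is no loop left to exit:

```python
def unroll(n, body):
    """Mimic compile-time loop unrolling: the 'loop' vanishes and the
    body is emitted once per iteration.  `body(i)` returns the
    statements for iteration i plus a compile-time-known stop flag;
    a stop folds the `break` away during unrolling."""
    emitted = []
    for i in range(n):
        stmts, stop = body(i)
        emitted.extend(stmts)
        if stop:  # break decided at "compile" time
            break
    return emitted
```

For example, a body that stops at `i == 2` emits only the first three iterations; a stop condition that is *not* known at unroll time has no meaningful place in this scheme.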
The `shift ptr` IR related to dynamic indexing should be supported by the current autodiff.
Related issue = #5478
**Describe the bug** ```py # sample code here import taichi as ti arch=ti.vulkan ti.init(arch=arch) ivec3 = ti.types.vector(3, ti.i32) g_init_builder = ti.graph.GraphBuilder() g_init_substep = g_init_builder.create_sequential() mod =...
GGUI caches the vertex buffer with the hash of a Taichi field as the key; `np.ndarray` is not hashable but can be used as a kernel argument. This PR adds support for `np.ndarray` to be...
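The underlying Python detail: `np.ndarray` does not implement `__hash__`, so it cannot serve directly as a dict key. One common workaround (a sketch of the general pattern, not necessarily GGUI's actual scheme; a plain list stands in for the ndarray here, since lists are also unhashable) is to fall back to object identity:

```python
def cache_key(obj):
    """Return a dict key for `obj`: hashable objects (like Taichi
    fields) hash directly, while unhashable buffers (like
    np.ndarray, modeled here by a list) fall back to `id`, which is
    stable for the lifetime of the object."""
    try:
        return hash(obj)
    except TypeError:
        return id(obj)
```

The identity fallback is cheap, but note the trade-off: the key tracks the Python object, not its contents, so a mutated array maps to the same cache entry.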