
Work in progress, camera array renderer - feature extraction plugin

dchichkov opened this pull request on Sep 19, 2018 • 21 comments

Erwin, could you please do an initial review of / give advice on the feature extraction integration? The code is more or less alive, but needs your input.

  1. what is the best way to call the constructor EGLRendererTensorRT() and pass the model name?
  2. what is the best way to call copyCameraImageFeatures?
  3. I would really appreciate a kick in the right direction on how to convince the various build systems to find the headers. In my hacked version, I've just used CMake's FIND_PACKAGE(CUDA), INCLUDE_DIRECTORIES(${CUDA_INCLUDE_DIRS}) and target_link_libraries(OpenGLWindow ${DL} ${CUDA_CUDART_LIBRARY} nvinfer), but I'm not sure what to do with the other build systems.
  4. It looks like TensorRT might want C++11: SET(CMAKE_CXX_STANDARD 11)

dchichkov avatar Sep 19 '18 05:09 dchichkov

It does seem to be alive:

./test.sh 
pybullet build time: Sep 20 2018 01:48:32
argv[0]=--window_backend=2
argv[1]=--render_device=0
argv[2]=--width=160
argv[3]=--height=160
Loaded EGL 1.4 after reload.
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=Tesla M60/PCIe/SSE2
GL_VERSION=4.6.0 NVIDIA 396.44
GL_SHADING_LANGUAGE_VERSION=4.60 NVIDIA
Version = 4.6.0 NVIDIA 396.44
Vendor = NVIDIA Corporation
Renderer = Tesla M60/PCIe/SSE2
b3Warning[/home/ubuntu/github/bullet3/examples/SharedMemory/plugins/eglPlugin/eglRendererTensorRT.cpp,44]:
GLTensorRTLogger: 3 UFFParser: parsing input
b3Warning[/home/ubuntu/github/bullet3/examples/SharedMemory/plugins/eglPlugin/eglRendererTensorRT.cpp,44]:
...
...
b3Warning[/home/ubuntu/github/bullet3/examples/SharedMemory/plugins/eglPlugin/eglRendererTensorRT.cpp,44]:
GLTensorRTLogger: 3 Block size 69376
b3Warning[/home/ubuntu/github/bullet3/examples/SharedMemory/plugins/eglPlugin/eglRendererTensorRT.cpp,44]:
GLTensorRTLogger: 3 Block size 0
b3Warning[/home/ubuntu/github/bullet3/examples/SharedMemory/plugins/eglPlugin/eglRendererTensorRT.cpp,44]:
GLTensorRTLogger: 3 Total Activation Memory: 6183168
b3Warning[/home/ubuntu/github/bullet3/examples/SharedMemory/plugins/eglPlugin/eglRendererTensorRT.cpp,44]:
GLTensorRTLogger: 3 Data initialization and engine generation completed in 0.0241021 seconds.
ven = NVIDIA Corporation
inputDataHost: 0.000000 0.847059 0.000000 0.000000 0.600000

905:0.454386 633:0.038937 906:0.021344 795:0.020545 907:0.018450 
test.py:29: RuntimeWarning: the frombuffer defaults may change in a future release; for portability, change the call to read:
  frombuffer(mode, size, data, 'raw', mode, 0, 1)
  Image.frombuffer('RGBA', (width,height), rgba, 'raw').save("0." + sys.argv[1] + ".png")
Destroy EGL OpenGL window.

dchichkov avatar Sep 20 '18 06:09 dchichkov

Thanks Dmitry,

what is the best way to call the constructor EGLRendererTensorRT() and pass the model name?

The 'executePluginCommand' allows the user to pass a string, an array of integers and an array of floats. It is up to you to create a protocol for this.
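
For example, one possible protocol from the pybullet side could look like the sketch below (the command id, batch size and model file name are made up for illustration; they are not part of any existing API):

import pybullet as p

p.connect(p.DIRECT)
pluginId = p.loadPlugin("eglRendererPlugin")  # or the TensorRT-enabled plugin, once it exists
# textArgument carries the UFF model file name; intArgs[0] is an application-defined
# command id (here 0 meaning "load model"), intArgs[1] a maximum batch size
p.executePluginCommand(pluginId, textArgument="squeezenet.uff", intArgs=[0, 16], floatArgs=[])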

what is the best way to call copyCameraImageFeatures?

You could add some interface in UrdfRenderingInterface.h. By the way, does TensorRT have to be in the same plugin as EGL? Or could it be a separate plugin, just using CUDA/TensorRT? If so, we can more easily make customizations, a separate API, etc.

I would really appreciate a kick in the right direction on how to convince the various build systems to find the headers.

In premake4, we explicitly enable a feature (--enable_tensorrt) and pass the include and library directories. I can look into that. For the pybullet pip version it will be more complicated. Since many people don't have the CUDA/RT libraries installed, we cannot enable it unless we can avoid compile/link errors.

Let's first get the cmake support working.

SET(CMAKE_CXX_STANDARD 11)

Yes, if (and only if) TensorRT is enabled, we enable this flag. We do the same for the GRPC plugin; it requires C++11 as well. It would be best to add a flag in CMake to enable/disable TensorRT support.
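
A minimal CMake sketch of what such a guard could look like (the BULLET2_USE_TENSORRT option name and the nvparsers library are assumptions, and the target_link_libraries line belongs wherever the OpenGLWindow target is defined):

OPTION(BULLET2_USE_TENSORRT "Enable TensorRT feature extraction in the EGL renderer plugin" OFF)
IF(BULLET2_USE_TENSORRT)
  FIND_PACKAGE(CUDA REQUIRED)
  INCLUDE_DIRECTORIES(${CUDA_INCLUDE_DIRS})
  SET(CMAKE_CXX_STANDARD 11)                  # TensorRT headers need C++11
  ADD_DEFINITIONS(-DBT_USE_TENSOR_RT)         # compile-time guard used in the plugin sources
  TARGET_LINK_LIBRARIES(OpenGLWindow ${DL} ${CUDA_CUDART_LIBRARY} nvinfer nvparsers)
ENDIF()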

Please undo the minor change in PhysicsServerCommandProcessor to avoid the conflict in the pull request.

erwincoumans avatar Sep 20 '18 14:09 erwincoumans

By the way, does TensorRT have to be in the same plugin as EGL? Or could it be a separate plugin, just using CUDA/TensorRT? If so, we can more easily make customizations, a separate API, etc.

It doesn't have to be in the same plugin. It requires correctly initialized OpenGL (to get a pointer to the pixels in the GPU). We should either be in the same process as the eglMakeCurrent() call, or bind/initialize EGL in the same way.

The cooperation that it requires from eglPlugin is a getCameraImage(width, height, viewMatrix, projectionMatrix) call with a flag that asks it to do only the rendering and omit any pixel transfer calls. Or, alternatively, support for a getCameraImage(width, height, viewMatrix[], projectionMatrix[]) call that renders multiple views stacked vertically. Ideally with single-pass/multi-viewport rendering, but even sequential render() calls plus a batched inference call significantly speed up the process (10x faster for thirty 160x160 cameras).

dchichkov avatar Sep 20 '18 17:09 dchichkov

Please, could you give a bit of a code review?

You could add some interface in the UrdfRenderingInterface.h.

Ok. I've sketched a "Render Camera Array / Run Inference" interface, along the lines of: p.getCameraArrayImage(cameraArraySize, w, h, viewMatrices[cameraArraySize][16], projection[cameraArraySize][16], lightDir[3], lightColor[3], lightDist, hasShadow, lightAmbientCoeff, lightDiffuseCoeff, lightSpecularCoeff, renderer). It will return cameraArraySize, width, height, rgb, featureLength, features.

It renders images from a 'camera array', with the specified camera matrices, using single-pass/multi-viewport rendering, and then runs inference in a batch in TensorRT. I've pushed an early sketch; both the single-pass/multi-viewport rendering and the batch inference steps have been working. It compiles, but doesn't yet link, because I've eviscerated GLInstancingRenderer. Would really appreciate early feedback / code review. Thanks!

Sample camera array rendering: (image attached)

dchichkov avatar Sep 27 '18 06:09 dchichkov

eviscerated GLInstancingRenderer. Would really appreciate early feedback / code review. Thanks!

Ah, I didn't realize your pull request contains that many changes; it starts to look like a refactoring of existing files.

My biggest concern is that things keep on compiling/linking/running if the new features are not available, in particular CUDA, TensorRT and geometry shaders. TinyRendererPlugin and eglRendererPlugin can co-exist, and since eglRenderer isn't working in many cases (driver issues etc), it has to be enabled using the 'loadPlugin' command before activating it.

I'll look more into the patch soon.

Can you give a paragraph or so on what feature(s) you add, and how they interact?

  1. TensorRT support, feature extraction, what does it bring
  2. getCameraImageArray
  3. geometry shaders, why/how
  4. refactoring GLInstancingRenderer why/how

If those 4 steps are the main ones, ideally the work for each step would have been split into its own separate patch, so they can be applied/reviewed one by one, if at all possible.

Anyway, let's iterate and see how/if we can get this in.

Finally, I ran clang-format on the source tree a few days ago; the patch needs to take it into account. What is a good way to handle this? Rebase?

erwincoumans avatar Sep 28 '18 15:09 erwincoumans

  • TensorRT support, feature extraction, what does it bring

It allows running a DNN (for example a TensorFlow model) on a rendered image, directly in GPU memory.

Currently, in the case of the GPU-based OpenGL renderer, images are rendered into GPU memory and then copied to the CPU. Multiple memory copy operations are then performed on the images on the CPU (a reverse pass over memory to flip the image, RGBA->RGB conversion, shared memory, copying in chunks, etc). In the RL case, or in image post-processing pipelines (DNN post-processing to make the image more realistic, etc), the image is often copied back to the GPU. The module allows running these operations directly in GPU memory and then copying out only the DNN output. For some RL cases, this optimizes the pipeline significantly, as the extracted embedding is much smaller and can easily be stored in the experience replay buffers, etc.
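
As a rough back-of-the-envelope illustration of the data-size argument (plain Python arithmetic; the 160x160 resolution and the ~1000-class SqueezeNet output are the numbers used elsewhere in this PR):

# per camera, per frame
rgba_bytes    = 160 * 160 * 4     # 102,400 bytes of raw pixels copied GPU -> CPU (then flipped, RGBA->RGB, ...)
feature_bytes = 1001 * 4          # ~4 KB for a SqueezeNet-sized float32 feature vector
print(rgba_bytes / feature_bytes) # ~25x less data to transfer and to keep in a replay buffer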

It doesn't have to be in the same plugin. It requires correctly initialized OpenGL (to get a pointer to the pixels in the GPU). We should either be in the same process as the eglMakeCurrent() call, or bind/initialize EGL in the same way.

The cooperation that it requires from eglPlugin is a getCameraImage(width, height, viewMatrix, projectionMatrix) call with a flag that asks it to do only the rendering and omit any pixel transfer calls. Or, alternatively, support for a getCameraImageArray(arraySize, width, height, viewMatrix[], projectionMatrix[]) call that renders multiple views stacked vertically with single-pass/multi-viewport rendering.

  • getCameraImageArray

This allows rendering images from an array of cameras (specified by an array of view/projection matrices), rather than from a single camera - in the case of the OpenGL implementation, in a single pass of GPU render calls. It is useful if you have multiple robots present in a single environment, or a single robot with an array of cameras. Single-pass, multiple-view rendering is a much more efficient way of rendering.

It can then call a TensorRT model on the rendered array of images and apply the DNN processing in batch mode. This also gives a significant speedup (easily 10x for small images) over running the DNN processing on one image at a time, due to various overheads and under-utilization of the system.

  • geometry shaders, why/how

It uses single-pass, multiple-viewport rendering - see:

https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_viewport_array.txt

In the geometry shader stage, render calls are pushed in parallel into multiple views, with the array of camera matrices indexed by the viewport instance ID.

  • refactoring GLInstancingRenderer why/how

There are two changes:

  1. refactoring, only allowing single pass rendering (no shadows, etc);
  2. making that single pass render into multiple views (with ARB_viewport_array / geometry shader);

This is work in progress. I was thinking of keeping the existing code and adding a simplified (no shadows, etc) rendering mode for the camera array call. I'm not sure of the usefulness of multiple passes / faked shadows / etc in the long term (raytracing doesn't require it), and it was increasing the complexity of the shader. Your advice on this will be much appreciated.

(please note, all of this might be wrong in many places - I'm a deep learning adept, not graphics adept! - but in the tests it seemed to work as intended).

Finally, I ran clang-format on the source tree a few days ago; the patch needs to take it into account. What is a good way to handle this? Rebase?

I'll run it with the same settings and see if we get the same result? It'd be good to know the exact version of clang-format.

dchichkov avatar Sep 28 '18 18:09 dchichkov

I was thinking of keeping the existing code, and adding a simplified (no shadows,etc) rendering mode for a camera array call.

We need to keep the current features of the GLInstancingRenderer; various teams/people are using and relying on it. Creating yet another simplified renderer adds a maintenance burden though. Isn't there a way to share the code paths and just add some shaders? The shadow pass is optional; there is a boolean for that. Also, I'm currently adding support for the segmentation mask in the GLInstancingRenderer (people use that in TinyRenderer at the moment, so if/when they want to switch to EGL, they need it too).

Please take my concern into account related to keeping things compiling/linking/running if the new features are not available, in particular CUDA, TensorRT and geometry shaders.

I'm searching for a nice open source real-time raytracer that benefits from RTX/CUDA (with denoising). Are you aware of any?

erwincoumans avatar Sep 28 '18 23:09 erwincoumans

Isn't there a way to share the code paths and just add some shaders? The shadow pass is optional, there is a boolean for that.

Yes, thank you for the feedback. I'll do exactly that. I've only removed the code from GLInstancingRenderer temporarily - it's difficult to work directly on the original GLInstancingRenderer codebase, as it's large.

I'm searching for a nice open source real-time raytracer that benefits from RTX/CUDA (with denoising). Are you aware of any?

Let me get back to you on that. A quick Google search gives https://github.com/NVIDIAGameWorks/Falcor and https://github.com/world8th/vRt (it looks like there's work in progress, see the recent "make Turing capable traffic of stack" change, etc). Anything open-source / cross-vendor will likely be Vulkan based. There's also some low-level raytracing sample code around, for example see page 45: http://on-demand.gputechconf.com/gtc/2018/presentation/s8521-advanced-graphics-extensions-for-vulkan.pdf

AFAIK denoising with a DNN is fairly advanced - it needs training of the denoising DNN; you can't just use a generic one (I might be wrong here).

dchichkov avatar Sep 29 '18 01:09 dchichkov

I added segmentation mask rendering in the OpenGL GLInstancingRenderer. I also added a glViewport call to resize to the destination size, which makes it much faster (glReadPixels was reading at the source width). This will likely conflict with this PR. Hopefully we can sort it out somehow. https://github.com/bulletphysics/bullet3/pull/1905

erwincoumans avatar Sep 30 '18 14:09 erwincoumans

I've integrated back the original GLInstancingRenderer.cpp code; I think there shouldn't be any incompatible changes in GLInstancingRenderer now. I still need to test further, and also need to rebase / integrate your clang-format / latest viewport cleanups, etc.

Seems more or less alive. Performance is also looking promising: with R2D2 and 16-way rendering, 700 FPS - so it is ~10k FPS total. Batch inference also seems to run, but the resulting output has not been validated at all.

(attached image: 16-way camera array render)

dchichkov avatar Oct 04 '18 02:10 dchichkov

Test code:

import pybullet as p, numpy as np

p.connect(p.DIRECT, options='--width=160 --height=160 --cameraArraySize=16')

cameraArraySize,width,height = 16,160, 160
viewMatrices, projectionMatrices = [], []
for yaw in range(0,10 * cameraArraySize,10):
        viewMatrix = p.computeViewMatrixFromYawPitchRoll(cameraTargetPosition = [0,0,0], distance = 1, yaw = yaw, pitch = -10, roll = 0, upAxisIndex = 2)
        projectionMatrix = p.computeProjectionMatrixFOV(fov = 60, aspect = width / height, nearVal = 0.01, farVal = 100)
        viewMatrices.append(viewMatrix)
        projectionMatrices.append(projectionMatrix)

viewMatrices = np.array(viewMatrices, dtype=np.float32)
projectionMatrices = np.array(projectionMatrices, dtype=np.float32)

plane = p.loadURDF("plane.urdf")
p.loadURDF("plane.urdf",[0,0,-1])
p.loadURDF("r2d2.urdf")
p.loadURDF("duck_vhacd.urdf")

cameraArraySize,width,height,rgb,featureLength,features = p.getCameraArrayImage(cameraArraySize=cameraArraySize,width=width,height=height,featureLength=2000,viewMatrices=viewMatrices,projectionMatrices=projectionMatrices)

dchichkov avatar Oct 04 '18 02:10 dchichkov

It seems to be code complete, and both single-pass/multi-viewport rendering and batch inference are working. Still some minor cleanup left. TODOs:

  • [ ] decide if the approach of a single plugin is right / etc;
  • [ ] have a go-ahead on the final integration;
  • [ ] add model, layers names as parameters;
  • [ ] pass in other parameters (workspace size / FP16 / etc);
  • [ ] rebase onto clang-reformatted code, integrate other changes;
  • [ ] test resizing, make sure it plays well with regular rendering calls;
  • [ ] add support for .plan files, to avoid model optimization every run;
  • [ ] add CMake build parameter, to enable TensorRT feature;
  • [ ] add support of other build systems;
  • [ ] add tests;
  • [ ] writeup / documentation / installation;

Tested with CUDA 9.2 / TensorRT 4 and CUDA 10.0 / TensorRT 5. No significant performance difference between TensorRT 4 and 5 on SqueezeNet. Depending on the complexity of the scene and the network, either inference or rendering is the limiting factor.

I'll work on the TODOs around Oct 15th; right now I'm focusing on transitioning my RL environment to use this code. I'd really appreciate any review / feedback before I do a cleanup pass.

dchichkov avatar Oct 06 '18 01:10 dchichkov

Thanks a lot for the work! I'll look at it soon. Also, you are in the Bay Area, right? We could also meet in person to discuss.

erwincoumans avatar Oct 06 '18 03:10 erwincoumans

My pleasure! It's a nice codebase to work with. Brings back memories of Carmack's code. And sure, I'd be happy to meet over a cup of coffee or lunch (pm me over gmail, it's the same as my github handle).

Thanks a lot for the work! I'll look at it soon. Also, you are in the Bay Area right? We could also meet in person to discuss.

dchichkov avatar Oct 06 '18 17:10 dchichkov

Is there a pybullet example/test program with a small description of what it is trying to do?

Does getCameraImageArray work without TensorRT as well? If so, I would prefer to create getCameraImageArray as a separate PR.

Then 'TensorRT' as a follow-up PR, which is trickier due to the CUDA dependency (which may not be available on all platforms etc). I am being very careful not to break anything, and to avoid backward-incompatible changes if possible (since we have many internal tests/projects that use it, and they are not supposed to break).

erwincoumans avatar Oct 06 '18 18:10 erwincoumans

We should either be in the same process as the eglMakeCurrent() call, or bind/initialize EGL in the same way.

All plugins are executed in the same thread, so if there is an eglPlugin and a tensorRT plugin, one just has to make sure the eglPlugin is initialized first. Only one thread makes EGL/OpenGL calls, so we don't need to worry about makecurrent. (We only use the eglPlugin in a headless environment: DIRECT, SHARED_MEMORY_SERVER, GRPC, TCP, UDP etc. modes, not in GUI / VR mode.)
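
In pybullet terms, that ordering could look roughly like the sketch below (the TensorRT plugin name is hypothetical; only eglRendererPlugin exists today):

import pybullet as p

p.connect(p.DIRECT)
egl = p.loadPlugin("eglRendererPlugin")             # initializes EGL/OpenGL first
trt = p.loadPlugin("eglRendererTensorRT_plugin")    # hypothetical TensorRT plugin, reuses the same GL context/thread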

erwincoumans avatar Oct 06 '18 18:10 erwincoumans

Is there a pybullet example/test program with a small description of what it is trying to do? Does getCameraImageArray work without TensorRT as well?

getCameraImageArray works without TensorRT. It returns the rendered images and an empty featureVector. It might still be useful in that mode, i.e. it allows 700*16 FPS (see the image in the comment above) to be rendered and delivered to python/numpy.

To build it without TensorRT, just comment out "#define BT_USE_TENSOR_RT" in eglRendererVisualShapeConverter.cpp.

The code below should produce an array of images similar to the one with the R2D2s above:

import pybullet as p, numpy as np, PIL.Image

cameraArraySize,width,height = 16,160,160
viewMatrices = np.array([p.computeViewMatrixFromYawPitchRoll(cameraTargetPosition = [0,0,0], distance = 1, yaw = yaw, pitch = -10, roll = 0, upAxisIndex = 2) for yaw in range(0,10 * cameraArraySize,10)], dtype=np.float32)
projectionMatrices = np.array([p.computeProjectionMatrixFOV(fov = 60, aspect = width / height, nearVal = 0.01, farVal = 100)] * cameraArraySize, dtype=np.float32)

p.loadURDF("plane.urdf",[0,0,-1])
p.loadURDF("r2d2.urdf")

cameraArraySize,width,height,rgb,featureLength,features = p.getCameraArrayImage(cameraArraySize=cameraArraySize,width=width,height=height,featureLength=1001,viewMatrices=viewMatrices,projectionMatrices=projectionMatrices)

PIL.Image.frombuffer('RGB', (width,height*cameraArraySize), rgb, 'raw').save("out.png")

dchichkov avatar Oct 06 '18 23:10 dchichkov

could shape recognition be used to process a 'game' avatar's vision state and navigate the map visually?

something like D* but using shape recognition and repeated trials?

'neural game ai' that acts like people do?

BluePrintRandom avatar Oct 07 '18 01:10 BluePrintRandom

A bit more complete example of the "camera array" rendering use case (rendering only, no TensorRT inference in this example).

#!/usr/bin/python3
#
# git clone https://github.com/dchichkov/bullet3
# cd bullet3
# ./build_cmake_pybullet_double.sh
# cd ..
# export PYTHONPATH=bullet3/build_cmake/examples/pybullet

import pybullet as p, numpy as np

p.connect(p.DIRECT, options='--width=160 --height=160 --cameraArraySize=16')

cameraArraySize,width,height = 16,160, 160
viewMatrices, projectionMatrices = [], []
for yaw in range(0,10 * cameraArraySize,10):
        viewMatrix = p.computeViewMatrixFromYawPitchRoll(cameraTargetPosition = [0,0,0], distance = 1, yaw = yaw, pitch = -10, roll = 0, upAxisIndex = 2)
        projectionMatrix = p.computeProjectionMatrixFOV(fov = 60, aspect = width / height, nearVal = 0.01, farVal = 100)
        viewMatrices.append(viewMatrix)
        projectionMatrices.append(projectionMatrix)

viewMatrices = np.array(viewMatrices, dtype=np.float32)
projectionMatrices = np.array(projectionMatrices, dtype=np.float32)

plane = p.loadURDF("plane.urdf")
p.loadURDF("plane.urdf",[0,0,-1])
p.loadURDF("r2d2.urdf")
p.loadURDF("duck_vhacd.urdf")

cameraArraySize,width,height,rgb,featureLength,features = p.getCameraArrayImage(cameraArraySize=cameraArraySize,width=width,height=height,featureLength=2000,viewMatrices=viewMatrices,projectionMatrices=projectionMatrices)

from PIL import Image
print(rgb.shape)
Image.frombuffer('RGB', (width,height*cameraArraySize), rgb, 'raw').save("out.png")
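
To get the individual camera images back out of the vertically stacked buffer, a single reshape is enough (a sketch, assuming the returned rgb buffer is tightly packed RGB, as the frombuffer call above implies):

images = np.asarray(rgb, dtype=np.uint8).reshape(cameraArraySize, height, width, 3)
for i, image in enumerate(images):
    Image.fromarray(image).save("out.%02d.png" % i)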

dchichkov avatar Jan 08 '19 02:01 dchichkov

Doesn't seem like there is much interest in arrays of cameras and fast rendering. Not a single attempt to use this in a year. Surprising.

dchichkov avatar Apr 29 '20 18:04 dchichkov

for stuff like stadia this could be amazing :D

BluePrintRandom avatar Apr 29 '20 20:04 BluePrintRandom