Coherence
Add support for a texture pipeline
I would like to be able to edit a texture directly within Blender and pipe those pixels into Unity - the same way viewport pixels are piped from Unity.
A few concerns though:
- Textures can be huge - what's the size limit on something like this? I don't want to pump 4K textures this way (a 4K RGBA float image is 4096 × 4096 × 16 bytes, i.e. 256 MB per transfer).
- Would this slow down other operations going in the opposite direction? (A user wouldn't be manipulating a mesh while also manipulating a texture, so the only concern would be viewport messages)
Some notes on this:
- 2K textures aren't unreasonable to expect, so we should plan for that
- Can't find any sort of event signaling that a `bpy.types.Image` has been modified within Blender.
For the 2nd point, I've tried:
- Listening for depsgraph updates while in texture paint mode (handler setup sketched after this list)
  - I get notified in `bpy.app.handlers.depsgraph_update_pre`, but only after we've drawn a line
  - No call to `bpy.app.handlers.depsgraph_update_post` - and that's the one with the actual depsgraph as an argument
- Adding a modal operator to monitor for click events, mouse movement, etc.
  - Click and drag (like when someone is dragging a brush in texture paint mode) - the modal gets the initial click and then no event updates until the mouse is released (which is a `MOUSEMOVE` event). So realtime doesn't work here.
  - A `bpy.app.timers` instance can still run while the user is dragging the mouse, but I can't find any global source of mouse positions to compare against and detect whether they're actively painting. Just assuming that a `LEFTMOUSE` event while in paint mode means painting seems too inaccurate to use.
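For reference, a minimal sketch of the handler setup I was testing with (the argument signatures follow the behavior described above):

```python
import bpy

def on_update_pre(scene, *args):
    # Fires in texture paint mode, but only once a stroke is
    # finished - never per brush movement.
    print('depsgraph_update_pre')

def on_update_post(scene, depsgraph):
    # The handler that actually receives the evaluated depsgraph -
    # but it never fired at all during my paint tests.
    for update in depsgraph.updates:
        print('updated:', update.id.name)

bpy.app.handlers.depsgraph_update_pre.append(on_update_pre)
bpy.app.handlers.depsgraph_update_post.append(on_update_post)
```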
From a Blender Discord chat, people have suggested wrapping the default brushes with custom operators, but that doesn't seem efficient here. And paint operators may be C operators anyway, which I don't think can be wrapped that way?
Eevee and Cycles are both somehow getting live updates on drawing in the viewport. So there has to be something to hook that works, unless they're just binding that same texture in GL memory.
Absolute worst case scenario:
- Determine the active image we're painting on
- While active, every tick we upload the pixels (or, the raw pointer) to the DLL
- DLL scans from last upload and makes a determination about whether there were any changes
- Different algorithms could be used here to optimize, like skip every N pixels while scanning
- If the DLL detects any changes, it sends them to Unity (rough sketch of this loop below)
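A minimal sketch of what the Blender side of that loop could look like - the DLL name and its `coherence_update_texture` export are hypothetical placeholders for whatever the native bridge ends up exposing:

```python
import bpy
import ctypes
import numpy as np

# Hypothetical native bridge - the name and export are placeholders
dll = ctypes.CDLL('./Coherence.dll')

image = bpy.data.images['Untitled']  # the image being painted
w, h = image.size
buffer = np.empty(w * h * 4, dtype=np.float32)  # reused every tick

def poll_texture():
    # Upload the full pixel buffer each tick; the DLL diffs it against
    # the previous upload and decides whether anything goes to Unity.
    image.pixels.foreach_get(buffer)
    dll.coherence_update_texture(
        ctypes.c_void_p(buffer.ctypes.data),
        ctypes.c_int(w),
        ctypes.c_int(h),
    )
    return 0.05  # run again in ~50ms while the image is active

bpy.app.timers.register(poll_texture)
```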
Working with something like 2K textures, it might make sense to scan cells of the texture for changes and only send a cell if anything in it changed. Unity will then have multiple `Texture2D` instances - one per cell - update the appropriate cell texture from the transfer, and then do a `Graphics.CopyTexture` into the main composite texture.
The cell breakdown idea makes sense, in which case the DLL will always need to run a comparison anyway to identify changed cells. So it's just a matter of how often we're running that process based on Blender events.
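As a rough illustration of the cell comparison (NumPy here, standing in for what the DLL would do natively; the cell size is an arbitrary pick):

```python
import numpy as np

CELL = 256  # arbitrary cell size: a 2K texture becomes an 8x8 grid

def changed_cells(prev, curr, w, h):
    """Yield (cell_x, cell_y) for every cell whose pixels changed."""
    prev = prev.reshape(h, w, 4)
    curr = curr.reshape(h, w, 4)
    for cy in range(0, h, CELL):
        for cx in range(0, w, CELL):
            if not np.array_equal(prev[cy:cy + CELL, cx:cx + CELL],
                                  curr[cy:cy + CELL, cx:cx + CELL]):
                yield (cx // CELL, cy // CELL)
```

Only the cells this yields would need to cross the wire to Unity.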
Actively a work in progress, some notes so far:
- Getting pixel data from Blender sucks - there's an implicit memcpy that has to happen at a minimum and can't be avoided, and after that it still needs to be copied Python -> C#. There's no direct `as_pointer()` access for pixels like there is for other data structures - and getting the `Image` struct and pixels from there would be a huge mess.
- Hilariously, there's a loop in there to convert bytes to float when it's not a floating point image. Which isn't the default setting when creating a new image - so people will probably inadvertently kill performance further by using defaults.
- 2.83+ added `foreach_get` support, which should be faster than `image.pixels`. Could probably load it into NumPy (+1 memcpy, maybe?) and then give the NumPy pointer to C#. In any case, this won't be supported in 2.82, so I might just bump the minimum requirement up to 2.91 and call it a day...
Disregard - it's fast:
```python
import bpy
import numpy as np
from timeit import repeat

image = bpy.data.images['Untitled']
w, h = image.size

# Preallocate a reusable buffer for the full RGBA float image
buffer = np.empty(w * h * 4, dtype=np.float32)

def read_pixels():
    image.pixels.foreach_get(buffer)
    ptr = buffer.ctypes.data  # raw pointer that could be handed to C#
    # Do stuff

print(
    np.average(repeat(read_pixels, number=1, repeat=1000))
)
```
Averaging 0.001s from `foreach_get` -> NumPy ctypes pointer for a 1024x1024 RGBA float image.
A 4K RGBA float texture is about 0.02s - not amazing, but at that point the bottleneck will be shuffling it off to Unity anyway.
Working, but not as optimized as I'd like. I'll need to increase buffer node limits to handle 2k/4k textures, and perf will probably suffer.
I'm having headaches with color space transformations though. It'll transfer sRGB and then the render texture will do a linear conversion that I can't seem to turn off (possibly related Unity ticket).
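If I can't disable that conversion on the Unity side, one workaround might be to pre-apply the inverse transform before transfer - the standard piecewise sRGB formulas, sketched below (not wired into anything yet):

```python
import numpy as np

# Standard sRGB transfer functions, applied channel-wise to [0, 1] data
def srgb_to_linear(c):
    return np.where(c <= 0.04045, c / 12.92,
                    ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, c * 12.92,
                    1.055 * c ** (1.0 / 2.4) - 0.055)
```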
My other issue is just general UX - in the Image Editor you can pick/create an image, set a sync slot, and get to work. But you also need to do some extra work to be able to paint in the `SpaceView3D`. In its current form, you'll also need to connect a Render Texture asset with the intended material channel for the mesh you're working on.
I'm going to have to write up workflow guidance, but I'd also like to figure out if there's a better way. I know I had previously planned to add sync as part of the material - so you pick a material channel and it automatically binds. But that would restrict usage to just materials and rule out general purpose Render Texture magic (e.g. writing to heightmaps or other cool things).