
'module' object has no attribute 'GL_RGBA32F_ARB'

Open mdboom opened this issue 11 years ago • 11 comments

I'm getting this traceback running demo-arc.py.

I have PyOpenGL 3.1.0a1 (what pip installed for me today). This is Python 2.7 on Fedora 18.

Am I doing something wrong here? Is there anything else that would be useful for debugging purposes?

Traceback (most recent call last):
  File "_ctypes/callbacks.c", line 314, in 'calling callback function'
  File "demo-arc.py", line 38, in on_display
    paths.draw()
  File "/home/mdboom/python/lib/python2.7/site-packages/glagg/collection.py", line 349, in draw
    self.upload()
  File "/home/mdboom/python/lib/python2.7/site-packages/glagg/collection.py", line 329, in upload
    gl.glTexImage2D( gl.GL_TEXTURE_2D, 0, gl.GL_RGBA32F_ARB,
AttributeError: 'module' object has no attribute 'GL_RGBA32F_ARB'

mdboom avatar Mar 07 '13 15:03 mdboom

Could you try GL_RGBA32F instead, or use tab completion on OpenGL.GL.GL_RGBA* (from IPython) to see what is available?

Hopefully, there should be something like GL_RGBA32F.

Do you know what your graphics card is?
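A quick way to probe this without IPython is a small helper like the one below (a hypothetical snippet; it just inspects whatever names the installed PyOpenGL exposes, and degrades gracefully if PyOpenGL is missing):

```python
def list_rgba_constants():
    """Return the GL_RGBA* names exposed by the installed PyOpenGL,
    or an empty list if PyOpenGL is not available."""
    try:
        import OpenGL.GL as gl
    except ImportError:
        return []
    return sorted(name for name in dir(gl) if name.startswith("GL_RGBA"))

print(list_rgba_constants())
```

If GL_RGBA32F shows up in the output but GL_RGBA32F_ARB does not, a simple rename in collection.py should get past the AttributeError.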

rougier avatar Mar 07 '13 16:03 rougier

(Let me know if you'd rather take this to e-mail -- not sure how "in the weeds" this will get...)

I've got an Intel i915 integrated mobile chip.

I changed all occurrences of GL_RGBA32F_ARB to GL_RGBA32F in collection.py, and now I've hit something new:

Traceback (most recent call last):
  File "_ctypes/callbacks.c", line 314, in 'calling callback function'
  File "demo-arc.py", line 38, in on_display
    paths.draw()
  File "/home/mdboom/python/lib/python2.7/site-packages/glagg/collection.py", line 349, in draw
    self.upload()
  File "/home/mdboom/python/lib/python2.7/site-packages/glagg/collection.py", line 335, in upload
    shape[1]//4, shape[0], 0, gl.GL_RGBA, gl.GL_FLOAT, data )
  File "latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.__call__ (src/latebind.c:667)
  File "wrapper.pyx", line 315, in OpenGL_accelerate.wrapper.Wrapper.__call__ (src/wrapper.c:5478)
OpenGL.error.GLError: GLError(
        err = 1281,
        description = 'invalid value',
        baseOperation = glTexImage2D,
        pyArgs = (
                GL_TEXTURE_2D,
                0,
                GL_RGBA32F,
                6,
                12,
                0,
                GL_RGBA,
                GL_FLOAT,
                array([  0.00000000e+00,   0.00000000...,
        ),
        cArgs = (
                GL_TEXTURE_2D,
                0,
                GL_RGBA32F,
                6,
                12,
                0,
                GL_RGBA,
                GL_FLOAT,
                array([  0.00000000e+00,   0.00000000...,
        ),
        cArguments = (
                GL_TEXTURE_2D,
                0,
                GL_RGBA32F,
                6,
                12,
                0,
                GL_RGBA,
                GL_FLOAT,
                array([  0.00000000e+00,   0.00000000...,
        )
)

I've put up a gist with my glxinfo and lshw over here: https://gist.github.com/mdboom/5109600

mdboom avatar Mar 07 '13 16:03 mdboom

I've made some progress on this.

I'm (re-)learning OpenGL as I go here, so pardon my silly assumptions and maybe you can offer some advice.

It seems that my video card does not support floating-point textures. In collection.py, it appears that what's being generated aren't really textures as such; rather, textures are being used as a way to send floating-point data describing the collection to the video card.

I can replace GL_RGBA32F with GL_RGBA8 or GL_RGBA in the calls to glTexImage2D, and it will quite happily convert floating-point input in the range (0, 1) to values in the range (0, 255) on the video card side. The problem is that much of the input data is not in the range (0, 1). As a hack, I can divide by the expected maximum and multiply it back in the fragment shader, which approximately replicates the results on this card (albeit with a lot of rounding error). So it's at least theoretically possible to get some images out of this video card.
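The hack can be simulated offline; here's a minimal sketch (the example values and the assumed maximum are mine, not from the demo):

```python
import numpy as np

# Scale into (0, 1), let the GPU quantize to 8 bits (simulated with uint8),
# then multiply the maximum back in on the fragment-shader side.
data = np.array([0.0, 1.5, 37.2, 99.9])        # illustrative parameter values
vmax = 100.0                                    # assumed known maximum
quantized = np.round(np.clip(data / vmax, 0, 1) * 255).astype(np.uint8)
recovered = (quantized / 255.0) * vmax          # shader-side reconstruction
print(np.abs(recovered - data).max())           # worst case is about vmax / 510
```

With only 8 bits per channel the error floor is roughly vmax/510 per value, which is why the results only approximately match.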

What really needs to happen, I believe, is to have a way of encoding the floats as ints in the texture and getting them back out on the other side. It should be possible, at least if one can get away with fixed-point values (Agg uses fixed-point arithmetic at the rendering level, so it should be sufficient, assuming one is really careful):

http://www.gamedev.net/topic/442138-packing-a-float-into-a-a8r8g8b8-texture-shader/

Another possibility is to pad out the uniform values ahead of time, since uploading floating-point vertex buffers (as in vertex_buffer.py) works fine -- it's just the floating point textures that don't seem to work.

mdboom avatar Mar 18 '13 20:03 mdboom

I wrongly assumed that float textures were more or less widely deployed, but since you're not the first to report this problem, I'll need to dig into this 4-byte encoding as a replacement.

These float textures are mainly used to pass individual parameters to each object. The regular "OpenGL" way would be to set some uniforms, but then I could not have different parameters for each object of a collection (the goal is to be able to issue a single OpenGL call to render all of them at once).
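For what it's worth, the 6 x 12 texture in the traceback above is consistent with a layout along these lines (the parameter count and meaning here are my guesses for illustration, not glagg's actual code):

```python
import numpy as np

# One row of packed floats per object; every 4 floats become one RGBA texel.
n_objects = 12                   # matches the height of 12 in the traceback
floats_per_object = 24           # e.g. 6 RGBA texels of per-object parameters
data = np.zeros((n_objects, floats_per_object), dtype=np.float32)

# The glTexImage2D call in collection.py sizes the texture as shape[1]//4 x shape[0]:
width, height = data.shape[1] // 4, data.shape[0]
print(width, height)             # 6 12, as in the error report
```

So each object gets one texture row, and the shader indexes into that row to fetch its own parameters.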

The workaround you proposed should work nicely; the script from gamedev seems to work and should be translatable to numpy arrays as well (for the encode step).

This would require rewriting the part of the shader that extracts those values (which should be quite easy). Of course this will make things a bit slower, but it's better than having just nothing.

import numpy as np

def encode(V, Vmin=-100, Vmax=100):
    """Pack each float in V into four [0, 1) components (one RGBA texel)."""
    n = len(V)
    V = V.reshape(n, 1)
    # Clamp to [Vmin, Vmax] and normalize to [0, 1]
    V = np.maximum(np.minimum(Vmax, V), Vmin)
    V = (V - Vmin) / (Vmax - Vmin)
    shift = np.array([256*256*256, 256*256, 256, 1])
    mask = np.array([0, 1.0, 1.0, 1.0]) / 256.0
    comp = np.modf(V * shift)[0]
    # Remove the bits already carried by the previous component
    comp -= mask * np.array([comp[:, 0], comp[:, 0], comp[:, 1], comp[:, 2]]).T
    return comp.reshape(n, 4)

def decode(V, Vmin=-100, Vmax=100):
    """Recover the floats from the packed RGBA components."""
    shift = np.array([1.0/(256.0*256.0*256.0), 1.0/(256.0*256.0), 1.0/256.0, 1.0])
    shift = shift.reshape(1, 4)
    value = ((V * shift).sum(axis=1)).ravel()
    return (Vmin + value * (Vmax - Vmin)).reshape(len(V))

Z = np.random.uniform(-1, 1, 10)
print(Z - decode(encode(Z)))

rougier avatar Mar 19 '13 12:03 rougier

Thanks for looking into this.

It should be theoretically possible to do what you're doing now when floating-point textures are available and fall back to the encoding approach you outlined above when they're not...
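A runtime check for that fallback might look like this (a sketch only; in a real program the extension string would come from glGetString(GL_EXTENSIONS), passed in here so the logic is testable without a GL context):

```python
def supports_float_textures(extension_string):
    """True if the driver advertises the ARB float-texture extension."""
    return "GL_ARB_texture_float" in extension_string.split()

# With a live context one would pass glGetString(GL_EXTENSIONS) here.
print(supports_float_textures("GL_ARB_texture_float GL_ARB_vertex_buffer_object"))
```

The collection upload could then pick GL_RGBA32F directly when this returns True, and the byte-packing encode/decode path otherwise.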

I think these Intel graphics chips are still fairly popular (my machine is less than a year old) because they get better battery life than higher-end models from Nvidia.

mdboom avatar Mar 19 '13 13:03 mdboom

From this article: http://www.phoronix.com/scan.php?page=news_item&px=MTI3OTY, it seems that the Intel driver actually supports floating-point textures, but this was disabled by default until quite recently. I wonder now if it's worth the trouble. Do you know which version of Mesa you have?

Intel Driver Now Enables Floating-Point Textures Posted by Michael Larabel on January 21, 2013

Intel's Mesa DRI driver now is unconditionally enabling floating-point textures. Up to this point, the floating-point textures feature of GL3 hasn't been enabled by default due to patent worries.

...

rougier avatar Mar 22 '13 13:03 rougier

Thanks for the pointer -- indeed, that's good news. It looks like I just got Mesa 9.1 in my yum updates today (I'm on Fedora 18), but it seems Fedora has actually patched Mesa to still exclude floating-point textures. I'll try to get to the bottom of this and see if there's a workaround or something.

mdboom avatar Mar 22 '13 14:03 mdboom

Fedora patched Mesa to remove float textures? That's a bit weird, but I guess it's related to patent problems.

rougier avatar Mar 22 '13 14:03 rougier

Here's the patch:

http://pkgs.fedoraproject.org/cgit/mesa.git/tree/intel-revert-gl3.patch?h=f18&id=1c91a873d813987b3c2e4c0a31ab8e88b0ddc448

I don't know if it's related to patent problems or related to not wanting to change the behavior of the driver mid-release. I'm going to try asking someone who would know.

mdboom avatar Mar 22 '13 14:03 mdboom

I've filed: https://bugzilla.redhat.com/show_bug.cgi?id=924812

mdboom avatar Mar 22 '13 14:03 mdboom

Recompiling the Mesa RPMs from source with the intel-revert-gl3 patch disabled gives me a working gl-agg! The question still stands as to why that patch is there.

mdboom avatar Mar 22 '13 17:03 mdboom