
Build based on Blender 2.83.1

salimhb opened this issue 4 years ago · 26 comments

Add script to create an AWS Lambda layer

salimhb avatar Jul 02 '20 08:07 salimhb

Thanks for working through these issues to move to Blender 2.8! I got everything working from your build and tested several scripts. I had success on almost all of them, but I was not able to get rendering working. The error is "Unable to open display". Some quick googling shows that this is a common error with headless rendering, most specifically with EEVEE, but I tried CYCLES as well. Were you able to get rendering working? Thanks...

markhohmann avatar Jul 03 '20 13:07 markhohmann

Update: Successful rendering on the CYCLES engine but not yet on EEVEE. Appears to be an issue with EEVEE requiring OpenGL 3.3 support. Interested to see if you are able to get EEVEE working. Thanks again for the 2.83.1 update.
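
For reference, a minimal headless CYCLES render looks roughly like this (the output path is just an example):

import bpy

# EEVEE needs an OpenGL 3.3 context, which isn't available headless on Lambda,
# so fall back to the CPU-only CYCLES engine
bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.render.filepath = '/tmp/render.png'
bpy.ops.render.render(write_still=True)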

markhohmann avatar Jul 03 '20 19:07 markhohmann

Actually, I'm having trouble running this at all on AWS Lambda. It complains about requiring numpy. When I add numpy, together with bpy_lambda, they total over 250MB and get rejected by AWS. I'm looking into Zappa at the moment. Is it working for you on AWS Lambda?

salimhb avatar Jul 03 '20 20:07 salimhb

I did get it working on AWS Lambda. My bpy_lambda_layer.zip is 75MB and that's the only layer I require for a few of the scripts I run, some other scripts require other layers and push it over 250MB but I can isolate these. Perhaps your specific scripts are requiring numpy? I'm happy to send over my bpy_lambda_layer.zip if you'd like to see how it zipped up or use it in your lambda. Again, thanks for all of the debugging to get it working in 2.8x.

markhohmann avatar Jul 03 '20 21:07 markhohmann

Yes, same size of bpy_lambda_layer.zip for me. It's numpy that adds another big increase and for Lambda what matters is the unpacked size. Regarding OpenGL, I don't know what the issue could be. Is it handled by GLEW? Does it work if you use the python module compiled on a Win or Mac machine?

The part of my script requiring numpy is bpy.ops.export_scene.gltf; it has this import in gltf2_blender_image.py#L18

@bonflintstone I found that it works fine if we drop gltf and use bpy.ops.export_scene.obj instead.
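
Roughly, the swap looks like this (the path is just an example):

import bpy

# bpy.ops.export_scene.gltf pulls in numpy via gltf2_blender_image.py;
# the OBJ exporter has no such dependency
bpy.ops.export_scene.obj(filepath='/tmp/export.obj', use_selection=True)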

salimhb avatar Jul 04 '20 10:07 salimhb

@bcongdon @markhohmann I needed Collada support and made a new build to support that. You can check the changes here: https://github.com/railslove/bpy_lambda/compare/blender-2.83.1...railslove:blender-2.83.1-collada Do you think that it's worth adding it to this PR? It does increase the size of the Lambda layer to 245 MB, which is very close to the AWS limit of 250 MB. But either way, I gave up on using it as a layer as I also needed numpy. I put the zip file on S3 and downloaded/extracted/imported it during runtime. It's a 4-second loss, but it's not repeated as the Lambda stays warm. Do you think that it's useful to add the code below to the readme as an example of doing this runtime import?

import sys
import zipfile

import boto3

s3_client = boto3.client('s3')
bpy = None  # populated by import_blender() on first call


def import_blender():
    global bpy
    if bpy:
        print('bpy already loaded')
        return

    # Download the Blender python module package from S3
    s3_client.download_file('bucket-name', 'bpy_lambda_layer.zip', '/tmp/bpy_lambda_layer.zip')

    # Extract it to /tmp, the only writable path on Lambda
    with zipfile.ZipFile('/tmp/bpy_lambda_layer.zip', 'r') as zip_ref:
        zip_ref.extractall('/tmp')

    # Add the extracted path to the search path and import the Blender python module
    sys.path.insert(0, '/tmp/python/')
    from bpy_lambda import bpy
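
For completeness, a hypothetical handler using it could look like this (the body after import_blender() is just an illustration):

def handler(event, context):
    import_blender()
    # Reset to an empty scene so state doesn't leak between warm invocations
    bpy.ops.wm.read_factory_settings(use_empty=True)
    # ... scene setup / export work goes here ...
    return {'status': 'ok'}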

salimhb avatar Jul 20 '20 09:07 salimhb

I'd be in favor of putting an explanation like that in the README. Probably not in favor of checking it in, as, like you said, that puts you quite close to the layer size limit.

bcongdon avatar Jul 20 '20 20:07 bcongdon

Agreed, I think it's worth including, thanks for sharing.

markhohmann avatar Jul 20 '20 21:07 markhohmann

Thanks @salimhb! :+1: Tried your changes and they worked for me. Had to comment out the REMESH part of test.py but otherwise it's working for our needs.

dmarcelino avatar Jul 28 '20 11:07 dmarcelino

Hi! I'm struggling with the file size of bpy_lambda_layer.zip. After a successful build, the bpy_lambda directory has a size of 217MB, which should be OK according to the AWS guidelines. After running the lambda_layer.sh script, the zip file is about 73MB, which causes a RequestTooLarge error when uploading the layer because of the 50MB file size limit.
Does anyone know how to get under the 50MB file size limit for the zip file?
@markhohmann if I understand your post correctly, you got it up and running? Can you please describe how? Thanks!

manuelbostanci avatar Aug 19 '20 21:08 manuelbostanci

@manuelbostanci The 50MB limit is for direct upload of lambda deployment packages. For layers, AWS suggests a 10MB limit for direct upload (maybe the hard limit is actually 50MB, as it appears you've discovered). To upload larger files, upload them to S3 and then point to that S3 location. If you are using the AWS console, you should see "Upload a file from Amazon S3" as a choice. If you are using the CLI, you specify it with the --content flag.
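
For reference, the boto3 equivalent of the CLI's --content flag looks roughly like this (layer, bucket, and key names are placeholders):

import boto3

lambda_client = boto3.client('lambda')

# Point the layer at an S3 object instead of uploading the zip directly
lambda_client.publish_layer_version(
    LayerName='bpy-lambda',
    Content={
        'S3Bucket': 'my-bucket',
        'S3Key': 'bpy_lambda_layer.zip',
    },
)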

markhohmann avatar Aug 19 '20 23:08 markhohmann

> @manuelbostanci The 50MB limit is for direct upload of lambda deployment packages. […]

@markhohmann Thanks! The upload via --content was the missing part! ;)

manuelbostanci avatar Aug 20 '20 09:08 manuelbostanci

Hi @bcongdon @markhohmann, is this PR planned to be merged? If there's still pending stuff with it, is there any way I can help? I'm looking to test this out to benchmark timing and cost for a pretty big render, and I'm pretty sure I'd need 2.8 support.

Thanks.

asyrique avatar Sep 03 '20 01:09 asyrique

Hello @asyrique, you can check out this branch and build it yourself to run benchmarks. We are actually discussing improving test.py, so if you have suggestions from your benchmarks, that would be great input.

salimhb avatar Sep 03 '20 08:09 salimhb

Thanks for the work on this!

I'm getting the following error though when running ./build.sh via https://github.com/railslove/bpy_lambda/tree/blender-2.83.1

[ 76%] Building CXX object src/libOpenImageIO/CMakeFiles/OpenImageIO.dir/imagebufalgo_compare.cpp.o
[ 77%] Building CXX object src/libOpenImageIO/CMakeFiles/OpenImageIO.dir/imagebufalgo_copy.cpp.o
[ 78%] Building CXX object src/libOpenImageIO/CMakeFiles/OpenImageIO.dir/imagebufalgo_deep.cpp.o
[ 79%] Building CXX object src/libOpenImageIO/CMakeFiles/OpenImageIO.dir/imagebufalgo_draw.cpp.o
c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://bugzilla.redhat.com/bugzilla> for instructions.
make[2]: *** [src/libOpenImageIO/CMakeFiles/OpenImageIO.dir/imagebuf.cpp.o] Error 4
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [src/libOpenImageIO/CMakeFiles/OpenImageIO.dir/all] Error 2
make: *** [all] Error 2
ERROR! OpenImageIO-1.8.13 failed to compile, exiting
The command '/bin/sh -c cd ~/blender-git &&     ./blender/build_files/build_environment/install_deps.sh     --no-sudo     --no-confirm     --skip-numpy     --skip-openvdb     --skip-ffmpeg     --skip-usd' returned a non-zero code: 1

tiivik avatar Sep 16 '20 11:09 tiivik

@tiivik try increasing the memory available for Docker.

salimhb avatar Sep 16 '20 14:09 salimhb

> @tiivik try increasing the memory available for Docker.

Indeed that helped! 🤦 Thanks!

tiivik avatar Sep 16 '20 14:09 tiivik

> @bcongdon @markhohmann I needed Collada support and made a new build to support that. […]

Hey, sorry to bug you with this a little more, but how were you able to include numpy packaged with Blender? I understand it will make the unzipped package exceed the AWS limit, but I'm okay with that.

Removing --skip-numpy ends me up with:

  5650K .......... .......... .......... .......... .......... 98% 4.66M 0s
  5700K .......... .......... .......... .......... .......... 99% 8.52M 0s
  5750K .......... ..                                         100% 65.9M=2.2s

2020-09-17 15:56:45 (2.59 MB/s) - '/root/src/blender-deps/numpy-1.17.0.tar.gz' saved [5901035/5901035]

Unpacking Numpy-1.17.0
/opt/lib/python-3.7.4/bin/python3: error while loading shared libraries: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
ERROR! Numpy-1.17.0 failed to compile, exiting

tiivik avatar Sep 17 '20 16:09 tiivik

Okay, I was downloading and unpacking the numpy layer but not importing it. GLTF exporting, which requires numpy, does work. Thanks for the instructions so far!

tiivik avatar Sep 17 '20 16:09 tiivik

Use the sample code mentioned in my previous comment to download and import bpy at runtime. Then just add the scipy/numpy layer offered by AWS. It's the only reliable way to get numpy support on Lambda.

salimhb avatar Sep 19 '20 14:09 salimhb

> Use the sample code mentioned in my previous comment to download and import bpy at runtime. […]

Thanks. I actually got better performance using bpy as a lambda layer and downloading numpy at runtime. With provisioned concurrency enabled, it's 2 seconds from cold start to the end of the test script execution.
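
The split is just the inverse of the earlier runtime-import snippet, roughly like this (bucket, key, and paths are placeholders):

import sys
import zipfile

import boto3

s3_client = boto3.client('s3')

def import_numpy():
    # bpy ships in the attached layer; only numpy is fetched at runtime
    s3_client.download_file('bucket-name', 'numpy_layer.zip', '/tmp/numpy_layer.zip')
    with zipfile.ZipFile('/tmp/numpy_layer.zip', 'r') as zip_ref:
        zip_ref.extractall('/tmp')
    # after this, 'import numpy' resolves against /tmp/python
    sys.path.insert(0, '/tmp/python/')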

tiivik avatar Sep 21 '20 06:09 tiivik

> Thanks. I actually got better performance using bpy as a lambda layer and downloading numpy at runtime. […]

That's great! Where do you download numpy from or did you compile it yourself? I couldn't find any numpy package compatible with Lambda. That's why I resorted to using the one offered by AWS as a layer.

salimhb avatar Sep 21 '20 10:09 salimhb

> > Thanks. I actually got better performance using bpy as a lambda layer and downloading numpy at runtime. […]
>
> That's great! Where do you download numpy from or did you compile it yourself? […]

I actually created a tool for this a while back, see here: https://github.com/tiivik/LambdaZipper/ I used it to create a lambda layer for the 2.79 version of Blender before, as well as other layers.

tiivik avatar Sep 21 '20 10:09 tiivik

Some struggles continue.

Were you successfully able to export to .gltf?

bpy.ops.export_scene.gltf(use_selection=True, filepath="/tmp/export.glb") works OK

However, the following results in an export error and a corrupt glTF file: bpy.ops.export_scene.gltf(export_format="GLTF_EMBEDDED", use_selection=True, filepath="/tmp/export.gltf")

  File "/opt/python/bpy_lambda/2.83/scripts/addons/io_scene_gltf2/__init__.py", line 76, in on_export_format_changed

    operator = sfile.active_operator
AttributeError: 'NoneType' object has no attribute 'active_operator'
File "/opt/python/bpy_lambda/2.83/scripts/addons/io_scene_gltf2/__init__.py", line 73, in on_export_format_changed

I've tried it with blender 2.83.1 and 2.83.6 lambda layers as well as numpy layers 1.17.0 and 1.17.4.

Is there a lib I'm missing? This warning is also always thrown:

'/opt/python/bpy_lambda/2.83/python/lib/python3.7/site-packages/libextern_draco.so' does not exist, draco mesh compression not available

tiivik avatar Sep 22 '20 16:09 tiivik

> '/opt/python/bpy_lambda/2.83/python/lib/python3.7/site-packages/libextern_draco.so' does not exist, draco mesh compression not available

It's a known issue for Blender compiled as bpy. You can disable it by passing export_draco_mesh_compression_enable=False to bpy.ops.export_scene.gltf.
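
i.e. something like this (the path is just an example):

bpy.ops.export_scene.gltf(
    filepath='/tmp/export.glb',
    use_selection=True,
    # skip the missing libextern_draco.so code path
    export_draco_mesh_compression_enable=False,
)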

salimhb avatar Sep 23 '20 14:09 salimhb

Thx @salimhb for this awesome PR! Why is this not yet merged 🤷? I believe there's a bug there though; it was giving me: ... /usr/lib64/libGLEW.so.2.2: no such file or directory

Changing this line helped:

-    cp -L /usr/lib64/libGLEW.so.2.2 . && \
+    cp -L /usr/local/lib64/libGLEW.so.2.2 . && \

adamczykjac avatar Sep 11 '21 20:09 adamczykjac