
Memory leak when iterating scenes; bproc.clean_up() becomes progressively slower

Open rb4005 opened this issue 6 months ago

Describe the issue

Description: I’m seeing a steadily increasing memory footprint when running BlenderProc in a loop to generate many scenes. Even though I call bproc.clean_up(True) (and force a Python gc.collect()), RAM usage, GPU memory, and the time spent inside bproc.clean_up() all grow with each iteration until the script ultimately crashes. I suspect that some internal data (e.g. Cycles render buffers) is never freed.

Expected Behavior:

  • Memory usage should remain roughly constant after each iteration (aside from small fluctuations).
  • bproc.clean_up(True) should consistently take a small, bounded amount of time (< 0.1 s).

Actual Behavior:

  • The RSS steadily increases by ~5–20 MB per iteration.
  • bproc.clean_up(True) time grows linearly until it dominates the iteration time.
  • The script eventually aborts due to an out-of-memory error or an unacceptable slowdown.
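
For reference, the clean_up() slowdown can be isolated from the rest of the iteration with a minimal timing sketch like the following (scene setup elided; the full reproduction script is below):

import time
import blenderproc as bproc

bproc.init()
for i in range(100):
    # ... set up camera, light and objects here, then render ...
    t0 = time.perf_counter()
    bproc.clean_up(True)  # expected to be bounded; observed to grow per iteration
    print(f"iteration {i}: clean_up took {time.perf_counter() - t0:.3f} s")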

Minimal code example

import blenderproc as bproc
import bpy
import os
import numpy as np
import random
import pandas as pd
import tqdm

# After rendering
import gc
import time
import psutil

# import debugpy
# debugpy.listen(5678)
# debugpy.wait_for_client()

# Collect all metrics in a list
metrics_list = []

def log_performance_metrics(iteration, comment, output_dir):
    process = psutil.Process(os.getpid())
    mi = process.memory_info().rss / 1024 / 1024
    objs = len(bpy.data.objects)
    meshes = len(bpy.data.meshes)
    now = time.time()
    if hasattr(log_performance_metrics, "last"):
        it = now - log_performance_metrics.last
    else:
        it = 0.0
    log_performance_metrics.last = now

    metrics_list.append(
        {
            "comment": comment,
            "iteration": iteration,
            "memory_usage_mb": round(mi, 2),
            "blender_objects": objs,
            "blender_meshes": meshes,
            "timestamp": round(now, 2),
            "iteration_time": round(it, 2),
            "bpy.data.meshes": len(bpy.data.meshes),
            "bpy.data.objects": len(bpy.data.objects),
            "bpy.data.images": len(bpy.data.images),
            "bpy.data.materials": len(bpy.data.materials),
            "bpy.data.textures": len(bpy.data.textures),
            "Node Groups": len(bpy.data.node_groups),
        }
    )

    # Write this out once at the end of the script (or after every x loops):
    df = pd.DataFrame(metrics_list)
    df.to_csv(
        os.path.join(output_dir, "performance_log.csv"),
        sep=";",
        decimal=",",  # comma as decimal separator
        index=False,
    )
    return metrics_list[-1]

if __name__ == "__main__":
    output_dir_path = r"D:\output"
    os.makedirs(output_dir_path, exist_ok=True)
    n_scenes = 2000
    bproc.init()
    bproc.renderer.enable_depth_output(
        activate_antialiasing=False, output_dir=None, output_key="depth"
    )
    resolutions_render = [
        (1920, 1080),
        (1600, 1200),
        (1280, 960),
        (1280, 720),
        (1024, 1024),
        (640, 480),
    ]
    start_index = 0
    for iteration in tqdm.tqdm(range(n_scenes)):
        iter_start_time = time.time()
        start_metrics = log_performance_metrics(iteration, "0.)", output_dir_path)
        print(
            f"0.) Iteration {iteration}: Memory: {start_metrics['memory_usage_mb']:.2f} MB, Objects: {start_metrics['blender_objects']}, Time: {time.time() - iter_start_time:.2f}s"
        )
        resolution_render = random.choice(resolutions_render)
        bproc.clean_up(True)
        gc.collect()
        end_metrics = log_performance_metrics(iteration, "1.)", output_dir_path)
        
        print(
            f"1.) Iteration {iteration}: Memory: {end_metrics['memory_usage_mb']:.2f} MB, Objects: {end_metrics['blender_objects']}, Time: {time.time() - iter_start_time:.2f}s"
        )
        bproc.camera.set_resolution(resolution_render[0], resolution_render[1])
        bproc.utility.reset_keyframes()
        light = bproc.types.Light()
        light.set_location([0, 2, 4])
        light.set_energy(np.random.uniform(400, 500))
        poi = [0, 0, 0]
        location_cam = np.array([0, 0, 1])
        print("location_cam", location_cam)
        rotation_matrix = bproc.camera.rotation_from_forward_vec(
            poi - location_cam, inplane_rot=np.random.uniform(-0.7854, 0.7854)
        )
        cam2world_matrix = bproc.math.build_transformation_mat(
            location_cam, rotation_matrix
        )
        bproc.camera.add_camera_pose(cam2world_matrix)

        end_metrics = log_performance_metrics(iteration, "2.)", output_dir_path)
        print(
            f"2.) Iteration {iteration}: Memory: {end_metrics['memory_usage_mb']:.2f} MB, Objects: {end_metrics['blender_objects']}, Time: {time.time() - iter_start_time:.2f}s"
        )
        bproc.object.create_primitive("MONKEY", scale=[0.5, 0.5, 0.5])
        end_metrics = log_performance_metrics(iteration, "3.)", output_dir_path)
        print(
            f"3.) Iteration {iteration}: Memory: {end_metrics['memory_usage_mb']:.2f} MB, Objects: {end_metrics['blender_objects']}, Time: {time.time() - iter_start_time:.2f}s"
        )
        bproc.renderer.enable_segmentation_output(
            map_by="category_id", default_values={"category_id": 2}
        )
        bproc.renderer.set_output_format(enable_transparency=True)
        end_metrics = log_performance_metrics(iteration, "4.)", output_dir_path)
        print(
            f"4.) Iteration {iteration}: Memory: {end_metrics['memory_usage_mb']:.2f} MB, Objects: {end_metrics['blender_objects']}, Time: {time.time() - iter_start_time:.2f}s"
        )
        data = bproc.renderer.render()
        tree = bpy.context.scene.node_tree
        tree.nodes.remove(
            tree.nodes[-1]
        )  # required because enable_segmentation_output creates new nodes every time
        tree.nodes.remove(tree.nodes[-1])
        end_metrics = log_performance_metrics(iteration, "5.)", output_dir_path)
        print(
            f"5.) Iteration {iteration}: Memory: {end_metrics['memory_usage_mb']:.2f} MB, Objects: {end_metrics['blender_objects']}, Time: {time.time() - iter_start_time:.2f}s"
        )

Files required to run the code

No response

Expected behavior

  • Memory usage should remain roughly constant after each iteration (aside from small fluctuations).
  • bproc.clean_up(True) should consistently take a small, bounded amount of time (< 0.1 s).

BlenderProc version

BlenderProc 2.8.0

rb4005 avatar May 29 '25 22:05 rb4005

Hey @rb4005,

I cannot really reproduce your problem. When running your code, memory increased by only about 20 MB per 1000 iterations. To be fair, I added the following lines to speed up the rendering:

bproc.renderer.set_max_amount_of_samples(10)
bproc.renderer.set_denoiser(None)

However, I doubt that these changes have anything to do with the memory leak.

Maybe this problem is specific to your system (Windows)? In general, I would recommend not running too many iterations in a single BlenderProc run, but instead calling your script multiple times, as the Blender startup time is not too large.
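
For example, a small driver script along these lines would split the workload across short-lived BlenderProc processes, so any leaked memory is released when each process exits (the script name and CLI flags are placeholders for whatever your generation script accepts):

import subprocess

N_SCENES = 2000
CHUNK = 50  # scenes per process; tune so one process stays within memory limits

for start in range(0, N_SCENES, CHUNK):
    # "generate_scenes.py", "--start-index" and "--num-scenes" are placeholders
    subprocess.run(
        ["blenderproc", "run", "generate_scenes.py",
         "--start-index", str(start), "--num-scenes", str(CHUNK)],
        check=True,
    )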

cornerfarmer avatar Jun 04 '25 17:06 cornerfarmer

The crash happens for me at the line bop_bvh_tree = bproc.object.create_bvh_tree_multi_objects(sampled_target_bop_objs + sampled_distractor_bop_objs). I printed the CPU RAM usage before this line and noticed a trend of increasing RAM usage. The most scene iterations I got was 50. I am using the BOP dataset generation code, by the way.
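
For reference, a minimal psutil helper like the one below is roughly how I tracked the RAM usage; the placement around the BVH-tree call is shown as comments, since the surrounding variables come from the BOP dataset generation script:

import os
import psutil

def log_rss(label):
    # Print the current resident set size (RSS) of this process in MB
    rss_mb = psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024
    print(f"[{label}] RSS: {rss_mb:.1f} MB")
    return rss_mb

# Placed right before the failing call:
# log_rss("before create_bvh_tree_multi_objects")
# bop_bvh_tree = bproc.object.create_bvh_tree_multi_objects(
#     sampled_target_bop_objs + sampled_distractor_bop_objs)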

Shivam7Sharma avatar Jul 31 '25 23:07 Shivam7Sharma

It seems this has also been identified by the Blender community: https://projects.blender.org/blender/blender/pulls/132040

Seekerzero avatar Sep 04 '25 02:09 Seekerzero