
`Option::unwrap()` failed in `rend3::managers::material::apply_buffer_cpu()`


I haven't got a self-contained repro yet, but it involves loading an empty glTF scene with unused materials.

Backtrace:

   3: core::option::Option<T>::unwrap
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/option.rs:931:21
   4: rend3::managers::material::apply_buffer_cpu::{{closure}}
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/managers/material.rs:319:48
   5: rend3::util::freelist::buffer::FreelistDerivedBuffer::apply::{{closure}}
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/util/freelist/buffer.rs:94:24
   6: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &mut F>::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/ops/function.rs:305:13
   7: core::option::Option<T>::map
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/option.rs:1072:29
   8: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::next
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/iter/adapters/map.rs:103:26
   9: <core::iter::adapters::enumerate::Enumerate<I> as core::iter::traits::iterator::Iterator>::next
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/iter/adapters/enumerate.rs:47:17
  10: rend3::util::scatter_copy::ScatterCopy::execute_copy
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/util/scatter_copy.rs:108:28
  11: rend3::util::freelist::buffer::FreelistDerivedBuffer::apply
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/util/freelist/buffer.rs:101:9
  12: rend3::managers::material::apply_buffer_cpu
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/managers/material.rs:318:5
  13: rend3::managers::material::MaterialManager::evaluate
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/managers/material.rs:230:21
  14: rend3::renderer::eval::evaluate_instructions
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/renderer/eval.rs:186:5
  15: rend3::renderer::Renderer::evaluate_instructions
             at /Users/kpreid/.cargo/git/checkouts/rend3-e03f89403de3386a/a68c76a/rend3/src/renderer/mod.rs:482:9
  16: gltf_render::GltfRend3Renderer::render_to_rend3
             at ./tests/gltf-render.rs:167:31

Code:

https://github.com/BVE-Reborn/rend3/blob/a68c76a2809cf545484d727e32a222400134d085/rend3/src/managers/material.rs#L319

rend3 version: a68c76a2809cf545484d727e32a222400134d085

kpreid · Jan 08 '24

I hit exactly the same panic, but in apply_buffer_gpu. In my case I was setting up a lot of things in setup(), and it seemed that by limiting the number of materials created to 101 (which means the internal ids will be 0..100), I could avoid this panic.

Wild guess, but maybe something overflows because the intermediary buffer used until eval() holds 100 entries?

This was still the case as of 14507468b04fcbb3e50f457a9ddde860a11c5b03 (rend3-hp), but I didn't test more recent versions.

Edit: A different observation: if I load a different model, it crashes again, so the material count that is tolerated may be lower there for whatever reason!

MeFisto94 · Feb 03 '25

Eureka, solved, kind of. I looked at the wrong things for a while until I realized why data_vec looked the way it did: when I didn't cap the material count, it was missing the first entry, which was the placeholder material that was in use.

So, the problem is that when you add a material and drop it in the same frame, rend3 tries to upload it into the buffer, but it has already been removed from data_vec again. That's why I added a simple drop_index, which prevents it from being fed into the apply functions by the freelist buffer.
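To illustrate the hazard, here is a minimal sketch, not rend3's actual types: a hypothetical `StaleTracker` plays the role of the freelist buffer's stale list, and `apply` looks each stale index up in `data_vec` the way the panicking closure does. If the element is dropped after `use_index` but before `apply`, the lookup yields `None`, which is where the original `unwrap()` blows up; a `drop_index` along the lines of the patch below removes the dangling entry first.

```rust
// Hypothetical, simplified stand-in for FreelistDerivedBuffer's stale list.
struct StaleTracker {
    stale: Vec<usize>,
}

impl StaleTracker {
    fn new() -> Self {
        StaleTracker { stale: Vec::new() }
    }

    /// Mark an index for upload this frame (mirrors use_index).
    fn use_index(&mut self, index: usize) {
        self.stale.push(index);
    }

    /// The proposed fix: forget an index whose element was dropped
    /// before the per-frame apply ran.
    fn drop_index(&mut self, index: usize) {
        self.stale.retain(|&x| x != index);
    }

    /// Look each stale index up in data_vec. A dangling index yields
    /// None, which is what the original code unwrapped and panicked on.
    fn apply(&self, data_vec: &[u32]) -> Vec<Option<u32>> {
        self.stale.iter().map(|&i| data_vec.get(i).copied()).collect()
    }
}

fn main() {
    let mut tracker = StaleTracker::new();
    let mut data_vec = vec![10u32]; // one material at index 0

    tracker.use_index(0);
    data_vec.remove(0); // material added and dropped in the same frame

    // Without drop_index, the stale entry survives and the lookup fails:
    assert_eq!(tracker.apply(&data_vec), vec![None]);

    // With the fix, the dangling index never reaches apply:
    tracker.drop_index(0);
    assert!(tracker.apply(&data_vec).is_empty());
    println!("ok");
}
```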

One could argue for calling drop_index from update_material too, as that would ensure the index is only present once (think: add -> update -> update; we'd run the serialization three times).
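The duplicate-serialization point can be sketched in isolation (again with plain `Vec<usize>` standing in for the real stale list, so the names here are illustrative only): each `use_index` push adds another copy of the same index, while a retain-then-push keeps it unique.

```rust
fn main() {
    let mut stale: Vec<usize> = Vec::new();
    let idx = 7;

    // add -> update -> update: each call pushes the same index again,
    // so apply would serialize this material three times.
    for _ in 0..3 {
        stale.push(idx);
    }
    assert_eq!(stale.len(), 3);

    // drop_index-then-use_index keeps the entry unique.
    stale.retain(|&x| x != idx);
    stale.push(idx);
    assert_eq!(stale, vec![idx]);
    println!("ok");
}
```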

Probably the same applies to all other managers, too. We just didn't have the correct access pattern.

Also, in use_index I think I found an off-by-one error and fixed it similarly to the logic for the data_vec reallocation in MaterialManager#add: if we had a size of 1 but an index of 1, we wouldn't reallocate to 2. That means in practice we could've left out one material every time. I guess that again didn't show because usually we weren't close to power-of-two material counts. But I am not sure about my finding, because nothing had crashed or was visibly erroneous before. I also specifically didn't read the contents of ScatterCopy.
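The boundary case is easy to check with the two grow rules pulled out as free functions (hypothetical names, just mirroring the condition in the diff below): the old `index > reserved_count` check never grows at `index == reserved_count`, and even when it does grow, `index.next_power_of_two()` can still come up one slot short, since an index of `n` needs `n + 1` slots.

```rust
// Old condition: grow only when index exceeds the reserved count,
// sizing for `index` elements.
fn grow_old(reserved: usize, index: usize) -> usize {
    if index > reserved {
        index.next_power_of_two()
    } else {
        reserved
    }
}

// Fixed condition: grow when index is at or past the reserved count,
// sizing for `index + 1` elements (index n needs n + 1 slots).
fn grow_fixed(reserved: usize, index: usize) -> usize {
    if index >= reserved {
        (index + 1).next_power_of_two()
    } else {
        reserved
    }
}

fn main() {
    // The case from above: size 1, index 1.
    assert_eq!(grow_old(1, 1), 1);   // stays at 1 slot; index 1 is out of bounds
    assert_eq!(grow_fixed(1, 1), 2); // grows to 2 slots as needed

    // Even when the old check fires, the size can be one short:
    assert_eq!(grow_old(1, 2), 2);   // index 2 needs 3 slots, got 2
    assert_eq!(grow_fixed(1, 2), 4);
    println!("ok");
}
```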

diff --git a/rend3/src/managers/material.rs b/rend3/src/managers/material.rs
index 343f604..6282dcc 100644
--- a/rend3/src/managers/material.rs
+++ b/rend3/src/managers/material.rs
@@ -201,6 +201,7 @@ impl MaterialManager {
 
         let archetype = self.archetypes.get_mut(&type_id).unwrap();
         let bind_group_index = (archetype.remove_data)(&mut archetype.data_vec, handle);
+        archetype.buffer.drop_index(handle.idx);
 
         if let ProfileData::Cpu(index) = bind_group_index {
             self.texture_deduplicator.remove(index);
diff --git a/rend3/src/util/freelist/buffer.rs b/rend3/src/util/freelist/buffer.rs
index 80d1791..46960e0 100644
--- a/rend3/src/util/freelist/buffer.rs
+++ b/rend3/src/util/freelist/buffer.rs
@@ -46,13 +46,20 @@ impl FreelistDerivedBuffer {
     }
 
     pub fn use_index(&mut self, index: usize) {
-        if index > self.reserved_count {
-            self.reserved_count = index.next_power_of_two();
+        if index >= self.reserved_count {
+            self.reserved_count = (index + 1).next_power_of_two();
         }
 
         self.stale.push(index);
     }
 
+    /// Drops an index from the stale buffer, the list marked for updating this frame.
+    /// This is required when use_index was called in the same frame as something was dropped, to prevent updating
+    /// an already removed element.
+    pub fn drop_index(&mut self, index: usize) {
+        self.stale.retain(|&x| x != index);
+    }
+
     pub fn apply<T, F>(
         &mut self,
         device: &Device,

Edit: The only other type affected could be the object, but due to its use of mapping, it can skip Nones. I don't understand the context: https://github.com/BVE-Reborn/rend3/blob/d088a841b0469d07d5a7ff3f4d784e97b4a194d5/rend3/src/managers/object.rs#L340

MeFisto94 · Feb 16 '25