Torstein Grindvik

Results: 39 issues by Torstein Grindvik

Controversial effect, but the learning experience is probably great. Some information here: https://developer.nvidia.com/gpugems/gpugems3/part-iv-image-effects/chapter-27-motion-blur-post-processing-effect We need to be able to access a history of previously rendered frames, I think? Will be...

enhancement
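
For context on the linked GPU Gems chapter: the effect derives a per-pixel velocity by projecting the same world position with the current and the previous frame's view-projection matrices. A minimal sketch of that reprojection, assuming `glam` and a world position already reconstructed from the depth buffer (the chapter shows that reconstruction step):

```rust
use glam::{Mat4, Vec2, Vec3, Vec4Swizzles};

/// Screen-space velocity per GPU Gems 3, ch. 27: project one world
/// position with this frame's and last frame's view-projection
/// matrices and take the clip-space difference.
fn motion_vector(world_pos: Vec3, view_proj: Mat4, prev_view_proj: Mat4) -> Vec2 {
    let current = view_proj * world_pos.extend(1.0);
    let previous = prev_view_proj * world_pos.extend(1.0);
    // Perspective divide, then halve so the frame-to-frame motion maps
    // to the blur vector that samples are taken along.
    (current.xy() / current.w - previous.xy() / previous.w) * 0.5
}
```

Notably, this particular technique only needs the previous frame's camera matrix plus the depth buffer, rather than a history of full rendered frames.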

**Objective:** Allow use of `bevy_core` types without needing `bevy_reflect`. **Solution:** Make `bevy_reflect` within `bevy_core` optional. It's compiled in by default. Turn on reflect in dependencies as well when...

A-Core
S-Ready-For-Final-Review
A-Reflection
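
A minimal sketch of the `cfg_attr` gating such a change typically boils down to; the feature name and the type shown are illustrative assumptions, not the PR's actual diff. The same pattern covers the `bevy_input` issue below.

```rust
// The derive only compiles when the Cargo feature is enabled, so
// downstream crates that skip the feature never pull in `bevy_reflect`.
// Feature name and type are illustrative assumptions.
#[cfg_attr(feature = "bevy_reflect", derive(bevy_reflect::Reflect))]
pub struct Name(pub String);
```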

**Objective:** Allow use of `bevy_input` types without needing `bevy_reflect`. **Solution:** Make `bevy_reflect` within `bevy_input` optional. It's compiled in by default. Turn on reflect in dependencies as well when...

A-Input
C-Code-Quality
A-Reflection

**Bevy version:** Whichever Bevy version runs on the website, 0.13.2. **[Optional] Relevant system information:** Linux, Firefox. **What you did:** https://bevyengine.org/examples/Shaders/extended-material/ **What went wrong:** Crashes; the console states:...

C-Bug
A-Rendering
O-Web
S-Ready-For-Implementation

**Objective:** The docs on SpatialBundle's pub const constructors state that one is "visible" when it's actually inherited, which, as far as I know, means it's conditional on its parent's visibility. I feel it's...

C-Docs
D-Trivial
A-Rendering
S-Ready-For-Final-Review
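
To make the distinction concrete, here's a paraphrase of the three visibility states (a sketch for illustration, not Bevy's actual definition; see the real `Visibility` component for authoritative docs):

```rust
// Paraphrase of Bevy's `Visibility` component, for illustration only.
#[allow(dead_code)]
enum Visibility {
    Inherited, // follows the parent's computed visibility
    Hidden,    // always hidden, regardless of the parent
    Visible,   // always visible, regardless of the parent
}
```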

When DetectChessboardCornersX is called via a pyramid detector, `.process` will be called multiple times with images of varying dimensions. `borderBlur` needs to have `setImage` called in order for it to...

**Describe the bug:** When doing a matmul of a tensor's transpose with the tensor itself, at some "strange" sizes, the number of elements in the slice is wrong. It works with `.into_data` but...
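
A hypothetical reproduction of the shape of this report (the actual "strange" sizes are truncated above, so the dimensions here are assumptions):

```rust
use burn::tensor::{backend::Backend, Tensor};

fn repro<B: Backend>(device: &B::Device) {
    // Shapes are illustrative; the issue's actual sizes are truncated.
    let a = Tensor::<B, 2>::ones([33, 7], device);
    let out = a.clone().transpose().matmul(a); // a^T @ a -> [7, 7]
    // Per the report, a slice of the result holds the wrong number of
    // elements, while `.into_data()` on the full tensor is correct.
    let row = out.slice([0..1]);
    assert_eq!(row.dims(), [1, 7]);
}
```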

**Describe the bug:** Filing as a bug, since a solution which doesn't crash exists. Using the CUDA backend I tried to do

```rust
let out = imgs.grid_sample_2d(grid, InterpolateMode::Bilinear);
```

where `imgs` has...

bug
enhancement

I have a backend which defaults to f32. I have a model which wants f16. I should be able to `Tensor::empty(...)` and get an f16 tensor directly; doing `Tensor::empty(...).cast(...)` seems wasteful...

enhancement
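
A sketch of the workaround the issue describes, plus the kind of constructor it asks for; the shape, the return type, and the `empty_dtype` name are invented here for illustration:

```rust
use burn::tensor::{backend::Backend, DType, Tensor};

// Today's workaround: allocate at the backend's default float dtype
// (f32 here), then cast down, paying for an extra allocation and copy.
fn empty_f16<B: Backend>(device: &B::Device) -> Tensor<B, 2> {
    Tensor::<B, 2>::empty([128, 128], device).cast(DType::F16)
}

// Hypothetical desired API (name and signature invented for illustration):
// Tensor::<B, 2>::empty_dtype([128, 128], device, DType::F16)
```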