Add Linux camera support
I'm implementing camera support for Linux. Yes, I've seen this one.
It supports only cameras with the V4L2_CAP_STREAMING capability, as I only had access to such devices.
As I need some volunteers to review my PR, or even better to test it, I've written a simple game which uses camera input to track head movement. The tracking is very simple and far from perfect, but it works.
You can download the Deep Space Immersion game from here.
Also there's simple Godot project for checking available cameras and video streams here.
I've extended the CameraFeed interface a bit, just to make changing the feed format from Godot possible.
Reviews, tests, comments really appreciated. :)
Short YouTube preview of gameplay:
- Production edit: This partially addresses https://github.com/godotengine/godot/issues/46531.
So you did it without libv4l2, which I suppose is good. But maybe it would be feasible to use libv4l2 when it is installed, to support more webcams. I did this by using an additional function structure (v4l2_funcs) which contains the standard v4l2 functions, which can be substituted with the libv4l2 ones.
I did it like this:
// camera_x11.h
struct v4l2_funcs {
	int (*open)(const char *file, int oflag, ...);
	int (*close)(int fd);
	int (*dup)(int fd);
	int (*ioctl)(int fd, unsigned long int request, ...);
	long int (*read)(int fd, void *buffer, size_t n);
	void *(*mmap)(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
	int (*munmap)(void *_start, size_t length);
	bool libv4l2;
};

class V4l2_Device {
private:
	...
	// the v4l2 functions (either libv4l2 or normal v4l2)
	struct v4l2_funcs *funcs;
	...
};

class CameraX11 : public CameraServer {
private:
	struct v4l2_funcs funcs;
	...
};
// camera_x11.cpp
#include <dlfcn.h> // needed for dlopen()/dlsym()

CameraX11::CameraX11() {
	// determine if libv4l2 is installed and set the functions appropriately
	// (libv4l2 here is presumably a void * member holding the dlopen handle)
	libv4l2 = dlopen("libv4l2.so.0", RTLD_NOW);
	if (libv4l2 == NULL) {
		// the default v4l2 functions
		this->funcs.open = &open;
		this->funcs.close = &close;
		this->funcs.dup = &dup;
		this->funcs.ioctl = &ioctl;
		this->funcs.read = &read;
		this->funcs.mmap = &mmap;
		this->funcs.munmap = &munmap;
		this->funcs.libv4l2 = false;
#ifdef DEBUG_ENABLED
		print_line("libv4l2.so not found. Try standard v4l2 instead.");
#endif
	} else {
		// the libv4l2 functions
		this->funcs.open = (int (*)(const char *, int, ...))dlsym(libv4l2, "v4l2_open");
		this->funcs.close = (int (*)(int))dlsym(libv4l2, "v4l2_close");
		this->funcs.dup = (int (*)(int))dlsym(libv4l2, "v4l2_dup");
		this->funcs.ioctl = (int (*)(int, unsigned long int, ...))dlsym(libv4l2, "v4l2_ioctl");
		this->funcs.read = (long int (*)(int, void *, size_t))dlsym(libv4l2, "v4l2_read");
		this->funcs.mmap = (void *(*)(void *, size_t, int, int, int, int64_t))dlsym(libv4l2, "v4l2_mmap");
		this->funcs.munmap = (int (*)(void *, size_t))dlsym(libv4l2, "v4l2_munmap");
		this->funcs.libv4l2 = true;
#ifdef DEBUG_ENABLED
		print_line("libv4l2 found.");
#endif
	}
	// ...
}
After that I can pass the function structure to the V4l2_Device object and use its members like the normal v4l2 functions. This way it uses libv4l2 if installed, and standard v4l2 otherwise.
Thanks for the feedback @Schmetzler.
I think the best way to go is to provide a minimal working solution with a complete interface (I've added methods for feed format manipulation, though some parameters still need to be discussed).
I'm not sure how many cameras work with standard v4l2 and how many more libv4l2 would add to this number. Maybe implementing libv4l2 is not worth it at all? Less code is better, and cameras are cheap, you can always get something that works. Or maybe it should provide only libv4l2?
I've tested this code on a bunch of different cameras, all of them supporting V4L2_CAP_STREAMING and single planar. I'm not sure if there's a need to support other modes; I'm trying to get some volunteers to test the solution and report some feedback. Maybe I'll get someone with a different setup. Or maybe nowadays all cameras behave like mine? We'll see.
As to the struct filled with function pointers, what do you think of refactoring it into a class hierarchy, like V4l2Camera and Libv4l2Camera? It would probably be more readable.
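Roughly what I have in mind (a minimal sketch; only the V4l2Camera/Libv4l2Camera names come from the suggestion above, the method set is made up for illustration):

#include <dlfcn.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

// Plain v4l2: virtuals forward straight to the system calls.
class V4l2Camera {
public:
	virtual int open_device(const char *p_path, int p_flags) { return ::open(p_path, p_flags); }
	virtual int ioctl_device(int p_fd, unsigned long p_request, void *p_arg) { return ::ioctl(p_fd, p_request, p_arg); }
	virtual int close_device(int p_fd) { return ::close(p_fd); }
	virtual ~V4l2Camera() {}
};

// libv4l2 variant: overrides forward to v4l2_open/v4l2_ioctl/v4l2_close resolved with dlsym,
// exactly like the function pointer version above; the rest of the code only sees the base class.
class Libv4l2Camera : public V4l2Camera {
	void *handle = nullptr;
	int (*fn_open)(const char *, int, ...) = nullptr;
	int (*fn_ioctl)(int, unsigned long, ...) = nullptr;
	int (*fn_close)(int) = nullptr;

public:
	// Resolve the libv4l2 entry points once; callers fall back to V4l2Camera if this fails.
	bool load() {
		handle = dlopen("libv4l2.so.0", RTLD_NOW);
		if (!handle) {
			return false;
		}
		fn_open = (int (*)(const char *, int, ...))dlsym(handle, "v4l2_open");
		fn_ioctl = (int (*)(int, unsigned long, ...))dlsym(handle, "v4l2_ioctl");
		fn_close = (int (*)(int))dlsym(handle, "v4l2_close");
		return fn_open && fn_ioctl && fn_close;
	}
	int open_device(const char *p_path, int p_flags) override { return fn_open(p_path, p_flags); }
	int ioctl_device(int p_fd, unsigned long p_request, void *p_arg) override { return fn_ioctl(p_fd, p_request, p_arg); }
	int close_device(int p_fd) override { return fn_close(p_fd); }
	~Libv4l2Camera() override {
		if (handle) {
			dlclose(handle);
		}
	}
};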
I'm not sure how many cameras work with standard v4l2 and how many more libv4l2 would add to this number. Maybe implementing libv4l2 is not worth it at all? Less code is better, and cameras are cheap, you can always get something that works. Or maybe it should provide only libv4l2?
I guess in the end you can make every camera work with only v4l2, but it would mean that every possible mode a camera may have would need to be implemented (I mean the color modes, YUYV and so on). Libv4l2 does this transformation to RGB, so you can always access at least an RGB image from the camera (as I am not familiar with the other color modes, I implemented the libv4l2 approach), but I made it in a way that it can theoretically function without that extra library. So in the end it could even save some complexity.
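To make that concrete: with libv4l2 you can simply request RGB24 and the library converts from whatever the camera natively delivers. A minimal sketch (it links libv4l2 directly just to keep the example short, unlike the dlsym approach from my snippet above):

#include <libv4l2.h>
#include <linux/videodev2.h>
#include <string.h>

// Ask libv4l2 for RGB24; if the camera only speaks YUYV or MJPEG, libv4l2 converts for us.
bool request_rgb24(int fd, unsigned int width, unsigned int height) {
	struct v4l2_format fmt;
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = width;
	fmt.fmt.pix.height = height;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
	fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
	if (v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt) == -1) {
		return false;
	}
	// libv4l2 may still refuse; check that we really got RGB24 back.
	return fmt.fmt.pix.pixelformat == V4L2_PIX_FMT_RGB24;
}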
I also only had a camera with STREAMING, but implemented the other modes anyway, as I found a tutorial on how to access the images in those modes.
Quick reminder: I was the one who created PR #47967
As to the struct filled with function pointers, what do you think of refactoring it into a class hierarchy, like V4l2Camera and Libv4l2Camera? It would probably be more readable.
That may be viable. But I am not sure whether it would add more complexity than necessary.
v4l2 lists many pixel formats, but I think the vast majority of cameras support YUYV or JPEG. At least I did not stumble upon a camera missing one of these. I'm not sure if it's worth complicating the code and including libv4l2 support for a hard to estimate increase in device coverage. The same goes for modes other than streaming, which we can't even test properly.
Probably decoding YUYV to RGB, grayscale or separated planes (or just copying it), plus decoding JPEG to RGB, is a subset that will cover most real-life situations.
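For reference, the YUYV to RGB step itself is only a small per-pixel-pair transform. A minimal sketch using the common BT.601 integer approximation (illustrative only, not the decoder code from the branch):

#include <stddef.h>
#include <stdint.h>

static inline uint8_t clamp_u8(int v) {
	return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

// Convert packed YUYV (Y0 U Y1 V per two pixels) to RGB24.
void yuyv_to_rgb24(const uint8_t *src, uint8_t *dst, size_t width, size_t height) {
	for (size_t i = 0; i < width * height / 2; i++) {
		int y0 = src[0], u = src[1], y1 = src[2], v = src[3];
		int d = u - 128, e = v - 128;
		int ys[2] = { y0, y1 };
		for (int p = 0; p < 2; p++) {
			int c = 298 * (ys[p] - 16);
			*dst++ = clamp_u8((c + 409 * e + 128) >> 8); // R
			*dst++ = clamp_u8((c - 100 * d - 208 * e + 128) >> 8); // G
			*dst++ = clamp_u8((c + 516 * d + 128) >> 8); // B
		}
		src += 4;
	}
}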
Added the exported game to itch.io.
Any update on this?
Nope, but it takes time. If you want to speed it up, please build this branch yourself or use the image from GitHub, test it on your setup, and then leave a comment saying whether it worked for you or whether you had any problems.
Couldn't get the head tracking to work properly, but I can confirm that every USB cam I had access to worked fine out of the box. EDIT: linked issue #46531
For the record, I apologize for the lack of review of this PR after all this time. I'm aware of it and https://github.com/godotengine/godot/pull/47967 (and a couple similar PRs for Windows), just didn't find time to prioritize those yet. Hopefully after the 4.0 release we can have a pass on camera support for all platforms and assess those PRs.
No problem, we're all used to waiting. :) I think it's not only the review; some decisions will be needed on the camera API.
@Calinou Added these three lines to the codespell exclude file, not sure where to put the file though. When retesting, please pull the latest CameraTest, as I fixed a bug with the yuyv output format. You should be able to see yuyv output in color now.
When it's fixed, please squash your commits into one, see here
Memory leak in godot camera branch
Description
Adding the camera material to any object in a 3D scene causes a memory leak.
Steps to Reproduce
- Making a 3D node
- Adding a camera texture to the albedo of a 3D object
- Observing memory usage: Godot's memory usage increases by 1 GB every 10-15 seconds
Environment
- Godot Version: 4.3.dev Camera Branch
- Operating System: Arch Linux X11
- Kernel: 6.7.5
- PC-Specs
- CPU: AMD Ryzen 5 5600X (12) @ 3.700GHz
- GPU: NVIDIA GeForce RTX 4060 Ti 16GB
- Memory: 32015MiB
Godot Project Repository
Checked the CameraTest project and can confirm that there's a memory leak. Not sure how to fix it.
CameraFeed::set_YCbCr_img(const Ref<Image> &p_rgb_img) calls RenderingServer::get_singleton()->texture_2d_update, and the p_rgb_img passed as a parameter is not freed.
When using RenderingServer::get_singleton()->texture_replace, memory is freed as expected.
I managed to mitigate the memory leak (at least with JpegBufferDecoder devices, since I don't have any other device to test with).
I'm not confident with C++ and Ref<T> management, but here is the minor refactor I've done:
In all ::decode implementations in buffer_decoder.cpp:
// Replacing the existing:
// Ref<Image> image = memnew(Image(width, height, false, Image::FORMAT_L8, image_data));
// By the following:
if (image.is_valid()) {
	image->set_data(width, height, false, Image::FORMAT_L8, image_data);
} else {
	image = (Ref<Image>)memnew(Image(width, height, false, Image::FORMAT_L8, image_data));
}
while I moved the Ref<Image> image to a protected class attribute in buffer_decoder.h. I mimicked this refactoring for every ::decode method.
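For reference, the header side of that change looks roughly like this (a sketch only; BufferDecoder's other members and methods are omitted):

// buffer_decoder.h (sketch)
class BufferDecoder {
protected:
	CameraFeed *camera_feed = nullptr;
	int width = 0;
	int height = 0;
	// Kept as a member so each decode() call can reuse it through set_data() instead of
	// allocating a new Image every frame.
	Ref<Image> image;
};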
This results in a leak of only one Image when using the --verbose launch option on a debug export (instead of thousands of leaks before these modifications):
Leaked instance: Image:-9222496825599064925 - Resource path:
I did not manage to fix the last Ref<Image> instance leak (for now).
Here are some valgrind sneak peeks on a debug export build (project running for 10-20 s).
Before refactoring:
[...]
==2320356== 490,737,312 bytes in 117 blocks are possibly lost in loss record 2,420 of 2,422
==2320356== at 0x4E9E7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==2320356== by 0x4790F6: Memory::alloc_static(unsigned long, bool) [clone .constprop.0] (memory.cpp:75)
==2320356== by 0xD61C73: Error CowData<signed char>::resize<false>(long) (cowdata.h:340)
==2320356== by 0x31D31EF: UnknownInlinedFun (vector.h:95)
==2320356== by 0x31D31EF: jpeg_load_image_from_buffer(Image*, unsigned char const*, int) (image_loader_jpegd.cpp:65)
==2320356== by 0x31D49F1: _jpegd_mem_loader_func(unsigned char const*, int) [clone .lto_priv.0] (image_loader_jpegd.cpp:131)
==2320356== by 0xD22793: Image::_load_from_buffer(Vector<unsigned char> const&, Ref<Image> (*)(unsigned char const*, int)) (image.cpp:3938)
==2320356== by 0x3675F5F: JpegBufferDecoder::decode(StreamingBuffer) (buffer_decoder.cpp:183)
==2320356== by 0x36386F5: UnknownInlinedFun (camera_feed_linux.cpp:196)
==2320356== by 0x36386F5: UnknownInlinedFun (camera_feed_linux.cpp:42)
==2320356== by 0x36386F5: CameraFeedLinux::update_buffer_thread_func(void*) (camera_feed_linux.cpp:36)
==2320356== by 0xDA542C: Thread::callback(unsigned long, Thread::Settings const&, void (*)(void*), void*) (thread.cpp:64)
==2320356== by 0x37CCBF3: execute_native_thread_routine (in <redacted>)
==2320356== by 0x50DB608: start_thread (pthread_create.c:477)
==2320356== by 0x4FF6352: clone (clone.S:95)
==2320356==
[...]
==2320356== 541,069,344 bytes in 129 blocks are indirectly lost in loss record 2,421 of 2,422
==2320356== at 0x4E9E7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==2320356== by 0x4790F6: Memory::alloc_static(unsigned long, bool) [clone .constprop.0] (memory.cpp:75)
==2320356== by 0xD61C73: Error CowData<signed char>::resize<false>(long) (cowdata.h:340)
==2320356== by 0x31D31EF: UnknownInlinedFun (vector.h:95)
==2320356== by 0x31D31EF: jpeg_load_image_from_buffer(Image*, unsigned char const*, int) (image_loader_jpegd.cpp:65)
==2320356== by 0x31D49F1: _jpegd_mem_loader_func(unsigned char const*, int) [clone .lto_priv.0] (image_loader_jpegd.cpp:131)
==2320356== by 0xD22793: Image::_load_from_buffer(Vector<unsigned char> const&, Ref<Image> (*)(unsigned char const*, int)) (image.cpp:3938)
==2320356== by 0x3675F5F: JpegBufferDecoder::decode(StreamingBuffer) (buffer_decoder.cpp:183)
==2320356== by 0x36386F5: UnknownInlinedFun (camera_feed_linux.cpp:196)
==2320356== by 0x36386F5: UnknownInlinedFun (camera_feed_linux.cpp:42)
==2320356== by 0x36386F5: CameraFeedLinux::update_buffer_thread_func(void*) (camera_feed_linux.cpp:36)
==2320356== by 0xDA542C: Thread::callback(unsigned long, Thread::Settings const&, void (*)(void*), void*) (thread.cpp:64)
==2320356== by 0x37CCBF3: execute_native_thread_routine (in <redacted>)
==2320356== by 0x50DB608: start_thread (pthread_create.c:477)
==2320356== by 0x4FF6352: clone (clone.S:95)
==2320356==
[...]
==2320356== 541,185,456 (116,112 direct, 541,069,344 indirect) bytes in 246 blocks are definitely lost in loss record 2,422 of 2,422
==2320356== at 0x4E9E7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==2320356== by 0x4790F6: Memory::alloc_static(unsigned long, bool) [clone .constprop.0] (memory.cpp:75)
==2320356== by 0x3675EF6: UnknownInlinedFun (regex.cpp:40)
==2320356== by 0x3675EF6: JpegBufferDecoder::decode(StreamingBuffer) (buffer_decoder.cpp:182)
==2320356== by 0x36386F5: UnknownInlinedFun (camera_feed_linux.cpp:196)
==2320356== by 0x36386F5: UnknownInlinedFun (camera_feed_linux.cpp:42)
==2320356== by 0x36386F5: CameraFeedLinux::update_buffer_thread_func(void*) (camera_feed_linux.cpp:36)
==2320356== by 0xDA542C: Thread::callback(unsigned long, Thread::Settings const&, void (*)(void*), void*) (thread.cpp:64)
==2320356== by 0x37CCBF3: execute_native_thread_routine (in <redacted>)
==2320356== by 0x50DB608: start_thread (pthread_create.c:477)
==2320356== by 0x4FF6352: clone (clone.S:95)
==2320356==
[...]
==2320356== LEAK SUMMARY:
==2320356== definitely lost: 117,340 bytes in 270 blocks
==2320356== indirectly lost: 541,080,165 bytes in 189 blocks
==2320356== possibly lost: 490,737,312 bytes in 117 blocks
==2320356== still reachable: 486,745 bytes in 3,330 blocks
==2320356== suppressed: 0 bytes in 0 blocks
==2320356==
==2320356== Use --track-origins=yes to see where uninitialised values come from
==2320356== For lists of detected and suppressed errors, rerun with: -s
==2320356== ERROR SUMMARY: 376 errors from 19 contexts (suppressed: 6 from 4)
After refactoring:
[...]
==2349846== 4,194,336 bytes in 1 blocks are possibly lost in loss record 2,421 of 2,421
==2349846== at 0x4E9F7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==2349846== by 0x4790F6: Memory::alloc_static(unsigned long, bool) [clone .constprop.0] (memory.cpp:75)
==2349846== by 0xD61473: Error CowData<signed char>::resize<false>(long) (cowdata.h:340)
==2349846== by 0x31D48CF: UnknownInlinedFun (vector.h:95)
==2349846== by 0x31D48CF: jpeg_load_image_from_buffer(Image*, unsigned char const*, int) (image_loader_jpegd.cpp:65)
==2349846== by 0x31D60D1: _jpegd_mem_loader_func(unsigned char const*, int) [clone .lto_priv.0] (image_loader_jpegd.cpp:131)
==2349846== by 0xD21F93: Image::_load_from_buffer(Vector<unsigned char> const&, Ref<Image> (*)(unsigned char const*, int)) (image.cpp:3938)
==2349846== by 0x3676FC1: JpegBufferDecoder::decode(StreamingBuffer) (buffer_decoder.cpp:228)
==2349846== by 0x3639EA5: UnknownInlinedFun (camera_feed_linux.cpp:196)
==2349846== by 0x3639EA5: UnknownInlinedFun (camera_feed_linux.cpp:42)
==2349846== by 0x3639EA5: CameraFeedLinux::update_buffer_thread_func(void*) (camera_feed_linux.cpp:36)
==2349846== by 0xDA4C1C: Thread::callback(unsigned long, Thread::Settings const&, void (*)(void*), void*) (thread.cpp:64)
==2349846== by 0x37CDEF3: execute_native_thread_routine (in <redacted>)
==2349846== by 0x50DC608: start_thread (pthread_create.c:477)
==2349846== by 0x4FF7352: clone (clone.S:95)
==2349846==
[...]
==2349846== LEAK SUMMARY:
==2349846== definitely lost: 1,724 bytes in 26 blocks
==2349846== indirectly lost: 10,813 bytes in 59 blocks
==2349846== possibly lost: 4,194,336 bytes in 1 blocks
==2349846== still reachable: 486,745 bytes in 3,330 blocks
==2349846== suppressed: 0 bytes in 0 blocks
==2349846==
==2349846== Use --track-origins=yes to see where uninitialised values come from
==2349846== For lists of detected and suppressed errors, rerun with: -s
==2349846== ERROR SUMMARY: 325 errors from 20 contexts (suppressed: 6 from 4)
1 block versus <n webcam frames> blocks.
@TontonSancho Applied your workaround. I still think it's a workaround, but better done than perfect. Thanks for the help.
After exporting a project on this branch, the camera is not working in the exported version; the object texture goes pink. In the engine, and at engine runtime, everything works fine. No errors appear that I can see. And the memory leak seems to be fixed now.
After letting it run for a few minutes with the webcam on, I get this error:
ERROR: Cannot query device. at: _query_device (modules/camera/camera_feed_linux.cpp:52)
ERROR: Invalid format index. at: set_format (servers/camera/camera_feed.cpp:287)
ERROR: Condition "!CameraFeed::set_format(p_index, p_parameters)" is true. Returning: false at: set_format (modules/camera/camera_feed_linux.cpp:288)
Going down memory lane for me a little. Overall this looks good. I would like some more detail/explanation around the format additions, especially as we're adding an implementation on the base classes and then only implementing it for Linux. In its current form it looks like calling any of these methods will break on the other platforms, as formats are not initialised.
If it does need to be exposed on the base class, we should have these methods as virtual methods with dummy implementations, then move the actual implementation into CameraFeedLinux.
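Roughly something like this, for example (the Dictionary-based signature is just illustrative; only the set_format name and the CameraFeed/CameraFeedLinux classes are taken from the PR):

// servers/camera/camera_feed.h -- the base class exposes the method with a harmless default
// (existing members omitted).
class CameraFeed {
public:
	// Dummy implementation for platforms that don't support format selection yet.
	virtual bool set_format(int p_index, const Dictionary &p_parameters) {
		return false;
	}
};

// modules/camera/camera_feed_linux.h -- the Linux backend overrides it with the real behaviour.
class CameraFeedLinux : public CameraFeed {
public:
	bool set_format(int p_index, const Dictionary &p_parameters) override;
};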
@Exw27 Thanks for the thorough testing, there was a missing close on a file descriptor. @BastiaanOlij Implemented your suggestions. Not sure about the explanation, should the description in the XML be more detailed?
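For context, the general shape of that kind of fix (purely illustrative, with a hypothetical helper, not the actual change in the branch): if a query fails after open() succeeded, the descriptor has to be closed before returning, otherwise every failed attempt leaks an fd.

#include <fcntl.h>
#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

// Hypothetical helper: check whether a device node supports streaming capture.
bool query_streaming_device(const char *p_path) {
	int fd = open(p_path, O_RDWR | O_NONBLOCK);
	if (fd == -1) {
		return false;
	}
	struct v4l2_capability capability;
	memset(&capability, 0, sizeof(capability));
	if (ioctl(fd, VIDIOC_QUERYCAP, &capability) == -1 || !(capability.capabilities & V4L2_CAP_STREAMING)) {
		// Without this close, the descriptor would leak on every failed query.
		close(fd);
		return false;
	}
	close(fd);
	return true;
}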
Diff to improve the includes (didn't check if all the system includes are needed):
diff --git a/modules/camera/buffer_decoder.cpp b/modules/camera/buffer_decoder.cpp
index 09e7b9156a..024a68f080 100644
--- a/modules/camera/buffer_decoder.cpp
+++ b/modules/camera/buffer_decoder.cpp
@@ -32,6 +32,8 @@
#include "servers/camera/camera_feed.h"
+#include <linux/videodev2.h>
+
BufferDecoder::BufferDecoder(CameraFeed *p_camera_feed) {
camera_feed = p_camera_feed;
width = camera_feed->get_format().width;
diff --git a/modules/camera/buffer_decoder.h b/modules/camera/buffer_decoder.h
index 0de61d883e..046129bd2e 100644
--- a/modules/camera/buffer_decoder.h
+++ b/modules/camera/buffer_decoder.h
@@ -33,10 +33,6 @@
#include "core/io/image.h"
#include "core/templates/vector.h"
-#include "servers/camera_server.h"
-
-#include <linux/videodev2.h>
-#include <stdint.h>
class CameraFeed;
diff --git a/modules/camera/camera_feed_linux.cpp b/modules/camera/camera_feed_linux.cpp
index c9e8b1fbf1..6d9e264872 100644
--- a/modules/camera/camera_feed_linux.cpp
+++ b/modules/camera/camera_feed_linux.cpp
@@ -30,6 +30,11 @@
#include "camera_feed_linux.h"
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
void CameraFeedLinux::update_buffer_thread_func(void *p) {
if (p) {
CameraFeedLinux *camera_feed_linux = (CameraFeedLinux *)p;
diff --git a/modules/camera/camera_feed_linux.h b/modules/camera/camera_feed_linux.h
index 99ea6754f9..dc07ff2b59 100644
--- a/modules/camera/camera_feed_linux.h
+++ b/modules/camera/camera_feed_linux.h
@@ -36,11 +36,7 @@
#include "core/os/thread.h"
#include "servers/camera/camera_feed.h"
-#include <fcntl.h>
#include <linux/videodev2.h>
-#include <sys/ioctl.h>
-#include <sys/mman.h>
-#include <unistd.h>
struct StreamingBuffer;
diff --git a/modules/camera/camera_linux.cpp b/modules/camera/camera_linux.cpp
index 8940f4df14..0cfb6b7b9e 100644
--- a/modules/camera/camera_linux.cpp
+++ b/modules/camera/camera_linux.cpp
@@ -30,6 +30,14 @@
#include "camera_linux.h"
+#include "camera_feed_linux.h"
+
+#include <dirent.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <unistd.h>
+
void CameraLinux::camera_thread_func(void *p_camera_linux) {
if (p_camera_linux) {
CameraLinux *camera_linux = (CameraLinux *)p_camera_linux;
diff --git a/modules/camera/camera_linux.h b/modules/camera/camera_linux.h
index 8db2e25311..66f6aa0ffb 100644
--- a/modules/camera/camera_linux.h
+++ b/modules/camera/camera_linux.h
@@ -31,17 +31,10 @@
#ifndef CAMERA_LINUX_H
#define CAMERA_LINUX_H
-#include "camera_feed_linux.h"
#include "core/os/mutex.h"
+#include "core/os/thread.h"
#include "servers/camera_server.h"
-#include <dirent.h>
-#include <fcntl.h>
-#include <linux/videodev2.h>
-#include <sys/ioctl.h>
-#include <sys/stat.h>
-#include <unistd.h>
-
class CameraLinux : public CameraServer {
private:
SafeFlag exit_flag;
Merge-wise, we're in feature freeze for 4.3, but I now think this is ready to be included in the 4.4 milestone, to merge early on in that cycle.
Given how platform-specific the code in this module is, I think we might want to do away with the module approach and actually move each platform's code into its respective platform folder. This isn't something to do in this PR though; we should either do it after, or before this is merged (and then rebased). WDYT @BastiaanOlij @bruvzg?
I definitely struggled with this part of it all when I originally wrote the server. I think some of it originally started in the platform folders. I think it makes the most sense to have the code there instead of in modules. I guess it was moved there so it could be compiled out of the solution?
(and indeed, a discussion and action point for a separate PR, this PR looks good once the requested changes are done)
OK, implemented the next bunch of suggestions.
I would like to ask for your opinion on the set_format method.
What it does is set one of the predefined camera formats (resolution/framerate/compression) and, in the case of yuyv data, also allow setting the output format: yuyv, separated, grayscale or rgb.
Setting the output format is done by passing a dictionary with the key output set to a predefined value.
It's ugly and this dictionary should probably be replaced by an enum.
I've made it this way because I was not sure whether any other parameters would be needed, and adding parameters in the future would probably result in breaking the API. A dictionary allows passing a different set of data without breaking the API.
Now is a good time to decide what this method's signature should look like.
Also, the current implementation only allows changing yuyv data; jpg-based compression always results in rgb. I've made it this way because I think the majority of users will want to use rgb and avoid writing a shader to convert yuyv to rgb. Is there any sense in adding conversion from rgb to other formats?
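To make the signature question concrete, here are the two shapes side by side (the Dictionary variant mirrors what the branch does now, as described above; the enum variant is only a hypothetical alternative):

// Option A: current approach -- extra parameters travel in a Dictionary,
// e.g. feed.set_format(index, { "output": "rgb" }) from script code.
virtual bool set_format(int p_index, const Dictionary &p_parameters);

// Option B (hypothetical): a dedicated enum instead of the free-form "output" key; easier to
// document and validate, but the signature has to change again if new parameters show up later.
enum FeedOutputFormat {
	FEED_OUTPUT_YUYV,
	FEED_OUTPUT_SEPARATED,
	FEED_OUTPUT_GRAYSCALE,
	FEED_OUTPUT_RGB,
};
virtual bool set_format(int p_index, FeedOutputFormat p_output);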
@pkowal1982 I think I get the gist of what you're doing with the format setting. I'm still a little worried that there is too much implementation in the base class without any of the other operating systems doing something with it. That makes it difficult to create functionality that works on multiple platforms. But I also have to admit that I lack the hands-on experience to really have a good opinion on how this works or should work.
Other than that issue, I think this looks really, really good and I'm for merging it as is. We might want to make the format stuff experimental and get more feedback from users.
I agree that if we can base the work on RGB, we should. I was always concerned about the overhead of converting yuyv to RGB on the CPU, hence going for the shader solution, but it puts more pressure on the user. This may be a discussion for later, to see if we can do this for any camera feed, especially if we make good use of threads and accept the overhead of the CPU-based conversion.
Just curious if this is still on track to be merged in 4.4? Not trying to be a nuisance, I'm just incredibly excited for this specific feature 🤓
I tried merging this locally, and I get this error when opening any project on Linux (Fedora 40 on Wayland):
ERROR: Cannot set format, error: 16.
at: set_format (modules/camera/camera_feed_linux.cpp:328)
So does this mean we will have camera support for Godot 4.4 Linux projects? Very exciting!!! And then I can finally proceed with my ideas for a new game using a webcam :)
How about Windows camera support? Will that also be available with Godot 4.4?
How about Windows camera support?
See https://github.com/godotengine/godot/pull/49763. There is a version of that PR rebased against master posted in the comments, but it didn't work when I tested it (and it uses a significantly different interface to access cameras).
Will that also be available with Godot 4.4?
We don't have an ETA for merging the Windows PR, since it's not in a mergeable state yet.