Prusa-Link
Feature request: Camera Live instead of Snapshots
Hi, I'm very happy to see that camera support is appearing, especially the automatic detection of USB cameras.
Unfortunately, as far as I can see, there is currently no way to preview live, only snapshots after a time interval or a layer change.
Ideally, it would be possible to preview live at, say, 24 frames per second.
Do you plan to add such a possibility? Even if it were optional, or only supported in PrusaLink (without the ability to connect the camera to Prusa Connect).
I think it would be a great improvement.
Hi, at this moment, no.
But we want to create some kind of "live" stream to allow better camera focus adjustment. That will bring new problems, like the RPi Zero CPU load and so on. But for local usage only, it could become one of the camera trigger schemes.
OK, that's understandable. I'm just thinking that you allow the usage of other systems (not officially, but it is possible).
I'm currently using PrusaLink with an RPi 3A+, so I have one spare USB port to use with a camera. From what I can see right now there is no problem with CPU load, though I haven't had the opportunity to test it with a live camera.
So I think it would be logical to allow such a thing as an experimental option.
Can you direct me / give me a hint on how to change the trigger scheme to make it more "live"?
There is no trigger scheme for this. The way it's set up, it just sends every picture it takes. There's no video support at all, just JPEG images that are taken, saved to RAM for the local API to fetch, and sent to the Connect backend for online viewing. That's it.
I have a problem. USB cameras mostly output MJPEG and "uncompressed" YUYV, while Raspberry Pi cameras do YUYV only. The hardware encoder on the Pi can encode YUYV to MJPEG at up to 1920x1920. In the absence of a second CPU core, I forbid any resolution above this if the output is not already MJPEG. If there are more cores, or if there's no supported hardware encoder (only the Raspberry Pi one is supported), I switch to libjpeg-turbo.
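The encoder/resolution gating described above could be sketched roughly like this. This is a hypothetical reconstruction of the decision logic, not PrusaLink code; the function name, parameters, and return labels are all illustrative:

```python
HW_ENCODER_MAX = 1920  # Raspberry Pi hardware JPEG encoder limit per side


def pick_encoder(pixel_format: str, width: int, height: int,
                 has_rpi_hw_encoder: bool, cpu_cores: int) -> str:
    """Decide how a camera frame becomes a JPEG for the snapshot pipeline."""
    if pixel_format == "MJPEG":
        return "passthrough"  # camera already outputs JPEG frames
    # YUYV (uncompressed) has to be encoded somewhere
    over_limit = max(width, height) > HW_ENCODER_MAX
    if over_limit and cpu_cores < 2:
        # no second core to spare for software encoding
        raise ValueError("resolution too high to encode on a single core")
    if has_rpi_hw_encoder and not over_limit:
        return "rpi-hw"  # offload to the Pi's hardware encoder
    return "libjpeg-turbo"  # software encode on a spare core
```

The ordering matters: MJPEG sources skip encoding entirely, and the hardware path is preferred whenever the frame fits its size limit.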
Okay, well, that was a mouthful. Now put video into the mix, rewrite the sending mechanism so it sends a frame only when it's time to send it, and also make the web show the stream. You get about a month of additional work. You are not the first, nor the last, who will ask for video. If I could flip a switch that says "Video" and that would be it, I'd do it. But there are a myriad of other things that need addressing before I'll be allowed to sink time into this.
We all know that there is no magical way to "flip a switch and that would be it". It is hard to maintain software that can be used in so many configurations, especially since you need to keep in mind that every piece of hardware is different and has different capabilities.
In some ways, maybe I wasn't clear. I'm not talking about enabling video support, at least not for now (😅).
More like increasing the image/snapshot frequency: instead of one frame per 10 seconds, more like x frames per second, where the user can specify the number (x) of updates depending on their hardware capabilities.
I know that there are hardware / software / bandwidth limits imposed by nature. I'm not demanding nor advising changing the software to reflect image updates in Prusa Connect. For the time being, I think that if you're allowed to use such a function / feature (described by me earlier), the user should be prevented from connecting that camera to Prusa Connect, to avoid overwhelming the servers / excess bandwidth consumption.
Does that sound more reasonable?
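The user-configurable snapshot frequency proposed above could be as simple as a rate-limited capture loop that only publishes to the local API. A minimal sketch, not PrusaLink code; `capture_jpeg`, `publish_local`, and the stop condition are all assumed placeholders:

```python
import time


def snapshot_loop(capture_jpeg, publish_local, frames_per_second: float, stop):
    """Capture at a user-chosen rate and publish only to the local API,
    never to the Connect backend (avoiding server/bandwidth load)."""
    interval = 1.0 / frames_per_second
    while not stop():
        started = time.monotonic()
        publish_local(capture_jpeg())  # latest frame kept in RAM
        # sleep only for whatever is left of this frame's time slot
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, interval - elapsed))
```

Subtracting the capture time from the sleep keeps the effective rate close to the requested x frames per second even when encoding is slow.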
Wouldn't it be possible to add IP cameras? PrusaLink and Prusa Connect would only have to display the webcam feed directly, like OctoPrint does. I am currently testing PrusaLink, and I also have OctoPrint with two ESP32-CAM IP cameras. It works very well, and even with a less powerful Raspberry Pi like the Pi Zero W version 1 it's not a problem, since it's the client browser that takes care of getting the stream. It does not go through the Raspberry Pi at all.
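To illustrate why the Pi stays out of the loop here: an ESP32-CAM serves MJPEG, which is just JPEG images concatenated in an HTTP multipart response, so any client can pull frames straight from the camera. A minimal, hypothetical frame extractor (the SOI/EOI bytes are real JPEG markers; everything else is illustrative):

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker


def extract_frames(buffer: bytes):
    """Return the complete JPEG frames found in a raw MJPEG byte stream,
    plus the unconsumed tail (a partial frame awaiting the next read)."""
    frames, rest = [], buffer
    while True:
        start = rest.find(SOI)
        end = rest.find(EOI, start + 2) if start != -1 else -1
        if start == -1 or end == -1:
            return frames, rest  # keep the partial tail for the next read
        frames.append(rest[start:end + 2])
        rest = rest[end + 2:]
```

A client would append each network read to the leftover tail and call this again, which is essentially what a browser does internally when rendering a multipart/x-mixed-replace stream.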
It is possible, but impractical. If you only show the preview when you're looking at it, there's no way to have a robot look at it, no way to do a timelapse, and no way to take a photo during an event like a crash. So I don't think we're going to go this way. Also because giving you a stream from an ESP32-CAM while limiting your Raspberry Pi camera's framerate to one frame per 10 s would look braindead.
The problem is not a lack of ideas. This is the second time I see this one btw. The problem is me. Make me do more stuff in less time. Somewhere along the way I started to dislike being alone in my room and I have a hard time getting that to change.
But like, yes, good idea. I'm sure you could script that into a browser extension using ChatGPT in an afternoon.
I would also like a live camera view.
Hello, I've been running PrusaLink on my MK3S for a few weeks now via a directly connected RPi Zero 2 W, and I'm actually very satisfied with the possibilities. I recently connected an additional camera (a Pi day/night vision camera: IR-Cut, 1080p HD, 5 MP OV5647 sensor). The configuration went very smoothly following the Prusa guide.

However, I miss the option of carrying out a focus adjustment using a live image from the camera, because that is extremely difficult with 10 s still images.

Furthermore, I noticed that the IR image is not displayed when there is no lighting; PrusaLink only returns a white image. The camera was previously tested on another RPi 3B and delivered flawless results there, even in the absence of light, so I believe this could be a PrusaLink software issue.

It would be nice if these two points could be addressed in future versions.