Luxonis-Brandon
Hi everyone! We stumbled upon your website today and love this project. So we make an open-source 'spatial AI' camera called DepthAI. We'd really like to integrate it with opendatacam...
## Start with the `why`: The `why` of this effort (and initial research) is that in many applications, depth cameras (and even sometimes LIDAR) are not sufficient to successfully detect...
### Start with the `why`: In some physical installations it is advantageous (or necessary) to install megaAI `upside-down`, primarily because with the default settings, the image is upside down when...
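Until such mounting is handled natively, one host-side workaround is simply to rotate each received frame 180 degrees before further processing. A minimal NumPy sketch of that rotation (this is not the DepthAI API itself, just the underlying operation):

```python
import numpy as np

def rotate_180(frame: np.ndarray) -> np.ndarray:
    """Rotate an image 180 degrees by reversing both spatial axes.

    For a color frame of shape (H, W, C), only the first two axes are
    flipped, so the channel order is preserved.
    """
    return frame[::-1, ::-1]

# Tiny synthetic "frame" to show the effect:
frame = np.array([[1, 2],
                  [3, 4]])
rotated = rotate_180(frame)  # [[4, 3], [2, 1]]
```

The same slicing works unchanged on real camera frames of shape `(H, W, 3)`.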
### Start with the `why`: All DepthAI units with onboard cameras come pre-calibrated. However, these calibrations can degrade over time from mechanical stressors (like shock, vibration, extreme temperature...
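One common way to check whether a calibration has degraded is to image a known target (e.g. a checkerboard) and measure the reprojection error: the pixel distance between where corners are detected and where the current calibration predicts them. A minimal sketch in plain Python (the corner coordinates below are hypothetical):

```python
import math

def mean_reprojection_error(observed, projected):
    """Mean Euclidean distance (in pixels) between detected corner
    positions and the positions predicted by the current calibration."""
    assert len(observed) == len(projected)
    total = sum(math.dist(o, p) for o, p in zip(observed, projected))
    return total / len(observed)

# Hypothetical corner coordinates from one checkerboard capture:
observed  = [(100.0, 50.0), (160.0, 50.5)]
projected = [(100.3, 50.4), (160.0, 50.0)]
err = mean_reprojection_error(observed, projected)  # 0.5 px
```

A rising mean error across captures is a reasonable trigger for re-running calibration.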
## Start with the `why`: In unstructured environments, DepthAI enables capabilities (spatial AI) that previously required multiple components (e.g. a depth camera, a host, and a neural processor) which then...
### Start with the `why`: In some cases it may be desirable to feed UVC (USB Video Class, e.g. a webcam) output to have compatibility with existing software stacks/etc. To...
### Start with the `why`: Algorithmic stereo depth is useful because it's fast and relatively inexpensive (in compute budget, power budget, and latency budget). However, it is easily fooled in specific scenes. And these...
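For context, algorithmic stereo recovers depth from matched disparity via the standard pinhole relation `depth = focal * baseline / disparity`, so any scene that defeats the matching step (low texture, repeating patterns, specular surfaces) defeats depth estimation too. A minimal sketch of the relation, with hypothetical camera numbers:

```python
def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole stereo relation: depth = f * B / d.

    Zero (or negative) disparity means the point is at infinity,
    or the stereo match failed entirely.
    """
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

# Hypothetical: 880 px focal length, 7.5 cm baseline, 30 px matched disparity
depth_m = disparity_to_depth(30.0, 880.0, 0.075)  # 2.2 m
```

Note the inverse relationship: a 1 px matching error at small disparities shifts the estimate far more than the same error at large disparities, which is why distant or texture-poor regions are the least reliable.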
### Start with the `why`: Now that DepthAI users are building increasingly complex pipelines, we are finding that the Image Manip node is needed for functions that are...
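As a rough host-side analogue of what such an image-manipulation node does (crop and resize are the canonical operations), here is a NumPy sketch; the function names are illustrative, not the DepthAI API:

```python
import numpy as np

def crop(frame: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Crop a region of interest out of a frame."""
    return frame[y:y + h, x:x + w]

def resize_nearest(frame: np.ndarray, out_w: int, out_h: int) -> np.ndarray:
    """Nearest-neighbor resize via integer index mapping."""
    in_h, in_w = frame.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return frame[rows[:, None], cols]

frame = np.arange(16).reshape(4, 4)
roi = crop(frame, 1, 1, 2, 2)        # [[5, 6], [9, 10]]
small = resize_nearest(frame, 2, 2)  # [[0, 2], [8, 10]]
```

In a real pipeline these operations run on-device between nodes (e.g. to fit a camera stream to a neural network's input size), rather than on the host as shown here.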
### Start with the `why`: For applications that involve encoded video as an output of the system, it can be valuable to have on-video metrics embedded in the encoded video...
### Start with the `why`: In some cases it may be desirable to feed RTSP (Real Time Streaming Protocol, i.e. IP Camera) output to have compatibility with existing software stacks/etc....