friend/omi AI wearable as a listed audio device
Screenshot: https://github.com/user-attachments/assets/0c6debd0-d447-45bc-8e3c-6511b7fd8bec

it would be cool to see friend show up as an audio device here somehow?
not sure if it's useful
or whether we should rather go down the sync integration path
cc @m13v
Yes, it would make sense to show it in the pipes I guess
yeah, you need to wait for version 2.
#467 first, then we can ingest friend into screenpipe
I have a standalone sample program written in Rust that talks to the Omi AI device (dev kit v1); I'll just post the code here. The program connects to the Friend BLE device, discovers its characteristics, and subscribes to audio data notifications. It uses the Opus codec to decode the incoming audio into PCM, enabling continuous audio processing. It also monitors the connection by periodically reading a heartbeat characteristic (e.g. battery level) and attempts to reconnect if the connection is lost. Notifications are handled asynchronously, so audio processing stays smooth and responsive.

I need your help integrating this with screenpipe-audio without changing the current framework too much, because right now the other audio devices are handled by cpal.
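The code itself isn't pasted in this thread, so here is only a minimal sketch of the approach described above, assuming the btleplug and opus crates. The device name "Friend", the characteristic UUIDs, and the 16 kHz mono Opus settings are placeholders, not values confirmed for the Omi/Friend firmware.

```rust
// Assumed deps: btleplug = "0.11", opus = "0.3", tokio (full), futures, uuid
use btleplug::api::{Central, Manager as _, Peripheral as _, ScanFilter};
use btleplug::platform::Manager;
use futures::stream::StreamExt;
use std::time::Duration;
use uuid::Uuid;

// Placeholder UUIDs -- replace with the real Omi/Friend audio characteristic.
const AUDIO_CHAR: Uuid = Uuid::from_u128(0x19b10001_e8f2_537e_4f6c_d104768a1214);
// Standard Battery Level characteristic, used here as a heartbeat.
const BATTERY_CHAR: Uuid = Uuid::from_u128(0x00002a19_0000_1000_8000_00805f9b34fb);

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let manager = Manager::new().await?;
    let adapter = manager.adapters().await?.into_iter().next().ok_or("no BLE adapter")?;

    // Scan until a peripheral advertising itself as "Friend" shows up.
    adapter.start_scan(ScanFilter::default()).await?;
    tokio::time::sleep(Duration::from_secs(5)).await;
    let mut device = None;
    for p in adapter.peripherals().await? {
        if let Some(props) = p.properties().await? {
            if props.local_name.as_deref() == Some("Friend") {
                device = Some(p);
                break;
            }
        }
    }
    let device = device.ok_or("Friend device not found")?;

    device.connect().await?;
    device.discover_services().await?;
    let chars = device.characteristics();
    let audio_char = chars
        .iter()
        .find(|c| c.uuid == AUDIO_CHAR)
        .ok_or("audio characteristic missing")?
        .clone();

    // Subscribe to audio notifications and decode each Opus packet to 16-bit PCM.
    device.subscribe(&audio_char).await?;
    let mut notifications = device.notifications().await?;
    let mut decoder = opus::Decoder::new(16_000, opus::Channels::Mono)?;
    let mut pcm = vec![0i16; 1920]; // room for one 120 ms frame at 16 kHz

    loop {
        tokio::select! {
            Some(n) = notifications.next() => {
                if n.uuid == AUDIO_CHAR {
                    // Assumes the payload is a single raw Opus frame; the real firmware
                    // may prepend a packet header that needs to be stripped first.
                    match decoder.decode(&n.value, &mut pcm, false) {
                        Ok(samples) => println!("decoded {samples} PCM samples"),
                        Err(e) => eprintln!("opus decode error: {e}"),
                    }
                }
            }
            // Heartbeat: read the battery characteristic periodically; reconnect on failure.
            _ = tokio::time::sleep(Duration::from_secs(10)) => {
                if let Some(batt) = chars.iter().find(|c| c.uuid == BATTERY_CHAR) {
                    if device.read(batt).await.is_err() {
                        eprintln!("heartbeat failed, reconnecting...");
                        let _ = device.connect().await;
                    }
                }
            }
        }
    }
}
```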
@goodpeter-sun cool!
@cparish312 created a /add endpoint recently in the screenpipe API, but it can't take raw audio for now. I guess someone could send a PR that does this:
- the user sends audio data through /add
- screenpipe transcribes it using the transcription engine given in the parameters (if cloud, the API key goes in the body too)
- then the rest is the same code: add it to the DB
after this, I guess this could be a task that runs in the background, constantly checks whether a friend/omi device is present, and sends the data over when it is (rough sketch below)
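If /add ever accepts raw audio, that background task could look roughly like this. This is only a sketch: reqwest is assumed for HTTP, the capture helper is hypothetical, and the port, multipart field name, and `transcription_engine` query parameter are guesses about the request shape, not the actual screenpipe API.

```rust
// Assumed deps: reqwest = { version = "0.12", features = ["multipart"] }, tokio (full)
use std::time::Duration;

// Hypothetical placeholder: the BLE capture logic (as in the sketch above) would
// sit behind this and hand back an encoded audio chunk when one is available.
async fn capture_audio_chunk_from_friend() -> Option<Vec<u8>> {
    None
}

#[tokio::main]
async fn main() {
    let client = reqwest::Client::new();

    loop {
        // Constantly check whether the friend/omi device is around and has audio for us.
        if let Some(chunk) = capture_audio_chunk_from_friend().await {
            // Field names and query parameters are assumptions about a future
            // raw-audio variant of /add, not its current contract.
            let form = reqwest::multipart::Form::new()
                .part("audio", reqwest::multipart::Part::bytes(chunk).file_name("friend.wav"));

            match client
                .post("http://localhost:3030/add?transcription_engine=whisper")
                .multipart(form)
                .send()
                .await
            {
                Ok(r) => println!("sent chunk to screenpipe: {}", r.status()),
                Err(e) => eprintln!("failed to reach screenpipe: {e}"),
            }
        }

        tokio::time::sleep(Duration::from_secs(5)).await;
    }
}
```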
oh wait, do they store data on the device?
otherwise we would need to deploy a server that stores the data and then sends it back to screenpipe
maybe we could host a server that holds your end-to-end encrypted friend/omi data, and you get it back when you're on your computer, which decrypts it and syncs it to screenpipe
devkit v1 doesn't have storage. devkit v2 has an SD card. I am currently only playing with v1.