Paul Spedding
UPDATE: I changed USB ports and now it works, but I'm getting a different error:

> sonoff
> Opening port /dev/ttyUSB0, baud 500000
> Reading data from CC1352P2_CC2652P_launchpad_coordinator_20220219.hex
> Firmware...
Anyone find a solution to this?
Still an issue for me.
> Hi @SuperPauly!
>
> To clarify - you'd like to have a hosted solara app, where some computation (within a component?) is done on the client's machine? There isn't...
> Running python code **within** a Solara app in a context aware way is tricky.
>
> Depending on how much integration you would need between the surrounding app (if...
Any updates?
@menny can it be pushed? Super useful feature.
> > Importing bookmarks would be a big hurdle to finally use Vanadium instead of Brave or Cromite. I have tons of bookmarks and can't possibly add them all in...
As long as the model supports that language, it should work. AFAIK the LLM is used to do the chunking, so if you use a system prompt like this:

```python
from litellm...
```
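Since the snippet above is cut off, here is a minimal hedged sketch of what an LLM-based chunking prompt for litellm might look like. The helper name, prompt wording, and model string are illustrative assumptions, not the original comment's code:

```python
# Hypothetical sketch: build a chat-style messages list that instructs the
# LLM to chunk text while staying in the document's own language.
# build_chunking_messages and the prompt text are illustrative, not from
# the original comment.

def build_chunking_messages(text, language="the same language as the input"):
    """Return a messages list suitable for an LLM chat-completion call."""
    system_prompt = (
        "Split the following document into semantically coherent chunks. "
        f"Work in {language} and preserve the original wording of each chunk."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": text},
    ]

messages = build_chunking_messages("Bonjour tout le monde...", language="French")

# With litellm installed, the actual call would then be roughly:
# import litellm
# response = litellm.completion(model="gpt-4o-mini", messages=messages)
# chunks = response.choices[0].message.content
```

The point is only that the system prompt carries the language instruction; the model then chunks in whatever language the input uses, provided the model supports it.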
Agreed! I use Layla AI, which uses Android's NPU, for offline LLMs. It's much faster than the GPU, whether you use Vulkan or OpenCL. Not only inference but loading the model...