Shubham Panchal
Yes @mhyeonsoo, this feature is a priority; I will complete it within a few days. The Kotlin bindings for llama.cpp are present in another project of mine, [SmolChat-Android](https://github.com/shubham0204/SmolChat-Android),...
Local LLMs are not supported in the app with LiteRT (Mediapipe LLM Inference API).
@karthik-prog-max Have you modified the versions of the dependencies in `libs.versions.toml`? This error seems to originate in older versions of `activity-compose`:
* https://stackoverflow.com/a/73487708/13546426
* https://stackoverflow.com/a/75512454/13546426
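As a minimal sketch, the relevant entries in `gradle/libs.versions.toml` would look something like the following; the version number shown is an assumption for illustration, not the project's actual value:

```toml
[versions]
# Hypothetical version; check the latest stable activity-compose release
activity-compose = "1.9.3"

[libraries]
androidx-activity-compose = { module = "androidx.activity:activity-compose", version.ref = "activity-compose" }
```

Bumping the `[versions]` entry is enough, since the library declaration references it through `version.ref`.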
@karthik-prog-max Can you try adding the fragment-ktx dependency as mentioned in this comment? https://github.com/google/dagger/issues/3484#issuecomment-1209599735
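For reference, the workaround amounts to declaring the `fragment-ktx` artifact in the app module's Gradle file. A sketch (the version number here is an assumption; pick one compatible with your setup):

```kotlin
// app/build.gradle.kts — hypothetical snippet
dependencies {
    implementation("androidx.fragment:fragment-ktx:1.8.5")
}
```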
Hilt has now been replaced by Koin for dependency injection, thus eliminating this issue.
@jeho-lee The latest update to the project adds a `Text` composable that shows inference latency in milliseconds. This excludes the time taken to resize the image or the depth map and...
@stevhliu I will update the model card for `depth_anything`. PR: #37065
> To the folks who have been raising PRs so far, just have a doubt: did you get to install `flax`, `tf-keras`, `sentencepiece`, etc. before making the...
@stevhliu I have updated the model card for the `dinov2` model PR: #37104
@kuaashish The issue is resolved with the latest version! Thank you for the quick action!