LLamaSharp
Added MAUI usage example (Android)
Disclaimer
I'm new to the project, and I hope I have followed all the contribution guidelines and policies correctly. If not, please let me know what I should fix or improve.
Context
As suggested by @AmSmart in PR #1179, I extended the Mobile project with a chatbot that serves as a basic working example app using LLamaSharp on MAUI.
Important note on functionality (ISSUE)
I noticed that the example works correctly on an Android emulator (running on a PC), but on a real Android device it crashes with the following error related to loading the `CommunityToolkit.HighPerformance.dll` dependency:
```
[monodroid-assembly] open_from_bundles: failed to load assembly CommunityToolkit.HighPerformance.dll
Loaded assembly: /data/data/com.llama.mobile/files/.__override__/CommunityToolkit.HighPerformance.dll [External]
[libc] Fatal signal 4 (SIGILL), code 0 (SI_USER) in tid 28509 (.NET TP Worker), pid 28397 (com.llama.mobile)
[Choreographer] Skipped 32 frames! The application may be doing too much work on its main thread.
```
@AmSmart, could you please check what is going on here?
A simple idea from building the app
While developing the app, it occurred to me that it might be useful to provide an API like `LLamaWeights.LoadFromStream` to load the model directly from a stream. This could be handy in cases where a small model is bundled with the APK. Currently, since loading requires a file, the model must be extracted from the APK and saved to device storage (see the sketch below), resulting in two copies: one compressed inside the APK and one extracted. With a stream-based load, the app could load the model directly from the APK without extracting it. I understand that in a real-world scenario the model probably won't be shipped with the APK, but I thought it was an interesting possibility and wanted to hear your thoughts on this.
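For context, the extract-then-load flow I'm describing looks roughly like this in MAUI (a minimal sketch; the file name `model.gguf` and the `LoadBundledModelAsync` helper are illustrative, not part of the example project):

```csharp
using System.IO;
using System.Threading.Tasks;
using LLama;
using LLama.Common;
using Microsoft.Maui.Storage;

// Sketch: extract the GGUF model bundled in the APK to app storage once,
// then load it with the existing file-based API.
async Task<LLamaWeights> LoadBundledModelAsync()
{
    string targetPath = Path.Combine(FileSystem.AppDataDirectory, "model.gguf");

    if (!File.Exists(targetPath))
    {
        // Packaged assets are only exposed as streams, so the model has to be
        // copied out to storage before LLamaSharp can see it as a file.
        using Stream source = await FileSystem.OpenAppPackageFileAsync("model.gguf");
        using FileStream target = File.Create(targetPath);
        await source.CopyToAsync(target);
    }

    var parameters = new ModelParams(targetPath);
    return LLamaWeights.LoadFromFile(parameters);
}
```

A stream-based `LoadFromStream` would let the `if` block above disappear entirely, which is what motivated the suggestion.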
> LLamaWeights.LoadFromStream

I'd love to support that, but unfortunately it's not possible. The underlying API that llama.cpp offers is this: `private static extern SafeLlamaModelHandle llama_model_load_from_file(string path, LLamaModelParams @params);`. It requires a file path.
Ah, OK, then unfortunately there's nothing to be done. Thanks anyway for considering it!
Sorry for letting this PR go cold. @AmSmart, since you worked on Android support, would you like to review this before it's merged? It looks fine to me, but I don't know anything about Android development!
I'll take a look and give feedback shortly.
If you could also solve the issue indicated above, I would really appreciate it.
This pull request has been automatically marked as stale due to inactivity. If no further activity occurs, it will be closed in 7 days.
Hi @AmSmart, do you have any updates on this PR?
This pull request has been automatically marked as stale due to inactivity. If no further activity occurs, it will be closed in 7 days.