Emmanuel Oluwagbemiga Adebiyi (Smart)
cc: @martindevans
It's CPU-only in this PR. As far as I know, mobile GPU inference is still quite slow at the moment. Saw it was even slower than CPU inference in...
Great, thanks for the review. Will follow up with the requested changes later today.
Hello @martindevans, I've made updates to the PR. Looks like we're good to go. Not sure how you trigger the Unit Test pipelines in this repo, though, to get a...
Great. Glad to hear it
I'll take a look at it
Ahh, I see. Not surprising at all. The majority of Android devices are arm64, so you're right that it's mostly an acceptable/negligible loss. PS: I even had to disable building...
Are you building directly on my branch, or did you merge it into a staging branch? I'd like to reproduce the issue.
Hi @martindevans, I took a look at this and seem to have figured it out. Inspecting the platform folder ``../Llama.Mobile/Platforms/Android``, I noticed that some things were different from my local vs...