Koan-Sin Tan
We can get them from the main inference group: https://github.com/mlcommons/inference/
We need the 4.1 seeds for loadgen and Stable Diffusion. The main inference group already has them; let's reuse their seeds. https://github.com/mlcommons/inference/pull/1736
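For reference, loadgen RNG seeds are typically pinned in `mlperf.conf`-style settings. The key names below follow the main inference repo's convention; the values are placeholders, not the actual 4.1 seeds (those come from the linked PR):

```
# mlperf.conf-style seed settings (placeholder values, not the real 4.1 seeds)
*.*.qsl_rng_seed = 0
*.*.sample_index_rng_seed = 0
*.*.schedule_rng_seed = 0
```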
We use Stable Diffusion v1.5 models, [OpenAI's CLIP model](https://github.com/openai/CLIP), and the COCO 2014 caption dataset. Do we need to include their licenses?
What can we do for the default backend?
- Model: let's start with Llama 3.2 1B, preferably a quantized one
- Runtime:
  - [MediaPipe](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/llm_inference/android) and [AI-Edge-Torch](https://github.com/google-ai-edge/ai-edge-torch): our current default backend is...
Check https://github.com/mlcommons/mobile_app_open/blob/master/.github/workflows/android-build-test.yml#L321-L3329-L330C46
- [x] create submission-v5.0 branch
- [x] sync the v5.0 branch to the mobile_app_closed repo
- [x] update the timestep embedding pickle file to mobile_open
The app is not able to download large models from GitHub Releases.
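One common mitigation is to download in resumable byte-range chunks instead of a single large transfer. A minimal Python sketch (the chunking helper and download function are hypothetical illustrations, not the app's actual download code):

```python
import urllib.request

def byte_ranges(total_size: int, chunk_size: int):
    """Split a file of total_size bytes into inclusive (start, end) pairs
    suitable for HTTP Range headers, e.g. 'bytes=0-8388607'."""
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + chunk_size, total_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

def download_in_chunks(url: str, dest: str, total_size: int,
                       chunk_size: int = 8 * 1024 * 1024):
    """Fetch each chunk with an HTTP Range request and append it to dest,
    so a failed transfer can resume at the last completed chunk."""
    with open(dest, "wb") as f:
        for start, end in byte_ranges(total_size, chunk_size):
            req = urllib.request.Request(
                url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                f.write(resp.read())

# Example: a 20 MiB model split into 8 MiB chunks.
print(byte_ranges(20 * 1024 * 1024, 8 * 1024 * 1024))
```

GitHub's release-asset endpoints honor `Range` requests, so a download interrupted mid-model can continue from the first missing chunk rather than restarting.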
TODO: copy contents from https://github.com/mlcommons/mobile_app_open/pull/1040#issuecomment-3454731672
I developed a simple program that uses Apple's Metal Performance Shaders Graph (MPSGraph) to measure and estimate the compute capacity of the Apple Neural Engine (ANE). The program can be found...
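The measurement idea generalizes beyond MPSGraph: time a workload with a known FLOP count and divide by elapsed time. A minimal NumPy sketch of that approach (runs on the CPU here; the actual program would dispatch the matmul through MPSGraph to target the ANE):

```python
import time
import numpy as np

def estimate_gflops(n: int = 512, iters: int = 10) -> float:
    """Estimate sustained GFLOP/s from timed n x n float32 matmuls.
    One n x n @ n x n matmul costs roughly 2 * n**3 FLOPs."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up so one-time initialization doesn't skew the timing
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = time.perf_counter() - t0
    return (2 * n**3 * iters) / elapsed / 1e9

print(f"~{estimate_gflops():.1f} GFLOP/s")
```

The reported number is a sustained rate for this one kernel shape, so it is a lower bound on peak capacity; sweeping matrix sizes gives a fuller picture of the device.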
Recently, LiteRT was updated to support the Qualcomm QNN accelerator [1] and the MediaTek NeuroPilot accelerator [2]. Pixel EdgeTPU support is currently under "experimental access" [3]. Let's explore the capabilities of...