react-native-llm-mediapipe

Cannot read property 'createModel' of null

kumard3 opened this issue 1 year ago · 9 comments

Hi, I'm getting this error while using the package. I might be using it wrong; can you help me? Here is my setup:

  const llmInference = useLlmInference({
    storageType: 'file',
    modelPath: './gemma-2b-it-gpu-int4.bin',
  });

kumard3 avatar Jun 25 '24 22:06 kumard3

That filepath needs to point to a location on your device.

For Android, see here: https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/android#push_model_to_the_device

For iOS, you need to push the file to local application storage (or an iCloud location, if you are familiar with how that works), or use the iOS platform APIs for accessing files.

The easiest way, on both platforms, is simply to bundle the model as an asset.
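
A minimal sketch of both configurations (the hook and option names are taken from examples elsewhere in this thread; the import style, model filename, and paths are illustrative, not documented specifics):

import { useLlmInference } from 'react-native-llm-mediapipe';

// Inside a component. Option A: model bundled as an asset.
// Android: place the file at android/app/src/main/assets/gemma-2b-it-gpu-int4.bin
// iOS: add the .bin file to the app bundle via Xcode's "Copy Bundle Resources".
const llmFromAsset = useLlmInference({
  storageType: 'asset',
  modelName: 'gemma-2b-it-gpu-int4.bin',
});

// Option B: an absolute path to a model already pushed onto the device.
const llmFromFile = useLlmInference({
  storageType: 'file',
  modelPath: '/data/local/tmp/llm/gemma-2b-it-gpu-int4.bin', // illustrative path
});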

cdiddy77 avatar Jun 26 '24 11:06 cdiddy77

Thank you, I'll try that. I have two more questions:

  1. What is the 'asset' storage type?
  2. The .bin model is the right one, right?

kumard3 avatar Jun 26 '24 11:06 kumard3

Yes and yes.

cdiddy77 avatar Jun 27 '24 20:06 cdiddy77

I'm getting this error from createModel:

  [Error: internal: Failed to initialize session: %sCan not open OpenCL library on this device - undefined symbol: clSetPerfHintQCOM]

I have tried it on a physical Pixel 7 and an emulated Pixel 8 Pro.

kumard3 avatar Jun 30 '24 18:06 kumard3

Ever figure this out? I'm also having a difficult time figuring out how to bundle the model as an asset, or how to access it via a file path on the device.

To bundle it, I downloaded a model and included it in an assets folder: I tried putting it in android/app/src/main/assets, in a models/converted folder, and in the same folder as the file calling the function.

With a storageType of 'file', I put the model on my device and tried accessing it at /data/local/tmp/llm/gemma-2b-it-cpu-int4.bin, as well as moving it to other locations and trying different variations of the file path.

I'm most interested in getting it working as a bundled asset, though. Any help/pointers appreciated. Thanks.

ALuhning avatar Jul 21 '24 10:07 ALuhning

Hey guys, I just pushed a PR attempting to fix the OpenCL library problem, with comments about where to put the model files to make them work. I hope this fixes your problems and clears up your doubts 😄

jrobles98 avatar Sep 15 '24 03:09 jrobles98

@cdiddy77 I'm setting up my llmInference like this:

const llmInference = useLlmInference({
  storageType: 'file',
  modelPath: '/data/user/0/com.offlinellmpoc/files/gemma-2b-it-cpu-int4.bin',
});

But my app crashes, and I can't figure out what the issue is.
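
One way to rule out a bad path: on Android, /data/user/0/<appId>/files is the app's documents directory, so you can verify the file is actually there before initializing. A sketch assuming the react-native-fs library, which is not part of this package:

import RNFS from 'react-native-fs'; // assumed helper library, not part of react-native-llm-mediapipe

// DocumentDirectoryPath resolves to /data/user/0/<appId>/files on Android.
const modelPath = `${RNFS.DocumentDirectoryPath}/gemma-2b-it-cpu-int4.bin`;

RNFS.exists(modelPath).then((found) => {
  console.log(found ? `model found at ${modelPath}` : `model missing: ${modelPath}`);
});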

alam65 avatar Oct 01 '24 13:10 alam65

I am relatively new to React Native and mobile app development. How does one bundle the model as an asset? According to the Google docs:

Note: During development, you can use adb to push the model to your test device for a simpler workflow. For deployment, host the model on a server and download it at runtime. The model is too large to be bundled in an APK.

I agree that bundling it as an asset is best, but I do not know how to do it. Can you show me how?
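
For the download-at-runtime route that note describes, a rough sketch (assuming the react-native-fs library and a placeholder URL; this is not a documented workflow of this package):

import RNFS from 'react-native-fs';

// Hypothetical hosting URL; replace with wherever you serve the model.
const MODEL_URL = 'https://example.com/models/gemma-2b-it-cpu-int4.bin';
const MODEL_PATH = `${RNFS.DocumentDirectoryPath}/gemma-2b-it-cpu-int4.bin`;

// Download once, then reuse the local copy. The returned path can be
// passed as modelPath with storageType: 'file'.
async function ensureModel(): Promise<string> {
  if (!(await RNFS.exists(MODEL_PATH))) {
    await RNFS.downloadFile({ fromUrl: MODEL_URL, toFile: MODEL_PATH }).promise;
  }
  return MODEL_PATH;
}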

luey-punch avatar Nov 06 '24 02:11 luey-punch

@luey-punch I was able to do that by creating a folder named 'assets' inside android/app/src/main and moving the model into it. It worked, although my Android device was lagging a lot.

const {generateResponse} = useLlmInference({
  storageType: 'asset',
  modelName: 'gemma.bin',
});
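
From there, a usage sketch (an assumption: that generateResponse takes a prompt string and resolves to the model's full reply):

// Inside the same component that calls useLlmInference.
async function askModel() {
  const reply = await generateResponse('Write one sentence about Android.');
  console.log(reply);
}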

shreykul avatar Nov 07 '24 06:11 shreykul