executorch
AttributeError: 'str' object has no attribute 'get' when loading checkpoint
I was following the llama2 7b guide, but the consensus was that I don't have enough RAM, and I hit other issues, so I tried the stories110M guide instead. It worked all the way until I went to test the model. I also recall my terminal saying that lm_eval was not installed; I'm not sure whether that could be a factor. I am trying to evaluate model accuracy, and that is where this error is stemming from.
The file I am using to save the .pte:
import torch

# Note: this writes the literal string 'f' to llama2_coreml_all.pte; the
# open() context is unused, since torch.save opens the target path itself.
with open("llama2_coreml_all.pte", 'wb') as file:
    torch.save('f', "llama2_coreml_all.pte", _use_new_zipfile_serialization=True)
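For comparison, here is a minimal sketch of what eval_llama's -c flag appears to expect: a checkpoint that torch.load() returns as a dict (e.g. a state_dict), not a compiled .pte program. The toy module below is a hypothetical stand-in for the eager model, used only to keep the example self-contained:

import torch
import torch.nn as nn

# Hypothetical stand-in for the eager model; the point is only that the
# saved object is a state_dict, so torch.load gives back a dict.
model = nn.Linear(4, 4)
torch.save(model.state_dict(), "checkpoint.pth")

checkpoint = torch.load("checkpoint.pth", map_location="cpu")
assert isinstance(checkpoint, dict)       # checkpoint.get(...) now works
print(checkpoint.get("weight").shape)     # torch.Size([4, 4])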
Script and terminal info:
❯ python -m examples.models.llama2.eval_llama -c llama2_coreml_all.pte -p params.json -t tokenizer.model -d fp32 --max_seq_len 512 --limit 100
Could not import fairseq2 modules.
2024-04-30:23:20:24,518 INFO [builder.py:80] Loading model with checkpoint=llama2_coreml_all.pte, params=params.json, use_kv_cache=False, weight_type=WeightType.LLAMA
Traceback (most recent call last):
  File "/opt/anaconda3/envs/executorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/anaconda3/envs/executorch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama.py", line 29, in <module>
    main()  # pragma: no cover
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama.py", line 25, in main
    eval_llama(modelname, args)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama_lib.py", line 261, in eval_llama
    eval_wrapper = gen_eval_wrapper(model_name, args)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama_lib.py", line 209, in gen_eval_wrapper
    manager: LlamaEdgeManager = _prepare_for_llama_export(model_name, args)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/export_llama_lib.py", line 629, in _prepare_for_llama_export
    load_llama_model(
  File "/Users/anthonymikinka/executorch/examples/models/llama2/builder.py", line 83, in load_llama_model
    model, example_inputs, _ = EagerModelFactory.create_model(
  File "/Users/anthonymikinka/executorch/examples/models/model_factory.py", line 44, in create_model
    model = model_class(**kwargs)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/model.py", line 84, in __init__
    if (not fairseq2_checkpoint) and checkpoint.get(
AttributeError: 'str' object has no attribute 'get'
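For what it's worth, that traceback is consistent with the save snippet above: torch.save pickles whatever Python object it is given, so torch.load on that file returns the string 'f' rather than a weights dict, and the checkpoint.get(...) call in model.py then fails. A minimal repro sketch, independent of the repo:

import torch

# torch.save stores the object you pass it; here that object is the
# literal string 'f', mirroring the save snippet above.
torch.save('f', "llama2_coreml_all.pte")

checkpoint = torch.load("llama2_coreml_all.pte")
print(type(checkpoint))   # <class 'str'>
checkpoint.get("model")   # AttributeError: 'str' object has no attribute 'get'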
@Jack-Khuu is the on-device evaluation ready?
edit: Actually, coreml should be able to run on Mac too. @antmikinka, are you looking for on-device evaluation, or just to evaluate the coreml model on either Mac or iPhone?
@cccclai
Yes, I'm trying to run an evaluation of the model on the Mac. I would like to put the model on my iPhone (iPhone 13 Pro) as well.
I was also trying to determine what hardware (CPU/GPU/ANE) is being utilized to run the model.
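One rough way to probe that from Python is sketched below, assuming a Core ML package was produced during export (the .mlpackage path is hypothetical): load the same model under different compute-unit constraints and compare latency. Xcode's Core ML performance report gives a per-op breakdown as well.

import coremltools as ct

# Hedged sketch: constrain which hardware Core ML may use and compare
# latency across settings; a large speedup from CPU_ONLY to ALL suggests
# the GPU/ANE is actually being used. The .mlpackage path is hypothetical.
for units in (ct.ComputeUnit.CPU_ONLY,
              ct.ComputeUnit.CPU_AND_GPU,
              ct.ComputeUnit.ALL):
    model = ct.models.MLModel("llama2_coreml.mlpackage", compute_units=units)
    # ... time model.predict({...}) here and compare across settings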
Could not import fairseq2 modules.
This seems like an issue with the executorch setup.
Eval is ready, but this error doesn't seem to be related to eval. It fails during load_llama_model, prior to eval. I'll try to narrow it down and loop in core
I think it's related to how we expect eval to work with a delegated model, in this case coreml.
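For context, a hedged sketch of what running a delegated .pte directly could look like via the ExecuTorch pybindings (assuming they were built with the Core ML backend enabled; the input shape is an assumption, and this is not the repo's eval path):

import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

# Hedged sketch, not the repo's eval path: execute the delegated .pte
# through the ExecuTorch pybindings. Assumes the pybindings were built
# with the Core ML backend enabled; the token shape is an assumption.
et_module = _load_for_executorch("llama2_coreml_all.pte")
tokens = torch.zeros((1, 128), dtype=torch.long)
logits = et_module.forward((tokens,))[0]
print(logits.shape)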
Just as an update so this doesn't go stale: investigating CoreML eval is on our plate.
Will update as things flesh out.