Alex Cheema
One suggestion: I don't want to force torch as a dependency. If we could lean on the existing `InferenceEngine` infrastructure we have, that would be great. Perhaps each `InferenceEngine` can...
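One way to avoid a hard torch dependency is an abstract engine interface with lazy imports, so only the engine that needs torch pulls it in. A minimal sketch of that pattern, assuming hypothetical class and method names (this is not exo's actual `InferenceEngine` API):

```python
from abc import ABC, abstractmethod


class InferenceEngine(ABC):
    """Hypothetical base class; names here are illustrative."""

    @abstractmethod
    def infer(self, prompt: str) -> str:
        ...


class DummyEngine(InferenceEngine):
    """A no-op engine, useful for testing without heavy dependencies."""

    def infer(self, prompt: str) -> str:
        return f"dummy:{prompt}"


class TorchEngine(InferenceEngine):
    """Only this engine touches torch, and only when actually called."""

    def infer(self, prompt: str) -> str:
        import torch  # lazy import: the core install does not require torch
        return str(torch.tensor([0]))


# Callers depend only on the abstract interface:
engine: InferenceEngine = DummyEngine()
print(engine.infer("hello"))  # → dummy:hello
```

With this shape, installing torch becomes opt-in for users who select the torch engine, while the dummy engine keeps tests and CI lightweight.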
Getting this output now after

```
exo --inference-engine pytorch --run-model llama-3.1-8b
```
> Getting this output now after
>
> ```
> exo --inference-engine pytorch --run-model llama-3.1-8b
> ```

I'm assuming this comment was meant...
Just ping me when you want me to review the PR again.
Please fix merge conflicts.
@varshith15 do you know what's up here?
Very cool! I will add a $200 retrospective bounty as this is great. Two things:

- Can you add `uv` as a prerequisite in the README?
- Is it possible...
We have quite a few big PRs pending that will be merged in the next few weeks. That will cause a lot of conflicts, so I will merge this once...
Can you fix the conflicts please, @Yvictor?
Looks good! Can you add this as an option to the cli too? `--inference-engine dummy`