MartialTerran
Hi. According to Google Gemini Pro, compiling a Llama2.c model on a Windows 10 machine is "difficult", partly because of high RAM requirements and partly because of "Dependency Hell" due to "libraries...
Thank you for writing. Google Gemini tells me that export.py will "compile" the specified AI model for inference on the selected Snapdragon hardware (see below). But it is not...
I was able to "overcome" the supposed [base.stl] glitch. The T1.XML model uses Trunk.stl or Waist.stl (both of which are in the "assets" folder) instead of base.stl. The problem I had...