jlsantiago
Same problem here. With the same model, it seems that detection performance is reduced in TensorFlowSharp.
Thank you for your answer. I arrived at the same solution by redoing the code following the Python version: private static TFGraph ConstructGraphToNormalizeImage( out TFOutput input, out TFOutput output, TFDataType destinationDataType...
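For reference, the arithmetic that such a normalization graph applies can be sketched in plain Python. The mean and scale constants below are the defaults used by the classic Inception example; other models (e.g. MobileNet) use different constants, so treat them as assumptions:

```python
# Sketch of the per-pixel normalization a graph like
# ConstructGraphToNormalizeImage performs: output = (input - mean) / scale,
# applied after resizing the image to the model's expected input size.
# MEAN = 117 and SCALE = 1 are the Inception-example defaults (an assumption
# here); check the constants your specific model expects.

MEAN = 117.0
SCALE = 1.0

def normalize_pixel(value: float, mean: float = MEAN, scale: float = SCALE) -> float:
    """Apply the (value - mean) / scale transform to one pixel channel."""
    return (value - mean) / scale

def normalize_image(pixels, mean: float = MEAN, scale: float = SCALE):
    """Normalize a flat sequence of pixel values in the 0-255 range."""
    return [normalize_pixel(p, mean, scale) for p in pixels]

print(normalize_image([0.0, 117.0, 255.0]))  # [-117.0, 0.0, 138.0]
```

The graph version does the same thing with TF ops (cast to float, resize, subtract the mean constant, divide by the scale constant), so the output tensor matches what the Python pipeline produces.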
The input image sizes for some of the models I use are listed in TensorFlow Hub. For example: https://github.com/tensorflow/hub/blob/master/docs/modules/image.md There you have some general information and links to the docs, for example: https://www.tensorflow.org/hub/modules/google/imagenet/mobilenet_v1_100_224/classification/1
@hswlab, you could see LLamaSharp as the component that manages the LLM. It's easy to do that kind of thing if you use Semantic Kernel on top of LLamaSharp. SK...
@dcostea > I'm wondering if there are plans to make llama.cpp compatible with multi-modal input (images) for use with models like llava. #609 includes llava support in InteractiveExecutor
It should also be in the root folder:
In my opinion, the alternative that Martin proposes would be easier. We cannot run all the tests in CI, and we should verify all the tests locally.
It seems to work on macOS. I think it should be clearly documented at some point how the cache is managed and where it is located by default:
Could you share a link to the model that you are trying to load, so I can run a test with it?
If the problem also happens with the llama.cpp examples (main), you should open the issue against llama.cpp.