kinchahoy
I have the same issue in a different context. Exporting the graph to PNG works, so there must be some way to get Jupyter to show it.
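For anyone hitting this in a notebook, here is a minimal workaround sketch (the variable names are illustrative, and the placeholder bytes stand in for a real PNG export): embed the exported PNG bytes as a base64 data URI, which Jupyter can render as inline HTML.

```python
import base64

# `png_bytes` stands in for the exported file's contents, e.g. from
# open("graph.png", "rb").read(); placeholder bytes are used here.
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16  # not a real image

# Encode the bytes and wrap them in a data-URI <img> tag.
encoded = base64.b64encode(png_bytes).decode("ascii")
img_tag = f'<img src="data:image/png;base64,{encoded}"/>'

# In a notebook cell, this would be rendered inline with:
#   from IPython.display import HTML, display
#   display(HTML(img_tag))
print(img_tag[:40])
```

This sidesteps whatever is failing in the library's native renderer, at the cost of a static image.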
+1, this is a real need. Many cheap HTPC keyboard+trackpad combos literally cannot generate or transmit the CTRL+ALT+SHIFT+Q chord needed to exit the session. Please give us an option...
Hey folks - any chance we can get this looked at? I tweaked the source and did a local build a few months ago to simplify the keybinding (just CTRL+ALT+key),...
Big +1 for any fairly recent vision LLMs. Ideally one of the smaller ones, such as nanoLLaVA.
Thank you! Is support for Phi-3.5-vision likely? (Sorry, the multimodal world moves fast!) On Mon, Aug 26, 2024 at 3:14 PM Namzakku ***@***.***> wrote: > Hugging Face now has supported...
> The `IQ1_S` quantization uses exactly that: ternary values `-1, 0, 1`, so yes, it shouldn't be hard to adapt the existing code or implement something new, if necessary.
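For readers unfamiliar with the idea, here is a toy sketch of absmean-style ternary quantization (the function names and the threshold value are illustrative, not the actual `IQ1_S` implementation): each weight is mapped to `-1`, `0`, or `+1` relative to a per-tensor scale.

```python
def ternary_quantize(weights, threshold=0.5):
    # Scale is the mean absolute value of the weights (absmean,
    # as used in BitNet b1.58-style ternary schemes).
    scale = sum(abs(w) for w in weights) / len(weights)
    # Weights small relative to the scale become 0; the rest keep
    # only their sign.
    q = [0 if abs(w) < threshold * scale else (1 if w > 0 else -1)
         for w in weights]
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate weights from ternary codes and the scale.
    return [v * scale for v in q]

w = [0.9, -0.7, 0.05, -0.02, 1.2]
q, s = ternary_quantize(w)
print(q)  # -> [1, -1, 0, 0, 1]
```

The real kernels pack these ternary codes far more tightly and group weights into blocks, but the quantize/dequantize round trip above is the core idea.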
> These papers might be a practical approach for existing model conversion: [Token-Scaled Logit Distillation for Ternary Weight Generative Language Models](https://openreview.net/forum?id=FUnEkOkodU), [Binary and Ternary Natural Language Generation](https://huggingface.co/papers/2306.01841). These are a...
Do you have llamafiles for these models? Could you share them?