
Results: 12 comments

After changing the mesh to (1, 6) I get this error:

```
INFO:jax._src.xla_bridge:Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are:...
```

Looks like I have to set widening_factor=6 and num_kv_heads=6 in the TransformerConfig to match the number of devices as well.
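
For anyone following along, a rough sketch of where those two knobs sit relative to the mesh change, with everything else elided (the surrounding run.py structure is from memory of the repo, not a verbatim excerpt, so treat it as approximate):

```python
# Sketch only -- not the full run.py config from the repo.
model = TransformerConfig(
    ...,                  # all other fields left as in the repo
    widening_factor=6,    # set to the local device count (6x 4090 here)
    num_kv_heads=6,       # ditto, so the KV heads split evenly across the mesh
)

# The local mesh is (data axis, model axis); the second entry is the number
# of local GPUs, which is what was changed from 8 to 6 above.
local_mesh_config = (1, 6)
```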

@yarodevuci still downloading weights. I was under the impression that the test would download stuff (looks like I'm spoiled by the Hugging Face API, which does it). Will report tomorrow. Right...
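
For comparison, this is the kind of one-liner the Hugging Face hub client gives you; the repo id and file patterns below are assumptions, so check the repo README for the exact values:

```python
from huggingface_hub import snapshot_download

# Downloads (and resumes) the checkpoint into ./checkpoints. The repo id
# "xai-org/grok-1" and the ckpt-0/* pattern are assumptions here.
snapshot_download(
    repo_id="xai-org/grok-1",
    local_dir="checkpoints",
    allow_patterns=["ckpt-0/*", "tokenizer.model"],
)
```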

Am at 272/300 GB right now. Excitement starts to kick in, let's hope this thing runs. Only having 6x 4090 (144 GB VRAM) and 512 GB RAM, if this isn't enough to...

OK, got a little further but still no cigar:

```
(.venv) loading@ai:~/PycharmProjects/grok-1$ python run.py
[output garbled by a system-monitor overlay in the paste; truncated]
```

May I jump in? I am looking for a way to throttle a queue without any keys or whatever. The reason for this is that I have an API provider where...
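
Not tied to any particular queue library, but the idea being asked for is roughly a global throttle shared by all workers; a minimal sketch, with the names and limits below made up for illustration:

```python
import threading
import time

class Throttle:
    """Allow at most `rate` calls per `period` seconds, shared by all workers."""

    def __init__(self, rate: int, period: float) -> None:
        self.rate = rate
        self.period = period
        self.calls: list[float] = []
        self.lock = threading.Lock()

    def wait(self) -> None:
        while True:
            with self.lock:
                now = time.monotonic()
                # Drop call timestamps that have aged out of the window.
                self.calls = [t for t in self.calls if now - t < self.period]
                if len(self.calls) < self.rate:
                    self.calls.append(now)
                    return
                sleep_for = self.period - (now - self.calls[0])
            time.sleep(sleep_for)

# Example: an API provider that allows 10 requests per minute.
throttle = Throttle(rate=10, period=60.0)

def handle_job(job):
    throttle.wait()              # every worker blocks here until a slot is free
    # call_provider_api(job)     # hypothetical call to the rate-limited API
```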

@rfam13 do you mind sharing what you did? Did you just install LocalAI with that model and hook it up with HA? Was there some tweaking involved regarding the prompt format?

@FutureProofHomes impressive. What would you need history for, then? Isn't it constrained by the small context size anyway?

Ah, got it. Yeah, being able to refer to at least the last couple of messages is important. Re your context: the model itself only supports an 8k context size: ![image](https://github.com/jekalmin/extended_openai_conversation/assets/133230022/334fbf71-795e-474f-9dfe-12cd47c7cf18) it...
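
On keeping history inside that window, the usual trick is to trim old messages to a token budget before each request. A minimal sketch, assuming an 8k budget, a tiktoken encoding as a stand-in tokenizer, and OpenAI-style message dicts; this is not necessarily how the integration itself handles it:

```python
import tiktoken   # stand-in tokenizer; the actual integration may count tokens differently

ENC = tiktoken.get_encoding("cl100k_base")

def trim_history(messages: list[dict], max_tokens: int = 8192) -> list[dict]:
    """Keep the system prompt plus as many of the newest messages as fit the budget."""
    system, rest = messages[:1], messages[1:]
    budget = max_tokens - sum(len(ENC.encode(m["content"])) for m in system)
    kept: list[dict] = []
    for msg in reversed(rest):                    # walk back from the newest message
        cost = len(ENC.encode(msg["content"]))
        if cost > budget:
            break
        kept.insert(0, msg)
        budget -= cost
    return system + kept
```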