futurefunk

Results: 5 comments by futurefunk

Hey @lucas-castalk, are you referring to changing the embedding model or the LLM? The Llama 3.2 LLM chat response took 3 seconds, but adding memory with nomic-embed-text took 20 seconds.

Sorry, I think I'm misunderstanding you, @lmeyerov. GFQL is a query language, and to visualize the graph we still need Hub, right? In other words, there's no way to visualize...

@lmeyerov It seems like Hub is also having issues rendering 1M edges. Any chance this is account-specific (using too many resources in a short amount of time)? Sorry, just...

Hi @lmeyerov, sorry to be a constant bother about this topic. I noticed over the past 2 days that the number of nodes was the visualization bottleneck. I tried 95157...

@lmeyerov @aucahuasi I gave this a shot a couple of days later and am still running into the GPU restarts.