Pulp

Results 132 comments of Pulp

> Not true. That is only for sending, right? You can't select the inputs to the CoinJoin.

Have you tried the solutions [here](https://github.com/imartinez/privateGPT/issues/276#issuecomment-1554262627) and [here](https://github.com/imartinez/privateGPT/issues/220#issuecomment-1550376561)? I can confirm that `ggml-gpt4all-l13b-snoozy.bin` works.

Something like this:

```
PERSIST_DIRECTORY=db
MODEL_TYPE=LlamaCpp
MODEL_PATH=models/ggml-gpt4all-l13b-snoozy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
```

The model was downloaded from the same website linked in the README.

Changing the `backend=` as shown in the link above also works for me, but that involves touching code, so some might prefer to avoid it. See also...

> llama-cpp-python==0.1.50

This version of the library no longer supports that model. See https://github.com/imartinez/privateGPT/issues/220#issuecomment-1550376561 (it's the same one I mentioned above); you have to downgrade your library version (use a...
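If you want the downgrade to be reproducible, the pin can live in `requirements.txt`. A minimal sketch, assuming only what the comment above states (that the breaking change landed in 0.1.50); the exact known-good version is in the linked issue:

```
# requirements.txt (sketch): stay below the llama-cpp-python release
# that dropped support for the older ggml model format
llama-cpp-python<0.1.50
```

Then reinstall with `pip install -r requirements.txt` inside the privateGPT environment.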

I proposed something similar [back in the day](https://github.com/JoinMarket-Org/joinmarket-clientserver/pull/1182#issuecomment-1080897609), so I'm all for it. > Reality is that currently DarkScience IRC seems to be more stable than dnodes people are running....

> If we combine IRC peer discovery with direct onion messaging, there are no actual benefits on phasing out IRC, apart from code maintenance. The way I see it, eventually...

@Flimsy-Fox > I can confirm that I also have this same issue. Can you add more info? Are you also using Qubes? Which OS? > I remember with previous versions...

Interesting. What you propose is conceptually very similar to sending UTXOs to different mixdepths, which, if the goal is to keep UTXOs separate, still seems the way to go. If...

@FreddieChopin D'oh. Sorry, I thought you were OP! > still be better than merging everything in one step Probably, but it still seems awful to me. And if all the...