mongolu
Hello Martin, Anton. I want to take this opportunity to thank Anton for this package (AS_READ_XLSX) and for all you have shared with us, the PL/SQL...
> Hi Mongolu,
>
> appreciate if you share your code with us so that we can also get benefits of your efforts.

Hi guys. Sorry for not responding earlier,...
Hello. I am trying to load the package into an APEX cloud app, but I think it's too big, because it throws an INTERNAL ERROR. It took me a great deal of...
Can we `/set parameter num_gpu 32` at runtime? It would save a lot of rounds of `ollama create [name] -f [modelfile]`. I'm using `litellm` and `autogen`, so I'm not sure...
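For reference, a minimal sketch of the Modelfile round-trip this question is trying to avoid; the model names `mymodel` and `llama2` are just placeholders:

```sh
# Bake num_gpu into a custom model via a Modelfile (placeholder names)
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER num_gpu 32
EOF
ollama create mymodel -f Modelfile

# Alternatively, inside an interactive `ollama run mymodel` session,
# the same parameter can be changed on the fly:
#   >>> /set parameter num_gpu 32
```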
Even better than this is to use the one-click installer in Docker. I'm using it like this right now and it's self-updating every time I start the container. So 🥂 to...
I don't think `./start_linux.sh --listen` is the proper way of passing args to the webui. Instead, do it by setting the env var `OOBABOOGA_FLAGS` or by writing them in `CMD_FLAGS.txt`....
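A quick sketch of the two approaches mentioned above, assuming you run this from the webui's install directory:

```sh
# Option 1: pass the flags through the environment variable for this run only
OOBABOOGA_FLAGS="--listen" ./start_linux.sh

# Option 2: persist the flags in CMD_FLAGS.txt so every start picks them up
echo "--listen" >> CMD_FLAGS.txt
./start_linux.sh
```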
I connected to the docker container and upgraded the langchain package, like this:

```
docker exec -it platform bash
pip install langchain --upgrade
```

And restarted the container. Now...
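For completeness, the restart step is just a sketch, assuming the same `platform` container as above:

```sh
# Restart the container so the upgraded package is picked up
docker restart platform
```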
This is expected, because it's listed like this in platform/pyproject.toml.
Sorry to intervene: I'm using it with Docker on WSL2 and it's using the GPUs.
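If anyone wants to confirm GPU access from Docker on WSL2, here is a minimal check (the CUDA image tag is just an example, not part of the setup above):

```sh
# If this prints the GPU table, the container can see the GPU through WSL2
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```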
It can and it does