James Brown
I can trick ollama into using the GPU, but loading the model takes forever. Logs:

```
time=2024-03-10T22:51:10.851+08:00 level=INFO source=images.go:806 msg="total blobs: 34"
time=2024-03-10T22:51:10.852+08:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-10T22:51:10.852+08:00 level=INFO source=routes.go:1082...
```
Please consider reopening this issue. I know how to use CPU only, and I know how to trick ollama into using the GPU. But when using the GPU it gets stuck. I think...
Titanoboa allows importing a contract via its ABI, which enables resolving both Vyper and Solidity contracts.

```python
import boa

filename = "foo.json"
boa.load_abi(filename, name="Foo")
```

I mean something like [TypeChain](https://www.npmjs.com/package/typechain) in...
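For context, a contract ABI file is plain JSON, which is exactly what a TypeChain-style generator consumes. Below is a minimal sketch, using only the standard library and a made-up two-entry ABI (the real `foo.json` would come from the Vyper or Solidity compiler), of the function signatures such a generator would see:

```python
import json

# Hypothetical minimal ABI for illustration only -- a real ABI file
# is emitted by the Vyper or Solidity compiler.
abi = json.loads("""
[
  {"type": "function", "name": "balanceOf",
   "inputs": [{"name": "owner", "type": "address"}],
   "outputs": [{"name": "", "type": "uint256"}],
   "stateMutability": "view"},
  {"type": "function", "name": "transfer",
   "inputs": [{"name": "to", "type": "address"},
              {"name": "amount", "type": "uint256"}],
   "outputs": [{"name": "", "type": "bool"}],
   "stateMutability": "nonpayable"}
]
""")

# A TypeChain-style generator walks the function entries and emits one
# typed stub per function; here we just collect the signatures.
signatures = []
for entry in abi:
    if entry["type"] == "function":
        args = ", ".join(f'{i["name"]}: {i["type"]}' for i in entry["inputs"])
        rets = ", ".join(o["type"] for o in entry["outputs"])
        signatures.append(f'{entry["name"]}({args}) -> {rets}')

print(signatures)
```

Typed wrapper generation is then a matter of rendering each collected signature into source code in the target language.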
[related discussion](https://discuss.python.org/t/exec-with-return-keyword/19916/24)
Currently it can be launched via a shortcut, as follows:

```bash
am start -n org.autojs.autoxjs.v6/org.autojs.autojs.external.shortcut.ShortcutActivity -a android.intent.action.MAIN -e path "/storage/emulated/0/脚本/show_toast.js"
```
The code for connecting to a computer is [here](https://github.com/kkevsekk1/AutoX/blob/5172fd92fbf778509b76e1a11bbfc9f5fdfaaf0c/app/src/main/java/org/autojs/autojs/ui/main/drawer/DrawerPage.kt); it needs to be modified so that the WebSocket server address can be passed in via an intent.
Versions:
- X.Org X Server 1.21.1.3
- Xvnc: TightVNC-1.3.10
- PyVirtualDisplay 3.0
- Python 3.9.12
upvote for this
@staticanime Solution: https://github.com/jacksonliam/mjpg-streamer/issues/182#issuecomment-2336583229 Note: this happens on both x86 and Raspberry Pi 3B+, so it is likely a kernel-related issue. The above solution works fine on x86. Maybe...
> Which parameter in vLLM server corresponds to TGI's --max-concurrent-requests?
> When can the VLLM_ENGINE_MAX_CONCURRENT_REQUESTS parameter be used?

You can modify the code yourself until this pull request is merged or the feature is implemented....