heli
Is there any resolution for this?
I have the same problem. Does anyone have a solution?
I encountered the same problem: the labels set under `predictors` do not appear in the deployment!

```
predictors:
  - name: default
    replicas: 1
    labels:
      atms-app-v: "2"
    graph:
```

The labels of this deployment as...
I found the definition of `memcpy_htod_async` here; it passes a fixed buffer size, `buf_wrapper.m_buf.len`: https://github.com/inducer/pycuda/blob/db6fb7edd8ed058f58df3d8b7e701a6843691a21/src/wrapper/wrap_cudadrv.cpp#L232

```
void py_memcpy_dtoh_async(py::object dest, CUdeviceptr src, py::object stream_py)
{
  py_buffer_wrapper buf_wrapper;
  buf_wrapper.get(dest.ptr(), PyBUF_ANY_CONTIGUOUS...
```
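For context, that wrapper is what backs the Python-level async copy calls, and as noted above the copy size comes from the destination host buffer's length. A minimal sketch of how the calls are typically driven from Python (the buffer size, dtype, and variable names here are illustrative, not taken from this issue):

```python
# Minimal sketch (illustrative only): drive the async copies from Python.
# The copy size is taken from the host buffer's length, so the pinned host
# array must be at least as large as the device data being read back.
import numpy as np
import pycuda.autoinit  # creates a context on the first available GPU
import pycuda.driver as cuda

stream = cuda.Stream()

host_buf = cuda.pagelocked_empty(1024, dtype=np.float32)  # pinned host memory
dev_buf = cuda.mem_alloc(host_buf.nbytes)                 # matching device buffer

cuda.memcpy_htod_async(dev_buf, host_buf, stream)  # host -> device
cuda.memcpy_dtoh_async(host_buf, dev_buf, stream)  # device -> host
stream.synchronize()                               # wait for both copies to finish
```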
Same question. I get a lot of logs like these, at 1-second intervals:

```
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2024-01-20T16:05:42+08:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n...
```
@ramaraochavali Hello, I see that you implemented the `loadbalancer` logic in this file: https://github.com/istio/istio/blob/master/pilot/pkg/networking/core/loadbalancer/loadbalancer.go. Could you help me understand this issue?
> Appending "--use_custom_all_reduce disable" to the trtllm-build command can fix it.

This works for me on 8x 4090, thanks. It also works for me with TP=2 on 4090.
So, has anyone resolved this issue?
@xioxin Hello, I want to print an image from Electron, but for now I am testing directly in Node.js, calling the print_raw.js example to print text, and the printer does not respond. What could be the cause?

```js
let data = "N\nS4\nD15\nq400\nR\nB20,10,0,1,2,30,173,B,barcode\nP0\n";
printer.printDirect({
  data: data,
  printer: printerName,
  type: "RAW",
  success: function () { console.log("printed: success"); },
  error: function (err) { console.error(err); }
});
```

The call returns:

```
printed: success
```

but nothing is actually printed. Any help is appreciated, thanks.