Prince Pereira
Just checking: have you tried `export MESA_GL_VERSION_OVERRIDE=3.3`? It works for me.
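In case it helps, a minimal sketch of applying and then verifying the override before launching the viewer (the `glxinfo` check is an extra step, not from the original, and assumes mesa-utils is installed):

```bash
# Force Mesa to advertise OpenGL 3.3 for processes started from this shell
export MESA_GL_VERSION_OVERRIDE=3.3
# Optional check that the override took effect (requires mesa-utils)
glxinfo | grep "OpenGL version"
```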
Running `rosservice call /hdl_global_localization/set_engine FPFH_RANSAC` gives:

ERROR: Incompatible arguments to call service: No key named [FPFH_RANSAC] Provided arguments are: * FPFH_RANSAC (type str) Service arguments are: [engine_name.data]

How do I provide the...
Or is there a way to start up the global localization node with FPFH_RANSAC instead of BBS?
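For the service call itself, nested message fields are normally passed to `rosservice call` as YAML rather than as a bare string; a minimal sketch, assuming the request holds a `std_msgs/String` field named `engine_name` (which is what the `[engine_name.data]` in the error suggests):

```bash
# Fill the nested engine_name.data field via YAML instead of a positional string
rosservice call /hdl_global_localization/set_engine "engine_name: {data: 'FPFH_RANSAC'}"
```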
@tc-mb Any update on this? Reason being, I currently have a task using vLLM, and I have enabled thinking via the extra_body parameter:
```js
const response = await this.client.chat.completions.create({
  // eslint-disable-next-line...
```
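For reference, a minimal sketch of what that request amounts to against vLLM's OpenAI-compatible endpoint, assuming the flag is forwarded to the chat template through `chat_template_kwargs` and the server is on localhost:8000 (both assumptions, not confirmed above):

```bash
# Hypothetical request: extra_body fields become extra JSON keys, so
# enable_thinking reaches the chat template via chat_template_kwargs.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "models/MiniCPM-V-4_5",
        "messages": [{"role": "user", "content": "Describe the task step by step."}],
        "chat_template_kwargs": {"enable_thinking": true}
      }'
```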
For additional context, I am also using bitsandbytes to quantize the model. I'm not sure whether this could be affecting the hybrid thinking process as well.
Hi, thanks for the update. I am actually already using this parameter, but I am running into the problem of not being able to ensure the model does deep thinking. So...
For example, vLLM serving:
```
vllm serve models/MiniCPM-V-4_5 \
  --dtype bfloat16 \
  --gpu-memory-utilization 0.92 \
  --quantization bitsandbytes \
  --load-format bitsandbytes \
  --max-model-len 6000 \
  --max-num-seqs 10 \
  --max-num-batched-tokens 8000 \
  ...
```
edit: This is the reference [video](https://imgur.com/a/PV5fawI). I managed to get better results by doing a prefill for the assistant, so at least the model thinks and produces a good answer...
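To make the prefill concrete, here is a minimal sketch of the workaround, assuming vLLM's `continue_final_message`/`add_generation_prompt` extensions to the chat API and that the template wraps reasoning in `<think>` tags (both assumptions, not confirmed above):

```bash
# Hypothetical prefill: send a partial assistant turn that opens the thinking
# block and ask vLLM to continue it instead of starting a fresh turn.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "models/MiniCPM-V-4_5",
        "messages": [
          {"role": "user", "content": "Describe the task step by step."},
          {"role": "assistant", "content": "<think>"}
        ],
        "add_generation_prompt": false,
        "continue_final_message": true
      }'
```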
I changed the chat template to
```jinja2
{%- set enable_thinking = enable_thinking | default(false) %}
{%- if tools %}
{{- 'system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content...
```