Jimmy_L

Results: 12 issues of Jimmy_L

Version v1.11. After setting up communication as described in the documentation, I started keyboard control, raised the speed above 0.3, switched to offboard mode and armed. After the drone climbed, I pressed `s` to enter hover mode; a little while later the drone started flying around erratically and crashing on its own.
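
For context, a minimal sketch of an offboard hover-hold loop over MAVROS, assuming standard single-vehicle topic names (the namespaced topics in a multi-UAV simulation will differ); PX4 expects setpoints to keep streaming in offboard mode, so the loop publishes a zero-velocity setpoint continuously:

```python
#!/usr/bin/env python
# Hover-hold sketch for PX4 offboard via MAVROS.
# Assumption: a standard single-vehicle MAVROS setup; adjust the topic
# namespace to match your launch files. PX4 can behave unpredictably if the
# offboard setpoint stream stops, so this keeps publishing at 20 Hz.
import rospy
from geometry_msgs.msg import Twist

def hover_hold():
    rospy.init_node('hover_hold')
    pub = rospy.Publisher('/mavros/setpoint_velocity/cmd_vel_unstamped',
                          Twist, queue_size=1)
    rate = rospy.Rate(20)   # well above PX4's offboard setpoint timeout
    zero = Twist()          # all velocity components default to 0.0
    while not rospy.is_shutdown():
        pub.publish(zero)   # keep the setpoint stream alive while hovering
        rate.sleep()

if __name__ == '__main__':
    hover_hold()
```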

I am running 3D path planning with VINS-Fusion and EGO-Planner; the map is indoor4. Starting from the initial position, I first gave a goal point every short stretch so the drone flew around the corner, while RViz displayed the point cloud along that stretch. Then I set the goal back at the original starting point, and the planned path (the red line) goes straight through the corner point cloud. Where is this going wrong? ![Screenshot from 2023-03-23 15-09-06](https://user-images.githubusercontent.com/53935505/227130784-2b194690-724f-41a9-adbd-af86c6a7d363.png) ![Screenshot from 2023-03-23 15-25-28](https://user-images.githubusercontent.com/53935505/227133186-aa126472-27f7-4d0d-a09b-0afdb9bfe411.png)
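
One thing worth ruling out is that the cloud rendered in RViz is not the same topic the planner builds its occupancy map from. A small probe sketch, with a placeholder topic name (check the planner's launch/param files for the real one):

```python
#!/usr/bin/env python
# Quick check that the planner's map input actually contains the corner
# obstacle. The topic name below is a placeholder, not taken from the issue;
# substitute whatever cloud topic the grid map subscribes to.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def cb(msg):
    pts = list(point_cloud2.read_points(msg, field_names=('x', 'y', 'z'),
                                        skip_nans=True))
    rospy.loginfo('frame=%s  points=%d', msg.header.frame_id, len(pts))

rospy.init_node('cloud_probe')
rospy.Subscriber('/grid_map/cloud', PointCloud2, cb)  # placeholder topic
rospy.spin()
```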

### Checklist

- [X] I have searched for [similar issues](https://github.com/isl-org/Open3D/issues).
- [X] For Python issues, I have tested with the [latest development wheel](https://www.open3d.org/docs/latest/getting_started.html#development-version-pip).
- [X] I have checked the [release...

question

If I reconstruct a 10 cm^3 cube, will the output 3D model be the same size as in the real world, or is there a certain conversion ratio?
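
A quick way to check this with Open3D is to measure the reconstructed model's bounding box; the file path below is a placeholder:

```python
# Measure the metric extent of a reconstructed model with Open3D.
# If the reconstruction was driven by depth frames with the correct
# depth scale and intrinsics, the extents are in the same unit as the
# input depth/poses (typically metres).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("reconstructed_cube.ply")  # placeholder path
aabb = mesh.get_axis_aligned_bounding_box()
print("extent:", aabb.get_extent())  # e.g. roughly [0.1, 0.1, 0.1] for a 10 cm cube
```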

I refer to https://github.com/langchain-ai/langserve/blob/main/examples/agent/server.py to build my own RAG agent. Test code as follows:

```
from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langserve import add_routes
from langchain_openai...
```
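
For reference, a minimal sketch of the `add_routes` pattern the linked example builds on, with a trivial `prompt | model` chain standing in for the RAG agent (model name and route path are assumptions):

```python
# Expose a runnable over HTTP with langserve's add_routes.
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes

prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
model = ChatOpenAI(model="gpt-3.5-turbo")   # placeholder model
chain = prompt | model                      # stand-in for the real RAG agent

app = FastAPI(title="RAG agent sketch")
add_routes(app, chain, path="/rag")         # serves /rag/invoke, /rag/stream, ...

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```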

## Server test code as follows:

```
llm_end_point_url = "http://172.16.21.155:8000/v1/"
model = ChatOpenAI(model="glm4v-9b", base_url=llm_end_point_url, api_key="api_key")

### embedding ###
embedding_model = HuggingFaceEmbeddings(model_name='/root/ljm/bge/bge-large-zh-v1.5')

### milvus ###
milvus_host = "***"
milvus_port = ***
collection_name...
```
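
A hedged sketch of how these pieces are typically wired into a retrieval chain; the Milvus connection values, collection name, and prompt are placeholders, mirroring the masked values above:

```python
# Retrieval chain: Milvus vector store + HuggingFace embeddings + an
# OpenAI-compatible chat model, composed with LCEL.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Milvus
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

embedding_model = HuggingFaceEmbeddings(model_name="BAAI/bge-large-zh-v1.5")
vectorstore = Milvus(
    embedding_function=embedding_model,
    collection_name="demo_collection",                      # placeholder
    connection_args={"host": "127.0.0.1", "port": "19530"},  # placeholder
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

model = ChatOpenAI(model="glm4v-9b",
                   base_url="http://172.16.21.155:8000/v1/",
                   api_key="api_key")
prompt = ChatPromptTemplate.from_template(
    "Answer using the context.\n\nContext:\n{context}\n\nQuestion: {question}")

chain = ({"context": retriever, "question": RunnablePassthrough()}
         | prompt | model | StrOutputParser())
# chain.invoke("your question here")
```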

I see that the GLM4 demo in this repository uses FastAPI to define a custom API, and the `langchain` integration part only covers simple QA. I've only been working with large models for about a month, so I'd like to ask: if I want to plug into `langchain` to build an `Agent` that calls a `tool`, what else needs to be defined in the API?
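
On the `langchain` side, a tool-calling agent against an OpenAI-compatible endpoint usually looks like the sketch below; the main requirement on the custom API is a `/v1/chat/completions` route that honours the OpenAI `tools`/`tool_calls` fields. The endpoint URL, model name, and demo tool are assumptions:

```python
# Tool-calling agent sketch against an OpenAI-compatible server.
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (demo tool)."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(model="glm-4",
                 base_url="http://127.0.0.1:8000/v1/",  # placeholder endpoint
                 api_key="EMPTY")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])
agent = create_tool_calling_agent(llm, [get_weather], prompt)
executor = AgentExecutor(agent=agent, tools=[get_weather])
# executor.invoke({"input": "What's the weather in Beijing?"})
```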

### Feature request

Support vLLM inference for glm-4v.

### Motivation

I want to deploy on a server and have multiple clients send requests to the glm4v API server, but inference with Transformers returns results too slowly. glm4-chat is noticeably faster with vLLM inference, so I hope 4v can also support vLLM inference.

### Your contribution

*

### Is there an existing issue for the same feature request?

- [x] I have checked the existing issues.

### Is your feature request related to a problem?

```Markdown
```
...

### Your current environment

Then I roughly wrote a concurrent test code. The output of `concurrent_test.py`:

```
import asyncio
import time
import random
from datetime import datetime
import httpx
import csv
...
```
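
For reference, a sketch of the kind of concurrent test the truncated `concurrent_test.py` describes; the endpoint, payload, and output file are placeholders, not taken from the original script:

```python
# Fire N concurrent chat-completion requests with httpx and record latencies.
import asyncio
import csv
import time

import httpx

URL = "http://127.0.0.1:8000/v1/chat/completions"   # placeholder endpoint
PAYLOAD = {"model": "glm-4", "messages": [{"role": "user", "content": "hello"}]}

async def one_request(client: httpx.AsyncClient, idx: int):
    start = time.perf_counter()
    resp = await client.post(URL, json=PAYLOAD, timeout=120)
    return idx, resp.status_code, time.perf_counter() - start

async def main(concurrency: int = 16):
    async with httpx.AsyncClient() as client:
        results = await asyncio.gather(
            *(one_request(client, i) for i in range(concurrency)))
    with open("latency.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["idx", "status", "latency_s"])
        writer.writerows(results)

if __name__ == "__main__":
    asyncio.run(main())
```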

performance
stale