handsomeZhuang

Results 11 comments of handsomeZhuang

Hi, how can I generate a BEV image that keeps the RGB of the original image? That is, converting the original camera image into an image seen from the BEV viewpoint. Thanks!

> IPM surround-view stitching. I don't need stitching; I only need to convert the vehicle camera-view image into the BEV view. I'm not sure whether the BEVFormer algorithm can output this directly.
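
For reference, inverse perspective mapping (IPM) is the usual way to warp a single camera-view image onto a flat-ground BEV grid while keeping the original RGB. Below is a minimal OpenCV sketch; the pixel coordinates and file names are hypothetical placeholders that would normally come from the camera calibration, and this is not BEVFormer's own output path.

```python
# Minimal IPM sketch (assumptions: flat ground, known point correspondences).
# The pixel coordinates below are hypothetical placeholders; in practice they
# come from the camera's intrinsics/extrinsics or a manual calibration.
import cv2
import numpy as np

img = cv2.imread("front_camera.jpg")               # camera-view image (path is an example)

# Four points on the road surface in the camera image ...
src = np.float32([[560, 470], [720, 470], [1100, 720], [180, 720]])
# ... and where they should land on the BEV canvas (top-down, e.g. 400x800 px).
dst = np.float32([[100, 0], [300, 0], [300, 800], [100, 800]])

H = cv2.getPerspectiveTransform(src, dst)          # homography: camera -> ground plane
bev = cv2.warpPerspective(img, H, (400, 800))      # BEV image keeping the original RGB texture
cv2.imwrite("bev.jpg", bev)
```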

> I have the same question; has it been solved? Hi, after I finish running, the test folder only contains JSON files and no BEV view. What could be the reason?

> Many thanks to the author. Following the guide I can now run the test, but the BEV view in my reproduced results has no lane lines, which is quite different from the video in the README. May I ask what the reason is? ![3e8750f331d7499e9b5123e9eb70f2e2_bev](https://user-images.githubusercontent.com/2637309/194602684-8346d1c8-24ca-4468-80ff-328fffe572d9.png) Hi, how did you generate this image? On my side there are only JSON files and no BEV view.
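
As a rough illustration of how such a BEV figure could be drawn from the detection JSON alone, here is a hedged matplotlib sketch. It assumes a nuScenes-style results file with `translation` and `size` fields per box; the actual field names in this repo's output may differ.

```python
# Hedged sketch: draw detected boxes from a nuScenes-style results JSON onto a
# simple BEV plot. Field names ("results", "translation", "size") are assumptions
# based on the nuScenes submission format, not necessarily this repo's exact output.
import json
import matplotlib.pyplot as plt

with open("results_nusc.json") as f:               # path is an example
    results = json.load(f)["results"]

sample_token = next(iter(results))                  # pick one frame
fig, ax = plt.subplots(figsize=(6, 6))
for box in results[sample_token]:
    x, y, _ = box["translation"]                    # box center in meters
    w, l, _ = box["size"]
    ax.add_patch(plt.Rectangle((x - w / 2, y - l / 2), w, l, fill=False, color="r"))

ax.set_aspect("equal")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
plt.savefig("bev_boxes.png")
```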

> Hello, the segmentation code is in preparation. Hi, how can I generate a BEV view that keeps the texture of the original images?

![image](https://github.com/hpcaitech/Open-Sora/assets/139242546/6352c52e-c488-4b47-845f-8c57cbf3f344) Does data processing run concurrently with training? Why not preprocess the data offline in advance?
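
To make the offline-preprocessing question concrete, here is a minimal sketch of the idea: run the expensive transforms once, cache tensors to disk, and let the training `Dataset` read only the cached files. The `load_and_transform` helper and the file layout are hypothetical; Open-Sora's real pipeline may differ.

```python
# Hedged sketch of offline preprocessing: do the heavy work once, cache to disk,
# then train from the cached tensors. File layout and transform are examples only.
import os
import torch

def load_and_transform(path: str) -> torch.Tensor:
    # Placeholder for the real decode/resize/normalize step (hypothetical helper).
    return torch.zeros(3, 16, 256, 256)

def preprocess_offline(raw_dir: str, cache_dir: str) -> None:
    os.makedirs(cache_dir, exist_ok=True)
    for name in os.listdir(raw_dir):
        clip = load_and_transform(os.path.join(raw_dir, name))
        torch.save(clip, os.path.join(cache_dir, name + ".pt"))

class CachedDataset(torch.utils.data.Dataset):
    """Training-time dataset that only reads precomputed tensors."""
    def __init__(self, cache_dir: str):
        self.files = [os.path.join(cache_dir, f) for f in os.listdir(cache_dir)]

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        return torch.load(self.files[idx])
```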

How can I produce the video using virtual views? Could you share the code with me?

> This PR fixes the issue mentioned in #10 > > In the Graph structure of glass (hnsw), the neighbor's internal ID is stored in the neighbor list,...
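
For context, a small generic illustration of the indirection described above: neighbor lists store compact internal IDs, and a separate table maps them back to user-facing labels at query time. This is only an illustration of the concept, not glass's actual code.

```python
# Generic illustration (not glass's code): neighbor lists hold internal IDs,
# and a lookup table converts them back to the user-facing labels.
internal_to_label = [1001, 1002, 1003, 1004]        # index = internal ID, value = label
neighbor_list = {0: [1, 2], 1: [0, 3]}               # graph edges over internal IDs

def neighbors_as_labels(internal_id: int) -> list[int]:
    return [internal_to_label[nid] for nid in neighbor_list[internal_id]]

print(neighbors_as_labels(0))                        # -> [1002, 1003]
```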

> > May I ask: has the recall accuracy improved here? Running the main.cpp example, I get a recall of about 81%. > > The recall is improved compared to the original. I just tested it. For the deep10m dataset, when ef=500, recall=97.8973%, and when ef=1000, recall=99.2578%. The deep10m...
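
For anyone wanting to reproduce such numbers, here is a hedged sketch of how recall@k is typically computed against ground-truth neighbors; the arrays are toy examples and do not refer to main.cpp.

```python
# Hedged sketch: recall@k = fraction of true nearest neighbors that appear in the
# returned candidates, averaged over all queries. Inputs are example arrays.
import numpy as np

def recall_at_k(found: np.ndarray, ground_truth: np.ndarray, k: int) -> float:
    hits = sum(len(set(f[:k]) & set(g[:k])) for f, g in zip(found, ground_truth))
    return hits / (len(found) * k)

# Example: 2 queries, k = 3.
found = np.array([[5, 1, 9], [2, 7, 4]])
truth = np.array([[1, 5, 3], [2, 4, 8]])
print(recall_at_k(found, truth, 3))                  # -> 0.666...
```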

> here: https://github.com/ThanatosShinji/onnx-tool?tab=readme-ov-file#results-of-onnx-model-zoo-and-sota-models ![image](https://github.com/ThanatosShinji/onnx-tool/assets/139242546/a484bcbb-1e17-4527-bbbb-65e133ec682d) Does it mean [encoder] 566,371 MACs / [decoder] 1,271,959 MACs per image?
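
For reference, the per-image MACs can be checked by profiling the ONNX model with onnx-tool; a minimal sketch is below. The model filename is an example, and `onnx_tool.model_profile` is assumed to be the entry point described in the onnx-tool README.

```python
# Hedged sketch: profile per-layer and total MACs of an ONNX model with onnx-tool.
# The model filename is an example; `model_profile` is assumed to be the entry point
# documented in the onnx-tool README.
import onnx_tool

# Prints a per-node table; the totals are the MACs counted for one forward pass,
# i.e. one image at the model's input resolution.
onnx_tool.model_profile("encoder.onnx")
```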