Kun.Liam
> > Go from the test file to the integration file. Create the file, and the problem is solved.
>
> I tried creating the `auto-gpt.json` file everywhere in the folder, but I still get the same error:
>
> Warning: The file 'auto-gpt.json' does not exist. Local memory would not be saved to a file.
>
> How?

Use the command line: `touch auto-gpt.json`
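As a sketch, the missing file can be created from the shell before launching Auto-GPT (assuming you run the command from the same working directory Auto-GPT is started from):

```shell
# Create an empty auto-gpt.json so the local-memory backend
# has a file it can save to.
touch auto-gpt.json

# Verify the file now exists in the current directory.
ls -l auto-gpt.json
```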
> Have you configured payment information in your OpenAI account? Without it, you will get a token, but the API calls will fail.

Do I have to set up a paid account to use it?
**Question:** I found and worked through the visualization code in `tools/visualizations/vis_cam.py`, but when I use `demo/demo.py`, why can't I input video frames? **The command I used was:** `python demo/demo.py configs/recognition/user/tpn-slowonly_imagenet-pretrained-r50_8xb8-8x8x1-150e_kinetics400-rgb.py...
> The average clip operation is performed in the [base head](https://github.com/open-mmlab/mmaction2/blob/582b78fd6c3240500d5cacd292339d7d1ddbb056/mmaction/models/heads/base.py#L196) here. The multi-clip inference result depends on the richness of the temporal information; if multiple clips share similar information, the...
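The averaging step described above can be sketched as follows. This is a simplified illustration of score averaging across clips, not the actual `base.py` implementation; the function name and shapes are hypothetical:

```python
import numpy as np

def average_clip_scores(clip_scores: np.ndarray) -> np.ndarray:
    """Average classification scores over the clip dimension.

    clip_scores: array of shape (num_clips, num_classes), one score
    vector per sampled clip of the same video.
    Returns a single (num_classes,) score vector for the video.
    """
    return clip_scores.mean(axis=0)

# Three clips of a 4-class problem: the final prediction is the
# class with the highest averaged score.
scores = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.6, 0.2, 0.1, 0.1],
])
video_score = average_clip_scores(scores)
print(video_score.argmax())  # class 0 wins after averaging
```

If the clips carry near-identical information, the averaged vector is close to each individual clip's vector, which is why multi-clip testing helps less in that case.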
> Sorry for the late response. Generally speaking, multi-clip testing can bring improvement, but if the variation along the temporal dimension is small, 8 (frames) x 8 (frame interval) has a...
> I guess that multi-clip inference could be helpful for your project, since it avoids the failure case where all frames of a single clip are background. Thank you very much...
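For intuition, here is a minimal sketch of how multi-clip test sampling could spread clip start points across a video, so that no single background-only segment dominates. This is a hypothetical helper, not mmaction2's actual sampler; `clip_len=8` and `frame_interval=8` mirror the 8x8x1 config named in the question:

```python
def sample_clip_indices(total_frames, clip_len=8, frame_interval=8, num_clips=1):
    """Return frame indices for num_clips clips spread evenly over the video.

    Each clip covers clip_len * frame_interval frames; clip starts are
    spaced uniformly, so multiple clips see different parts of the video
    instead of one (possibly background-only) segment.
    """
    span = clip_len * frame_interval
    max_start = max(total_frames - span, 0)
    clips = []
    for i in range(num_clips):
        if num_clips > 1:
            # Evenly spaced start offsets across the usable range.
            start = max_start * i // (num_clips - 1)
        else:
            # Single clip: sample from the center of the video.
            start = max_start // 2
        indices = [min(start + j * frame_interval, total_frames - 1)
                   for j in range(clip_len)]
        clips.append(indices)
    return clips

# A 300-frame video sampled as 3 clips of 8 frames with interval 8:
for clip in sample_clip_indices(300, num_clips=3):
    print(clip)
```

With several clips covering different temporal regions, the averaged score is less likely to be dragged down by one clip that happens to contain only background frames.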