Tyler-D
> Those are pretty much the only controls that I know of: the scale and the configs.
>
> To get a nice scale for your scene, I find it...
Hi, the documentation should be online soon. Please wait for the official announcement of TAO Toolkit 5.0.
Hi, to my knowledge, the answer is no. DeepStream does not support the optical flow model or the joint model. I think you can ask in the [DeepStream forum](https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/deepstream-sdk/15); they will take...
You could download the docker images directly here:
- https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/containers/tao-toolkit-pyt/tags (action recognition is in this image)
- https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/containers/tao-toolkit-tf/tags
This is a very simple script; it just splits the video into frames using OpenCV. The possible reason I can come up with is that you cannot really access the...
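For reference, here is a minimal sketch of that kind of frame-extraction script, assuming OpenCV is installed; the input path and output directory are placeholders, not the actual script from the notebook:

```python
# Sketch: split a video into individual frames with OpenCV.
import os
import cv2

video_path = "input.mp4"   # placeholder video file
out_dir = "frames"         # placeholder output directory
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise RuntimeError(f"Cannot open {video_path} -- check the path/codec")

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.png"), frame)
    idx += 1
cap.release()
print(f"Wrote {idx} frames to {out_dir}")
```

If the capture fails to open or `cap.read()` returns nothing, the video file is most likely not accessible from where the script runs.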
1) We only have the official link to the SHAD dataset.
2) If you want to use the NVOF SDK to generate optical flow (and you have Turing or Ampere devices), you could...
But it is also fine to generate optical flow using OpenCV. The TAO Toolkit does not care where you get your optical flow vectors. A reference implementation: https://github.com/yjxiong/temporal-segment-networks
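If you go the OpenCV route, something along these lines would work. This is only a rough sketch using the Farneback method; the file layout and the clipping/scaling of the flow values are assumptions in the spirit of the TSN-style preprocessing, not the exact TAO recipe:

```python
# Sketch: dense optical flow with OpenCV Farneback, saved as x/y grayscale images.
import os
import cv2
import numpy as np

frame_dir, flow_dir = "frames", "flow"   # placeholder directories
os.makedirs(flow_dir, exist_ok=True)
frames = sorted(f for f in os.listdir(frame_dir) if f.endswith(".png"))

prev = cv2.cvtColor(cv2.imread(os.path.join(frame_dir, frames[0])), cv2.COLOR_BGR2GRAY)
for i, name in enumerate(frames[1:], start=1):
    curr = cv2.cvtColor(cv2.imread(os.path.join(frame_dir, name)), cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Clip the u/v components and map them to 0-255 grayscale images,
    # similar to the TSN reference implementation linked above.
    for c, suffix in ((0, "x"), (1, "y")):
        comp = np.clip(flow[..., c], -20, 20)
        comp = ((comp + 20) / 40 * 255).astype(np.uint8)
        cv2.imwrite(os.path.join(flow_dir, f"flow_{suffix}_{i:06d}.png"), comp)
    prev = curr
```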
You can share your config here.
The config looks good, but you should save it to a `.yaml` file instead of a `.txt` file.
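As a quick sanity check (my own suggestion, not part of TAO), you can rename the spec and confirm it still parses as YAML; this assumes PyYAML is installed and the path is a placeholder:

```python
# Sketch: rename the spec to .yaml and verify it parses.
import yaml
from pathlib import Path

spec = Path("train_config.txt")      # placeholder path to your spec
yaml_spec = spec.with_suffix(".yaml")
spec.rename(yaml_spec)

with open(yaml_spec) as f:
    cfg = yaml.safe_load(f)          # raises if the file is not valid YAML
print(cfg)
```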
Remove the `=` in the model name `ar_model_epoch=09-val_loss=2.15.tlt`. It's a known issue we mentioned in the notebook:

> 2) "=" in the checkpoint file name should be removed before using the checkpoint...
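A small sketch of the rename, assuming the checkpoint sits in your results directory (the path below is a placeholder):

```python
# Sketch: strip '=' from the checkpoint file name before referencing it.
from pathlib import Path

ckpt = Path("results/train/ar_model_epoch=09-val_loss=2.15.tlt")  # placeholder path
fixed = ckpt.with_name(ckpt.name.replace("=", ""))
ckpt.rename(fixed)
print(f"Renamed to: {fixed}")
```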