
Impossible to upload "CVAT for video" annotations

[Open] PMazarovich opened this issue 3 years ago · 3 comments

My actions before raising this issue

  • [x] Read/searched the docs
  • [x] Searched past issues

Expected Behaviour

Every file, even a large one, should be uploadable to CVAT without consuming all available RAM.

Current Behaviour

Here is a 515 MB file. Uploading it eventually consumes 12+ GB of RAM. I am not sure this is normal.

Possible Solution

Steps to Reproduce (for bugs)

  1. Upload this file
  2. Run `docker stats`
  3. Watch the memory usage grow
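Step 2 above can be scripted instead of watched by hand. A minimal sketch (the container names, sample output, and the 4 GiB threshold are all illustrative, not from the report) that parses `docker stats --no-stream --format '{{.Name}}\t{{.MemUsage}}'` output and flags containers over a memory threshold:

```python
def high_memory_containers(stats_output: str, threshold_gib: float = 4.0):
    """Return container names whose memory usage exceeds threshold_gib.

    Expects lines of the form "<name>\t<used> / <limit>", as produced by
    `docker stats --no-stream --format '{{.Name}}\t{{.MemUsage}}'`.
    """
    offenders = []
    for line in stats_output.strip().splitlines():
        name, mem = line.split("\t")
        used = mem.split("/")[0].strip()  # e.g. "12.3GiB" or "210MiB"
        if used.endswith("GiB") and float(used[:-3]) > threshold_gib:
            offenders.append(name)
    return offenders

# Sample output (illustrative numbers):
sample = "cvat_server\t12.3GiB / 15.6GiB\ncvat_db\t210MiB / 15.6GiB"
print(high_memory_containers(sample))  # → ['cvat_server']
```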

Context

Your Environment

CVAT 1.6.0 on Docker. I tried with and without this fix (https://github.com/openvinotoolkit/cvat/pull/3658), but the result was the same.

Next steps

You may join our Gitter channel for community support.

[Screenshot from 2021-10-02 13-54-47]

annotations.zip

PMazarovich avatar Oct 03 '21 07:10 PMazarovich

Hi @PMazarovich, can you please tell us the number of frames in the task you are uploading annotations into?

sizov-kirill avatar Oct 15 '21 07:10 sizov-kirill

@kirill-sizov, hello, 257723 frames

PMazarovich avatar Oct 15 '21 08:10 PMazarovich

I reproduced and investigated this issue; here are my suggestions and results:

1. Suggestion. To upload the attached annotation file you need to increase the request and response read timeouts. It also seems the upload may exceed the request body size limit, so that limit must be increased as well. To do this, add the following options to the runmodwsgi command in the supervisord.conf file:

--limit-request-body 2073741824 --socket-timeout 600 --request-timeout 600
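For context, these are standard mod_wsgi-express options, so they go on the runmodwsgi command line inside supervisord.conf. A sketch only, assuming a simplified program section (the real command in CVAT's supervisord.conf carries additional options and paths that are omitted here):

```ini
; Sketch: the surrounding options in your actual supervisord.conf will differ.
[program:runmodwsgi]
command=python3 manage.py runmodwsgi --port 8080
    --limit-request-body 2073741824
    --socket-timeout 600
    --request-timeout 600
```

`--limit-request-body` is in bytes (here roughly 2 GB), and the two timeouts are in seconds.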

2. Result. After applying these options I was able to reproduce the issue. The RAM overload occurs not while reading annotations from the uploaded file, but while reading those annotations back from the DB. I also ran some experiments and found that RAM consumption roughly doubles during serialization.
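The doubling during serialization is intuitive: the in-memory annotation objects and the serialized output both exist at the same time until one is released. A toy illustration (not CVAT code; the annotation shape is made up):

```python
import json

# Build a large in-memory annotation-like structure (illustrative shape).
annotations = [{"frame": i, "points": [0.0, 0.0, 1.0, 1.0]} for i in range(100_000)]

# Serializing it allocates a second, comparably sized representation while
# the original objects are still alive, so peak memory is roughly doubled.
serialized = json.dumps(annotations)

print(len(serialized) > 1_000_000)  # → True
```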

3. Suggestion. @PMazarovich, while we continue to investigate the problem and work on a fix, I can advise you to create a task with a defined segment size. For example, if you create a task with your video and segment_size = 10000, the task will have 26 jobs. Use the upload annotation action on the created task to upload your annotation file, and then explore/change the uploaded annotations in each job separately. This way of working will not lead to 12 GB+ RAM usage, because the server fetches annotations for only one job at a time.

sizov-kirill avatar Dec 14 '21 10:12 sizov-kirill