DAOS-8465 test: Loop to fill and destroy container and pool.
Description: Repeat the following steps: (1) create a pool and a container; (2) fill the pool with randomly sized blocks of data and verify that the expected return code DER_NOSPACE (-1007) is returned once no more data can be written to the container; (3) destroy the container. A minimal sketch of this loop appears after the bug-tracker data below.
Skip-unit-tests: true
Test-tag: fill_cont_pool_stress
Signed-off-by: Ding Ho [email protected]
Bug-tracker data:
Ticket title is 'Automate - fill/delete loop test'
Status is 'In Review'
Labels: 'triaged'
Job should run at elevated priority (1)
https://daosio.atlassian.net/browse/DAOS-8465
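A minimal, self-contained sketch of the loop described above. All names here (SimulatedContainer, POOL_CAPACITY, BLOCK_SIZE_RANGE) are illustrative stand-ins, not the real DAOS test framework API; only the DER_NOSPACE value (-1007) and the three steps come from the description.

```python
import random

DER_NOSPACE = -1007                      # DAOS "no space" return code from the description
LOOP_COUNT = 100                         # number of fill/destroy iterations
BLOCK_SIZE_RANGE = (1 << 10, 1 << 20)    # random block sizes, 1 KiB .. 1 MiB (illustrative)
POOL_CAPACITY = 8 * (1 << 20)            # simulated pool capacity (stand-in for a real pool)


class SimulatedContainer:
    """Stand-in for a DAOS container: accepts writes until the pool is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def write(self, size):
        """Return 0 on success, DER_NOSPACE once capacity is exhausted."""
        if self.used + size > self.capacity:
            return DER_NOSPACE
        self.used += size
        return 0


def fill_and_destroy_loop():
    """(1) create pool+container, (2) fill until DER_NOSPACE, (3) destroy; repeat."""
    for loop in range(LOOP_COUNT):
        container = SimulatedContainer(POOL_CAPACITY)   # step (1), simulated

        while True:                                     # step (2)
            block_size = random.randint(*BLOCK_SIZE_RANGE)
            rc = container.write(block_size)
            if rc == DER_NOSPACE:
                break                                   # expected once the pool is full
            if rc != 0:
                raise RuntimeError(f"loop {loop}: unexpected return code {rc}")

        del container                                   # step (3), simulated destroy


if __name__ == "__main__":
    fill_and_destroy_loop()
    print("completed", LOOP_COUNT, "fill/destroy iterations")
```

In the real functional test the container lives in an actual DAOS pool, so each iteration also releases the pool space before the next create; the simulation above only preserves the control flow.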
Test stage checkpatch completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/1/execution/node/137/log
Test stage Python Bandit check completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/1/execution/node/134/log
Test stage checkpatch completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/2/execution/node/125/log
Test stage checkpatch completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/9/execution/node/138/log
Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/9/execution/node/336/log
Test stage Build RPM on Leap 15 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/9/execution/node/333/log
Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/9/execution/node/329/log
Test stage checkpatch completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-9859/11/execution/node/145/log
PS: please install the team's githooks for the future, since Jenkins is complaining about it. :)
Noted: I reserved shared cluster-A with NVMe for more testing and found that with 1 server the minimum pool size is ~10GB, so a test that fills the pool and repeats 100 times would take 3+ hours. For now I use 1 server and SCM only. We can add more test cases with !mux for more servers and NVMe if required.
I think the minimum total NVMe is (1GiB * total_num_targets) + some_undisclosed_reserved_amount
==> Yes. By default the number of targets per server is 8, so in addition to SCM the minimum pool size is ~10GB.
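A quick back-of-the-envelope check of that formula. The reserved amount is not disclosed anywhere in this thread, so the 2 GiB used here is purely an assumed placeholder to show how the ~10GB figure is reached:

```python
# Rough minimum-NVMe calculation per the formula above:
#   (1 GiB * total_num_targets) + reserved_amount
GIB = 1 << 30

total_num_targets = 8            # default targets per server (from the reply above)
per_target_minimum = 1 * GIB     # 1 GiB per target (from the comment above)
reserved = 2 * GIB               # assumption only: the reserved amount is undisclosed

minimum_nvme = per_target_minimum * total_num_targets + reserved
print(f"minimum NVMe pool size ~ {minimum_nvme / GIB:.0f} GiB")   # ~10 GiB
```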