test: Add Eagle tests with untrained heads
This MR adds unit tests to validate Eagle support for models with untrained Eagle heads. These are meant to be sanity tests that will catch blatant issues such as missing Eagle support or problems building/running Eagle-enhanced models.
/bot run
Thanks for submitting this PR @brb-nv. I noticed that multiple E2E tests have been added. Do you have a rough estimate of the increase in pre-merge time? I'm asking because we're paying attention to CI running time to preserve dev velocity. That certainly doesn't mean we should never add new E2E tests; I just want to make sure we're using our CI resources in a "mean" way :)
June
/bot run
Hi June, thank you for the comment.
- The tests in this MR are being added to `qa/examples_test_list.txt` and not L0. So, they shouldn't add to any pre-merge times. `qa/examples_test_list.txt` is run with a relatively low frequency (once or twice a week, I believe).
Please let me know if you think we can do things differently. I'm exploring ways to add tests for features instead of individual models.
PR_Github #198 [ run ] triggered by Bot
PR_Github #198 [ run ] completed with state FAILURE
Thanks for the explanation, Balaram. I have no concern now.
Let's wait for the CI to run through now :)
June
/bot run
PR_Github #205 [ run ] triggered by Bot
PR_Github #205 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #217 completed with status: 'SUCCESS'
/bot reuse-pipeline
PR_Github #721 [ reuse-pipeline ] triggered by Bot
PR_Github #721 [ reuse-pipeline ] completed with state FAILURE
Can't reuse PR_Github #0 with status: UNKNOWN
/bot reuse-pipeline
PR_Github #725 [ reuse-pipeline ] triggered by Bot
PR_Github #725 [ reuse-pipeline ] completed with state SUCCESS
Can't reuse PR_Github #0 with status: UNKNOWN
/bot run
PR_Github #812 [ run ] triggered by Bot
PR_Github #812 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #658 completed with status: 'SUCCESS'
/bot reuse-pipeline
PR_Github #835 [ reuse-pipeline ] triggered by Bot
/bot reuse-pipeline
PR_Github #835 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #812 for commit 450b024
PR_Github #837 [ reuse-pipeline ] triggered by Bot
PR_Github #837 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #812 for commit ff2d1d5