Promote Vertex AI FeatureStore resources (GA only)
part of https://github.com/hashicorp/terraform-provider-google/issues/9298
If this PR is for Terraform, I acknowledge that I have:
- [x] Searched through the issue tracker for an open issue that this either resolves or contributes to, commented on it to claim it, and written "fixes {url}" or "part of {url}" in this PR description. If there were no relevant open issues, I opened one and commented that I would like to work on it (not necessary for very small changes).
- [x] Generated Terraform, and ran `make test` and `make lint` to ensure it passes unit and linter tests.
- [x] Ensured that all new fields I added that can be set by a user appear in at least one example (for generated resources) or third_party test (for handwritten resources or update tests).
- [x] Ran relevant acceptance tests (If the acceptance tests do not yet pass or you are unable to run them, please let your reviewer know).
- [x] Read the Release Notes Guide before writing my release note below.
Release Note Template for Downstream PRs (will be copied)
`google_vertex_ai_featurestore` (GA only)
`google_vertex_ai_featurestore_entitytype` (GA only)
Hello! I am a robot who works on Magic Modules PRs.
I've detected that you're a community contributor. @slevenick, a repository maintainer, has been assigned to assist you and help review your changes.
:question: First time contributing? Click here for more details
Your assigned reviewer will help review your code by:
- Ensuring it's backwards compatible, covers common error cases, etc.
- Summarizing the change into a user-facing changelog note.
- Ensuring it passes tests, either our "VCR" suite, a set of presubmit tests, or manual test runs.
You can help make sure that review is quick by running local tests and ensuring they're passing in between each push you make to your PR's branch. Also, try to leave a comment with each push you make, as pushes generally don't generate emails.
If your reviewer doesn't get back to you within a week after your most recent change, please feel free to leave a comment on the issue asking them to take a look! In the absence of a dedicated review dashboard most maintainers manage their pending reviews through email, and those will sometimes get lost in their inbox.
Hi! I'm the modular magician. Your PR generated some diffs in downstreams - here they are.
Diff report:
- Terraform GA: Diff (7 files changed, 1264 insertions(+), 11 deletions(-))
- Terraform Beta: Diff (4 files changed, 4 insertions(+), 14 deletions(-))
- TF Validator: Diff (4 files changed, 260 insertions(+), 3 deletions(-))
Tests analytics
Total tests: 2174
Passed tests: 1934
Skipped tests: 238
Failed tests: 2
Action taken
Triggering VCR tests in RECORDING mode for the tests that failed during VCR. Click here to see the failed tests
TestAccFirebaserulesRelease_BasicRelease|TestAccComputeInstance_soleTenantNodeAffinities
Tests passed during RECORDING mode:
TestAccFirebaserulesRelease_BasicRelease [Debug log]
Tests failed during RECORDING mode:
TestAccComputeInstance_soleTenantNodeAffinities [Error message] [Debug log]
Please fix these to complete your PR. View the build log or the debug log for each test.
It looks like the two tests also fail in the other PRs, regardless of the PR contents. BTW, I can't see the build log or the debug log either, due to permission errors. Is this expected behavior?
[UPDATED] I found the following note in README.md. I understand that community developers don't have permission to see the logs.
The false positive rate on these tests is extremely high between changes in the API, Cloud Build bugs, and eventual consistency issues in test recordings so we don't expect contributors to wholly interpret the results- that's the responsibility of your reviewer.
Hi @slevenick, could you please review this PR?
Hmmm, it looks like some fields may not be supported in GA at the API level (or were renamed, removed, or something else).
This test fails: TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample
Can you see what is going on with that test? I see this failure message:
"description": "Invalid JSON payload received. Unknown name \"monitoringInterval\" at 'entity_type.monitoring_config.snapshot_analysis': Cannot find field.",
@slevenick Thank you for your comment! I found out the following things:
- `monitoringInterval` is deprecated in v1beta1, as shown below
- `stalenessDays` and `monitoringIntervalDays` are supported in both v1 and v1beta1
I'll remove `monitoringInterval` in this PR and keep this PR scoped to GA only. Apart from that, I'll create another PR to support the new fields. Please let me know if you'd recommend including both changes in this PR. Thanks!
As for the error messages, I'm afraid I couldn't find a link visible to my account. Could you please give me a link to see the errors? The following links returned permission errors for me.
Hi! I'm the modular magician. Your PR generated some diffs in downstreams - here they are.
Diff report:
- Terraform GA: Diff (7 files changed, 1239 insertions(+), 17 deletions(-))
- Terraform Beta: Diff (5 files changed, 4 insertions(+), 45 deletions(-))
- TF Validator: Diff (4 files changed, 249 insertions(+), 3 deletions(-))
Yikes, so this is going to be a bit of a tricky situation. We have pretty strict guidelines around backwards compatibility: we cannot remove a field until a major version change. That means we can't remove the field from the beta provider, but we also can't have it present in the GA provider. Maybe we can mark that field as `min_version: beta` to prevent it from appearing in the GA provider, and then deprecate it so we can remove it in the next major version.
> Maybe we can mark that field as `min_version: beta` to prevent it from being in the GA provider, and then deprecate it so we can remove it in the next major version
Thank you for sharing the release policy. This approach looks good to me. I'll update the PR, and get back to you!
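For anyone following along, the suggested approach could look roughly like the sketch below in the MMv1 field definition. This is illustrative only: the exact attribute layout and deprecation message are assumptions, not taken from this PR.

```yaml
# Sketch of an api.yaml field definition for the deprecated field.
# min_version: beta keeps it out of the GA provider, while the
# deprecation message signals removal at the next major version.
- !ruby/object:Api::Type::String
  name: 'monitoringInterval'
  min_version: beta
  deprecation_message: >-
    `monitoring_interval` is deprecated in favor of
    `monitoring_interval_days`.
  description: |
    Configuration of the snapshot analysis based monitoring pipeline
    running interval.
```

This way the beta provider keeps the field (no breaking change), while the generated GA provider never includes it.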
Tests analytics
Total tests: 2182
Passed tests: 1942
Skipped tests: 238
Failed tests: 2
Action taken
Triggering VCR tests in RECORDING mode for the tests that failed during VCR. Click here to see the failed tests
TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample|TestAccComputeInstance_soleTenantNodeAffinities
Tests failed during RECORDING mode:
TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample [Error message] [Debug log]
TestAccComputeInstance_soleTenantNodeAffinities [Error message] [Debug log]
Please fix these to complete your PR. View the build log or the debug log for each test.
Hi! I'm the modular magician. Your PR generated some diffs in downstreams - here they are.
Diff report:
- Terraform GA: Diff (7 files changed, 1241 insertions(+), 13 deletions(-))
- Terraform Beta: Diff (5 files changed, 9 insertions(+), 18 deletions(-))
- TF Validator: Diff (4 files changed, 249 insertions(+), 3 deletions(-))
Tests analytics
Total tests: 2192
Passed tests: 1938
Skipped tests: 240
Failed tests: 14
Action taken
Triggering VCR tests in RECORDING mode for the tests that failed during VCR. Click here to see the failed tests
TestAccComputeInstance_soleTenantNodeAffinities|TestAccComputeForwardingRule_internalTcpUdpLbWithMigBackendExample|TestAccComputeGlobalForwardingRule_externalTcpProxyLbMigBackendExample|TestAccComputeForwardingRule_networkTier|TestAccComputeForwardingRule_update|TestAccComputeForwardingRule_forwardingRuleRegionalHttpXlbExample|TestAccComputeForwardingRule_forwardingRuleExternallbExample|TestAccClouddeployDeliveryPipeline_DeliveryPipeline|TestAccComputeRouterInterface_basic|TestAccComputeVpnTunnel_vpnTunnelBetaExample|TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample|TestAccSqlDatabaseInstance_mysqlMajorVersionUpgrade|TestAccComputeFirewallPolicyRule_update|TestAccComputeFirewallPolicy_update
Hello @slevenick, I removed monitoring_interval from the example, but it seems TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample is still failing. Could you tell me what the error message is? I'll also try to reproduce the error locally. Thanks!
Tests passed during RECORDING mode:
TestAccComputeForwardingRule_internalTcpUdpLbWithMigBackendExample [Debug log]
TestAccComputeGlobalForwardingRule_externalTcpProxyLbMigBackendExample [Debug log]
TestAccComputeForwardingRule_networkTier [Debug log]
TestAccComputeForwardingRule_update [Debug log]
TestAccComputeForwardingRule_forwardingRuleRegionalHttpXlbExample [Debug log]
TestAccComputeForwardingRule_forwardingRuleExternallbExample [Debug log]
TestAccClouddeployDeliveryPipeline_DeliveryPipeline [Debug log]
TestAccComputeRouterInterface_basic [Debug log]
TestAccComputeVpnTunnel_vpnTunnelBetaExample [Debug log]
TestAccSqlDatabaseInstance_mysqlMajorVersionUpgrade [Debug log]
TestAccComputeFirewallPolicyRule_update [Debug log]
TestAccComputeFirewallPolicy_update [Debug log]
Tests failed during RECORDING mode:
TestAccComputeInstance_soleTenantNodeAffinities [Error message] [Debug log]
TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample [Error message] [Debug log]
Please fix these to complete your PR. View the build log or the debug log for each test.
Thank you for letting me know about the error!
`stalenessDays` and `monitoringIntervalDays` are the alternative fields, but they're new fields for the provider, so adding them would make this PR more than a GA-only promotion. Instead, I'll try using `monitoring_config.snapshot_analysis.disabled = true`.
Apart from that, I added google_vertex_ai_featurestore_entitytype_feature to the beta provider in https://github.com/GoogleCloudPlatform/magic-modules/pull/6568/. It's also a GA-ready Feature Store resource, so I'll promote it in this PR as well.
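For context, the workaround is to disable snapshot analysis in the example config rather than set the beta-only `monitoring_interval` field. A rough sketch of what that could look like (resource and entity names here are illustrative, not the actual example from this PR):

```hcl
resource "google_vertex_ai_featurestore" "featurestore" {
  name   = "example_featurestore"
  region = "us-central1"

  online_serving_config {
    fixed_node_count = 1
  }
}

resource "google_vertex_ai_featurestore_entitytype" "entity" {
  name         = "example_entitytype"
  featurestore = google_vertex_ai_featurestore.featurestore.id

  monitoring_config {
    snapshot_analysis {
      # Disable snapshot analysis instead of setting the beta-only
      # monitoring_interval field, so the same example works in both
      # the GA and beta providers.
      disabled = true
    }
  }
}
```

Since `disabled` exists in both the v1 and v1beta1 APIs, the example avoids the field that triggered the "Unknown name \"monitoringInterval\"" error on the GA endpoint.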
Hi! I'm the modular magician. Your PR generated some diffs in downstreams - here they are.
Diff report:
- Terraform GA: Diff (10 files changed, 1783 insertions(+), 21 deletions(-))
- Terraform Beta: Diff (7 files changed, 14 insertions(+), 33 deletions(-))
- TF Validator: Diff (5 files changed, 339 insertions(+), 3 deletions(-))
Tests analytics
Total tests: 2193
Passed tests: 1947
Skipped tests: 240
Failed tests: 6
Action taken
Triggering VCR tests in RECORDING mode for the tests that failed during VCR. Click here to see the failed tests
TestAccFirebaserulesRelease_BasicRelease|TestAccComputeInstance_soleTenantNodeAffinities|TestAccCGCSnippet_eventarcWorkflowsExample|TestAccSqlDatabaseInstance_mysqlMajorVersionUpgrade|TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample|TestAccVertexAIFeaturestoreEntitytypeFeature_vertexAiFeaturestoreEntitytypeFeatureExample
Tests passed during RECORDING mode:
TestAccFirebaserulesRelease_BasicRelease [Debug log]
TestAccSqlDatabaseInstance_mysqlMajorVersionUpgrade [Debug log]
Tests failed during RECORDING mode:
TestAccComputeInstance_soleTenantNodeAffinities [Error message] [Debug log]
TestAccCGCSnippet_eventarcWorkflowsExample [Error message] [Debug log]
TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample [Error message] [Debug log]
TestAccVertexAIFeaturestoreEntitytypeFeature_vertexAiFeaturestoreEntitytypeFeatureExample [Error message] [Debug log]
Please fix these to complete your PR. View the build log or the debug log for each test.
Still seeing:
```
  ~ monitoring_config {
      ~ snapshot_analysis {
          - monitoring_interval = "0s" -> null
            # (1 unchanged attribute hidden)
        }
    }
}
```
Are you able to run these tests yourself?
Thank you for your comment. I tried reproducing the error with the auto-generated branch on my local machine as shown below, but the test passed there, so I might have overlooked something. I've attached test.log.
```
➜ terraform-provider-google git:(auto-pr-6565) git show HEAD | head -n 1
commit b769b21056499b22faf81f5f0bf14a1d74b20042
➜ terraform-provider-google git:(auto-pr-6565) TF_LOG=TRACE make testacc GOOGLE_ORG=test GOOGLE_BILLING_ACCOUNT=test GOOGLE_USE_DEFAULT_CREDENTIALS=true GCLOUD_PROJECT=kouzoh-p-kohama GCLOUD_REGION=us-central1 GCLOUD_ZONE=us-central1-a TEST=./google TESTARGS='-run=TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample' | tee test.log
go generate ./...
TF_ACC=1 TF_SCHEMA_PANIC_ON_ERROR=1 go test ./google -v -run=TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample -timeout 240m -ldflags="-X=github.com/hashicorp/terraform-provider-google/version.ProviderVersion=acc"
=== RUN   TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample
=== PAUSE TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample
=== CONT  TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample
...
--- PASS: TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample (96.11s)
PASS
ok      github.com/hashicorp/terraform-provider-google/google   (cached)
```
It's failing on the beta provider specifically, so maybe it's a beta vs GA difference?
> It's failing on the beta provider specifically, so maybe it's a beta vs GA difference?
Ah, I could reproduce the error with the beta provider locally. I'll look into it. Thanks!
Hi there, I'm the Modular magician!
Tests analytics
Total tests: 2197
Passed tests: 1950
Skipped tests: 240
Failed tests: 7
Action taken
Triggering VCR tests in RECORDING mode for the tests that failed during VCR. Click here to see the failed tests
TestAccComputeInstance_soleTenantNodeAffinities|TestAccCGCSnippet_eventarcWorkflowsExample|TestAccFirebaserulesRelease_BasicRelease|TestAccBillingSubaccount_renameOnDestroy|TestAccBillingSubaccount_basic|TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample|TestAccVertexAIFeaturestoreEntitytypeFeature_vertexAiFeaturestoreEntitytypeFeatureExample
Tests passed during RECORDING mode:
TestAccFirebaserulesRelease_BasicRelease [Debug log]
TestAccBillingSubaccount_renameOnDestroy [Debug log]
TestAccBillingSubaccount_basic [Debug log]
TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample [Debug log]
TestAccVertexAIFeaturestoreEntitytypeFeature_vertexAiFeaturestoreEntitytypeFeatureExample [Debug log]
Tests failed during RECORDING mode:
TestAccComputeInstance_soleTenantNodeAffinities [Error message] [Debug log]
TestAccCGCSnippet_eventarcWorkflowsExample [Error message] [Debug log]
Please fix these to complete your PR. View the build log or the debug log for each test.
Hi @slevenick, I added a default value for monitoring_interval and confirmed the tests pass. Could you review the change when you have time? Thanks!