airbyte
Normalization integration tests do not delete schemas that they create
Normalization integration tests create target schemas in the destination warehouses and never delete them. This behavior pollutes our data warehouse accounts (Redshift, Snowflake, BigQuery) with over a thousand randomly named schemas (`integration_test_***`). In the case of Redshift, this causes the following error:
```json
{"type":"TRACE","trace":{"type":"ERROR","emitted_at":1.666222038961E12,"error":{"message":"Something went wrong in the connector. See the logs for more details.","internal_message":"java.sql.SQLException: [Amazon](500310) Invalid operation: 1040 \nDetails: Maximum tables limit exceeded. The maximum number of tables per cluster is 9900 for this instance type. The limit includes permanent and temporary tables. (pid:7865);","stack_trace":"java.sql.SQLException: [Amazon](500310) Invalid operation: 1040 \nDetails: Maximum tables limit exceeded. The maximum number of tables per cluster is 9900 for this instance type. The limit includes permanent and temporary tables. (pid:7865);\n\tat com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(Unknown Source)\n\tat com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(Unknown Source)\n\tat com.amazon.redshift.client.PGMessagingContext.handleMessage(Unknown Source)\n\tat com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(Unknown Source)\n\tat com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(Unknown Source)\n\tat com.amazon.redshift.client.PGMessagingContext.getErrorResponse(Unknown Source)\n\tat com.amazon.redshift.client.PGClient.handleErrorsScenario3(Unknown Source)\n\tat com.amazon.redshift.client.PGClient.handleErrors(Unknown Source)\n\tat com.amazon.redshift.dataengine.PGQueryExecutor$CallableExecuteTask.call(Unknown Source)\n\tat com.amazon.redshift.dataengine.PGQueryExecutor$CallableExecuteTask.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\nCaused by: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: 1040 \nDetails: Maximum tables limit exceeded. The maximum number of tables per cluster is 9900 for this instance type. The limit includes permanent and temporary tables. (pid:7865);\n\t... 13 more\n","failure_type":"system_error"}}}
```
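One possible workaround until the tests clean up after themselves is a periodic sweep that drops the leftover test schemas. The sketch below is a hypothetical cleanup helper, not part of the Airbyte codebase: it filters catalog schema names by the `integration_test_` prefix seen in the issue and builds `DROP SCHEMA ... CASCADE` statements, which works on Redshift's Postgres-style catalog. Connection handling is deliberately left as a comment.

```python
import re

# Schemas created by the normalization integration tests follow this prefix;
# the random suffix is elided in the issue as integration_test_***.
TEST_SCHEMA_PATTERN = re.compile(r"^integration_test_")

def find_test_schemas(schema_names):
    """Return the subset of schema names that look like leftover test schemas."""
    return [s for s in schema_names if TEST_SCHEMA_PATTERN.match(s)]

def build_drop_statements(schema_names):
    """Build DROP SCHEMA ... CASCADE statements for the leftover test schemas."""
    return [f'DROP SCHEMA "{s}" CASCADE;' for s in find_test_schemas(schema_names)]

# Hypothetical usage against a live Redshift connection (details omitted):
#   cur.execute("SELECT nspname FROM pg_namespace "
#               "WHERE nspname LIKE 'integration_test_%'")
#   for stmt in build_drop_statements([row[0] for row in cur.fetchall()]):
#       cur.execute(stmt)
```

A time-based filter (e.g. only dropping schemas older than a day) would make such a sweep safer to run while tests are in flight, but would require tracking creation timestamps, which `pg_namespace` alone does not provide.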