
[destination-snowflake] executes excessive metadata queries

Open barakavra opened this issue 10 months ago • 2 comments

Connector Name

destination-snowflake

Connector Version

3.7.0

What step the error happened?

None

Relevant information

I was monitoring the Snowflake queries issued by the Airbyte platform, and there is a significant performance improvement that should be made in the Snowflake destination code. Three queries, all related to metadata of the destination table, are executed many times during a sync, while the results could be cached instead. For example, instead of executing the following query 409,860 times a week (<NUM_OF_STREAMS> * <NUM_OF_CONNECTIONS> * <NUM_OF_REFRESHES> = 409860), it could run only <NUM_OF_REFRESHES> * <NUM_OF_CONNECTIONS> times if we cache the result at the beginning of each refresh, per connector:

SELECT table_schema, table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_catalog = ?
  AND table_schema IN (?)
  AND table_name IN (?)
ORDER BY table_schema, table_name, ordinal_position;
by altering to
SELECT table_schema, table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_catalog = ?
  AND table_schema IN (?)
ORDER BY table_schema, table_name, ordinal_position;

As it's only a small dataset, the extra cost will be worth it, and wrapping the lookup in exception handling will guarantee that even if a schema change happens during the refresh, the load will still execute as planned.
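The caching idea above can be sketched roughly as follows. This is an illustrative sketch only, assuming a per-refresh cache keyed by schema; the names (`ColumnInfo`, `MetadataCache`, `fetchAllColumns`) are hypothetical and not part of the actual connector API:

```kotlin
// Hypothetical sketch: fetch destination-table metadata once per schema per
// refresh, instead of re-querying information_schema for every stream.
data class ColumnInfo(
    val table: String,
    val column: String,
    val dataType: String,
    val isNullable: Boolean
)

class MetadataCache(
    // One information_schema query per schema (the "altered" query without
    // the table_name filter); streams then read from the cached result.
    private val fetchAllColumns: (schema: String) -> List<ColumnInfo>
) {
    private val cache = mutableMapOf<String, Map<String, List<ColumnInfo>>>()

    fun columnsFor(schema: String, table: String): List<ColumnInfo> {
        val bySchema = cache.getOrPut(schema) {
            fetchAllColumns(schema).groupBy { it.table }
        }
        return bySchema[table] ?: emptyList()
    }

    // If a schema change is detected mid-refresh, drop the cached entry
    // and re-fetch on the next lookup.
    fun invalidate(schema: String) {
        cache.remove(schema)
    }
}
```

With a design like this, the per-table query count collapses to one query per schema per refresh, and `invalidate` covers the mid-refresh schema-change case mentioned above.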


Relevant log output

No response

Contribute

  • [x] Yes, I want to contribute

barakavra avatar Apr 14 '24 11:04 barakavra

Thanks for reporting this @barakavra I added the suggestion in the team backlog for further discussion.

marcosmarxm avatar Apr 15 '24 16:04 marcosmarxm

thanks @marcosmarxm, just to stress this: altering this behaviour could spare Snowflake users around 30 Snowflake credits per day (roughly $30K per year)

barakavra avatar Apr 30 '24 07:04 barakavra

Thanks for the info @barakavra @airbytehq/destinations can you take a look in this issue?

marcosmarxm avatar Apr 30 '24 14:04 marcosmarxm

Hello @marcosmarxm, do you have any news regarding this issue? It prevents us from upgrading to the latest version of the Snowflake destination (we have ~2k connectors relying on it)

pcorbel avatar Jun 17 '24 12:06 pcorbel

Since this issue was filed, we did reduce the number of metadata queries to one per stream per sync. This can probably be reduced even further, to a single query for all streams in the sync.

evantahler avatar Jun 17 '24 23:06 evantahler

Indeed, the following query is still running for hours (and we have only ±20 connectors):

SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_catalog = ?
  AND table_schema = ?
  AND table_name = ?
ORDER BY ordinal_position;

Although I am green in the Airbyte platform backend, my thought is: since Airbyte is the sole owner of the metadata of these tables/stages (as it should be), I don't see any reason not to store this data when settings are saved, and query it only once, from Airbyte's internal database (Postgres), when a connection is triggered.

barakavra avatar Jun 18 '24 06:06 barakavra

Grooming notes:

  • we can also issue metadata-only queries that don't require the warehouse to be running (e.g. show tables, not select_from information schema)
    • to test this, in the snowflake UI don't use a warehouse and try to execute the query
  • we should keep the query per stream

evantahler avatar Jun 18 '24 20:06 evantahler

@evantahler kindly research the cost of metadata-only queries: they consume Snowflake's "cloud services credits", which, with a high volume of queries, can cost more than the warehouse itself. Attached are our approximate daily Snowflake costs. For example, on the Kafka Connect JDBC connector, a select-from-dual command was issued before each flush; removing that command saved us about 30% of the costs.

barakavra avatar Jun 18 '24 21:06 barakavra

Hi @evantahler @marcosmarxm - Is there any timeline on when this might be resolved? We're running into the same problem when using Snowflake as the destination. The queries below ran for over 7 hours the other day:

SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_catalog = ?
  AND table_schema = ?
  AND table_name = ?
ORDER BY ordinal_position;
SELECT SCHEMA_NAME
FROM information_schema.schemata
WHERE schema_name = ?
AND catalog_name = ?;

We're trying to finalize Airbyte as our ELT solution, and a resolution for this would be great to see! Thank you!

dpolman07 avatar Jul 09 '24 16:07 dpolman07

Some additional notes from @stephane-airbyte:

Basically there are two types of compute: warehouse cost and "cloud services" cost. SELECT (almost) always uses a warehouse, while DESC/SHOW only use cloud services cost (there's not even a need for a running warehouse).

In our case, there's a significant difference between doing a SHOW TABLES and a SHOW TABLES LIKE 'xxx' IN SCHEMA 'bbb'. Think of it in terms of index utilization. The first one will just bring all the data (up to 10k rows), after doing some internal joins (in FoundationDB), and process it in cloud services, while the second one (because it's not a range query, but an exact match) should do 2-3 index lookups (one for the database, one for the schema, and one for the table) and bring only that data directly into cloud services, which has nothing left to do after that. If we were to use a '%abc' in the LIKE clause, all the filtering would be done in cloud services compute, which can add up in terms of cost.
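The contrast described in the note above can be sketched in SQL. The table and schema names below are illustrative, not taken from the connector:

```sql
-- Runs on a warehouse: scans information_schema and bills warehouse credits.
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_catalog = ?
  AND table_schema = ?
  AND table_name = ?
ORDER BY ordinal_position;

-- Metadata-only alternatives: served by cloud services, no running warehouse
-- needed. The exact-match LIKE keeps the lookup to a few index probes.
SHOW TABLES LIKE 'users' IN SCHEMA my_db.my_schema;
SHOW COLUMNS IN TABLE my_db.my_schema.users;
```

The exact-match `LIKE 'users'` form is the cheap case; a leading-wildcard pattern such as `LIKE '%users'` would push the filtering into cloud services compute, which is the expensive case the note warns about.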

evantahler avatar Jul 29 '24 20:07 evantahler

The code in question can be found here: https://github.com/airbytehq/airbyte/blob/224db75b2283b1b0efa2b0d21b5b8fb420beff38/airbyte-integrations/connectors/destination-snowflake/src/main/kotlin/io/airbyte/integrations/destination/snowflake/typing_deduping/SnowflakeDestinationHandler.kt#L498-L517

evantahler avatar Jul 29 '24 23:07 evantahler