
🐛 Airbyte Core: Large schema fetching failure

Open po3na4skld opened this issue 4 years ago • 34 comments

Environment

  • Airbyte version: 0.26.4-alpha
  • OS Version / Instance: macOS 11.2.3
  • Deployment: Docker
  • Source Connector and version: Salesforce 0.2.3
  • Destination Connector and version: Local CSV 0.2.7, BigQuery 0.3.7
  • Severity: High
  • Step where error happened: Setup new connection

Current Behavior

When fetching a large schema from the source, it fails with the following error, which I found in the docker-compose up logs output. This has nothing to do with the source itself; I only used this source to reproduce the issue.

org.glassfish.jersey.server.ServerRuntime$Responder writeResponse
SEVERE: An I/O error has occurred while writing a response message entity to the container output stream.

It keeps running until the daily request quota is exhausted, and then schema fetching fails due to the limits.

Expected Behavior

The size of the schema shouldn't affect whether the UI works.

Logs

error_log.txt

Steps to Reproduce

  1. Run docker-compose up for the Airbyte project (see the command sketch after this list)
  2. Create a Salesforce source
  3. Create a Local CSV or BigQuery destination
  4. Set up a connection
  5. See the error in the docker-compose up command output
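
For reference, a minimal command sketch for steps 1 and 5, assuming the standard Docker Compose deployment of this Airbyte version (the grep pattern simply matches the Jersey error quoted above):

# Clone and start the Airbyte platform (assumes Docker and Docker Compose are installed)
git clone https://github.com/airbytehq/airbyte.git
cd airbyte
docker-compose up

# In another terminal, watch the logs for the Jersey I/O error while the schema is fetched
docker-compose logs -f | grep -i "ServerRuntime"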

I also tried to do this with filtered out streams and it worked well.

Are you willing to submit a PR?

No

po3na4skld avatar Jul 06 '21 09:07 po3na4skld

@sherifnada @po3na4skld my initial guess seems right: this is the same issue we have in https://github.com/airbytehq/airbyte/pull/4175/files, and I think the fix can be the same.

keu avatar Jul 06 '21 09:07 keu

@sherifnada updated the title and the description

po3na4skld avatar Jul 06 '21 16:07 po3na4skld

Related oncall issues:

  • https://github.com/airbytehq/oncall/issues/212
  • https://github.com/airbytehq/oncall/issues/308

grishick avatar Jul 01 '22 18:07 grishick

Is this still an issue?

evantahler avatar Aug 01 '22 21:08 evantahler

Is this still an issue?

Yes, unless someone explicitly fixed it recently. I haven't seen any related changes or tests.

grishick avatar Aug 01 '22 22:08 grishick

Possible Solutions:

  • We could fail, but explicitly (e.g. https://github.com/airbytehq/airbyte-internal-issues/issues/599)
  • We could increase the message size Temporal allows, as a temporary measure (e.g. https://github.com/airbytehq/airbyte/issues/3943#issuecomment-912986722)
  • We could move the large payloads elsewhere (the jobs database?) and not pass these large responses through Temporal

There's some good additional information (the Temporal message size, for one) in https://github.com/airbytehq/airbyte/issues/3943, which has been closed as a duplicate of this issue.
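
For self-hosted users hitting that Temporal payload limit, the workaround discussed there amounts to raising Temporal's blob size limits via its dynamic config. A minimal sketch, assuming your deployment mounts a Temporal dynamic-config YAML file (whether Airbyte's bundled Temporal image exposes one, and the exact path, are assumptions):

# dynamicconfig/development.yaml (assumed location; adjust for your deployment)
# Raise Temporal's per-payload blob size limits so a large discovered catalog
# is not rejected. Values are in bytes; ~4 MB is the usual default ceiling.
limit.blobSize.warn:
  - value: 8388608       # warn at 8 MB
    constraints: {}
limit.blobSize.error:
  - value: 16777216      # hard error at 16 MB
    constraints: {}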

evantahler avatar Aug 01 '22 23:08 evantahler

@evantahler you could also add support for $ref and optimize the payload
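
To illustrate that idea (the field names here are made up), a discovered catalog that repeats the same sub-schema in many places can be shrunk considerably by declaring it once and referencing it with JSON Schema $ref:

{
  "definitions": {
    "address": {
      "type": "object",
      "properties": {
        "street": { "type": "string" },
        "city":   { "type": "string" },
        "zip":    { "type": "string" }
      }
    }
  },
  "type": "object",
  "properties": {
    "billing_address":  { "$ref": "#/definitions/address" },
    "shipping_address": { "$ref": "#/definitions/address" }
  }
}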

keu avatar Aug 02 '22 08:08 keu

Linking https://github.com/airbytehq/airbyte/pull/15888 which is part of the way we solve this problem

evantahler avatar Sep 28 '22 15:09 evantahler

Any update on this, or is the process still to create a separate user with limited scope?

cody-scott avatar Sep 20 '23 14:09 cody-scott

cc @malikdiarra, as the Compose team is looking into this

evantahler avatar Sep 20 '23 15:09 evantahler

Any update on this, or is the process still to create a separate user with limited scope?

+1 on this. Also having the same issue trying to fetch 1200+ tables on an Oracle DB.

arthurbarros avatar Nov 11 '23 20:11 arthurbarros

+1. I'm also having this issue with a MySQL DB when it runs discover_schema: 502 Bad Gateway. I have 1000+ tables in the DB.

surajmaurya14 avatar Nov 17 '23 05:11 surajmaurya14

+1. I'm also having this issue with a MySQL DB when it runs discover_schema: 502 Bad Gateway. I have 1000+ tables in the DB.

@malikdiarra Any update for this?

surajmaurya14 avatar Nov 17 '23 05:11 surajmaurya14

Same boat as @arthurbarros

Any news on this? I've tried everything, and no environment variable changes the fact that discovery fails after 5 minutes.

[screenshot]

I also tried setting the worker's missing environment variables in .env and referencing them in docker-compose.yaml for the worker:

# Worker
ACTIVITY_CHECK_TIMEOUT=15
ACTIVITY_DISCOVERY_TIMEOUT=30
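
For anyone trying the same thing, this is roughly what referencing those values from docker-compose.yaml looks like; a sketch only, since the worker service name and whether these variables are in minutes or seconds are assumptions on my part:

# docker-compose.yaml (excerpt)
services:
  worker:
    environment:
      # Forward the values defined in .env into the worker container
      - ACTIVITY_CHECK_TIMEOUT=${ACTIVITY_CHECK_TIMEOUT}
      - ACTIVITY_DISCOVERY_TIMEOUT=${ACTIVITY_DISCOVERY_TIMEOUT}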

[screenshot]

There's also the BASIC_AUTH_PROXY_TIMEOUT environment variable (seen in the linked issue airbytehq/airbyte#19201) for the nginx timeouts, but that's already set to 900 seconds, i.e. 15 minutes.

[screenshot]

I also tried setting WORKLOAD_API_READ_TIMEOUT_SECONDS=1200, with no success.

Using Airbyte v0.50.34

philippeboyd avatar Nov 20 '23 22:11 philippeboyd

@malikdiarra are you all still looking into this case, or is it on hold?

surajmaurya14 avatar Nov 30 '23 13:11 surajmaurya14

Same boat as @arthurbarros

Any news on this? I've tried everything, and no environment variable changes the fact that discovery fails after 5 minutes.

[screenshot]

I also tried setting the worker's missing environment variables in .env and referencing them in docker-compose.yaml for the worker:

# Worker
ACTIVITY_CHECK_TIMEOUT=15
ACTIVITY_DISCOVERY_TIMEOUT=30

[screenshot]

There's also the BASIC_AUTH_PROXY_TIMEOUT environment variable (seen in the linked issue airbytehq/airbyte#19201) for the nginx timeouts, but that's already set to 900 seconds, i.e. 15 minutes.

[screenshot]

I also tried setting WORKLOAD_API_READ_TIMEOUT_SECONDS=1200, with no success.

Using Airbyte v0.50.34

The only workaround I found that works is to create multiple users on the Oracle DB, each with permission to list just a subset of tables, and then create a separate Oracle DB connection for each of those users.

It's ugly, but it works.
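
A rough sketch of that workaround in Oracle SQL, with hypothetical schema, table, and user names, and assuming that CREATE SESSION plus SELECT grants are enough for Airbyte's discovery in your setup:

-- Create a limited user that can only see a subset of tables (names are hypothetical)
CREATE USER airbyte_subset_1 IDENTIFIED BY "some_strong_password";
GRANT CREATE SESSION TO airbyte_subset_1;

-- Grant read access only to the tables this Airbyte connection should discover
GRANT SELECT ON app_schema.orders TO airbyte_subset_1;
GRANT SELECT ON app_schema.customers TO airbyte_subset_1;
-- Repeat with a different subset per user, and point a separate Airbyte source at each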

arthurbarros avatar Dec 12 '23 16:12 arthurbarros

cc @pmossman - we've made some discovery/Temporal improvements lately

evantahler avatar Dec 12 '23 17:12 evantahler

@evantahler Such as? I'm looking at the release changelogs; what kind of improvements should we be looking for?

philippeboyd avatar Dec 12 '23 18:12 philippeboyd

Part of the problem here was that, until recently, the Airbyte platform could not handle discovered catalogs over ~4 MB, due to a limitation in Temporal. We've recently changed how we pass information between jobs, which might help alleviate this.

evantahler avatar Dec 12 '23 19:12 evantahler

Currently, for me on Cloud, the discover_schema API returns the response below (final line): Discover primary keys for tables: [.....]

We have 2500+ tables in MySQL.

But after that, the screen freezes: [screenshot]

For now, the only solution that works is to create multiple users on the DB with limited access, as @arthurbarros said.

@evantahler, any idea when these changes will be released to stable?

surajmaurya14 avatar Dec 13 '23 07:12 surajmaurya14

@surajmaurya14 the change to how we pass Temporal data is live on Cloud, but there may be another bottleneck somewhere in our system when handling such a large catalog.

Could you share your Cloud workspace ID and source name where you see the frozen screen so we can investigate where the bottleneck is? (Feel free to email it to me at [email protected] or message it to me on the Airbyte Slack, I'm @Parker Mossman there)

pmossman avatar Jan 02 '24 17:01 pmossman

@surajmaurya14 the change to how we pass Temporal data is live on Cloud, but there may be another bottleneck somewhere in our system when handling such a large catalog.

Could you share your Cloud workspace ID and source name where you see the frozen screen so we can investigate where the bottleneck is? (Feel free to email it to me at [email protected] or message it to me on the Airbyte Slack, I'm @Parker Mossman there)

Wrote an email to you @pmossman

surajmaurya14 avatar Jan 05 '24 07:01 surajmaurya14

Thanks @surajmaurya14, I was able to reproduce the issue and investigate our Temporal cluster while the discovery job was running to see where the failure originated from.

In this case, we set a hard cap of 9 minutes of execution time for Discover jobs in Airbyte Cloud. I think this catalog is so large that it takes longer than 9 minutes to generate, so Temporal terminates the job at the 9 minute mark before it finishes. I can follow up with a few folks internally to see if we can either (a) increase the 9 minute threshold to give cases like this more time, or (b) investigate this particular source to see if there's an optimization we can make for large table counts (since 9+ minutes is obviously a poor user experience!).

pmossman avatar Jan 05 '24 17:01 pmossman

@surajmaurya14 I passed along this feedback to our database sources team, and they recommended we try increasing the 9 minute timeout to 30 minutes to see if your use case eventually succeeds.

I made this change today, so our Temporal workers in Airbyte Cloud will now keep Discover jobs running for up to 30 minutes before terminating them.

Obviously this is just a stopgap, and it'd be ideal to optimize this source for cases with thousands of tables, but I'm hoping this unblocks you and gives us some more insight into where the bottleneck may be.

Can you give things another try and let me know how it goes? If the job still freezes/times out after 30 minutes, we'll likely need to do more investigation into the particular source connector to see where things are getting stuck.
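
For readers self-hosting and wondering what kind of knob this is: with the Temporal Java SDK, the maximum run time of an activity is typically capped through ActivityOptions, roughly as sketched below. This is an illustrative sketch, not Airbyte's actual code; the class and field names are hypothetical.

import io.temporal.activity.ActivityOptions;
import java.time.Duration;

class DiscoverTimeoutExample {
    // Cap how long a single discover-style activity may run before Temporal times it out.
    // 30 minutes mirrors the change described above; pick a value suited to your deployment.
    static final ActivityOptions DISCOVER_OPTIONS = ActivityOptions.newBuilder()
            .setStartToCloseTimeout(Duration.ofMinutes(30))
            .build();
}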

pmossman avatar Jan 09 '24 00:01 pmossman

@pmossman I got a "Server temporarily unavailable" error. I replied on the same email thread.

surajmaurya14 avatar Jan 09 '24 02:01 surajmaurya14

Thanks @surajmaurya14, I did some more digging and here's what I found:

Your discover catalog jobs are now able to finish in Temporal, so the increase to 30 minutes on the Temporal side helped. However, our Load Balancer is configured with a maximum request time of 10 minutes, which means that even if the Discover job eventually succeeds, the server issues a 502 before it can finish processing the network request.

I also observed 502 errors before the 10 minute mark is reached, which seems to correspond with new code deploys that cause server pods to restart. So even if we could raise the maximum request time from 10 minutes to 30 minutes, we deploy code so often that there's a high likelihood the server would drop the request before it could complete.

We have an ongoing project to convert the Discover API to an async model, so that our server no longer needs to keep an active thread open for the entire duration of the Discover job. This project should address the fundamental issue at hand here, I'll make a note to tag you when it lands so we can make sure your use case is finally unblocked.

Thanks for the back and forth here, I know it must be frustrating that the app isn't working for you now but this iteration is extremely helpful for improving the platform and I really appreciate your involvement!

pmossman avatar Jan 09 '24 19:01 pmossman

Same issue here, trying to sync MySQL to Snowflake.

pedrohsroque avatar Apr 02 '24 17:04 pedrohsroque

Same here with an Oracle Database

shmf avatar Apr 16 '24 15:04 shmf

Same here; we have tried increasing the Temporal message limit and HTTP timeouts, to no effect.

Trying to sync MSSQL to Databricks.

We do have an old version of Airbyte (0.44.2) running, and we don't have this issue on that version with the same MSSQL database and Databricks. Both are running on K8s.

plenti-jacob-roe avatar Apr 18 '24 05:04 plenti-jacob-roe

Also experiencing this issue with a large Oracle DB.

jonodutch avatar Apr 18 '24 16:04 jonodutch