Memory leak on SSR after upgrading from 2.x to 3.3.13
We recently decided to upgrade Apollo Client from 2.6 to 3.3.13. After releasing it to production, we saw a huge increase in memory usage on the server side. Rolling back to the old 2.6 version solved the problem.
Here is what happened in production while we were on 3.3.13 (we spent three days trying to solve the memory leak, then gave up and rolled back to 2.6):
Here is what we saw in the heap dumps (comparison view, filtered to objects that were not deleted):
It seems that a lot of "Entry" instances are not being garbage collected (deleted: 0). Here is the dependency tree; maybe someone can help spot where the problem is:
Any help will be appreciated :pray:
Heapdumps:
- https://drive.google.com/file/d/10DnUSdyXN5q030bbcVXmgNpWWcUeVfy0/view?usp=sharing
- https://drive.google.com/file/d/1a05z2oty1i5foYvBkHK_T6uWdHwdPmw4/view?usp=sharing
Briefly, about our setup:
- SSR (Node.js, React, Apollo)
- Server-side code is bundled with webpack
What we have tried:
- `resultCaching: false` config for the InMemoryCache
- Manually calling `client.stop()` and `client.clearStore()` after each request (see the sketch below)
- Removed all `readFragment` calls
- Different Node.js versions (12.8, 14.16)
- Increased max old space size (heap size)
- Downgraded to Apollo Client 3.1.3
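For reference, a rough sketch of the per-request cleanup we attempted; the Express-style handler and the `renderApp` helper are assumptions for illustration, while `client.stop()` and `client.clearStore()` are the actual ApolloClient methods we called:

```js
// Rough sketch (hypothetical handler) of the per-request cleanup we tried.
async function handleRequest(req, res, client) {
  try {
    const html = await renderApp(req, client); // renderApp: our own SSR helper (assumption)
    res.send(html);
  } finally {
    client.stop();             // stop all active queries/polling on this client
    await client.clearStore(); // discard the cache contents for this request
  }
}
```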
Versions
- OS: alpine3.11
- @apollo/client: 3.3.13
- node.js: 14.16.0
- react: 16.8.2
- webpack: 4.35.2
- graphql: 15.5.0
- graphql-tag: 2.12.0
@AlexMost For SSR, our blanket recommendation† is to create a new client for each request, letting the old ones be garbage-collected. If you do that, this memory leak should stop happening, because those `Entry` objects should be reachable only from a particular client/`InMemoryCache`/`StoreReader`. If that doesn't fix the leak, then I agree that's a bug we need to fix.
This used to be an explicit recommendation in our docs, but it seems to have been removed some time after I made this comment in November 2019: https://github.com/apollographql/apollo-client/issues/5529#issuecomment-549457307 (cc @StephenBarlow since I think this happened in #7442).
† Do you have a specific reason for not wanting to create a new client per SSR request? If so, I'm sure there is some more aggressive cleanup we could be doing within `client.stop()`, but you'll want to make sure that same client isn't handling multiple asynchronous requests simultaneously.
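For completeness, a minimal sketch of that recommendation; the Express-style handler, the `HttpLink` URI, and the `cross-fetch` dependency are illustrative assumptions rather than anything from this thread:

```js
// Minimal sketch of the "new client per SSR request" recommendation.
import React from 'react';
import fetch from 'cross-fetch'; // Node needs a fetch implementation (assumed dependency)
import { ApolloClient, ApolloProvider, HttpLink, InMemoryCache } from '@apollo/client';
import { getDataFromTree } from '@apollo/client/react/ssr';

// App is the application's root component (an assumption for this sketch).
async function handleSsrRequest(req, res, App) {
  // Fresh client + cache per request, so everything they retain
  // (Entry objects, StoreReader, etc.) becomes unreachable once the request ends.
  const client = new ApolloClient({
    ssrMode: true,
    link: new HttpLink({ uri: 'http://localhost:4000/graphql', fetch }), // example URI
    cache: new InMemoryCache(),
  });

  const tree = React.createElement(ApolloProvider, { client }, React.createElement(App));
  const html = await getDataFromTree(tree);
  res.send(html);
  // Nothing outside this function keeps a reference to `client` or its cache,
  // so both can be garbage-collected when the request finishes.
}
```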
Hi @benjamn! We are following this recommendation: every request creates its own ApolloClient and InMemoryCache instances. Actually, you can see that there are 57 Apollo Client instances in the heapdump. There are also over 1k new MissingFieldError instances. I tried to investigate that, found an issue with readFragment, and tried to remove all readFragment usages, but that didn't help.
@AlexMost PR in progress: #7943
Great news! @benjamn thanks for the quick research! Please let me know when we can test those changes.
BTW @AlexMost, you might want to consider creating a new client instance for each request, because otherwise you might have race conditions: concurrent requests can write different data into the same cache fields (like current user info).
We are using this approach and have no memory leaks with Apollo Client 3.
@gbiryukov we are using this approach. Each request creates a new client instance and cache. Sorry if that wasn't clear from my previous messages.
@AlexMost This should be (at least partially) fixed in `@apollo/client@3.3.14` (just published to npm). Please let us know if you (don't) see an improvement after updating!
@benjamn thanks for the quick update! We tried the new version of `getMarkupFromTree` with `renderPromises.clear()` before the 3.3.14 release (we just copied the function implementation with the fixes).
Patched getDataFromTree.js file:
```js
import React from 'react';
import { getApolloContext } from '@apollo/client/react/context';
import { RenderPromises } from '@apollo/client/react/ssr/RenderPromises';

RenderPromises.prototype.clear = function clear() {
  this.queryPromises.clear();
  this.queryInfoTrie.clear();
};

export function getMarkupFromTree({
  tree,
  context = {},
  // The rendering function is configurable! We use renderToStaticMarkup as
  // the default, because it's a little less expensive than renderToString,
  // and legacy usage of getDataFromTree ignores the return value anyway.
  renderFunction = require('react-dom/server').renderToStaticMarkup // eslint-disable-line
}) {
  const renderPromises = new RenderPromises();

  function process() {
    // Always re-render from the rootElement, even though it might seem
    // better to render the children of the component responsible for the
    // promise, because it is not possible to reconstruct the full context
    // of the original rendering (including all unknown context provider
    // elements) for a subtree of the original component tree.
    const ApolloContext = getApolloContext();
    return new Promise((resolve) => {
      const element = React.createElement(
        ApolloContext.Provider,
        { value: { ...context, renderPromises } },
        tree,
      );
      resolve(renderFunction(element));
    }).then((html) => {
      return renderPromises.hasPromises()
        ? renderPromises.consumeAndAwaitPromises().then(process)
        : html;
    }).finally(() => {
      renderPromises.clear();
    });
  }

  return Promise.resolve().then(process);
}

export function getDataFromTree(tree, context = {}) {
  return getMarkupFromTree({
    tree,
    context,
    // If you need to configure this renderFunction, call getMarkupFromTree
    // directly instead of getDataFromTree.
    renderFunction: require('react-dom/server').renderToStaticMarkup // eslint-disable-line
  });
}
```
But unfortunately, the memory leak is still present. The worst thing is that I can't reproduce it locally :cry:
Here are the heap dumps that we were lucky to get before the process was killed by OOM (exit code 137):
- https://drive.google.com/file/d/11tHZXTBmkRw_Rcs4HQ0y3onYNlhYQNbr/view?usp=sharing
- https://drive.google.com/file/d/1r52a9BTg5tMmXL7yY717jaTRHB3Zg3zR/view?usp=sharing
Please let me know if we can provide more information about this issue. Anyway, thanks for the quick support!
I was looking through the objects inside the heapdump, filtered them by "Retained size", and found one strange thing that may help: a significant amount of space is retained by ObservableQuery instances, and one of them is much bigger than the others (5 487 912 bytes).
Here is the flagsValuesQuery that was inside that object (maybe this will help):
```graphql
query flagsValuesQuery {
optionalFlags (names: [
"PRODUCT_ADV_CATALOG_DEBUG"
"CONTENT_RECOMMENDED_NEW"
"CONTENT_DISCOUNT_ITEM_ANIMATION"
"CONTENT_HIDE_TESTIMONIALS"
"CONTENT_PORTABLE_FAVORITE_ENABLED"
"CONTENT_FILTERS_DISABLE_EMPTY"
"PORTABLE_AB_EXAMPLE"
"PORTABLE_ANOTHER_TEST_AB"
"CONTENT_HIDE_PHONE_PORTABLE"
"CRM_NEW_SOCIAL_AUTH"
"CRM_CHECK_URL_FOR_PROM_DOMAIN"
"CONTENT_PROJECTS_DROPDOWN"
"CONTENT_KABANCHIK_LP"
"CONTENT_WHATELSE_BLOCK_SIMILAR_IMAGE"
"CONTENT_MIXED_SERP_IN_CATEGORY"
"CONTENT_MIXED_SERP_IN_CATEGORY_WITH_TOP15"
"CONTENT_SHOW_CITY_IN_LISTING"
"CONTENT_ENABLE_DELIVERY_TO_CITY_AT_LISTINGS"
"CONTENT_JUSTIN_AB"
"CONTENT_MICRODATA_CATALOG"
"CONTENT_JSONLD_CATALOG"
"CONTENT_NY2020_HEADER_DECOR"
"PRODUCT_ADV_GOOGLE_TRACKING_SPA"
"PRODUCT_ADV_RTB_HOUSE_TRACKING_SPA"
"PRODUCT_ADV_RTB_HOUSE_TRACKING_PC_SPA"
"PRODUCT_ADV_CRITEO_TRACKING_SPA"
"PRODUCT_ADV_CRITEO_TRACKING_PC_SPA"
"PRODUCT_ADV_CRITEO_CATEGORY_TRACKING_SPA"
"PRODUCT_ADV_CRITEO_CATEGORY_TRACKING_PC_SPA"
"PRODUCT_ADV_PRIMELEAD_SPA"
"PRODUCT_ADV_PRIMELEAD_GTE_PRICE"
"PRODUCT_ADV_CROP_JS"
"PRODUCT_ADV_YOTTOS_SPA"
"PRODUCT_ADV_CRITEO_DISABLED"
"PS_1852_TRACK_UNADVERTISED_PRODUCTS"
"CONTENT_HAMSTER_ANIMATION"
"CONTENT_INDEX_TAG_PAGES"
"CORE_PROM_SHIPPING_NP_STEPS_BANNER_AB"
"CONTENT_ANIMATION"
"PRODUCT_ADV_YANDEX_DIRECT_TRACKING"
"MRD_COMPANIES_STATS"
"CONTENT_KEYWORDS_SEARCH_PHRASE_AB"
"CONTENT_TAG_ADV_CPA_ONLY_AB"
"CONTENT_HEADER_GIFT_ICON"
"CONTENT_PRODUCT_CARD_VIEWS_COUNT"
"CONTENT_REGIONAL_REDIRECT_ENABLED"
"CONTENT_NEW_ADULT_WARNING"
"CONTENT_DISABLE_VARIATION_DEBOOST_AB"
"MOBILE_BANNER_PORTABLE"
"MOBILE_BANNER_PORTABLE_AB"
"MOBILE_NEW_BANNER_V0"
"MOBILE_NEW_BANNER_V1"
"MOBILE_NEW_BANNER_V2"
"SATU_PROTECT_BUYERS_AB"
"SATU_REGIONAL_BOOSTS_AB"
"SATU_BOOST_MAX_SCORE_DEVIATION_AB"
"CONTENT_MEGA_FILTERS_LINKS"
"CONTENT_ABSOLUTE_MULTIMATCH_AB"
"SATU_EXTEND_REGION_SEARCH_RESULT_AB"
"PROM_SHIPPING_REMOVE_FREE_NP"
"CORE_PROM_SHIPPING"
"CORE_PROM_SHIPPING_MIN_COST_300"
"CORE_PROM_SHIPPING_MIN_COST_500"
"CORE_UKRPOSHTA_PROM_FREE_SHIPPING"
"CONTENT_PROMO_PANEL_ENABLED"
"CONTENT_PROMO_PANEL_PORTABLE_ENABLED"
"CONTENT_MAIN_PAGE_BANNERS_PORTABLE"
"CONTENT_NEW_LISTING_TOP_PANEL"
"CONTENT_MEGA_FILTERS_CATEGORIES"
"CORE_PRODUCT_BLOCKS_CAROUSEL_BUTTONS"
"CONTENT_ADULT_CLASSIFIER_SERVICE_AB"
"CONTENT_MEGA_FILTERS_REGIONS"
"CONTENT_ADV_WEIGHT_BOOST_IN_PAID_LISTINGS_AB"
"CORE_NEW_OPINION_CREATE_PAGE"
"CRM_ENABLE_NEW_2020_PACKAGES"
"CONTENT_CLASSIFIED_USER_MENU"
"CONTENT_SEARCH_MAIN_WORD_AB"
"CONTENT_CLEAN_SEARCH_WITH_ADV_AB"
"CONTENT_NEW_LISTINGS_AB"
"CONTENT_PRODUCT_IS_COPY_AB"
"CONTENT_NEW_ADVERT_WEIGHT_AB"
"CONTENT_SEARCH_MAIN_ENTITY_AB"
"CONTENT_MAIN_WORD_PRICE_SORT_AB"
"CONTENT_DISABLE_SUGGEST_IN_PREMIUM"
"CONTENT_ENABLE_HISTORY_IN_SEARCH_AUTOCOMPLETE"
"CONTENT_SEASON_CATEGORIES_SEARCH_AUTOCOMPLETE"
"CONTENT_SEARCH_THROUGH_FILTER_SECTIONS"
"CONTENT_DESKTOP_SPA_AB"
"CONTENT_TAG_WITH_SEARCH_SHOULD_MATCH_AB"
"CONTENT_TAG_WITH_SEARCH_LOGIC_AB"
"CONTENT_PORTABLE_TWIST_FILTER"
"CONTENT_PRICE_CHANGE_CHART_AB"
"CONTENT_HELP_LINKS"
"CONTENT_PERSONAL_FEED"
"CONTENT_ATTRS_MATCH_AB"
"CONTENT_SHOP_IN_SHOP"
"GOTCHA_ENABLED"
"CONTENT_REMARKETING_FB"
"CONTENT_ONTHEIO_ENABLED"
"CONTENT_POWER_ENABLED"
"CONTENT_PRODUCT_GROUP_FILTER"
"CONTENT_PRODUCT_VIEWS_COUNT_PORTABLE"
"CONTENT_HIDE_BREADCRUMBS_COUNTER"
"CONTENT_COMPANIES_LIST"
"CONTENT_GENDER_PERSONAL_FEED_AB"
"CONTENT_PRODUCT_PAGE_RECOMMENDATION_AB"
"CONTENT_PRODUCT_PAGE_RECOMMENDATION_V2_AB"
"SATU_PROTECT_BUYERS_BLOCK_ON_TOP_AB"
"SATU_HIDE_PHONE_NUMBER_AB"
"SATU_SHOW_WARNING_ON_PHONE_CLICK_AB"
"CONTENT_VARIATION_CHECK_AB"
"CONTENT_HIDE_FOOTER_COUNTRIES_BLOCK"
"CORE_ADAPTIVE_SC_IN_IFRAME"
"CORE_ADAPTIVE_SC_DESKTOP_CHECKOUT"
"CORE_ADAPTIVE_SC_CHECKOUT_AB"
"CORE_EVO_CREDIT"
"CORE_4813_MASTER_CARD_CAMPAIGN"
"SATU_YOLO_ADV_ONLY_AB"
"CLERK_TEST_API"
"CONTENT_DISABLE_YAMETRICA"
"CORE_NEW_OPINION_CATALOG_BOOST_PENALTY_AB"
"CONTENT_ORDER_SUCCESS_BOOST_AB"
"CONTENT_ADV_ONLY_AB"
"CONTENT_ELASTIC_CATS_AB"
"CORE_VALIDATE_OPINIONS_ONCE_AGAIN"
"CRM_BY_NEW_DOCUMENTS_UI"
"CONTENT_HIDE_SELLER_CERTIFICATION"
"CORE_AUTH_ON_CATALOG"
"SATU_MEGA_DISCOUNT"
"TIU_SHOW_POCHTA_ROSSII_DELIVERY_PRICE"
"CONTENT_MAIN_PAGE_NEW_JOIN_NOW_TEXT"
"PRODUCT_ADV_PS_1385"
"CORE_4669_BLACK_FRIDAY_LANDING_LABEL"
"DEAL_CAT_AND_SEARCH_ADV_ONLY_AB"
"CONTENT_FETCH_MATCH_PHRASE_FROM_DISMAX_AB"
"MP_4366_EVOPAY_BOOST"
"MP_4222_ACTIVE_FILTERS"
"MP_4375_LANG_REMINDER_PANEL"
"CONTENT_MODELS_AB"
"CONTENT_MODELS_CATEGORY_AB"
"MP_4376_DISABLE_REGION_DELIVERY_CHECKBOX"
"CORE_4827_ERROR_POPUP"
"CORE_4894_FB_TRACKING_ADD_TO_CART"
"TIU_BUY_BUTTON_AB"
"MP_4586_BUY_BUTTON_IN_PRODUCT_CARD_AB"
"MP_4535_HIDE_FAST_LINKS_ON_MAIN_PAGE"
"MP_4519_NEW_CONVERSION_BLOCKS_VIEW"
"CORE_EVO_WALLET"
"TIU_DELIVERY_AND_PAYMENT_ARROW_LIST_DESIGN_AB"
"CONTENT_CATALOG_SLOWPOKE_AB"
"CORE_5114_FORBIDDEN_MAILBOXES_HINT"
"MP_4647_CATEGORY_SEARCH_BUTTON"
"TIU_500_NEW_LISTING_TAGS_LOGIC"
"MP_4536_TOP_TAGS_BY_CATEGORY"
"TIU_514_CALLBACK_BUTTON"
"CORE_5185_AUTH_POPUP_ON_FAVORITES"
"MP_4787_MEGA_MENU"
"MP_4864_HEADER_SEARCH_V2"
"MP_4776_MOBILE_MULTI_ACCOUNT"
"SATU_625_ENABLE_GOOGLE_ADSENSE"
"CORE_5523_VISA_CAMPAIGN"
"MP_4415_ADVANTAGES_SLIDER"
"MP_4886_MERCHANT_LOGO_NAME_TO_SHOP_IN_SHOP_AB"
"TIU_631_ENABLE_GOOGLE_ADSENSE"
"MP_4869_SCROLLABLE_CONVERSION_BLOCKS"
"SATU_617_ENABLE_AD_UNIT_HEADER"
"SATU_618_ENABLE_AD_UNIT_RELATED"
"SATU_619_ENABLE_AD_UNIT_PAYMENT"
"SATU_620_ENABLE_AD_UNIT_COMPANY_OPINION"
]){
id
name
value
}
optionalValues(names: [
"CONTENT_DEFAULT_PER_PAGE_24_48_90_96"
"CONTENT_PRODUCT_PER_PAGE_TAG_24_48_90_96"
"CONTENT_PRODUCT_PER_PAGE_SEARCH_24_48_90_96"
"CONTENT_DISCOUNT_ITEM_ANIMATION_PERCENT"
"CONTENT_DISCOUNT_ITEM_ANIMATION_MULTIPLE"
"CONTENT_DISCOUNT_ITEM_ANIMATION_VIEW"
"CONTENT_HAMSTER_ANIMATION_DISPLAY_FREQUENCY"
"CONTENT_MIN_ITEMS_TAGS_BLOCK"
"CONTENT_TAGS_BLOCK_QUANTITY"
"CONTENT_MIN_HITS_BEFORE_SUGGESTIONS_NEEDED"
"CONTENT_MAIN_PAGE_BANNERS_ROTATE_DELAY_MS"
"CONTENT_FILTER_CATEGORY_COUNT"
"CONTENT_EVALUATE_LINK"
"CONTENT_LISTING_BANNER_STEP"
"MP_4375_LANG_REMINDER_PANEL_TIMEOUT"
"CORE_MIN_RATING_TO_SHOW_LIKE"
"MP_4601_SHOW_AB_TESTS"
"MP_4507_ATTR_FILTERS_SIZE"
"MP_4549_PROMO_POPUP_DELAY"
"MP_4549_PROMO_POPUP_COOKIE_EXPIRES_TIME"
]){
id
name
value
}
}
```
@AlexMost have you had a chance to try `@apollo/client@3.3.14` directly, instead of manually patching it in?
@AlexMost Adding to @hwillson's comment, it's possible you're still using the v3.3.13 code in Node.js despite changing the getDataFromTree.js module, since Node uses the CommonJS bundles rather than separate ESM modules. Worth double-checking after `npm i @apollo/client@3.3.14`, I think!
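One quick way to double-check that (not from the thread) is to print which copy of the package Node actually resolves, since the CommonJS bundle is what runs on the server:

```js
// Sanity check: print the path of the @apollo/client copy Node.js resolves.
// If an old version is hoisted somewhere else, this will point at it.
console.log(require.resolve('@apollo/client'));

// From a shell, the installed version tree can also be inspected with:
//   npm ls @apollo/client
```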
@benjamn @hwillson OK, I will double-check that. But I definitely used the patched version of getDataFromTree. I will let you know when I have results with 3.3.14.
Tried 3.3.14; unfortunately it still leaks :crying_cat_face:
Heapdumps:
- https://drive.google.com/file/d/17JIq8fkbfWaSzYKWaebDOjrHCQQRahr6/view?usp=sharing
- https://drive.google.com/file/d/1DUnZRqss2eE1mLg7iyKs14NcM9DGbjW5/view?usp=sharing
@AlexMost If you have a chance to try `@apollo/client@3.4.0-beta.22`, it contains 496b3ec2da33720d6444c4b4a7d4b44a4d994782, where I made the `RenderPromises` class more defensive about not accepting new data after `renderPromises.stop()` is called.
This may go without saying, but… when you take your heap dumps, is there any chance the most recent `getDataFromTree` is still in progress (hasn't called `renderPromises.stop()` yet)? If so, pausing before `renderPromises.stop()` has been called and taking a heap snapshot may show memory that's only temporarily retained by that particular `RenderPromises` object. In that case, I would recommend taking the heap snapshot when you're sure no `getDataFromTree` is currently active, and clicking the 🗑️ button first to make sure any unreachable memory has been collected.
Hi @benjamn! Thanks for the update, I'm going to test `@apollo/client@3.4.0-beta.22` ASAP.
As for the :wastebasket: button, we use the `heapdump` package to collect heap dumps from Kubernetes pods; we can't actually use DevTools there.
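For context, a minimal sketch of how the `heapdump` package can be used inside a pod; the file path is just an example, while the SIGUSR2 handler and `writeSnapshot` are the package's documented behaviour:

```js
// Requiring the module registers a SIGUSR2 handler, so from inside the pod
//   kill -USR2 <pid>
// writes heapdump-<timestamp>.heapsnapshot to the working directory.
const heapdump = require('heapdump');

// Snapshots can also be written programmatically, e.g. on demand:
heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot', (err, filename) => {
  if (err) console.error('heap snapshot failed:', err);
  else console.log('heap snapshot written to', filename);
});
```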
I have a memory leak in my app too whose cause I can't seem to detect, but for information, I tried beta 22 and it didn't solve it. :(
@AlexMost any luck with https://github.com/apollographql/apollo-client/issues/7942#issuecomment-817542232?
@AlexMost were you able to test things out with `@apollo/client@beta`? (currently `3.4.0-rc.6`)
Hi @benjamn,
We are also following your recommendation.
However, the v3.4.0-beta.22 version also has a memory leak issue, and I found that useQuery is causing it.
But every implementation is different, so let me explain the situation first.
Situation 1. Use on the client side
- Open the page and render a new component
- useQuery returns the result immediately and begins the watchQuery for subsequent changes
- Move to another page, and useQuery will call `queryData.cleanup()` via the useEffect cleanup function
- Unnecessary references to the ApolloClient are released
However, the same flow can cause problems in SSR
Situation 2. Use on the server side
- Render the page with ReactDOMServer.renderToString (which includes the component)
- useQuery returns the result immediately and begins the watchQuery for subsequent changes
- The rendered HTML is returned to the client browser for hydration
- But `queryData.cleanup()` is never called, because useEffect never runs during server-side rendering
- So the unnecessary references to the ApolloClient remain, and this caused a memory leak on the Next.js server
For this reason, there is no need to watch queries during SSR, so I fixed this problem and opened a PR: https://github.com/apollographql/apollo-client/pull/8390. When I applied this PR to our project, the memory leak was gone.
(However, it is difficult to write a test case, because there is no public method to verify that the watchQuery has been torn down.)
Please check it out! And have a good day :)
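For illustration (not from the thread), a minimal sketch showing that effects, and therefore their cleanup functions, never run under ReactDOMServer.renderToString, which is why `queryData.cleanup()` is skipped during SSR:

```js
// Minimal sketch: useEffect (and its cleanup) never runs during renderToString.
import React, { useEffect } from 'react';
import { renderToString } from 'react-dom/server';

function Demo() {
  useEffect(() => {
    console.log('effect ran');               // never logged on the server
    return () => console.log('cleanup ran'); // so cleanup never runs either
  }, []);
  return React.createElement('div', null, 'hello');
}

// Only the markup is printed; neither "effect ran" nor "cleanup ran" appears.
console.log(renderToString(React.createElement(Demo)));
```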
I also have the same issue. I upgraded apollo/client to 3.4.0-rc.20 but the issue still exists.
Hi @hwillson! I haven't had a chance to check newer versions of Apollo Client yet; I'm going to check this week or next and will let you know the results ASAP.
I'm having the same issue with `@apollo/client@~3.4.9`. Is there any known workaround, or a tutorial on a better way to set up Apollo SSR that doesn't produce a memory leak?
Perhaps this can help: I ran into a possibly related issue using Next.js, where I set things up so that the server does not initialize a new client for each request except within the `getServerSideProps` and `getStaticProps` functions (because I wanted to see if I could get things to work that way). The idea was that in `getServerSideProps` and `getStaticProps` we clearly need a fresh ApolloClient instance for each request, but within the `App` where `ApolloProvider` is rendered I wanted the `client` prop to be a client instance that's created only once (call this the ApolloClient with the frontend configuration, i.e., `ssrMode: false`).
However, SSR requires creating an instance of an ApolloClient with the frontend configuration to render the app server-side, even though it doesn't actually use this particular instance for anything, because queries are executed as an effect, not during initial rendering. This also means that, in my setup, even though the same ApolloClient instance on the server side is reused across requests, it holds no cached data and therefore can't leak data from one user's request to another's, which I was at first worried about. OK, I was pleased.
However, as I inspected the client's `cache` property across requests using a debugger, I noticed that the `watches` set kept growing indefinitely: for every page load, all the queries that were created during initial rendering were added to this `watches` set. This was my configuration of the ApolloClient meant for the frontend but created by the server for SSR, where I was getting the memory leak:
```js
import { ApolloClient, InMemoryCache, from } from '@apollo/client';
// errorLink and httpLink are defined elsewhere in the app

export const apolloClient = new ApolloClient({
  ssrMode: typeof window === 'undefined',
  link: from([errorLink, httpLink]),
  cache: new InMemoryCache(),
  credentials: 'same-origin',
});
```
But when I changed it to the following, the `watches` stopped accumulating:
```js
export const apolloClient = new ApolloClient({
  ssrMode: typeof window === 'undefined',
  link: from([errorLink, httpLink]),
  cache: new InMemoryCache(),
  // In SSR, disable the cache entirely (this Apollo client is meant only for the frontend)
  // so that our set of watched queries doesn't grow indefinitely.
  defaultOptions: typeof window === 'undefined' ? {
    watchQuery: {
      fetchPolicy: 'no-cache',
    },
  } : undefined,
  credentials: 'same-origin',
});
```
In short, this may be entirely different from what this GH issue is about, but that's not really clear. I'm finding that an ApolloClient instance may be used across requests so long as the cache is disabled with `fetchPolicy: 'no-cache'`.
That worked! Thanks @dchenk
Alright, back again. Using `no-cache` for the watchQuery fixes the memory leak but breaks SSR. Still trying things out, but I wanted to add that it's not resolved.
@dchenk Are you clearing/resetting that `apolloClient` instance somehow between requests? I just noticed that `ObservableQuery#reset` does not call `this.watches.reset()` as it probably should, so that might be an avenue to investigate.
In case it needs saying: using `no-cache` is a reasonable diagnostic/workaround, but not a solution to the memory leak problem.
Tried 3.4.10, but unfortunately we still have a memory leak in production...
Tried `fetchPolicy: no-cache`; it reduces memory and CPU consumption significantly, but the memory leak is still present :cry:
Currently we have a test setup with Apollo 3 and mirrored traffic from production, so I can check any hypothesis very quickly; any suggestion or help will be much appreciated!
Looking forward to trying 3.4.14, hope this version will solve the issue!
UPD: unfortunately that didn't help (3.4.15).
`client.cache.reset({ discardWatches: true });`
Same issue after updating Apollo Client from 2.x to 3.4.10, but only if elastic-apm-node is used; with APM turned off there is no leak. elastic-apm-node with Apollo Client 2.x does not leak either. We are using the graphql HOC and hooks.