Error Tracking with single-spa / micro frontends
Hey folks,
We are trying to get Datadog RUM working with micro frontends, where multiple micro frontend (MFE) applications run on the same page.
The setup we've tried:
- the single HTML file shared by all apps instantiates one Browser SDK instance with one Datadog application & service
- each MFE application is reflected as a service in Datadog and uploads source maps for its releases and service
- we disabled automatic view tracking; instead, each MFE app dispatches views manually to override the service attribute
But this does not seem to support multiple services on one page load: the application stats, which aggregate all services, would be skewed, since one page load emits multiple view events, and events descending from a view event, such as error events, get linked to whichever view event was dispatched last on that page.
So when one page loads 3 or 4 applications, each reflected as a service in Datadog, we would only be able to properly monitor one of them. It seems even worse for uncaught asynchronous errors, which bubble up to `window.onerror` and get logged under the currently connected service, where they do not belong and hence create noise.
This issue relates to #1225 and #1280
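For reference, a minimal sketch of the shared configuration described above, written as a plain options object rather than a live `datadogRum.init(...)` call (all values are placeholders; `trackViewsManually` is the real SDK option that disables automatic view tracking):

```typescript
// Sketch of the single shared RUM configuration from the setup above.
// Placeholders only; in the shell this object would be passed to
// datadogRum.init(...) from @datadog/browser-rum.
const sharedRumConfig = {
  applicationId: '<APP_ID>',      // one Datadog application for the whole page
  clientToken: '<CLIENT_TOKEN>',
  service: 'container',           // default service for the shared shell
  trackViewsManually: true,       // each MFE dispatches its own views on mount
};

export default sharedRumConfig;
```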
Hello @PaulKujawa,
Your analysis is good; you can have a look at https://github.com/DataDog/browser-sdk/issues/1280#issuecomment-1282259473 for our current state around sub apps.
In your case, I'd recommend you try to:
- only have a single service representing all MFE applications
- if MFE applications can have different versions, use a single version composed from the versions of the different MFE applications (e.g. appA@v123 / appB@v456 => singleService@v123+456)
Using the same `service` and `version` for the SDK configuration and the source map upload would allow linking the collected errors to their source code.
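The suggested composition could be automated in the container's build step; a minimal sketch, assuming versions follow a `vNNN` scheme (the `composeVersion` helper and `MfeRelease` shape are illustrations, not SDK APIs):

```typescript
interface MfeRelease {
  name: string;    // e.g. "appA"
  version: string; // e.g. "v123"
}

// Compose one version string for the single shared service,
// e.g. appA@v123 / appB@v456 => "v123+456". The same string must be used
// both in the SDK configuration and when uploading the source maps.
export function composeVersion(releases: MfeRelease[]): string {
  return 'v' + releases.map((r) => r.version.replace(/^v/, '')).join('+');
}
```

The container would then pass the composed string as `version` to the SDK and to the source map upload.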
This would involve some very undesirable and hard-to-achieve requirements:
- The container app would have to know all the MFE versions (to be able to compose the combined version)
- Whenever any app (container or MFE) has a new deployment, it would have to redeploy the source maps of all the apps, so the new combined version doesn't lose the other applications' source maps
Indeed, it is the only workaround I can think of as long as we don't have support for multiple sub apps on a single page view.
Is it possible to change the service inside `datadogRum.init#beforeSend`? If we can detect the proper application from the triggered error's info (e.g. the filename from the stack trace), that would work just fine too.
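To illustrate the proposal (and only that: as the maintainers note elsewhere in this thread, overriding `service` in `beforeSend` is not supported today), a hypothetical helper that maps a stack-trace filename back to a service; the bundle-to-service table is invented:

```typescript
// Hypothetical bundle-path => service mapping; every entry is invented.
const BUNDLE_TO_SERVICE: Record<string, string> = {
  'app-a': 'mfe-app-a',
  'app-b': 'mfe-app-b',
};

// Detect the owning micro frontend from an error's stack trace by looking
// for a known bundle path segment in the file URLs.
export function serviceFromStack(stack: string): string | undefined {
  for (const [bundle, service] of Object.entries(BUNDLE_TO_SERVICE)) {
    if (stack.includes(`/${bundle}/`)) {
      return service;
    }
  }
  return undefined;
}

// Where it would plug in if the SDK allowed mutating `service`:
//
// datadogRum.init({
//   beforeSend: (event) => {
//     if (event.type === 'error') {
//       const service = serviceFromStack(event.error.stack ?? '');
//       // event.service = service; // <-- not supported by the SDK today
//     }
//     return true;
//   },
// });
```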
Another solution is to make Datadog able to auto-fetch the source maps if they are public.
When a .js file has a source map, the map file is usually indicated at the end of the file, in a `//# sourceMappingURL=` comment. Another heuristic is to append a `.map` suffix to the file URL and try to fetch it.
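The two heuristics just described can be sketched as a small resolver; the function is illustrative, not an existing Datadog feature:

```typescript
// Resolve the source map URL for a JavaScript file using the two heuristics
// above: a trailing `//# sourceMappingURL=...` comment, or the conventional
// `<file>.js.map` fallback when no comment is present.
export function resolveSourceMapUrl(jsUrl: string, jsSource: string): string {
  const match = jsSource.match(/\/\/# sourceMappingURL=(\S+)\s*$/);
  if (match) {
    // The comment may contain a relative URL; resolve it against the JS file.
    return new URL(match[1], jsUrl).toString();
  }
  return `${jsUrl}.map`;
}
```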
> Is it possible to change the service inside `datadogRum.init#beforeSend`?
It is not something that we support for now; we may consider it, but it would require more thought on our side.
> Another solution is to make Datadog able to auto-fetch the source maps if they are public.
Do you mean on the SDK side? If yes, it is probably not something that we will want to invest in, given the potential complexity and performance impact.
> Do you mean on the SDK side? If yes, it is probably not something that we will want to invest in, given the potential complexity and performance impact.
I mean on Datadog's server side. After an error is sent, it is processed, right? The error stack contains a file reference (`at <file>`), that file may end with a `//# sourceMappingURL=` comment, and the referenced source map could be fetched and applied internally during that processing.
I have reached out to the team responsible for the source code integration; retrieving publicly exposed source maps is something they have considered, but it is not in their current priorities.
single-spa has a Kubernetes service called importmap-deployer that maintains the central importmap.json, referencing the latest JS files of each application. Each app's CI/CD pipeline communicates with this service over an HTTP API. One might be able to extend this API to also transmit the source maps and release version; the service could then either update Datadog directly or delegate that to another Kubernetes service.
But @fsmaia is right: the question would be how the browser runtime would receive this updated, composed version. The service taking on the role of the importmap-deployer would need to rebuild some JSON file with the new composed version, which the container app could load the same way it loads the importmap.json file.
Possible, but quite a workaround.
/edit: Also, going with only one service would mess up the issue-to-release mapping and make it more cumbersome to monitor statistics per application.
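Following that idea, the deployer's rebuild step might maintain a small version manifest next to importmap.json that the container loads at boot; the shape and helper below are entirely hypothetical:

```typescript
// Hypothetical manifest maintained by an importmap-deployer-like service.
interface VersionManifest {
  apps: Record<string, string>; // per-app versions, e.g. { appA: "v123" }
  composedVersion: string;      // e.g. "v123+456", used by the container for the SDK
}

// Rebuild the manifest after one app's CI/CD reports a new release,
// recomposing the combined version from all known app versions.
export function updateManifest(
  current: VersionManifest,
  app: string,
  version: string,
): VersionManifest {
  const apps = { ...current.apps, [app]: version };
  const composedVersion =
    'v' +
    Object.keys(apps)
      .sort()
      .map((name) => apps[name].replace(/^v/, ''))
      .join('+');
  return { apps, composedVersion };
}
```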
For reference, Sentry supports automatically fetching source maps. This is our biggest blocker for migrating off Sentry to Datadog.
https://docs.sentry.io/platforms/javascript/sourcemaps/
@bcaudan would it be possible to... do the merge & update of individual source maps on the Datadog side? Let's say we have 3 apps and one "composed" version with 3 source maps.
Now one application gets updated. We would like to create a new version but only provide the source maps of the updated app. Datadog would then carry over the other source maps from the latest release and ideally even compose the new version. This way we wouldn't need to introduce a central hub on the client side. Maybe it could be an optional flag one could set in the CLI when pushing source maps for a service. I know this is very specific and probably needs some abstraction, but perhaps it could be a compromise between demand and supply for the time being?
@7E6D4309-D2A9-4155-9D4F-287B8CDA14C1, can you please explain how fetching public source maps helps with the micro frontend setup? Would you not still need to compose one version in a central place? And publishing source maps is not considered an industry best practice.
@PaulKujawa thanks for the suggestion, we'll keep this use case in mind and evaluate possible approaches when we prioritise this topic.
We have the same problem. We are going to move to micro frontends using single-spa and will have to sacrifice source maps in Datadog RUM for this. It would be great if Datadog could support this use case!
@robmosca we actually got it working. We treat all our micro frontends as a monolith, centralising source maps on AWS and providing a composed version to Datadog as well as to our FE Datadog client. So we have one DD application and have built dashboards and query views that use the view path group to cover the applications owned by different teams.
I've never written a blog post, but if needed I can give it a try.
@PaulKujawa this sounds great! For error tracking we ended up keeping the Datadog RUM agent instance inside the container micro frontend and providing access to the `.addAction()` and `.addError()` functions to the other micro frontends, while injecting metadata to identify the specific micro frontend and version, which can later be used in monitors and dashboards to separate data for different teams.
I am not quite sure I understand how you solved the problem of source maps. If you end up writing a blog post, I would be the first to read it. Now that you mention it, we might also write a blog post about our approach!
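That approach (the container exposing `.addAction()`/`.addError()` with injected MFE metadata) might be wrapped roughly as below; the `RumLike` interface, the `createMfeRum` factory, and the context keys are illustrative, not SDK APIs:

```typescript
// Minimal surface of the RUM client that the container exposes to MFEs.
interface RumLike {
  addAction(name: string, context?: Record<string, unknown>): void;
  addError(error: unknown, context?: Record<string, unknown>): void;
}

// Wrap the container's single RUM instance so every event carries the
// originating micro frontend's name and version, which dashboards and
// monitors can later use to split data per team.
export function createMfeRum(
  rum: RumLike,
  mfe: { name: string; version: string },
): RumLike {
  const meta = { mfe: mfe.name, mfeVersion: mfe.version };
  return {
    addAction: (name, context) => rum.addAction(name, { ...meta, ...context }),
    addError: (error, context) => rum.addError(error, { ...meta, ...context }),
  };
}
```

Each MFE receives its own wrapped instance from the container, so caller code never needs to repeat the identifying metadata.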
@bcaudan: any news on the support of source maps for micro frontends in Datadog RUM? At the moment we are still considering dropping source maps in Datadog, but we are not really happy with that solution. We considered building the composite source maps for all micro frontends and uploading them to Datadog every time a micro frontend is released, but this goes against the main advantage of micro frontends (independent deployment) and introduces excessive complexity into the deployment pipelines. Is this on the roadmap?
Hey @robmosca, it is still an area that we want to support better; we have some further investigations planned in the short term to see how we could provide more help for those cases. We will keep you posted here.
MFEs should be first-class citizens, e.g. just like there's a dependency map in APM -> Service Map, MFEs and UI components should be part of that map.
We have been hooking into `window.DD_RUM` in our micro frontends to send custom actions via `.addAction`. This worked fine for us until recently, when some teams (each team owns a different page/RUM app in Datadog) updated the `@datadog/browser-rum` package from version 4.35.0 to 4.45.0.
Were there changes between these two versions that would cause `window.DD_RUM.addAction` to no longer send custom actions?
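For what it's worth, a defensive way to call the global from an MFE; the `DdRumGlobal` typing and the `sendMfeAction` helper are local sketches rather than the SDK's own types (`onReady` is only present with the async CDN snippet):

```typescript
// Minimal local typing for the global installed by the container's SDK setup.
interface DdRumGlobal {
  addAction?: (name: string, context?: Record<string, unknown>) => void;
  onReady?: (callback: () => void) => void; // only with the async CDN snippet
}

// Send a custom action from an MFE without assuming the SDK is loaded.
// Returns false when the global is absent so callers can queue or drop.
export function sendMfeAction(
  host: { DD_RUM?: DdRumGlobal },
  name: string,
  context?: Record<string, unknown>,
): boolean {
  const rum = host.DD_RUM;
  if (!rum) return false;
  const send = () => rum.addAction?.(name, context);
  if (rum.onReady) {
    rum.onReady(send); // wait until the CDN-loaded SDK is ready
  } else {
    send();
  }
  return true;
}
```

In the browser, `host` would simply be `window`.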
I see this issue has been open for a year now. Do you have any update on it, please?
@pilar1347 you should still be able to send custom actions via `window.DD_RUM.addAction`. If you still have an issue with it, please open a ticket (a way to reproduce your issue would help a lot).
@naserghiasisis We still have this use case in mind and still want to improve its support. We have not made significant progress on this topic yet, though.
Hi everyone :) I'm part of Datadog's RUM team. In order to better understand the needs around micro frontend support, we created a survey to gather your feedback. If you are concerned by this topic and want to help us create a better experience for you, could you take 5 minutes to fill it out? The link is below: https://app.ballparkhq.com/record/192df656-cf60-41f1-92d0-83d67d2eb694 Don't hesitate to reach out to me if you have any questions! Thank you!!
I don't have permission to access the link @StanBeckers
It's updated! Thanks for letting us know.
Hi again everyone!
Thank you to those of you who answered our survey; the feedback was extremely useful. We are currently in the technical exploration phase and are considering a few options. If you want to help us further, you can help us rank these options by completing the following survey: https://www.userinterviews.com/projects/WjBK2WmPlg/apply
Due to the nature of these technical explorations, we ask participants in this survey to sign an NDA, but the first 30 to complete it get $10 as a token of our gratitude :)
Looking forward to hearing from you !!
> @robmosca we actually got it working. We treat all our micro frontends as a monolith, centralising source maps on AWS and providing a composed version to Datadog as well as to our FE Datadog client. So we have one DD application and have built dashboards and query views that use the view path group to cover the applications owned by different teams.
> I've never written a blog post, but if needed I can give it a try.
Hi Paul, would you be able to share more details on your technical implementation of this please?
@StanBeckers @bcaudan any updates here? 🙏🏽