Logfire browser not working with Electron
Description
I'm working on an Electron-based project and sending logs with Logfire to a proxy via `@pydantic/logfire-browser`, configured like this:
```js
logfire.configure({
  traceUrl,
  environment: isDev ? 'development' : 'production',
  serviceName: app.getName(),
  serviceVersion: app.getVersion(),
  instrumentations: [getWebAutoInstrumentations()], // <- from @opentelemetry/auto-instrumentations-web
});
```
I noticed that, at first, logs from development mode (when the app was not packaged yet) did not show up in the platform. Digging into the error, I found that `XMLHttpRequest` was being used in `@opentelemetry/otlp-exporter-base` but was not defined (probably because of the Electron environment).
This error started happening in production (the packaged app) as well, and a patch that replaced the `XMLHttpRequest` usage with a regular `fetch` call was enough to make it work again. I wanted to know if this is a bug or if it is intentional and something different needs to be done for Electron projects.
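For context, the missing global is easy to confirm from Electron's main process (a quick diagnostic sketch, not part of the patch):

```ts
// Run from Electron's main process (Node context).
// XMLHttpRequest is a browser global and is not defined here,
// while fetch is built in since Node 18 / recent Electron.
console.log(typeof (globalThis as any).XMLHttpRequest); // 'undefined'
console.log(typeof globalThis.fetch); // 'function'
```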
Patch:

```diff
diff --git a/node_modules/@opentelemetry/otlp-exporter-base/build/src/transport/xhr-transport.js b/node_modules/@opentelemetry/otlp-exporter-base/build/src/transport/xhr-transport.js
index 47dc7bf..eefa3ad 100644
--- a/node_modules/@opentelemetry/otlp-exporter-base/build/src/transport/xhr-transport.js
+++ b/node_modules/@opentelemetry/otlp-exporter-base/build/src/transport/xhr-transport.js
@@ -23,62 +23,58 @@ class XhrTransport {
constructor(_parameters) {
this._parameters = _parameters;
}
- send(data, timeoutMillis) {
- return new Promise(resolve => {
- const xhr = new XMLHttpRequest();
- xhr.timeout = timeoutMillis;
- xhr.open('POST', this._parameters.url);
+ async send(data, timeoutMillis) {
+ try {
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), timeoutMillis);
+
const headers = this._parameters.headers();
- Object.entries(headers).forEach(([k, v]) => {
- xhr.setRequestHeader(k, v);
+ const response = await fetch(this._parameters.url, {
+ method: 'POST',
+ headers: headers,
+ body: data,
+ signal: controller.signal
});
- xhr.ontimeout = _ => {
- resolve({
+
+ clearTimeout(timeoutId);
+
+ if (response.status >= 200 && response.status <= 299) {
+ api_1.diag.debug('Fetch request success');
+ return {
+ status: 'success',
+ };
+ } else if ((0, is_export_retryable_1.isExportRetryable)(response.status)) {
+ const retryAfter = response.headers.get('Retry-After');
+ return {
+ status: 'retryable',
+ retryInMillis: (0, is_export_retryable_1.parseRetryAfterToMills)(retryAfter),
+ };
+ } else {
+ return {
status: 'failure',
- error: new Error('XHR request timed out'),
- });
- };
- xhr.onreadystatechange = () => {
- if (xhr.status >= 200 && xhr.status <= 299) {
- api_1.diag.debug('XHR success');
- resolve({
- status: 'success',
- });
- }
- else if (xhr.status && (0, is_export_retryable_1.isExportRetryable)(xhr.status)) {
- resolve({
- status: 'retryable',
- retryInMillis: (0, is_export_retryable_1.parseRetryAfterToMills)(xhr.getResponseHeader('Retry-After')),
- });
- }
- else if (xhr.status !== 0) {
- resolve({
- status: 'failure',
- error: new Error('XHR request failed with non-retryable status'),
- });
- }
- };
- xhr.onabort = () => {
- resolve({
+ error: new Error('Fetch request failed with non-retryable status'),
+ };
+ }
+ } catch (error) {
+ if (error.name === 'AbortError') {
+ return {
status: 'failure',
- error: new Error('XHR request aborted'),
- });
- };
- xhr.onerror = () => {
- resolve({
+ error: new Error('Fetch request timed out'),
+ };
+ } else {
+ return {
status: 'failure',
- error: new Error('XHR request errored'),
- });
- };
- xhr.send(data);
- });
+ error: new Error(`Fetch request failed: ${error.message}`),
+ };
+ }
+ }
}
shutdown() {
// Intentionally left empty, nothing to do.
}
}
/**
- * Creates an exporter transport that uses XHR to send the data
+ * Creates an exporter transport that uses fetch to send the data
* @param parameters applied to each request made by transport
*/
function createXhrTransport(parameters) {
```
Python, Logfire & OS Versions, related packages (not required)
"@opentelemetry/auto-instrumentations-web": "^0.49.0",
"@pydantic/logfire-browser": "^0.9.0",
"electron": "^37.2.6"
I wonder if the Node.js logfire package would be better for an Electron app than the `@pydantic/logfire-browser` browser package (looking at the docs here)? @bruno-borges-2001 are there any features missing from the Node.js package, or other reasons it does not work with Electron?
Or perhaps using both (sketched below):
- Main process (Node context): use the Node.js `logfire` package.
- Renderer process (browser context): use `@pydantic/logfire-browser`.
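A rough sketch of that split (the service names and proxy URL are placeholders, and the exact import style and options each configure call accepts should be checked against the package docs):

```ts
// main.ts — Electron main process (Node context)
import logfire from '@pydantic/logfire-node';

logfire.configure({
  serviceName: 'my-app-main', // placeholder
  // token / advanced.baseUrl as discussed further down in this thread
});
```

```ts
// renderer.ts — Electron renderer process (browser context)
import * as logfireBrowser from '@pydantic/logfire-browser';

logfireBrowser.configure({
  traceUrl: 'https://my-proxy.example.com/v1/traces', // placeholder proxy endpoint
  serviceName: 'my-app-renderer', // placeholder
});
```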
@petyosi I thought you might be able to provide a suggestion on this issue since you previously helped me with a somewhat related query in https://github.com/pydantic/logfire-js/issues/32.
@dhimmel I think the main feature the Node.js package is missing would be the ability to pass the trace URL instead of the token. I could not find a way to do that; maybe I missed something, but if that feature is there, then going with the logfire lib could be the way to go.
Ah okay, so the logfire Node.js library currently hardcodes the following:

```ts
const TRACE_ENDPOINT_PATH = 'v1/traces'
...
traceExporterUrl: `${baseUrl}/${TRACE_ENDPOINT_PATH}`,
```

without the ability to specify the trace URL in `LogfireConfigOptions` or `AdvancedLogfireConfigOptions`.
`logfire-browser`, on the other hand, does allow setting `traceUrl` in its `LogfireConfigOptions`.
One solution might be to use the raw OpenTelemetry packages as seen in this example, but it would be preferable if the logfire packages could work for instrumenting an Electron app where the client traces are sent through a proxy server, such that the Logfire write token is shielded from the distributed Electron app.
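To make the proxy idea concrete, here is a minimal sketch of such a forwarding server using only Node built-ins; the upstream Logfire URL, the Authorization header format, and the env var name are assumptions to verify against the Logfire docs:

```ts
import http from 'node:http';

// Assumed Logfire OTLP ingest endpoint and auth scheme — verify before use.
const UPSTREAM = 'https://logfire-api.pydantic.dev/v1/traces';
const WRITE_TOKEN = process.env.LOGFIRE_TOKEN ?? '';

http
  .createServer(async (req, res) => {
    // Only forward OTLP trace posts; the write token never leaves the server.
    if (req.method !== 'POST' || req.url !== '/v1/traces') {
      res.writeHead(404).end();
      return;
    }
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);

    const upstream = await fetch(UPSTREAM, {
      method: 'POST',
      headers: {
        'content-type': req.headers['content-type'] ?? 'application/json',
        authorization: WRITE_TOKEN,
      },
      body: Buffer.concat(chunks),
    });
    // Mirror the upstream status so the exporter's retry logic still works.
    res.writeHead(upstream.status).end();
  })
  .listen(4318);
```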
@dhimmel thanks for pointing this out. To clarify, the problem above is in the opentelemetry-js package itself. They seem to be aware of it, as there's an experimental package used in the example you linked: https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/exporter-trace-otlp-http. I will see if it makes sense for the browser instrumentation to use it instead of the current one.
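For anyone wiring it up manually in the meantime, a sketch along the lines of that linked example (the proxy URL is a placeholder, and the provider setup assumes a recent OTel JS SDK):

```ts
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Send spans to a proxy URL instead of Logfire directly (placeholder URL).
const exporter = new OTLPTraceExporter({
  url: 'https://my-proxy.example.com/v1/traces',
});

// Recent OTel JS SDKs take span processors in the constructor;
// older 1.x SDKs use provider.addSpanProcessor(...) instead.
const provider = new WebTracerProvider({
  spanProcessors: [new BatchSpanProcessor(exporter)],
});
provider.register();
```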
Thanks @petyosi, we'll keep an eye on https://github.com/pydantic/logfire-js and stay tuned here for any workarounds in the meantime.
@petyosi coming back to this issue, I'm thinking proper Electron app support is really just blocked by `TRACE_ENDPOINT_PATH` and `METRIC_ENDPOINT_PATH` being hardcoded in the logfire-node package (renamed from just logfire in prior comments). Would it be possible to make these values configurable, so that logfire-node could be used with a proxy to enable client-side execution in untrusted environments?
@dhimmel the URL paths are indeed hard-coded (they are well-established OTel conventions), but you have control over the logfire base URL - https://github.com/pydantic/logfire-js/blob/fbeda12a66c51c07623d457c7370a62266e75ad4/packages/logfire-node/src/logfireConfig.ts#L128, which should allow you to point the SDK at your own proxy.
Okay I finally got logfire-node working with the proxy using a config like:

```ts
import { DiagLogLevel } from '@opentelemetry/api';
import logfire from '@pydantic/logfire-node';

const baseUrl = 'https://custom-base-url.net';

logfire.configure({
  advanced: {
    baseUrl,
  },
  serviceName: 'logfire-node-proxy-test',
  sendToLogfire: true,
  token: 'dummy-token-to-leave-base-url-alone',
  console: true,
  diagLogLevel: DiagLogLevel.ALL,
});
```
Note that unless you supply a token, `baseUrl` ends up getting overwritten to a blank string via:

```ts
const baseUrl = !sendToLogfire || !token ? '' : logfireApi.resolveBaseUrl(process.env, cnf.advanced?.baseUrl, token)
```
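In other words, with `sendToLogfire: true` but no token, the ternary short-circuits (a toy illustration of that line, using placeholder values):

```ts
const sendToLogfire = true;
const token = undefined; // no token supplied
const customBaseUrl = 'https://custom-base-url.net';

// Mirrors the library's ternary: the custom baseUrl is silently dropped.
const baseUrl = !sendToLogfire || !token ? '' : customBaseUrl;
console.log(baseUrl); // ''
```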
> the URL paths are indeed hard-coded (they are well-established OTel conventions)

Should the proxy example use `/v1/traces` instead of `/client-traces` for compatibility with logfire-node?