Export metrics API handler from "exporter-prometheus"
Problem
exporter-prometheus exposes its metrics endpoint on a port separate from the default app port. This requires extra work on the service side to reserve that port, and extra work on the scraper side to discover which port was assigned.
Solution
If the metric reader can expose a handler that returns the current metrics, it can be used directly within the main app's server. This avoids running a separate server just for the metrics endpoint and simplifies port discovery by reusing the main app's HTTP port.
Hi @Assem-Uber, thanks for reaching out. This feature already exists - PrometheusExporter#getMetricsRequestHandler().
You'll need to instantiate the PrometheusExporter with preventServerStart: true to avoid a server being started.
@pichlermarc that is awesome, I missed that in the config. Thanks for the quick reply!
@pichlermarc got a follow-up question. Is there a reason behind accepting an HTTP request/response pair as arguments to getMetricsRequestHandler instead of returning a promise that resolves to the metrics text? Since I'm using the handler in a Next.js project, it takes some hackish code to capture the response in a ServerResponse object and return it as a NextResponse.
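To illustrate the kind of adapter this requires today, here is a sketch using only Node built-ins. `handlerToText` is an illustrative name, not a real API: it bridges a `(req, res)`-style handler into a promise of the response body.

```javascript
// Adapt a (req, res)-style handler into a Promise<string> so the body
// can be returned from a framework that doesn't expose Node's
// http.ServerResponse directly.
function handlerToText(handler) {
  return new Promise((resolve) => {
    const chunks = [];
    // Minimal mock of http.ServerResponse: just enough surface for a
    // handler that calls setHeader/write/end.
    const mockRes = {
      statusCode: 200,
      setHeader() {},
      write(chunk) { chunks.push(chunk); },
      end(body) {
        if (body !== undefined) chunks.push(body);
        resolve(chunks.join(''));
      },
    };
    handler({ url: '/metrics', method: 'GET' }, mockRes);
  });
}
```

Usage would then be something like `const text = await handlerToText((req, res) => exporter.getMetricsRequestHandler(req, res));` — workable, but fragile since the mock has to guess which parts of ServerResponse the handler touches.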
The above approach wasn't straightforward to integrate. I tried the approach below and it works; can we have a similar public API for achieving this?
```js
const { resourceMetrics, errors } = await otelPrometheusExporter.collect();
const serializedMetrics = otelPrometheusExporter._serializer.serialize(resourceMetrics);
```
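Wrapped into a helper, that workaround might look like the sketch below. Note the caveat: `_serializer` is an internal field of PrometheusExporter, so this relies on a private API and may break between versions — which is exactly why a public method would help.

```javascript
// Sketch: collect current metrics and return them as Prometheus text.
// Relies on the private _serializer field (not a public API).
async function getMetricsText(exporter) {
  const { resourceMetrics, errors } = await exporter.collect();
  if (errors.length > 0) {
    console.warn('metrics collection errors:', errors);
  }
  return exporter._serializer.serialize(resourceMetrics);
}
```

In a Next.js route handler this could then be returned directly, e.g. `new NextResponse(await getMetricsText(otelPrometheusExporter), { headers: { 'Content-Type': 'text/plain' } })`.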
Yeah, I originally added this. I was using a custom server with Next.js at the time and used Express for my non-Next.js services, and I thought req/res would be the most convenient shape.
@weyert Can you explain how req/res is more convenient than the plain value?
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days.
Are you considering changes to the API? I can help if this is something we agree on.