Support receiving logs in Loki using OpenTelemetry OTLP
Is your feature request related to a problem? Please describe. I am running Grafana Loki inside a Kubernetes cluster, but I have some applications running outside the cluster, and I want to get log data from those applications into Loki without relying on custom APIs or file-based logging.
Describe the solution you'd like OpenTelemetry describes a number of approaches including using the OpenTelemetry Collector. The OpenTelemetry Collector supports various types of exporters and the OTLP exporter supports logs, metrics, and traces. Tempo supports receiving trace data via OTLP and it would be great if Loki also had support for receiving log data via OTLP. This way, people could run the OpenTelemetry Collector next to their applications and send logs into Loki in a standard way using the OpenTelemetry New First-Party Application Logs recommendations.
Currently, unless I am misunderstanding the Loki documentation, it seems the only ingestion API into Loki is its custom push API.
Details on the OTLP specification:
- OTLP/gRPC: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc
- OTLP/HTTP: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp
Describe alternatives you've considered There are a number of Loki clients that one can use to get logs into Loki, but they all seem to involve either the custom Loki push API or reading from log files. Supporting the OpenTelemetry Collector would allow following the OpenTelemetry New First-Party Application Logs recommendations.
Done: https://github.com/grafana/loki/pull/5363
1. Grafana OTLP log view (screenshot omitted)
2. Go client go.mod dependency:

```
go.opentelemetry.io/collector/model v0.44.0
```
Demo Go client code:

```go
package main

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	"go.opentelemetry.io/collector/model/otlpgrpc"
	"go.opentelemetry.io/collector/model/pdata"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func TestGrpcClient(t *testing.T) {
	grpcEndpoint := "localhost:4317"

	// Dial the OTLP/gRPC endpoint without TLS.
	conn, err := grpc.Dial(grpcEndpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
	require.NoError(t, err)
	defer conn.Close()

	client := otlpgrpc.NewLogsClient(conn)
	request := makeRequest()
	_, err = client.Export(context.Background(), request)
	require.NoError(t, err)
}

func makeRequest() otlpgrpc.LogsRequest {
	request := otlpgrpc.NewLogsRequest()
	pLog := pdata.NewLogs()

	rl := pLog.ResourceLogs().AppendEmpty()
	rl.Resource().Attributes().InsertString("app", "testApp")

	ilm := rl.InstrumentationLibraryLogs().AppendEmpty()
	ilm.InstrumentationLibrary().SetName("testName")
	now := time.Now()

	// First record: WARN (severity number 13 per the OpenTelemetry log data model).
	logRecord := ilm.LogRecords().AppendEmpty()
	logRecord.SetName("testName")
	logRecord.SetFlags(31)
	logRecord.SetSeverityNumber(13)
	logRecord.SetSeverityText("WARN")
	logRecord.SetSpanID(pdata.NewSpanID([8]byte{1, 2}))
	logRecord.SetTraceID(pdata.NewTraceID([16]byte{1, 2, 3, 4}))
	logRecord.Attributes().InsertString("level", "WARN")
	logRecord.SetTimestamp(pdata.NewTimestampFromTime(now))

	// Second record: INFO (severity number 9).
	logRecord2 := ilm.LogRecords().AppendEmpty()
	logRecord2.SetName("testName")
	logRecord2.SetFlags(31)
	logRecord2.SetSeverityNumber(9)
	logRecord2.SetSeverityText("INFO")
	logRecord2.SetSpanID(pdata.NewSpanID([8]byte{3, 4}))
	logRecord2.SetTraceID(pdata.NewTraceID([16]byte{1, 2, 3, 4}))
	logRecord2.Attributes().InsertString("level", "INFO")
	logRecord2.SetTimestamp(pdata.NewTimestampFromTime(now))

	request.SetLogs(pLog)
	return request
}
```
Hi! This issue has been automatically marked as stale because it has not had any activity in the past 30 days.
We use a stalebot among other tools to help manage the state of issues in this project. A stalebot can be very useful in closing issues in a number of cases; the most common is closing issues or PRs where the original reporter has not responded.
Stalebots are also emotionless and cruel and can close issues which are still very relevant.
If this issue is important to you, please add a comment to keep it open. More importantly, please add a thumbs-up to the original issue entry.
We regularly sort closed issues that have a stale label by thumbs-up.
We may also:
- Mark issues as revivable if we think it's a valid issue but isn't something we are likely to prioritize in the future (the issue will still remain closed).
- Add a keepalive label to silence the stalebot if the issue is very common/popular/important.
We are doing our best to respond, organize, and prioritize all issues, but it can be a challenging task; our sincere apologies if you find yourself at the mercy of the stalebot.
May I ask, what's the current state here? :)
@frzifus The status of this is that we still lack the API, but the key storage issue is being addressed by non-indexed labels (see the upcoming docs PR: https://github.com/grafana/loki/pull/10073). As @slim-bean mentioned in his earlier comment, we need efficient storage for OTLP labels, and AFAIU, as mentioned in the last community call, we are close to shipping non-indexed labels.
Any update on the native OTLP support?
@sandeepsukhani might know a thing or two about this :-)
Hey folks, we have added experimental OTLP log ingestion support to Loki. It has yet to be released, so you would have to use the latest main to try it. You can read more about it in the docs. Please give it a try in your dev environments and share any feedback or suggestions.
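For anyone who wants to try it with the OpenTelemetry Collector, a minimal pipeline along these lines should work. This is a sketch, not a tested config: the `/otlp` endpoint path is taken from the new docs, and the Loki hostname and port are placeholders for your environment:

```yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  otlphttp:
    # Loki's experimental OTLP ingestion endpoint; the collector appends
    # /v1/logs to this base path when exporting logs.
    endpoint: http://loki:3100/otlp

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
```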
Hi, really looking forward to that feature :)
I saw that `service.instance.id` will be considered a label; doesn't this have the potential to be a high-cardinality value?
Also, will it be possible to customize the "labels" list? In our case we run Nomad, so the `k8s.*` resource attributes wouldn't really work for us, but we would have resource attributes like `nomad.job.name` which would make sense for us as labels.
@sandeepsukhani looks good, I'll give it a try next week.
One immediate suggestion: I'd like to be able to configure the indexed labels so I can add/remove items from the list. Perhaps it should default to the list you have in the docs, and then the user can provide their own list to override it.
Also, I see that span_id and trace_id are currently metadata; shouldn't trace_id at least be indexed so I can correlate logs to traces?
Another suggestion: the conversion adds a 'severity_number' metadata attribute, which is not very useful; instead it should map to a 'level' field like the OpenTelemetry Collector's Loki translator does: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/loki/logs_to_loki.go.
Hi, can I ask if there are plans to support gRPC? Or maybe I missed some documentation and it's actually supported now?