opentelemetry-go-contrib
otelaws AppendMiddlewares causes S3 pre-signed URLs to stop working
Description
When adding otelaws.AppendMiddlewares(&cfg.APIOptions), all pre-signed URLs for downloading S3 files break with the error code SignatureDoesNotMatch and the message:
"The request signature we calculated does not match the signature you provided. Check your key and signing method."
When removing otelaws.AppendMiddlewares(&cfg.APIOptions), they immediately work again.
I'm attaching an image from a diff window with the generated URLs. The only difference that is not related to the AWS account or the link expiration time is X-Amz-SignedHeaders. The URL generated from the code with the otelaws middlewares has two values, host and traceparent, while the one without the otelaws middlewares has only host. Since the client that later downloads the file (e.g. a browser) never sends a traceparent header, the signature S3 computes for the actual request can no longer match the pre-signed one.
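To see the difference without the screenshot, here is a minimal, self-contained sketch (the URL below is a made-up placeholder; substitute one generated by the sample app) that prints the X-Amz-SignedHeaders parameter of a pre-signed URL:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    // Placeholder pre-signed URL; replace with a real generated one.
    raw := "https://example-bucket.s3.amazonaws.com/file.txt?X-Amz-SignedHeaders=host%3Btraceparent"
    u, err := url.Parse(raw)
    if err != nil {
        panic(err)
    }
    // Prints "host;traceparent" when the otelaws middlewares are appended,
    // and just "host" when they are not.
    fmt.Println(u.Query().Get("X-Amz-SignedHeaders"))
}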
Packages installed
- github.com/aws/aws-sdk-go-v2 v1.17.4
- github.com/aws/aws-sdk-go-v2/config v1.18.12
- github.com/aws/aws-sdk-go-v2/service/s3 v1.30.2
- github.com/gin-gonic/gin v1.8.2
- github.com/swaggo/files v1.0.0
- github.com/swaggo/gin-swagger v1.5.3
- github.com/swaggo/swag v1.8.10
- go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws v0.38.0
- go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.38.0
- go.opentelemetry.io/otel v1.12.0
- go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.12.0
- go.opentelemetry.io/otel/sdk v1.12.0
- google.golang.org/grpc v1.52.3
Code
I created a simple project to demonstrate this behavior.
👉 Code is available here. 👈
This sample exports traces to Jaeger using OTLP over gRPC and opentelemetry-collector-contrib at version 0.68.0.
This is the code with the AWS SDK v2 configuration and the otelaws middleware:
// ...
cfg, err := config.LoadDefaultConfig(ctx)
if err != nil {
    panic(err)
}
otelaws.AppendMiddlewares(&cfg.APIOptions) // This call makes the pre-signed URLs stop working

client := s3.NewFromConfig(cfg)
presignClient := s3.NewPresignClient(client)

presignParams := &s3.GetObjectInput{
    Bucket: aws.String(os.Getenv("BUCKET_NAME")),
    Key:    aws.String(filename),
}
presignDuration := func(presignOptions *s3.PresignOptions) {
    presignOptions.Expires = 5 * time.Minute
}
presignResult, err := presignClient.PresignGetObject(ctx, presignParams, presignDuration)
if err != nil {
    panic(err)
}
return presignResult.URL
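As a side note, a possible way to sidestep the problem in this sample (a sketch, not an official fix; it assumes you do not need spans for the pre-sign calls themselves) is to build the presign client before appending the middlewares, since NewFromConfig copies the config, including APIOptions, at construction time:

cfg, err := config.LoadDefaultConfig(ctx)
if err != nil {
    panic(err)
}

// Build the presign client first: it never sees the otelaws middlewares
// appended below, so pre-signed URLs keep host as the only signed header.
presignClient := s3.NewPresignClient(s3.NewFromConfig(cfg))

// Instrument only the client whose requests this process actually sends.
otelaws.AppendMiddlewares(&cfg.APIOptions)
client := s3.NewFromConfig(cfg)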
Steps to reproduce:
- Configure default credentials in the ~/.aws/credentials file
- Start the OTEL collector and Jaeger containers: docker compose up -d
- Export the environment variable BUCKET_NAME with an existing AWS S3 bucket name: export BUCKET_NAME=<your_bucket>
- Start the application: go run main.go
- Open this URL in the browser: http://localhost:8080/swagger/index.html
- Call the /s3/files/{name} endpoint with a valid S3 file within your bucket
- Try to download the file with the generated URL; this URL will work
- Call the /s3/files/otel/{name} endpoint with a valid S3 file within your bucket
- Try to download the file with the generated URL; this URL will not work (a programmatic check is sketched after this list)
- Open the Jaeger UI: http://localhost:16686/
- The trace for path /s3/files/{name} should have one span, while the trace for /s3/files/otel/{name} should have two spans
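For the two download steps above, a hypothetical helper checkDownload (a sketch; presignedURL stands for the value returned by the corresponding endpoint, and the "fmt" and "net/http" imports are assumed) can replace the browser:

func checkDownload(presignedURL string) {
    resp, err := http.Get(presignedURL)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    // Expect 200 OK for /s3/files/{name} and 403 Forbidden
    // (SignatureDoesNotMatch) for /s3/files/otel/{name}.
    fmt.Println(resp.Status)
}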
We have also noticed this when pre-signing EKS tokens, with otelaws v0.39.0.
I also faced the same issue. As a temporary workaround, I implemented a noOpTextMapPropagator that does nothing, since it is not necessary to include the traceparent header in AWS API requests, and used it in the config:
// Requires the "context" and "go.opentelemetry.io/otel/propagation" imports.
type noOpTextMapPropagator struct{}

func (n noOpTextMapPropagator) Inject(ctx context.Context, carrier propagation.TextMapCarrier) {}

func (n noOpTextMapPropagator) Extract(ctx context.Context, carrier propagation.TextMapCarrier) context.Context {
    return ctx
}

func (n noOpTextMapPropagator) Fields() []string {
    return []string{}
}

cfg, _ := config.LoadDefaultConfig(context.Background())
otelaws.AppendMiddlewares(
    &cfg.APIOptions,
    otelaws.WithTextMapPropagator(noOpTextMapPropagator{}),
)
I'm very happy to find this at the end of a long search :)
@kazz187 Thanks a lot for the workaround. I'd like to add that there is a no-op propagator available out of the box (which I found because it's the default in an unconfigured OTel setup): propagation.NewCompositeTextMapPropagator() with no arguments.
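For illustration, a minimal sketch of that variant, reusing the cfg from the workaround above (NewCompositeTextMapPropagator called with no arguments injects and extracts nothing):

otelaws.AppendMiddlewares(
    &cfg.APIOptions,
    otelaws.WithTextMapPropagator(propagation.NewCompositeTextMapPropagator()),
)

This keeps the spans created by the middleware while stopping the traceparent header from being injected, so host remains the only signed header.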
We've just run into this issue too. Is there a known fix, or do we still have to disable trace context propagation?