
error fetching chunks & failed to get s3 object

Open firstsko opened this issue 2 years ago • 25 comments

I found errors in the logs. Whenever I run a QueryRange query spanning more than 1 hour, this error happens. Does anyone have ideas?


level=error ts=2022-01-24T10:57:05.273404635Z caller=batch.go:699 msg="error fetching chunks" err="failed to get s3 object: context canceled"
level=error ts=2022-01-24T10:57:05.273434964Z caller=batch.go:699 msg="error fetching chunks" err="failed to get s3 object: context canceled"
level=warn ts=2022-01-24T10:57:05.273457211Z caller=grpc_logging.go:55 method=/logproto.Querier/Query duration=16.524312361s err="failed to get s3 object: RequestCanceled: request context canceled\ncaused by: context canceled" msg="gRPC\n"
level=error ts=2022-01-24T10:57:05.273486368Z caller=batch.go:699 msg="error fetching chunks" err="failed to get s3 object: context canceled"
level=error ts=2022-01-24T10:57:05.273786914Z caller=batch.go:699 msg="error fetching chunks" err="failed to get s3 object: RequestCanceled: request context canceled\ncaused by: context canceled"
level=error ts=2022-01-24T10:57:05.274246531Z caller=batch.go:699 msg="error fetching chunks" err="failed to get s3 object: RequestCanceled: request context canceled\ncaused by: context canceled"
level=warn ts=2022-01-24T10:57:05.274385302Z caller=grpc_logging.go:55 method=/logproto.Querier/Query duration=29.542213769s err="failed to get s3 object: RequestCanceled: request context canceled\ncaused by: context canceled" msg="gRPC\n"

Here is my Query Instance config:

querier:
    engine:
        timeout: 5m
    query_ingesters_within: 3h
    query_timeout: 5m
    

Here is my Query Frontend Instance Config:

query_range:
  # make queries more cache-able by aligning them with their step intervals
  align_queries_with_step: true
  max_retries: 5
  parallelise_shardable_queries: true
  cache_results: true
  split_queries_by_interval: 5m
  results_cache:
    cache:
      enable_fifocache: true
      fifocache:
        size: 2048
        validity: 24h

frontend:
  log_queries_longer_than: 5s
  compress_responses: true
  max_outstanding_per_tenant: 2048

My S3 Config:

storage_config:
  boltdb_shipper:
    active_index_directory: ./data/boltdb-shipper-active
    cache_location: ./data/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3
  filesystem:
    directory: ./data/chunks
  aws:
    s3: http://loki:[email protected]:9000/loki-data
    s3forcepathstyle: true
    region: loki-test

The MinIO object storage is working, and the chunk objects can be listed:

103a51ec5321702e:17e8b407c48:17e8b40de3f:46cb9fb5
Mon Jan 24 2022 16:43:31 GMT+0800
679 KB
103a51ec5321702e:17e8b40de3e:17e8b40e841:e723ea12
Mon Jan 24 2022 16:43:31 GMT+0800
678 KB
103a51ec5321702e:17e8b40e841:17e8b40f215:c964c4f5
Mon Jan 24 2022 16:43:31 GMT+0800
678 KB
103a51ec5321702e:17e8b40f214:17e8b40fb12:c59ef872
Mon Jan 24 2022 16:43:31 GMT+0800
677 KB
103a51ec5321702e:17e8b40fb11:17e8b410477:2eb32d1d
Mon Jan 24 2022 16:43:32 GMT+0800
678 KB
103a51ec5321702e:17e8b410476:17e8b410e04:158423d5
Mon Jan 24 2022 16:43:32 GMT+0800
677 KB
103a51ec5321702e:17e8b410e03:17e8b411726:71315af6
Mon Jan 24 2022 16:43:32 GMT+0800
678 KB
103a51ec5321702e:17e8b411725:17e8b41208f:d2fd006f
Mon Jan 24 2022 16:43:32 GMT+0800
678 KB
103a51ec5321702e:17e8b41208e:17e8b4129d8:40d7d18d
Mon Jan 24 2022 16:43:32 GMT+0800
678 KB
103a51ec5321702e:17e8b4129d7:17e8b41340f:54d47763
Mon Jan 24 2022 16:44:01 GMT+0800
678 KB
103a51ec5321702e:17e8b41340e:17e8b413d5d:4ba55c3e
Mon Jan 24 2022 16:44:01 GMT+0800
677 KB


firstsko avatar Jan 24 '22 11:01 firstsko

grpc timeout ???

vendor/github.com/grafana/dskit/grpcclient/grpcclient.go:98

level=warn ts=2022-01-24T10:57:05.273457211Z caller=grpc_logging.go:55 method=/logproto.Querier/Query duration=16.524312361s err="failed to get s3 object: RequestCanceled: request context canceled\ncaused by: context canceled" msg="gRPC\n"


liguozhong avatar Jan 25 '22 09:01 liguozhong

Same issue here ✋

twix14 avatar Jan 31 '22 17:01 twix14

Sadly I have the same issue. Any updates yet?

Syphixs avatar Feb 14 '22 09:02 Syphixs

Could it be the settings that "make queries more cache-able by aligning them with their step intervals"?

align_queries_with_step: true
split_queries_by_interval: 5m

Anything else?

firstsko avatar Feb 21 '22 03:02 firstsko

Same here ✋

DolevAlgam avatar Mar 01 '22 20:03 DolevAlgam

Same here

danielserrao avatar Mar 21 '22 12:03 danielserrao

Hey guys, I have the same issue. Any update, please?

chthtlo avatar Mar 22 '22 07:03 chthtlo

Same error with gcs backend.

marcusteixeira avatar Mar 24 '22 19:03 marcusteixeira

This error is an expected log. When the limit is 1000, the context will automatically cancel after 1000 logs are queried.
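Presumably the limit in question is the per-request line limit passed via the limit parameter on the query_range API (Grafana's Explore defaults this to 1000). A rough sketch of such a request, with a placeholder host and label selector:

  # placeholder host and selector; "limit" is the per-request line cap
  curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
    --data-urlencode 'query={job="myapp"}' \
    --data-urlencode 'limit=1000' \
    --data-urlencode 'start=2022-01-24T08:00:00Z' \
    --data-urlencode 'end=2022-01-24T11:00:00Z'

Once that many lines have been collected, the remaining in-flight chunk fetches are cancelled, which surfaces as the "context canceled" errors above.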

liguozhong avatar Mar 29 '22 03:03 liguozhong

Same issue here! any update?

manu756 avatar Apr 03 '22 17:04 manu756

Same issue here!

step-baby avatar Apr 12 '22 07:04 step-baby

@liguozhong

This error is an expected log. When the limit is 1000, the context will automatically cancel after 1000 logs are queried.

Do you have any reference for this? While it makes sense, I would expect not to get an error in this case, since it is a clean close of the context.

primeroz avatar May 06 '22 09:05 primeroz

Same error with gcs backend.

rickysos avatar May 12 '22 19:05 rickysos

Same error with GCS backend as well.

ninetyninenines avatar May 12 '22 19:05 ninetyninenines

Same issue using S3

alexandre1984rj avatar May 19 '22 20:05 alexandre1984rj

Same issue with GCS backend as well. Is there any setting we can use to override the default 1000 limit?

joaoluchetti avatar May 24 '22 19:05 joaoluchetti

Same issue with azure blob store as well

akshaya-msd avatar May 25 '22 12:05 akshaya-msd

So, there are probably a few hundred reasons why context canceled might be reported as the cause of a storage operation failing (which is probably why there are so many "me too" comments above). That said, here is one possibility and the journey I took to find it:

TL;DR: my network config was such that my EC2 instance needed the PUT response hop limit increased, and unfortunately the HTTP request sent by the golang aws-sdk to the EC2 instance metadata service in order to get the API token for accessing S3 as the instance IAM role simply hangs indefinitely when there are more network hops than allowed. See this aws-sdk-go bug.
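For reference, the hop limit can be raised with the AWS CLI; the instance ID below is a placeholder:

  # raise the IMDSv2 PUT response hop limit so requests that cross an
  # extra network hop (e.g. from pods) still reach the metadata service
  aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-put-response-hop-limit 2 \
    --http-endpoint enabled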

Figuring this out was quite the chore. The logging here was so obtuse, even with debug enabled, that I wound up forking and customizing Loki to add a bunch more logging. Mostly this just helped me determine that my configuration was being applied correctly and the S3 operations were being attempted as expected.

Additionally, I verified that my EC2 instance and the Kubernetes pods running on it could all access the S3 bucket in question (tested via curl and the aws CLI). I also verified that they could perform GET requests against the EC2 metadata service (which plays an important role in IAM-based access to AWS resources from an EC2 instance). However, I was only testing GETs, so this didn't turn up any problems.
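In hindsight, a quick check (sketched here, not something I ran at the time) would have been to issue the IMDSv2 token PUT manually from the instance or from inside a pod; with too low a hop limit it hangs instead of returning a token:

  # IMDSv2 token request; --max-time makes a hang obvious instead of blocking forever
  curl -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" \
    --max-time 5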

The critical diagnostic step for me was to alter the aws-sdk's s3 configuration to have it output debug logs as well (see s3_storage_client.go):

  s3Config = s3Config.WithLogLevel(*aws.LogLevel(aws.LogDebugWithHTTPBody))
  s3Config = s3Config.WithCredentialsChainVerboseErrors(true)

This is what clued me in to the fact that the HTTP PUT to the EC2 metadata service's /latest/api/token endpoint was hanging and that this was the root cause.

In conclusion, this was a horrible debugging experience, but I think it is as much the aws-sdk-go library's fault as it is loki's. More debug logs around storage would be good, and a way to enable the underlying storage SDK's logging would be nice too (or maybe it could be magically enabled if the log level is set to debug).

sflanker avatar Jun 01 '22 10:06 sflanker

This error is an expected log. When the limit is 1000, the context will automatically cancel after 1000 logs are queried.

Where is this behavior documented?

wkshare avatar Jun 06 '22 01:06 wkshare

Hi there @liguozhong. If this is an expected log is there any discussion to update the error to warning or change the message?

alexandre1984rj avatar Jun 10 '22 03:06 alexandre1984rj

Hi there @liguozhong. If this is an expected log is there any discussion to update the error to warning or change the message?

good idea.

liguozhong avatar Jun 10 '22 07:06 liguozhong

I changed replication_factor from 3 to 1 and the problem is fixed, but I don't know why.

wkshare avatar Jun 12 '22 13:06 wkshare

Hi, any updates?

LinTechSo avatar Jul 16 '22 11:07 LinTechSo

Same issue using S3

homie-du avatar Aug 25 '22 09:08 homie-du

Same issue. Is there any way around this?

saidamef avatar Sep 21 '22 08:09 saidamef

+1

dansl1982 avatar Oct 19 '22 14:10 dansl1982