amazon-kinesis-client
AmazonKinesisException in 1.8.7
Seeing the following occasional exception in the latest release (1.8.7) for a stream with 260 shards and low throughput. Any thoughts about why this is happening? It's not obvious from the stack trace:
com.amazonaws.services.kinesis.model.AmazonKinesisException: null (Service: AmazonKinesis; Status Code: 500; Error Code: InternalFailure; Request ID: f4a8f8ab-53f5-b0d0-a6c4-575586b934f7)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1588) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1258) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[aws-java-sdk-core-1.11.125.jar!/:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:1948) ~[aws-java-sdk-kinesis-1.11.125.jar!/:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:1924) ~[aws-java-sdk-kinesis-1.11.125.jar!/:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.executeGetRecords(AmazonKinesisClient.java:969) ~[aws-java-sdk-kinesis-1.11.125.jar!/:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.getRecords(AmazonKinesisClient.java:945) ~[aws-java-sdk-kinesis-1.11.125.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy.get(KinesisProxy.java:158) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.proxies.MetricsCollectingKinesisProxyDecorator.get(MetricsCollectingKinesisProxyDecorator.java:74) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisDataFetcher.getRecords(KinesisDataFetcher.java:69) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.SynchronousGetRecordsRetrievalStrategy.getRecords(SynchronousGetRecordsRetrievalStrategy.java:31) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.BlockingGetRecordsCache.getNextResult(BlockingGetRecordsCache.java:50) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.getRecordsResultAndRecordMillisBehindLatest(ProcessTask.java:377) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.getRecordsResult(ProcessTask.java:342) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.call(ProcessTask.java:159) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:49) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:24) ~[amazon-kinesis-client-1.8.7.jar!/:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
The stack trace indicates that the exception is an internal service error returned by Kinesis. The KCL handles such exceptions and retries fetching records for the shard. Feel free to reopen the issue if you have more questions.
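For reference, a minimal sketch of how a caller could tell this kind of server-side fault (the 500 InternalFailure above) apart from a client-side error, using the AWS SDK for Java v1 exception type shown in the trace. This is an illustration only, not KCL internals; the class and method names are made up for the example:

```java
import com.amazonaws.AmazonServiceException;
import com.amazonaws.AmazonServiceException.ErrorType;

// Illustration only: AmazonKinesisException extends AmazonServiceException,
// and a 500 InternalFailure is reported by the SDK as a service-side fault.
// Such faults are treated as transient and retried rather than surfaced as
// application bugs.
public final class ServiceFaultCheck {
    private ServiceFaultCheck() {}

    public static boolean isRetryableServiceFault(AmazonServiceException ase) {
        // ErrorType.Service marks errors the service itself reported (5xx).
        return ase.getErrorType() == ErrorType.Service || ase.getStatusCode() >= 500;
    }
}
```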
If this is a "normal" service exception, would it make sense to log a friendlier message without a stack trace?
This is not a normal service exception, but that doesn't mean it will never occur. As for logging a message instead of the full stack trace, we agree with the change you have suggested and will prioritize it against other customer requests. Thank you for the feedback.
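Until such a change lands, here is a minimal sketch of the kind of one-line summary being asked for, built from the getters that exist on AmazonServiceException. The helper class, its method, and the use of SLF4J are assumptions for the example, not part of the KCL:

```java
import com.amazonaws.AmazonServiceException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper showing the friendlier, stack-trace-free message the
// thread suggests: surface the error code, status, and request ID on one line.
public final class KinesisErrorLogging {
    private static final Logger LOG = LoggerFactory.getLogger(KinesisErrorLogging.class);

    private KinesisErrorLogging() {}

    public static void logTransientFailure(AmazonServiceException ase) {
        LOG.warn("Kinesis request failed with {} (status {}, requestId {}); the KCL will retry",
                ase.getErrorCode(), ase.getStatusCode(), ase.getRequestId());
    }
}
```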