
Log is flooded with INFO messages

Open davimi opened this issue 1 year ago • 0 comments

I am running the connector version 2.1.2 with SNOWPIPE_STREAMING. I noticed that the Kafka Connect logs are flooded with INFO messages:

10:24:30.243 kafka-connect [2024-02-07 09:24:30,243] INFO [SF_KAFKA_CONNECTOR] Successfully called insertRows for channel:..., buffer:StreamingBuffer{numOfRecords=14, bufferSizeBytes=63522, firstOffset=291343757, lastOffset=291343770}, insertResponseHasErrors:false, needToResetOffset:false (com.snowflake.kafka.connector.internal.streaming.TopicPartitionChannel)
10:24:30.517 kafka-connect [2024-02-07 09:24:30,517] INFO [SF_KAFKA_CONNECTOR] Successfully called insertRows for channel:..., buffer:StreamingBuffer{numOfRecords=13, bufferSizeBytes=59307, firstOffset=8131026, lastOffset=8131038}, insertResponseHasErrors:false, needToResetOffset:false (com.snowflake.kafka.connector.internal.streaming.TopicPartitionChannel)
10:24:30.804 kafka-connect [2024-02-07 09:24:30,804] INFO [SF_KAFKA_CONNECTOR] Successfully called insertRows for channel:..., buffer:StreamingBuffer{numOfRecords=8, bufferSizeBytes=36206, firstOffset=8134229, lastOffset=8134236}, insertResponseHasErrors:false, needToResetOffset:false (com.snowflake.kafka.connector.internal.streaming.TopicPartitionChannel)
10:24:30.809 kafka-connect [2024-02-07 09:24:30,809] INFO [SF_INGEST] buildAndUpload task added for client=KC_CLIENT_snowflake_snowpipe_streaming_POC, blob=2024/2/7/9/24/s8hbgu_BgfzkZzTz85t9TfRnn4vVQTJ5k0dI68lKRjU7aNyiiQCC_1011_334_73714.bdec, buildUploadWorkers stats=java.util.concurrent.ThreadPoolExecutor@3c95ac71[Running, pool size = 3, active threads = 0, queued tasks = 1, completed tasks = 73714] (net.snowflake.ingest.streaming.internal.FlushService)
10:24:30.809 kafka-connect [2024-02-07 09:24:30,809] INFO Got brand-new compressor [.gz] (net.snowflake.ingest.internal.apache.hadoop.io.compress.CodecPool)
10:24:30.810 kafka-connect [2024-02-07 09:24:30,810] INFO [SF_INGEST] Finish building chunk in blob=2024/2/7/9/24/s8hbgu_BgfzkZzTz85t9TfRnn4vVQTJ5k0dI68lKRjU7aNyiiQCC_1011_334_73714.bdec, table=..., rowCount=48, startOffset=0, estimatedUncompressedSize=107986.0, paddedChunkLength=16388, encryptedCompressedSize=16400, bdecVersion=THREE (net.snowflake.ingest.streaming.internal.BlobBuilder)
10:24:30.810 kafka-connect [2024-02-07 09:24:30,810] INFO [SF_INGEST] Start uploading blob=2024/2/7/9/24/s8hbgu_BgfzkZzTz85t9TfRnn4vVQTJ5k0dI68lKRjU7aNyiiQCC_1011_334_73714.bdec, size=16400 (net.snowflake.ingest.streaming.internal.FlushService)
10:24:30.858 kafka-connect [2024-02-07 09:24:30,858] INFO [SF_INGEST] Finish uploading blob=2024/2/7/9/24/s8hbgu_BgfzkZzTz85t9TfRnn4vVQTJ5k0dI68lKRjU7aNyiiQCC_1011_334_73714.bdec, size=16400, timeInMillis=48 (net.snowflake.ingest.streaming.internal.FlushService)
10:24:30.858 kafka-connect [2024-02-07 09:24:30,858] INFO [SF_INGEST] Start registering blobs in client=KC_CLIENT_snowflake_snowpipe_streaming_POC, totalBlobListSize=1, currentBlobListSize=1, idx=1 (net.snowflake.ingest.streaming.internal.RegisterService)
10:24:30.858 kafka-connect [2024-02-07 09:24:30,858] INFO [SF_INGEST] Register blob request preparing for blob=[2024/2/7/9/24/s8hbgu_BgfzkZzTz85t9TfRnn4vVQTJ5k0dI68lKRjU7aNyiiQCC_1011_334_73714.bdec], client=KC_CLIENT_snowflake_snowpipe_streaming_POC_2_0, executionCount=0 (net.snowflake.ingest.streaming.internal.SnowflakeStreamingIngestClientInternal)
10:24:30.925 kafka-connect [2024-02-07 09:24:30,925] INFO [SF_INGEST] Register blob request returned for blob=[2024/2/7/9/24/s8hbgu_BgfzkZzTz85t9TfRnn4vVQTJ5k0dI68lKRjU7aNyiiQCC_1011_334_73714.bdec], client=KC_CLIENT_snowflake_snowpipe_streaming_POC_2_0, executionCount=0 (net.snowflake.ingest.streaming.internal.SnowflakeStreamingIngestClientInternal)

These do not seem like INFO-level messages, since they describe specific internal method calls rather than notable application events. They appear every second or so, creating a lot of noise in the logs. Could they be emitted at a lower level (e.g. DEBUG) to reduce the volume?
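As a workaround in the meantime, it should be possible to silence these loggers from the Kafka Connect side by raising the level for the relevant packages in the worker's log4j configuration (the package names below are taken from the log lines above; the exact config file location depends on your deployment):

```properties
# connect-log4j.properties (or the log4j config your Connect worker loads)
# Suppress per-buffer INFO chatter from the Snowflake connector internals
log4j.logger.com.snowflake.kafka.connector.internal.streaming=WARN
# Suppress blob build/upload/register INFO messages from the ingest SDK
log4j.logger.net.snowflake.ingest=WARN
# Suppress the shaded Hadoop compressor messages ("Got brand-new compressor")
log4j.logger.net.snowflake.ingest.internal.apache.hadoop=WARN
```

This only hides the messages on the consumer side, though; lowering them to DEBUG in the connector/SDK themselves would be the cleaner fix.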

davimi · Feb 07 '24 09:02