kafka-connect-storage-common
How to configure a different URL than default AWS S3 URL
I would like to configure a different S3 URL than the default S3 URL (s3.amazonaws.com), but I could not find a configurable directive for it alongside s3.bucket and s3.region. Could you provide details on how to configure that? Our company abstracted the default S3 URL behind a different URL that internally talks to AWS S3.
Hi Felix, you can use the following properties: s3.bucket.name, s3.region, s3.part.size, and s3.credentials.provider.class (default: DefaultAWSCredentialsProviderChain). All these properties are documented at https://docs.confluent.io/current/connect/kafka-connect-s3/configuration_options.html. Regards, Abhishek Sahani
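To illustrate, here is a minimal sketch of an S3 sink connector config using those properties. All values (connector name, topic, bucket, region) are placeholders, and the format/flush settings are just reasonable examples, not part of the properties listed above:

```json
{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "my-topic",
    "s3.bucket.name": "my-bucket",
    "s3.region": "us-east-1",
    "s3.part.size": "5242880",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000"
  }
}
```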
On Fri, Aug 2, 2019 at 8:23 PM FelixKJose [email protected] wrote:
Hi Abhishek,
Thank you for the quick response.
I have seen those properties (s3.bucket.name, s3.region, s3.part.size), but I don't see a property like s3.endpoint or s3.url.
So I could have provided/configured something like:

s3.endpoint=s3.internal.amazonaws.com (our custom URL)
s3.bucket=user
s3.region=

Could someone please help me with this?
You'd use store.url (assuming that URL accepts the same API requests as S3).
For example, here is a Minio blog on using it with the S3 connector - https://blog.minio.io/journaling-kafka-messages-with-s3-connector-and-minio-83651a51045d
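As a sketch of how store.url fits in, the fragment below points the connector at a Minio-style endpoint; the endpoint address, bucket, and topic names are placeholders, and the region is whatever your store expects:

```json
{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "topics": "my-topic",
  "store.url": "http://minio:9000",
  "s3.bucket.name": "my-bucket",
  "s3.region": "us-east-1",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "flush.size": "1000"
}
```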
Thank you so much
@FelixKJose Is your question answered? Feel free to close the issue, if so
Hi @OneCricketeer, I am not able to access https://blog.minio.io/journaling-kafka-messages-with-s3-connector-and-minio-83651a51045d. Can you please help me with how to configure store.url for non-AWS storage systems? We have an internal storage system built on top of Amazon S3. When I use the config below, it is not working:

"s3.bucket.name": "my-bucket", "store.url": "http://myurl.com"

Hence I am looking for the exact syntax of store.url. Should this URL include the bucket name and region as well, or only the endpoint?
@shaiktas https://blog.min.io/kafka_and_minio/
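As far as I understand, store.url should be just the service endpoint, with no bucket name or path in the URL; the bucket and region stay in their own properties. A hedged sketch (http://myurl.com is the placeholder endpoint from the question above):

```json
{
  "store.url": "http://myurl.com",
  "s3.bucket.name": "my-bucket",
  "s3.region": "us-east-1"
}
```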
Thank you @OneCricketeer for the prompt response. I do have the same configuration, but I am getting the exception below.
org.apache.kafka.connect.errors.ConnectException: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: null), S3 Extended Request ID: null
    at io.confluent.connect.s3.S3SinkTask.start(S3SinkTask.java:119)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:304)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(Thread.java:834)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: null)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1586)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1254)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4185)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4132)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1302)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1259)
    at io.confluent.connect.s3.storage.S3Storage.bucketExists(S3Storage.java:167)
    at io.confluent.connect.s3.S3SinkTask.start(S3SinkTask.java:106)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:304)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(Thread.java:834)
That error isn't specific to Connect. I suggest you forward it to the provider of your custom S3 API, along with the version of the AWS Java SDK being used, since it's saying "Bad Request".