flink-connector-jdbc
[FLINK-36303] PostgreSQL timestamp precision 0 not supported
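For context, PostgreSQL accepts timestamp precisions from 0 to 6, but the connector's dialect validation rejected a precision of 0, so columns declared as TIMESTAMP(0) failed validation. The sketch below illustrates that kind of inclusive range check; the class and method names are hypothetical, not the connector's actual API. Widening the lower bound to 0 is the essence of the change; everything else here is scaffolding for the example.

```java
// Illustrative sketch only: TimestampPrecisionCheck and PrecisionRange are
// hypothetical names, not part of the flink-connector-jdbc API.
public final class TimestampPrecisionCheck {

    /** Inclusive precision bounds a dialect is willing to accept. */
    record PrecisionRange(int min, int max) {
        boolean contains(int precision) {
            return precision >= min && precision <= max;
        }
    }

    // Before the fix: a minimum of 1 rejects TIMESTAMP(0), which PostgreSQL supports.
    static final PrecisionRange OLD_RANGE = new PrecisionRange(1, 6);

    // After the fix: PostgreSQL's full range, TIMESTAMP(0) through TIMESTAMP(6).
    static final PrecisionRange NEW_RANGE = new PrecisionRange(0, 6);

    static void validate(PrecisionRange range, int precision) {
        if (!range.contains(precision)) {
            throw new IllegalArgumentException(
                    "Unsupported TIMESTAMP precision " + precision
                            + "; expected between " + range.min() + " and " + range.max());
        }
    }

    public static void main(String[] args) {
        validate(NEW_RANGE, 0); // accepted after the fix
        validate(OLD_RANGE, 0); // throws: precision 0 was rejected before the fix
    }
}
```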
Thanks for opening this pull request! Please check out our contributing guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
Hi @kgeis, sorry for the late review. Could you fix the commit message?
@1996fanrui LGTM
Thanks, @eskabetxe. I had renamed the PR and didn't know how to clear that check, so I appreciate the pointer. I've never renamed a pushed commit before. Always learning!
I look forward to this being implemented. It might enable moving a not-very-Big-Data search engine ingestion app from Spark to an apparently leaner Flink.
@1996fanrui 🙏
Any chance of this getting into 4.1? The underlying issue blocks me from using Flink.
Awesome work, congrats on your first merged pull request!
https://github.com/apache/flink-connector-jdbc/actions/runs/15461448916/job/43523504320#step:16:1014
Hey @kgeis, it seems the CI fails because of this PR. Would you mind checking it in your free time? Thanks.
I found the problem. The integration test is hard-coded to the old behavior. Can this PR be reopened, or should I start a new PR?
Thanks for the quick response!
A new PR is needed.
Created #172 to fix the integration test.
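As a rough illustration of what "hard-coded to the old behavior" means here: a test expectation pinned to the old lower bound has to be flipped to accept precision 0. The snippet below is a hypothetical JUnit 5 / AssertJ sketch building on the range-check example above, not the actual integration test fixed in #172.

```java
// Hypothetical sketch of a test expectation pinned to the old behavior;
// the real integration test and its assertions may look different.
import static org.assertj.core.api.Assertions.assertThatCode;

class TimestampPrecisionCheckTest {

    @org.junit.jupiter.api.Test
    void timestampPrecisionZeroIsAccepted() {
        // Old, hard-coded expectation: precision 0 was asserted to throw.
        // assertThatThrownBy(() -> TimestampPrecisionCheck.validate(TimestampPrecisionCheck.OLD_RANGE, 0))
        //         .isInstanceOf(IllegalArgumentException.class);

        // Updated expectation: precision 0 is valid for PostgreSQL.
        assertThatCode(() -> TimestampPrecisionCheck.validate(TimestampPrecisionCheck.NEW_RANGE, 0))
                .doesNotThrowAnyException();
    }
}
```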