David Allen
@conker84 desired solution from talking with Dave: keep the producer data format the same, but add a few new keys: Added: [prop1, prop2, prop3], Changed: [prop4, prop5]...
On the production side, it seems to me this isn't a breaking change. We are adding new keys to the payload being sent across, so that's backward compatible, right? Older...
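A minimal sketch of what such a payload might look like; the exact shape and the `added`/`changed` key names are assumptions here, not a confirmed format:

```json
{
  "payload": {
    "id": "1004",
    "type": "node",
    "before": { "properties": { "prop4": "old", "prop5": "old" } },
    "after":  { "properties": { "prop1": "a", "prop2": "b", "prop3": "c",
                                "prop4": "new", "prop5": "new" } },
    "added":   ["prop1", "prop2", "prop3"],
    "changed": ["prop4", "prop5"]
  }
}
```

Consumers that don't know about the new keys would simply ignore them, which is why this reads as backward compatible.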
We can create a separate strategy. If in the end we have good guidance on how to configure this case, then it's fine. On the other hand, it...
In this case, let's go with a separate strategy. Perhaps call it replication.
We want the Cypher approach; we need to implement a way to specify a polling startup query and timeout, similar to what was done previously with APOC.
This is unlikely to be worked on because of the upcoming plugin deprecation in favor of the Kafka Connect source implementation.
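For the record, a sketch of what such a configuration might have looked like in the plugin's properties style; every `streams.source.query.*` key below is hypothetical and was never implemented:

```properties
# Hypothetical keys, never shipped: a startup query polled on an interval,
# with a timeout bounding how long each poll may run.
streams.source.query.startup=MATCH (p:Person) RETURN p.name AS name
streams.source.query.startup.timeout=30000
streams.source.query.topic=people
```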
Use case -- you want to use the source, and you want to bootstrap your topic with a full copy of Neo4j before enabling the source to produce subsequent transactions to Kafka.
* How would data be published? Possibly a new eventType.
* How do we avoid 10-minute transaction timeouts? Not sure yet.

A third possibility -- apoc.periodic.iterate with streams.publish -- allows the user...
Starting point -- write apoc.periodic.iterate documentation on how we can address this use case, with an example of that approach (see the sketch below). Include how to use the notion of "filters" for source production,...
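A minimal sketch of that approach, assuming a Person label and a people topic (both names are illustrative). Batching bounds the size of each transaction, which is the workaround for the timeout question above:

```cypher
// Replay the current graph onto a Kafka topic in batches of 1000 nodes,
// so no single transaction runs long enough to hit the timeout.
CALL apoc.periodic.iterate(
  "MATCH (p:Person) RETURN p",
  "CALL streams.publish('people', p)",
  {batchSize: 1000, parallel: false}
);
```

The "filters" idea falls out of the first statement: restrict the MATCH to whatever subset of the graph should be replayed.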
The alternative would be this, a feature that has been requested for APOC but not implemented yet: https://github.com/neo4j-contrib/neo4j-apoc-procedures/issues/1382