riak and DLService scaling
Currently, the riak and DLService components cannot scale automatically because of their static configuration: when the DLService starts, it is passed the riak nodes as well as the other DLService nodes as parameters. These values are never updated after the components start, so scaling the services up (e.g., adding another riak replica or another DLService node) is not reflected in components that are already running.
We should make the DLService update its view of the available riak nodes dynamically at runtime. When a riak node is added to or removed from the platform, this should be reflected in the DLService node's view.
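One way to do this is a small background refresher inside the DLService. A minimal sketch, assuming a hypothetical `discover_riak_nodes()` discovery call (the names, ports, and the polling approach are illustrative, not existing KNIX APIs):

```python
import threading

def discover_riak_nodes():
    # Placeholder for a real discovery call (e.g., querying the platform's
    # node registry or deployment metadata); here it returns a static list.
    return ["riak-0:8087", "riak-1:8087"]

class RiakNodeView:
    """Keeps a periodically refreshed list of available riak nodes."""

    def __init__(self, refresh_interval=5.0):
        self._lock = threading.Lock()
        self._riak_nodes = discover_riak_nodes()
        self._interval = refresh_interval
        self._stop = threading.Event()
        self._thread = None

    def riak_nodes(self):
        # Return a snapshot; callers never see a half-updated list.
        with self._lock:
            return list(self._riak_nodes)

    def _refresh_loop(self):
        # wait() doubles as the sleep and the stop signal.
        while not self._stop.wait(self._interval):
            nodes = discover_riak_nodes()
            with self._lock:
                if nodes != self._riak_nodes:
                    # Added/removed riak nodes are picked up here at runtime.
                    self._riak_nodes = nodes

    def start(self):
        self._thread = threading.Thread(target=self._refresh_loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
```

The same loop could equally be driven by a push notification from the platform instead of polling; the key point is that the node list is no longer a one-shot startup parameter.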
Similarly, the DLService should also learn at runtime when other DLService nodes are added or removed. With the new design for caching (#10), this might not be necessary.
Also, check out @manuelstein's comments on PR #74.
Agree. Creating a new branch "feature/dls_scaling" for this feature.
I am not sure we should actually spend the effort on making the DLService itself scalable. We might end up putting it into the sandbox, depending on the outcome of the Redis caching issue (#10).
But now that I think about it, perhaps it's good to have a mechanism that updates the view of the available riak nodes, regardless of where that mechanism runs (e.g., in the sandbox, on the node).
Yes, I think in this branch we should achieve the following two things:
- during deployment, let operators decide whether strong or eventual consistency is needed; the DLService reports an error or refuses to start when the conditions are not met.
- let the DLService be aware of riak node changes.
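The first point could be a fail-fast check at startup. A sketch under stated assumptions: the function name, the `"strong"`/`"eventual"` mode strings, and the majority-quorum threshold of three replicas are all illustrative, not decisions this thread has made:

```python
def check_consistency_config(mode, n_riak_replicas):
    """Fail fast if the operator's requested consistency level cannot be met.

    mode: "strong" or "eventual", chosen by the operator at deployment.
    n_riak_replicas: number of riak replicas currently available.
    Assumes strong consistency needs a majority quorum, i.e. >= 3 replicas.
    """
    if mode not in ("strong", "eventual"):
        raise ValueError("unknown consistency mode: %s" % mode)
    if mode == "strong" and n_riak_replicas < 3:
        # The DLService would report this error and refuse to start.
        raise RuntimeError(
            "strong consistency requested but only %d riak replica(s) available"
            % n_riak_replicas)
    return True
```

Running this before the DLService binds its ports gives operators an immediate, explicit error instead of a service that silently degrades to eventual consistency.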