Is two separate Couchbase clusters in different data centers possible?
According to the documentation, Couchbase with a memcached bucket can be used. The documentation also says that the memcachedNodes property can contain multiple bucket hosts, but that no hosts may be defined in the failoverNodes property.
At the moment I work for a company that has two data centers very close to each other. I came up with the idea of creating two separate Couchbase clusters, putting the three nodes of data center 1 in the memcachedNodes property and the three nodes of data center 2 in the failoverNodes property. Would that be possible with memcached-session-manager?
If yes, does it make a difference if I use sticky or non-sticky?
I was thinking of the configuration below:
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="http://host1.yourdomain.com:8091/pools,http://host2.yourdomain.com:8091/pools,http://host3.yourdomain.com:8091/pools"
failoverNodes="http://host4.yourdomain.com:8091/pools,http://host5.yourdomain.com:8091/pools,http://host6.yourdomain.com:8091/pools"
username="bucket1"
password="topsecret"
memcachedProtocol="binary"
sticky="true"
sessionBackupAsync="false"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
You're right: as documented, failoverNodes cannot be used if Couchbase URIs like http://host1.yourdomain.com:8091/pools are configured in memcachedNodes (these are kind of seed nodes, IIRC, from which the Couchbase client retrieves the cluster nodes and cluster changes). When msm is configured with Couchbase URIs, it assumes that Couchbase is taking care of high availability of sessions in memcached (via replication). As a consequence, msm doesn't perform any additional backups of the session (as it does in "normal"/memcached mode with non-sticky sessions) and doesn't provide the concept of locality (failoverNodes).
But you should be able to configure regular host:port addresses of the Couchbase memcached nodes, so technically it would be possible. Of course you'll lose the automatic reconfiguration of the client (msm) in case of Couchbase cluster changes (you'd need to reconfigure msm yourself).
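For illustration, such a host:port based configuration might look roughly like this (a sketch only; the node ids, host names, and port 11211 are placeholder assumptions, not taken from your environment):

```xml
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
  memcachedNodes="n1:host1.yourdomain.com:11211,n2:host2.yourdomain.com:11211,n3:host3.yourdomain.com:11211,n4:host4.yourdomain.com:11211,n5:host5.yourdomain.com:11211,n6:host6.yourdomain.com:11211"
  failoverNodes="n4 n5 n6"
  memcachedProtocol="binary"
  sticky="true"
  sessionBackupAsync="false"
  />
</Context>
```

Here the n4–n6 nodes (the other DC) are only used when none of n1–n3 are available, which is the locality behavior failoverNodes provides in plain memcached mode.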
Whether this really makes sense and works depends on your actual setup. Things to consider (maybe you've already done this):
- Do you have a single Couchbase cluster per DC, or one spanning both DCs? (I'm not sure whether Couchbase is a CA or CP system in terms of CAP.)
- How do you run / load-balance the Tomcats? Tomcats in each DC, with sticky or non-sticky sessions?
Sticky vs. non-sticky makes a difference at least in terms of latency, as sticky sessions should provide lower latency. How big this difference is depends on your setup; from the user's point of view it should not be noticeable. With non-sticky sessions you should set sessionBackupAsync="false", so that you have read-your-own-writes semantics.
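If you end up with non-sticky sessions, the relevant Manager attributes would change roughly as follows (a sketch; lockingMode is msm's attribute for coordinating concurrent access to non-sticky sessions, and "auto" is one of its supported values):

```xml
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
  memcachedNodes="..."
  sticky="false"
  sessionBackupAsync="false"
  lockingMode="auto"
  />
```

Note that in non-sticky mode msm does not use failoverNodes, since every node may serve every session.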
Thanks for your feedback. So I think I can conclude that msm does not support multiple couchbase clusters.
The architecture is two data centers about 200 meters from each other:
Data center 1
- Tomcat server 1 => points primarily to Couchbase cluster 1, secondarily to Couchbase cluster 2 if cluster 1 is completely down
- Couchbase cluster 1
Data center 2
- Tomcat server 2 => points primarily to Couchbase cluster 1, secondarily to Couchbase cluster 2 if cluster 1 is completely down
- Couchbase cluster 2
Based on your feedback I'm thinking about installing Moxi on the Tomcat nodes. According to the documentation (http://docs.couchbase.com/moxi-guide), Moxi can proxy multiple clusters through different ports:
Moxi also supports proxying to multiple clusters from a single moxi instance, where this was originally designed and implemented for software-as-a-service purposes. Use a semicolon (';') to specify and delimit more than one cluster: -z "LISTEN_PORT=[CLUSTER_CONFIG][;LISTEN_PORT2=[CLUSTER_CONFIG2][]]"
So with Moxi the configuration will be something like:
-z "11211=mc1,mc2;11311=mcA,mcB,mcC"
I would like to use sticky sessions for performance, with sessionBackupAsync set to false for consistency, as configured below. Between the Couchbase clusters I'm thinking about setting up asynchronous XDCR, because in the future the data centers could be further apart. Both Tomcat servers will then probably live in DC1, and Couchbase cluster 2 in DC2 will only be used when Couchbase cluster 1 is completely down.
<Context>
...
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:localhost:11411,n2:localhost:11511"
failoverNodes="n2"
username="sessions"
password="topsecret"
memcachedProtocol="binary"
sticky="true"
sessionBackupAsync="false"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
/>
</Context>
Do you think my proposal is going to work? Or do you have improvements?
In addition, I think two Moxi instances per Tomcat server is even better than using the -z parameter.
I have the setup running successfully with the MSM configuration above and two separate Moxi processes on each Tomcat server.
- Port 11411: connects to the sessions Couchbase bucket in Couchbase cluster 1
- Port 11511: connects to the sessions Couchbase bucket in Couchbase cluster 2
Moxi configuration - /usr/lib/systemd/system/[email protected]:
# -*- mode: conf-unix; -*-
# To create clones of this service:
# systemctl enable moxi-server@instance_name.service
[Unit]
Description = Couchbase Moxi Server
Documentation = http://docs.couchbase.com
After = network.target remote-fs.target nss-lookup.target
[Service]
SyslogIdentifier = moxi
User = moxi
Type = forking
PIDFile=/var/run/moxi/moxi-server-%i.pid
WorkingDirectory = /opt/moxi
LimitNOFILE = 40960
LimitMEMLOCK = infinity
EnvironmentFile=-/etc/sysconfig/moxi-server@%i
ExecStart = /opt/moxi/bin/moxi -vvv -r -d -P /var/run/moxi/moxi-server-%i.pid -Z /opt/moxi/etc/moxi-%i.cfg -O /var/log/moxi/%i.log $CLUSTER_CONFIG
[Install]
WantedBy = multi-user.target
Sysconfig for the service above - /etc/sysconfig/moxi-server@sessions:
CLUSTER_CONFIG="http://shareddc1-couchbase-01a.example.org:8091/pools/default/bucketsStreaming/sessions,http://shareddc1-couchbase-03a.example.org:8091/pools/default/bucketsStreaming/sessions,http://shareddc1-couchbase-05a.example.org:8091/pools/default/bucketsStreaming/sessions"
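To run the second proxy for cluster 2, the same template unit can be instantiated again with its own sysconfig file. The instance name sessions2 and the DC2 host names below are assumptions for illustration; the second instance's listening port (11511) would live in its matching /opt/moxi/etc/moxi-sessions2.cfg referenced by -Z in the unit file:

```
# /etc/sysconfig/moxi-server@sessions2 (hypothetical second instance)
CLUSTER_CONFIG="http://shareddc2-couchbase-01a.example.org:8091/pools/default/bucketsStreaming/sessions,http://shareddc2-couchbase-03a.example.org:8091/pools/default/bucketsStreaming/sessions"

# Then, per the comment in the unit file:
#   systemctl enable moxi-server@sessions.service
#   systemctl enable moxi-server@sessions2.service
```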
Findings:
- If one server fails in the Couchbase cluster, Moxi automatically changes the cluster topology and Tomcat notices nothing and works fine. No user impact.
- If a second server fails in the Couchbase cluster, Moxi automatically changes the cluster topology and Tomcat notices nothing and works fine. No user impact.
- If the whole cluster is gone, Moxi automatically changes the cluster topology and Tomcat automatically uses the other memcached port. No user impact.
- If both clusters are gone/fail, Tomcat just uses the local session map and stops writing sessions to Couchbase, without any degraded performance. No user impact, thanks to sticky sessions.
- If the Moxi process for the primary cluster (port 11411) fails spontaneously (or gets a kill -9), Tomcat automatically uses the other memcached port. No user impact.
So thanks a lot for Memcached Session Manager, it works beautifully!
Awesome, sounds great! :-)