
MongoDB TLS/SSL connections with self-signed certs do not appear to work

Open doktoroblivion opened this issue 7 years ago • 7 comments

Bug report

I am new to using this function and cannot find any documentation (including the existing issues) on anyone doing something similar, so here goes. Our MongoDB deployment is a three-node replica set. The members connect to each other over TLS using self-signed certs generated for each node, and a similar cert is generated for client connections. I am trying to use the client cert to pull data from the database in question (admin), but it does not look like the function works for cert-only TLS connections. I have done the following:

  • export MONGODB_URI='mongodb://CN=mongoAdmin,OU=mongodb-client,O=ACME,L=RTP,ST=NC,[email protected]:27017,10.144.81.94:27017,10.144.81.91:27017/?replicaSet=test2-mongodb-rs&authMechanism=MONGODB-X509&authSource=$external'

Relevant pipeline.js:

var source = mongodb({
  "uri": "${MONGODB_URI}",
  "ssl": true,
  "cacerts": ["/home/egriffin/fr8r-infrastructure/src/k8s-compose-mongodb/certs/mongodb-client.pem",
              "/home/egriffin/fr8r-infrastructure/src/k8s-compose-mongodb/certs/CACert.pem"],
  "bulk": true
})

var sink = file({
  "uri": "file:///tmp/test"
})

t.Source("source", source, "/.*/").Save("sink", sink, "/.*/")

System info:

  • Transporter version: 0.4.0-rc.1
  • OS: Linux 14.04 LTS
  • DB version(s): MongoDB 3.2.13

Reproducible Steps:

  1. exported the MONGODB_URI above
  2. issued: transporter test pipeline_test1.js, then transporter run pipeline_test1.js

What did you expect to happen?

First, I expected the test subcommand to actually tell me it connected to something; as far as I can tell it just spits out a meaningless three-liner indicating that the appropriate Go objects were created, with no actual wire/network connection tested, otherwise it would clearly fail there. Second, I expected to see records appear in the source followed by writes to the sink; since this is the initial connection it should do a bulk copy, followed by updates as they come in. The source is active: I can connect to it as a replica set using the cert/CA pair without issue with the mongo client.

What actually happened?

As I indicated above, the test subcommand gave:

$ transporter test pipeline_test1.js 
Transporter:
 - Source:         source                                   mongodb         .*                            
  - Sink:          sink                                     file            .*

The run command then failed with "no reachable servers". The servers are reachable; I just doubt the function supports a client cert-only connection.

$ transporter run pipeline_test1.js 
INFO[0000] starting with metadata map[]                  name=source path=source type=mongodb
INFO[0000] adaptor Starting...                           name=source path=source type=mongodb
INFO[0000] boot map[source:mongodb sink:file]            ts=1498507611864824840
INFO[0000] adaptor Listening...                          name=sink path="source/sink" type=file
INFO[0005] metrics source records: 0                     path=source ts=1498507616864899017
INFO[0005] metrics source/sink records: 0                path="source/sink" ts=1498507616864905419
INFO[0010] metrics source records: 0                     path=source ts=1498507621864806395
INFO[0010] metrics source/sink records: 0                path="source/sink" ts=1498507621864812472
INFO[0012] adaptor Stopping...                           name=source path=source type=mongodb
INFO[0012] adaptor Stopped                               name=source path=source type=mongodb
INFO[0012] closing connection...                         name=source path=source type=mongodb
INFO[0012] connection closed...                          name=source path=source type=mongodb
INFO[0012] adaptor Stopping...                           name=sink path="source/sink" type=file
INFO[0012] received stop, message buffer is empty, closing... 
INFO[0012] adaptor Listen closed...                      name=sink path="source/sink" type=file
INFO[0012] adaptor Stopped                               name=sink path="source/sink" type=file
INFO[0012] closing connection...                         name=sink path="source/sink" type=file
INFO[0012] connection closed...                          name=sink path="source/sink" type=file
INFO[0012] metrics source records: 0                     path=source ts=1498507624782273576
INFO[0012] metrics source/sink records: 0                path="source/sink" ts=1498507624782275761
INFO[0012] exit map[source:mongodb sink:file]            ts=1498507624782276153
connection error, no reachable servers

doktoroblivion avatar Jun 26 '17 20:06 doktoroblivion

thanks for the very detailed report! I'm going to label this as a feature request, as transporter doesn't support client certificate authentication for MongoDB yet. The current SSL support is only for encrypting the connection.

jipperinbham avatar Jun 27 '17 13:06 jipperinbham

@jipperinbham Thank you for verifying my suspicions.

doktoroblivion avatar Jun 27 '17 18:06 doktoroblivion

+1, facing the same issue; an early resolution would be helpful.

akianalytics avatar Jan 02 '18 10:01 akianalytics

Hi guys, I am facing the same problem, and it is important for me. Is this in the pipeline at all?

a-magdy avatar Mar 02 '18 13:03 a-magdy

@a-magdy I have developed my own Node.js-based solution built on oplog streams and another git package, if you are interested. It applies oplog updates from your old source DB to your new target DB after you have done a dump/restore... let me know. In case you are wondering, our problem was migrating from an old source MongoDB into our newly stood-up Compose-based MongoDB solution. Once we got all our apps/microservices running against the new Compose version, we shut down the oplog app, and done!

doktoroblivion avatar Mar 02 '18 13:03 doktoroblivion

@doktoroblivion thanks for your reply, I would be interested yes.

My use case for such a tool is syncing data from MongoDB to an Elasticsearch container (with tailing); I am not sure it is the same usage you just described.

If I understand correctly, you send the data from MongoDB to Compose once, and it stops after that.

I would definitely be interested in your solution, maybe I can adapt it :)

Also, you said you are using oplog streams; does this mean you only support mongo:3.6? I tried upgrading, but it gave me trouble with my Meteor app and mongo-connector, so I had to revert to 3.4 until 3.6 is more widely supported.

I was checking how mongo-connector works and comparing it to transporter; apparently it is only a matter of parsing the URL and sending the right params. The commit that added SSL support in mongo-connector: https://github.com/mongodb-labs/mongo-connector/commit/0f2a459eab34b40c2b2a0ee315e1a9739c731a09

I traced the issue in the transporter mongodb adaptor for a bit, and it appears to come from this line, which is in the Go package go-mgo.

Specifically, in this function. The library is not maintained anymore, but even the fork of it hasn't added proper SSL support yet (same function).

I think re-implementing mongo-connector's SSL commit, ported from Python to Go, in go-mgo might not be a big job, as there is already a reference.

The solution would be to allow setting the param sslCertificatePolicy (--ssl-certificate-policy) to either optional or ignored.

a-magdy avatar Mar 02 '18 14:03 a-magdy

@a-magdy I've been really busy and have not had time to get back to the above; I will try again later this week or early next.

doktoroblivion avatar Mar 15 '18 19:03 doktoroblivion