
Node.js error when creating a new exchange

Removed-5an opened this issue Apr 22 '15 · 12 comments

When I create a new exchange and then publish a message to a topic, I receive the following error message:

Unhandled rejection RqlRuntimeError: Cannot perform read: Primary replica for shard ["", +inf) not available (server is not ready)

// `pubsub` is the example-pubsub client library; `config` holds the
// RethinkDB connection settings (host/port).
var exchange = new pubsub.Exchange('test', {db: 'pubsub', host: config.host, port: config.port});

var topic = exchange.topic('test');

// Queue whose filter matches topics against the pattern 'test*'
var queue = exchange.queue(function(topic) {
    return topic.match('test*');
});

queue.subscribe(function(topic, payload) {
    console.log('I got the topic:', topic);
    console.log('With the message:', payload);
});

topic.publish({msg: 'hello'});

What am I doing wrong?

Removed-5an · Apr 22 '15

You aren't doing anything wrong; it looks like the pubsub code isn't using table().wait() to ensure the table is ready before sending messages. If you want to work around this, try waiting a moment after creating the exchange before sending the first message. I'll fix this bug when I get a moment.
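
As a rough sketch of that workaround, you could wait on the table yourself with the plain rethinkdb driver before publishing (reusing the topic and config from your snippet above; the table name 'pubsub_test' is just an assumption, the exchange may use a different one):

var r = require('rethinkdb');

r.connect({host: config.host, port: config.port}).then(function(conn) {
    // wait() doesn't return until the table's primary replica is ready
    return r.db('pubsub').table('pubsub_test').wait().run(conn)
        .then(function() { return conn.close(); });
}).then(function() {
    // Now it should be safe to publish
    topic.publish({msg: 'hello'});
});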

deontologician · Apr 23 '15

Thanks, I tried using setTimeout() with a 10-second delay, but that didn't help. Looking forward to that fix.

Removed-5an · Apr 24 '15

OK, so I checked it out on my machine, and I think the real reason is that the upper bound on the rethinkdb driver version was too strict. It was most likely forcing you to install an old driver (locked to 1.13) against a newer version of the server. I've removed the upper bound, so check your server version (rethinkdb --version at the command line) and install the matching driver version.

deontologician · Apr 27 '15

Unfortunately I don't think that's it.

Node.js module: 2.0.0
RethinkDB server: rethinkdb 2.0.1~0trusty (GCC 4.8.2)

Removed-5an · Apr 27 '15

OK, a second possibility is that you created the table sharded over two servers, and one of those servers is no longer connected. Could that be the case?

deontologician · Apr 27 '15

I don't think so; I have a very simple single-server setup. I think your initial idea made sense. How come the creation of the table doesn't take a callback?

Removed-5an · Apr 27 '15

OK, I think I know the issue. I wrote this when I was a bit new to promises, and it doesn't really make sense as written; I need to rewrite it a bit. Exchange.queue needs to return a promise rather than a queue directly. Sorry about the confusion.
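
Roughly, what I have in mind is something like this (just a sketch of the intended API, not what the module does today):

// Hypothetical promise-based API: queue() resolves only once the
// underlying table exists and is ready.
exchange.queue(function(topic) { return topic.match('test*'); })
    .then(function(queue) {
        queue.subscribe(function(topic, payload) {
            console.log('Got', topic, payload);
        });
        // Publishing after the queue is ready avoids the "not ready" error
        return topic.publish({msg: 'hello'});
    })
    .catch(function(err) {
        console.error('pubsub setup failed:', err);
    });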

deontologician · Apr 27 '15

No problem, promises still confuse me as well, haha. I was planning to look into it myself, but was a bit confused about what you did with that promise there.

Also, when you are planning to do the rewrite, may I suggest a small addition: could you use an async queue (https://github.com/caolan/async#queue) to solve the following issue you describe in your code?

// If the topic doesn't exist yet, insert a new document. Note:
// it's possible someone else could have inserted it in the
// meantime and this would create a duplicate. That's a risk we
// take here. The consequence is that duplicated messages may
// be sent to the consumer.

There needs to be certainty that messages are broadcast to all subscribers in order and only once. You could use the callback that fires when a write or update finishes to request a new item from the async queue. That way it would be impossible to send a message twice or create duplicate entries.
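
For illustration, if the topic string were the table's primary key, an atomic upsert would avoid the duplicate insert entirely. Just a sketch with the plain rethinkdb driver; the table and field names are made up and may not match the current schema:

var r = require('rethinkdb');

// Hypothetical 'topics' table keyed by the topic string. With
// conflict: 'update' the insert becomes an atomic upsert, so two
// concurrent publishers can't create duplicate topic documents.
function upsertTopic(conn, topicName, payload) {
    return r.db('pubsub').table('topics')
        .insert({id: topicName, payload: payload, updated_on: r.now()},
                {conflict: 'update'})
        .run(conn);
}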

Removed-5an · Apr 27 '15

If you have just one Node process talking to the database, this isn't an issue. The problem is multiple producers, which an async queue wouldn't solve (except in the case of worker processes, which don't make much sense here since this is a CPU-light, I/O-heavy task).

deontologician · Apr 27 '15

Right now we don't have "once and only once" semantics in RethinkDB changefeeds. So you may be able to de-dupe in the Node process if you get multiple messages for a given event, but right now changefeeds aren't good for job queues and the like, where you need to ensure a job is only given to one worker at a time. They're better for notification services and similar uses where duplicate notifications aren't a huge problem.
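
If duplicates do slip through, de-duping on the consumer side can be fairly simple, assuming each payload carries a unique id (which the example above doesn't add, so this is hypothetical):

// Hypothetical consumer-side de-dupe keyed on a unique payload.id
var seen = {};

queue.subscribe(function(topic, payload) {
    if (seen[payload.id]) {
        return; // duplicate delivery from the changefeed; ignore it
    }
    seen[payload.id] = true;
    console.log('Handling', topic, payload);
});

In practice you'd also want to expire old ids so the seen map doesn't grow without bound.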

deontologician · Apr 27 '15

You're right about that, my bad :). I guess there is indeed no way to avoid that, at least none I can think of right now. I was planning on using this for multiple Node processes, with all of them publishing as well as subscribing. That might get a bit tricky if the "queue" isn't reliable. If I want something like that I guess I need to fall back on Redis, which is a shame because I'd need another server and more dependencies just for that. Hmm, come to think of it, I might work around this by creating enough channels that each channel only has one publisher...

Removed-5an · Apr 27 '15

Just as a final note: the issue here (which truly is a bug in the library) only shows up because you are producing and consuming from the same process. If you have a producer in one process and a consumer in another process, with their own connections, this problem won't happen.
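
Concretely, something like this split should avoid it (same Exchange setup as your snippet above, each process opening its own connection; just a sketch):

// producer.js -- only publishes
var exchange = new pubsub.Exchange('test', {db: 'pubsub', host: config.host, port: config.port});
exchange.topic('test').publish({msg: 'hello'});

// consumer.js -- runs in a separate process with its own connection
var exchange = new pubsub.Exchange('test', {db: 'pubsub', host: config.host, port: config.port});
var queue = exchange.queue(function(topic) { return topic.match('test*'); });
queue.subscribe(function(topic, payload) {
    console.log('I got the topic:', topic);
    console.log('With the message:', payload);
});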

deontologician · Apr 27 '15