
Error: all job servers fail

Open kvelaro opened this issue 5 years ago • 1 comment

Related to #12. I have read that issue and did as you advised: I catch the error and try not to exit the worker, but it exits anyway. In my code I have:

// Requires needed for this snippet (not shown in the original excerpt):
const path = require('path');
const { spawn } = require('child_process');

let childProcess;
if (App.test === false) {
    childProcess = spawn(nodejsBin, [App.methodsDir + path.sep + file, 'standalone']);
} else {
    childProcess = spawn(nodejsBin, [App.methodsDir + path.sep + file, 'standalone', 'test']);
}
childProcess.stdout.on('data', (data) => {
    console.log(`App.js stdout: ${data}`);
});
childProcess.stderr.on('data', (data) => {
    console.log(`App.js stderr: ${data}`);
});
childProcess.on('close', (code) => {
    console.log(`App.js close: Child process exited with code ${code}`);
});
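
Aside: if the goal is simply to keep a worker alive, the parent process could also respawn the child when it exits unexpectedly. This is only a sketch of that idea, not anything GearmaNode provides; startWorker and the one-second restart delay are made up for illustration, while nodejsBin, App and file come from the snippet above.

const path = require('path');
const { spawn } = require('child_process');

function startWorker(file) {
    // Same arguments as above: 'test' is appended only in test mode.
    const args = [App.methodsDir + path.sep + file, 'standalone'];
    if (App.test) {
        args.push('test');
    }
    const child = spawn(nodejsBin, args);
    child.stdout.on('data', (data) => console.log(`App.js stdout: ${data}`));
    child.stderr.on('data', (data) => console.log(`App.js stderr: ${data}`));
    child.on('close', (code) => {
        console.log(`App.js close: Child process exited with code ${code}`);
        if (code !== 0) {
            // Respawn after a short delay instead of leaving the jobs unattended.
            setTimeout(() => startWorker(file), 1000);
        }
    });
}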

...

let client = gearmanode.client();
client.submitJob(nextMethod, JSON.stringify(params));
fn();
client.on('error', function() {
    fn(new Error("rrrrrr"));
});
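
For reference, here is a minimal sketch of how this call site could avoid both the premature fn() call and an unhandled 'error' event: attach the 'error' listener before submitJob, and call back exactly once from the job's 'complete' handler. nextMethod, params and fn are taken from the snippet above; the client and job events used are the ones shown in the package's own example further below.

const gearmanode = require('gearmanode');

const client = gearmanode.client();

let finished = false;
function finish(err) {
    // Close the socket and call back exactly once, whichever event fires first.
    if (finished) return;
    finished = true;
    client.close();
    fn(err);
}

// Register the error handler before submitting, so a connection failure
// cannot surface as an unhandled 'error' event.
client.on('error', finish);

const job = client.submitJob(nextMethod, JSON.stringify(params));
job.on('complete', function () {
    finish();
});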

But in the worker output I see:

App.js stdout: Error: rrrrrr
    at exports.Client.<anonymous> (/var/www/html/protected/workers/js/methods/wsautocalc.js:236:24)
    at emitOne (events.js:116:13)
    at exports.Client.emit (events.js:211:7)
    at exports.Client.Client._unrecoverableError (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/client.js:283:10)
    at exports.Client.Client._getJobServer (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/client.js:296:14)
    at tryToSend (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/client.js:163:26)
    at jsSendCallback (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/client.js:153:13)
    at connectCb (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/job-server.js:266:49)
    at Socket.<anonymous> (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/job-server.js:125:13)
    at emitOne (events.js:116:13)

App.js stdout: {}

App.js stderr: debug: packet encoded, type=WORK_COMPLETE, buffer.size=50

App.js stdout: verbose: packet sent, type=WORK_COMPLETE, len=50

App.js stderr: debug: packet encoded, type=PRE_SLEEP, buffer.size=12

App.js stdout: verbose: packet sent, type=PRE_SLEEP, len=12

App.js stderr: events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: all job servers fail
    at tryToSend (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/client.js:165:35)
    at jsSendCallback (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/client.js:153:13)
    at connectCb (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/job-server.js:266:49)
    at Socket.<anonymous> (/var/www/html/protected/workers/js/node_modules/gearmanode/lib/gearmanode/job-server.js:125:13)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at emitErrorNT (internal/streams/destroy.js:64:8)
    at _combinedTickCallback (internal/process/next_tick.js:138:11)
    at process._tickCallback (internal/process/next_tick.js:180:9)

Also I want to mention: I have 700 jobs to run. On the first run all jobs complete correctly and no worker exits (throws an error). If I rerun the tasks (after they have all finished, and without restarting the server/service), then at roughly the point where 400 tasks are left, more than 5 workers exit. This strange situation happens every time, always on the second run. Could you advise me on something?

kvelaro · Jul 12 '19 06:07

Stupid me, it was a typical resource leak. I ignored the example provided by this package, which closes the socket after the task is done:

job.on('complete', function() {
    console.log('RESULT: ' + job.response);
    client.close();
});

So I guess Gearman has a socket limit of 1000. On the first run I opened 700 sockets, and on the second run nearly 300 more. Once that limit was reached, I got the error above.
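
Closing the client after every job, as in the example above, is one fix. Another option (just a sketch of the pattern, assuming all the jobs can share one connection; submitAll, jobs and done are hypothetical names) is to reuse a single client for the whole batch, so only one socket is held open and closed once at the end:

const gearmanode = require('gearmanode');

function submitAll(jobs, done) {
    const client = gearmanode.client();
    client.on('error', function (err) {
        console.error('client error:', err);
    });

    let remaining = jobs.length;
    jobs.forEach(function (item) {
        const job = client.submitJob(item.name, JSON.stringify(item.params));
        job.on('complete', function () {
            remaining -= 1;
            if (remaining === 0) {
                // One socket for the whole batch, closed once at the end.
                client.close();
                done();
            }
        });
    });
}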

I think the error message

all job servers fail

is misleading in this case.

But to say it again, it was a resource leak on my side, sorry.

kvelaro · Jul 16 '19 06:07