
HotFix: Autoclosing the window will lead to orphaned windows in the windowMaid

Open • 749 opened this issue 6 years ago • 5 comments

I noticed a small bug while implementing the BrowserWindow pool (#234).

When using the default setting of closing the window automatically, the windowMaid is not notified and still reports that there are active windows.

I used the destroy method to avoid duplicate code.
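To illustrate the problem, here is a sketch only: the registry and function names below are made up and are not electron-pdf's actual internals. The point is that closing the window directly never updates whatever is tracking it, which is what leaves the orphaned entries.

const {BrowserWindow} = require('electron');

// Illustrative only: 'windowRegistry' is a stand-in for the windowMaid's bookkeeping.
const windowRegistry = new Map();

function openTrackedWindow(id) {
    const win = new BrowserWindow({show: false});
    windowRegistry.set(id, win);
    return win;
}

// Correct path: destroy the window AND update the bookkeeping.
function destroyTrackedWindow(id) {
    const win = windowRegistry.get(id);
    if (win && !win.isDestroyed()) win.destroy();
    windowRegistry.delete(id);
}

// Buggy path: the window goes away, but the registry still lists it as active,
// which is what produces the orphaned windows described above.
function autoCloseOnly(id) {
    const win = windowRegistry.get(id);
    if (win) win.close();
}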

749 • Feb 06 '19 17:02

In my export server code I have this comment:

  // If this is not done on nextTick a Segmentation fault can (will) occur.
  // We must prevent the window/buffer from getting GC'd before it is sent in
  // the response https://github.com/electron/electron/issues/1112
  job && process.nextTick(job.destroy.bind(job))

Will have to check the status on that issue and do some testing before I can merge this. In the meantime you can do the cleanup outside of the exportJob if you want.

This is also mentioned here: https://github.com/fraserxu/electron-pdf#using-an-in-memory-buffer
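If you want to do that cleanup in your own code for now, the pattern is roughly this (sketch only: sendResponse is a placeholder for however you return the result, the rest uses the exporter API already shown in this thread):

// Cleanup outside of the exportJob: defer destroy() to the next tick so the
// window/buffer is not GC'd before the result has been sent
// (see https://github.com/electron/electron/issues/1112).
exporter.createJob(input, output, config, {inMemory: true, closeWindow: false})
    .then(job => {
        job.on('job.render.complete', r => {
            sendResponse(r.results[0]); // placeholder for returning the Buffer/path
            process.nextTick(() => job.destroy());
        });
        job.render();
    });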

codecounselor • Feb 06 '19 18:02

This is the code I used in my application:


const {exporter} = require('../../init/exporter');
const LOG = require('../../init/logger');

const jobOptions = {
    /**
     * r.results[] will contain the following, based on inMemory:
     *   false: the fully qualified path to a PDF file on disk
     *   true:  the Buffer object as returned by Electron
     *
     * Note: the default is false; it can not be set using the CLI
     */
    inMemory: false,
    closeWindow: false
};

function Job_Browser_RenderToPdf(input, output, config) {
    return new Promise((resolve, reject) => {
        exporter.createJob(input, output, config, jobOptions)
            .then(job => {
                job.on('job.render.complete', r => {
                    LOG.verbose(r);
                    resolve(output);
                    job.destroy();
                });
                job.render();
            })
            .catch(err => {
                reject(err);
            });
    });
}

With the modification I made in this merge request, I tried sending about 100 parallel requests with closeWindow: true and the job.destroy() commented out. I did not see any segmentation faults; could it have been fixed upstream?

EDIT: Even 500 parallel requests don't cause a segmentation fault. Just FYI, it takes around 26 seconds to complete them (though it is a really basic example HTML):

<!DOCTYPE html>
<html>
<head>
    <style>
        div {
            color: aqua;
        }
    </style>
</head>
<body>
<h1>Hello Electron-PDF World!</h1>
<p>Brought to you by 749</p>
<div>Just some colored Text</div>
<pre>
    <script>
        document.write("From JS: Hello!");
    </script>
</pre>
</body>
</html>
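For context, a parallel test of this sort can be sketched roughly as follows, assuming the Job_Browser_RenderToPdf helper from the earlier comment; the input HTML, output paths, and empty config are placeholders, and Promise.allSettled needs a reasonably recent Node:

// Rough sketch of a parallel load test using the helper defined above.
async function loadTest(count) {
    const started = Date.now();
    const jobs = [];
    for (let i = 0; i < count; i++) {
        jobs.push(Job_Browser_RenderToPdf('test.html', `/tmp/out-${i}.pdf`, {}));
    }
    const results = await Promise.allSettled(jobs);
    const failed = results.filter(r => r.status === 'rejected').length;
    LOG.verbose(`${count} requests in ${Date.now() - started} ms, ${failed} failed`);
}

loadTest(100);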

749 • Feb 06 '19 20:02

Can you test it with inMemory: true? I'm pretty sure I only saw the error after that feature was added and enabled.

codecounselor • Feb 06 '19 21:02

Yeah, that does make sense: enabling both closeWindow and inMemory at the same time would cause a segmentation fault, because the buffer references a memory area that has already been released back to the kernel/OS. If we then try to access that memory, the kernel/OS will stop the application. I'll rewrite my app to use the buffer instead and comment again once I have results.

749 • Feb 06 '19 21:02

Hmm, with my app I don't get any segmentation fault even with this.destroy() inside the render function and both closeWindow and inMemory enabled. However, it could still happen, so perhaps it would be better to throw an error if inMemory and closeWindow are both activated. (About 550 requests seems to be the limit for my machine; any more and some requests start not completing.) Do you still have the test code you used to generate the segmentation fault?
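Something like this guard is what I mean (sketch only: the helper is hypothetical and not part of electron-pdf, but the option names match the ones discussed above):

// Hypothetical guard: reject the combination that can lead to a use-after-free
// on the in-memory buffer. Not part of electron-pdf today.
function validateJobOptions(options = {}) {
    if (options.inMemory && options.closeWindow) {
        throw new Error(
            'inMemory and closeWindow cannot be enabled together: the returned ' +
            'Buffer may reference memory that is freed when the window closes');
    }
    return options;
}

// Usage sketch:
// exporter.createJob(input, output, config, validateJobOptions(jobOptions));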

749 • Feb 06 '19 21:02