cloudinary_npm
Can only utilize Promises if a `process.on("unhandledRejection")` is present
Right now, if you make a call to cloudinary.v2.uploader.rename, it runs the following CoffeeScript:
if result.error
  deferred.reject(result.error)
else
  deferred.resolve(result)
callback?(result)
The result is that the Cloudinary library performs a deferred rejection AND invokes the callback. So if I use the callback style, cloudinary.v2.uploader.rename(id, id2, options, cb), the deferred.reject causes a process crash, because my application happens to have an unhandledRejection handler present.
Here's an example reproduction case:
var cloudinary = require("cloudinary");
process.on('unhandledRejection', function() {
console.log("crashed!");
});
var id_from = "id_from";
var id_to = "id_to";
var cloud_name = "simpleview";
var api_key = "api_key";
var api_secret = "api_secret";
cloudinary.v2.uploader.rename(
id_from,
id_to,
{
cloud_name : cloud_name,
api_key : api_key,
api_secret : api_secret,
secure : true
},
function(err, result) {
console.log("err", err, result);
}
);
In this case, even though I'm using callback semantics, my application will still crash simply because I have a process.on("unhandledRejection") handler. I believe that if a callback is present, you should not call deferred.reject, since the callback is already handling the error.
Hi @owenallenaz,
The issue seems like it can be avoided by catching the error with a .catch(). Is that a viable option? Would it be possible to share an example where Cloudinary fails in a way that requires an unhandledRejection handler?
Lastly, this is something that I can have our team look into more. Due to the nature of this issue, we can consider it for the next release.
A couple of things. I'm not intending to handle unhandledRejections specifically from Cloudinary. Our application happens to have an unhandledRejection and an uncaughtException handler in its root file to catch unhandled errors, log them, and then crash the process. We're doing that because it's the recommended best practice for error handling. I'm not using it as a mechanism to catch Cloudinary errors specifically, but rather as a way to ensure we still crash the process if we screw something up, so we don't blackhole developer errors.
Because of the bug I pointed out, just the presence of the unhandledRejection will cause a process crash even if I'm using callback semantics with the Cloudinary package. In general from my experience with other libraries, if you use the promise variant, it should follow promise semantics. If you use the callback variant it should follow callback semantics. So if I pass a cb to the function then the signature should be function(err, result) and nothing should hit my unhandledRejection because I've passed a callback. The above is how mongodb works since they utilize a similar pattern where the same method works for both cb and promise variants.
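The callback-xor-promise pattern described above can be sketched as follows. This is a hypothetical implementation for illustration only (the function name rename and its behavior are assumptions, not Cloudinary's or MongoDB's actual code): a promise is returned only when no callback is passed, so a rejection can never escape to an unhandledRejection handler when the caller opted into callback semantics.

```javascript
// Minimal sketch of "callback XOR promise" dispatch (hypothetical, not
// Cloudinary's code): the promise is consumed internally when a callback
// is supplied, and returned to the caller otherwise.
function rename(fromId, toId, options, callback) {
  const work = new Promise((resolve, reject) => {
    // Stand-in for the real async API call.
    setImmediate(() => {
      if (options && options.fail) reject(new Error("rename failed"));
      else resolve({ public_id: toId });
    });
  });

  if (typeof callback === "function") {
    // Callback variant: handle the rejection internally and return nothing,
    // so no unhandled rejection can ever surface.
    work.then(
      (result) => callback(null, result),
      (err) => callback(err)
    );
    return;
  }

  // Promise variant: the caller owns the promise and must attach a .catch().
  return work;
}
```

With this shape, passing a cb gives the function(err, result) signature, and omitting it gives a promise the caller is responsible for.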
As the code is written if I do a .catch() on the rename it gets real awkward because the error will come back to BOTH my catch AND my function. So that means basically adding a .catch(function() {}) for no purpose at all. In this specific case I'm working around it by using the Promise semantics via async/await. It still feels like a bug that should be fixed.
I've come across a case where I don't think a .catch() will do the trick: upload_stream(). I'll be honest, I haven't looked into it very deeply, but I'm guessing you have to use a callback when calling upload_stream since it returns a stream and not a promise. In this case, the unhandled promise rejection can't be avoided. Similar to the OP, I can't have unhandled rejections in my project, so I've worked around this by temporarily writing my data to a file on disk and using the upload() method instead of upload_stream(). I completely agree with @owenallenaz's view about using either callback semantics or promise semantics, but not both :)
@owenallenaz @danbriggs5 I agree we should use callback xor promise. Unfortunately it will be a breaking change. We'll fix it in the next major release.
In the meantime, you can suppress those messages with the following commands:
const Q = require('q');
Q.stopUnhandledRejectionTracking();
This will only affect the Q library. Unless your own application also uses Q (as opposed to native Promises), it should resolve your issue.
Let us know if it worked!
@tocker Thanks for the response! Just tested and this works for me. Not the cleanest solution in the world but definitely nice to have a solution that doesn't require writing files to disk before uploading ;) I appreciate it.
Totally hear ya with regards to it being a breaking change. Look forward to the next release.
So we recently came across this error again, but in a worse way. If you look at the code in the current package's uploader at https://github.com/cloudinary/cloudinary_npm/blob/master/lib/uploader.js#L108 you'll find the following block:
let chunk_start = sent;
sent += buffer.length;
options.content_range = `bytes ${chunk_start}-${sent - 1}/${(is_last ? sent : -1)}`;
params.timestamp = utils.timestamp();
let finished_part = function (result) {
  const errorOrLast = (result.error != null) || is_last;
  if (errorOrLast && typeof callback === "function") {
    callback(result);
  }
  return done(!errorOrLast);
};
let stream = call_api("upload", finished_part, options, function () {
  return [params, {}, buffer];
});
return stream.write(buffer, 'buffer', function () {
  return stream.end();
});
In that case the result of the call_api is not returned or accessible to the developer. When I originally raised this issue we were facing it because we were using the callback variant of upload. When we altered it to use the promise variant, the problem went away. Unfortunately, due to the call_api in upload_chunked, if the upload errors in any way it results in a process crash. The only way we can avoid that crash is the Q hack above, which disables error tracking for ANY npm package that happens to use Q, which isn't really a viable solution in my eyes. There is an entire paragraph in the Node docs dedicated to NOT doing the fix above (blackholing unhandled errors): https://nodejs.org/dist/latest-v12.x/docs/api/process.html#process_warning_using_uncaughtexception_correctly
Even if you don't want to make the breaking change, if you altered the call_api used in that function to use the promise variant, the problem should go away. This bug will surface in upload_chunked, upload_large and upload_chunked_stream, basically resulting in uncatchable errors from all three.
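The uncatchable-error mechanism described above can be reduced to a small sketch (assumed structure for illustration; function names callApiSketch and uploadChunkedSketch are hypothetical, not the library's code): an internal helper both rejects a promise and fires a callback, but its caller discards the returned promise, so no consumer can ever attach a .catch().

```javascript
// Sketch of the bug shape (hypothetical names): the rejected promise is
// returned by the inner helper but discarded by the outer one.
function callApiSketch(onDone) {
  let reject;
  const p = new Promise((res, rej) => { reject = rej; });
  setImmediate(() => {
    const result = { error: new Error("upload failed") };
    reject(result.error); // the promise path rejects...
    onDone(result);       // ...and the callback path fires as well
  });
  return p;               // the promise IS returned here...
}

function uploadChunkedSketch(callback) {
  // ...but discarded here, so the rejection can only surface as an
  // unhandledRejection event -- there is nothing to .catch().
  callApiSketch(callback);
}
```

Running this with a process.on("unhandledRejection") handler shows the event fires even though the callback was also invoked with the error.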
I have nothing meaningful to add to this issue but this is almost two years old and it's still there, in our case on upload_stream, which seems to have no correct workaround. When can we expect a fix?
@Marsup The ETA is the end of the year.
@Marsup to expand on @shirlymanor's comment,
A naive fix of this issue breaks backwards compatibility in certain use cases. A fix is planned as part of a larger effort around this area.
Hi there, we're still encountering this in production - this has crashed our servers on several occasions.
The root cause is the following:
- We're using the public `upload_stream` API.
- Internally, `upload_stream` calls `upload({ stream: true })`.
- When the `stream: true` option is passed down, `upload` returns the stream, and not a promise (`call_api` returns the stream here).
- However, a promise is still rejected, because it is held by the `Q.defer()` object in `call_api`, which is always instantiated, even if a stream will be returned.
- It is impossible to catch this promise, because it's not returned.
Couple of things:
- Why does this internal `upload` method handle both the streaming case and the promise case? These have such completely different semantics that it doesn't make any sense at all to handle them in a single method.
- You claim that you don't want to break backwards compatibility - but the promise in question isn't part of your public API, because it is never returned by `upload_stream`, so how could this possibly be a breaking change? The fact that your internal upload method handles both of these cases is of absolutely no concern to users of this API - your public API correctly separates these cases already!
- If your changes aren't backwards compatible as you claim, why don't you just create a new `upload_stream` method with the correct semantics? You already have a `v2` namespace which we're forced to use, why not create a `v3` for this?
You last gave us an ETA of "end of this year", last year. When can we actually expect a fix for this please? I generally don't like to pressure anyone on GitHub, but this isn't an open source project maintained by individuals in their spare time, we are paying customers. We are strongly considering migrating off Cloudinary because it simply is not possible to use this module correctly currently. (And no, adding a process.on('unhandledRejection') handler is not an acceptable fix - neither is buffering files to disk, the whole point of a streaming API is to avoid that!)
Looking through the source code for this module generally has given me great concerns for the quality of your overall product, I hope that more care has been taken in other parts of your system. This is not a niche concern either, uploading files is your core product, how can this possibly be so badly broken?
I'm separately opening an internal ticket linking to this, if any Cloudinary employees would like to continue this conversation privately, you can reach me on the email in my bio. Thanks.
@owenallenaz @lachenmayer This is not documented yet however the latest node SDK release (1.25.1) provides now an option to disable promises when using the API.
This option can be provided to the Upload API functions as disablePromise: true.
Please let us know if this solves your issues with the unhandledRejection?
Thanks. It works, but the option needs to be disable_promises instead of disablePromise.
Thanks, @artem-sydorenko-backend I will open a request to change the name for this one.
Closing off this issue as disablePromise was renamed to disable_promises some time ago.
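For readers who want to see the shape of such an option, here is a minimal sketch of how an API might honor a disable_promises flag. This is a hypothetical implementation for illustration (the function apiCall and its internals are assumptions, not Cloudinary's actual code): when the flag is set, no promise object is ever constructed, so callback-only callers cannot trigger an unhandledRejection.

```javascript
// Hypothetical sketch of a disable_promises-style option (not Cloudinary's
// implementation): the flag selects pure callback semantics, so no promise
// exists to reject unhandled.
function apiCall(options, callback) {
  const doWork = (done) => {
    // Stand-in for the real network call; always fails in this sketch.
    setImmediate(() => done(new Error("simulated API error"), null));
  };

  if (options && options.disable_promises) {
    // Pure callback semantics: no promise is ever created.
    doWork((err, result) => callback(err, result));
    return;
  }

  // Default: promise semantics; the caller must attach a .catch().
  return new Promise((resolve, reject) => {
    doWork((err, result) => (err ? reject(err) : resolve(result)));
  });
}
```

With the flag set, errors arrive only through the (err, result) callback; without it, the caller owns the returned promise.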
Any way to disable promises on the other APIs? Or we still need a catch after each one?
@jacobweberbowery Looking at the repos for some of our other SDKs, I believe this is only supported in Node at the moment. We do try and maintain feature parity between our SDKs so I would expect this to be implemented sometime in the future for the other SDKs.
@rnamba-cloudinary Sorry, I meant on the other Node APIs, like delete_resources.
Hey @jacobweberbowery.
If you'd prefer not to use a promise, we do support using callbacks instead. For example, here's a delete using delete_resources:
const publicIdsToDelete = ["public ID 1", "public ID 2", "etc"];
cloudinary.v2.api.delete_resources(publicIdsToDelete, (error, result) => {
if (error) {
console.warn('Error deleting resources:', error);
} else {
console.log('Deleted resources:', result);
}
});