Query, Paging, and unexpected results
I have a very small database right now (about 5,000 records), with indexes on the two fields I search on, 'artist' and 'title'. I have the following method for fetching a page of data; at minimum I pass a skip, limit, and sort. Basic paging works with no issues at all, and I receive data on every single request. The issue comes in when I 'search'. I have an extremely basic RegExp set up (partial match on either the artist or title), yet 599 times out of 600 I get no docs back from my final exec(). I always get my count() (currently done outside the Cursor because I don't know how to get both the count and the docs from the Cursor), but not my docs. Here is my method:
getAll(options = {}) {
  return new Promise((resolve, reject) => {
    options = options || {}; // in case 'null' or 'undefined' was passed
    let query = {};
    if (options.search) {
      const reg = new RegExp(options.search.trim(), 'i');
      query = {
        $or: [
          { title: { $regex: reg } },
          { artist: { $regex: reg } }
        ]
      };
    }
    let result = {};
    this.db.count(query, (err, count) => {
      result.total_rows = count;
      let req = this.db.find(query);
      if (options.sort) {
        const fields = {};
        options.sort.forEach(field => {
          fields[field.id] = field.desc ? -1 : 1;
        });
        req = req.sort(fields);
      }
      if (options.skip !== null && options.skip !== undefined) {
        req = req.skip(options.skip);
      }
      if (options.limit) {
        req = req.limit(options.limit);
      }
      req.exec((err, docs) => {
        if (err) {
          reject(err);
          return;
        }
        result.rows = docs;
        resolve(result);
      });
    });
  });
}
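For context, a typical call looks something like this (the values, and the object I call it on, are purely illustrative):

songs.getAll({
  search: 'love',
  sort: [{ id: 'artist', desc: false }],
  skip: 20,
  limit: 10
}).then(result => {
  console.log(result.total_rows, result.rows.length);
}).catch(err => console.error(err));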
Further information:
- I am using a db file, and use autoload: true when instantiating it.
- I do set up indexes on title and artist when I load the db.
- The title and artist fields are between 10 and 75 characters each, in each record.
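For completeness, this is roughly how the datastore is created and indexed (the file path is just a placeholder):

const Datastore = require('nedb');

// db file with autoload, as mentioned above
const db = new Datastore({ filename: 'path/to/data.db', autoload: true });

// indexes on the two fields I search on
db.ensureIndex({ fieldName: 'title' }, err => { if (err) console.error(err); });
db.ensureIndex({ fieldName: 'artist' }, err => { if (err) console.error(err); });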
It appears that the exec() callback is returning a reference to docs rather than the final resolved docs. If I add a timeout inside my exec(), prior to resolving my Promise, I get data 60% of the time when searching.
req.exec((errors, docs) => {
  if (errors) {
    reject(errors);
    return;
  }
  setTimeout(() => {
    result.rows = docs;
    resolve(result);
  }, 50);
});
The length of the timeout does make a difference. The more data being filtered and sorted, the longer the timeout needs to be. Anything lower than 10ms gave me nothing; 50ms or higher gave me the best success rate. The ideal solution is for docs to be fully resolved once the callback function is called.
I've tried to look at the NeDB code to see what's going on. I don't have any answers yet, other than it appears to be something with the tasks in the async.queue in the Executor calling the Cursor._exec() callbacks, but exactly where or how I'm not sure.
I think the problem is in your resolve function.
I normally do
exec((error, docs) => {
  if (!error && docs) {
    for (index in docs) {
      // do something with docs[index]
    }
  }
});
And it always has the documents ready when I process them like this.
Even if the only thing I am doing is results.push(docs[index]).
That is because, as far as I know, the for loop is partially blocking (it won't exit the current context), whereas an if and a setTimeout will, so you may be referencing an object that the interpreter considers no longer in use and therefore doesn't wait for.
Even making it result = docs and then passing result to the timeout may have a different effect.
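Applied to your exec callback, I mean something roughly like this (just a sketch, not tested against your setup):

req.exec((errors, docs) => {
  if (errors) {
    reject(errors);
    return;
  }
  // copy the docs into a fresh array before resolving,
  // instead of handing the docs reference straight to the Promise
  const rows = [];
  for (const index in docs) {
    rows.push(docs[index]);
  }
  result.rows = rows;
  resolve(result);
});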
Could you try it?