NDJSON/CSV methods to add and update documents
⚠️ This issue is generated, meaning the naming might differ in this package (e.g. add_documents_json instead of addDocumentsJson). Keep the existing naming conventions of this package to stay idiomatic with the language and this repository.
📣 We strongly recommend doing multiple PRs to solve all the points of this issue
MeiliSearch v0.23.0 introduces two changes:
- new valid formats for pushing data files, in addition to the JSON format: the CSV and NDJSON formats.
- it enforces the `Content-Type` header for every route requiring a payload (`POST` and `PUT` routes).
Here are the expected changes to completely close the issue:
- [ ] Currently, the SDKs always send `Content-Type: application/json` with every request. Only the `POST` and `PUT` requests should send `Content-Type: application/json`, not the `DELETE` and `GET` ones.
- [ ] Add the following methods and 🔥 the associated tests 🔥 to ADD the documents. Depending on the format type (`csv` or `ndjson`), the SDK should send `Content-Type: application/x-ndjson` or `Content-Type: text/csv`:
  - [ ] addDocumentsJson(string docs, string primaryKey)
  - [ ] addDocumentsCsv(string docs, string primaryKey)
  - [ ] addDocumentsCsvInBatches(string docs, int batchSize, string primaryKey)
  - [ ] addDocumentsNdjson(string docs, string primaryKey)
  - [ ] addDocumentsNdjsonInBatches(string docs, int batchSize, string primaryKey)
- [ ] Add the following methods and 🔥 the associated tests 🔥 to UPDATE the documents. Depending on the format type (`csv` or `ndjson`), the SDK should send `Content-Type: application/x-ndjson` or `Content-Type: text/csv`:
  - [ ] updateDocumentsJson(string docs, string primaryKey)
  - [ ] updateDocumentsCsv(string docs, string primaryKey)
  - [ ] updateDocumentsCsvInBatches(string docs, int batchSize, string primaryKey)
  - [ ] updateDocumentsNdjson(string docs, string primaryKey)
  - [ ] updateDocumentsNdjsonInBatches(string docs, int batchSize, string primaryKey)
- `docs` are the documents sent as a raw String.
- `primaryKey` is the primary key of the index.
- `batchSize` is the size of each batch. Example: you can send 2000 documents as a raw String in `docs` and ask for a `batchSize` of 1000, so your documents will be sent to MeiliSearch in two batches.
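To illustrate the `batchSize` semantics above, here is a minimal, hypothetical sketch (not taken from any SDK) of splitting a raw NDJSON string into batches; `split_ndjson_batches` is an illustrative name, not a method this issue requires:

```python
def split_ndjson_batches(docs: str, batch_size: int) -> list[str]:
    """Split a raw NDJSON payload into chunks of at most batch_size documents.

    NDJSON holds one JSON document per line, so batching is a simple
    line-level split.
    """
    lines = [line for line in docs.splitlines() if line.strip()]
    return [
        "\n".join(lines[i:i + batch_size])
        for i in range(0, len(lines), batch_size)
    ]

# 3 documents with a batch size of 2 yield two payloads: 2 docs, then 1.
batches = split_ndjson_batches('{"id":1}\n{"id":2}\n{"id":3}', 2)
```

Each resulting string would then be sent as its own request body, matching the "2000 documents, `batchSize` of 1000, two batches" example.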
Example of PRs:
- in PHP SDK: https://github.com/meilisearch/meilisearch-php/pull/235
- in Python SDK: https://github.com/meilisearch/meilisearch-python/pull/329
Related to: https://github.com/meilisearch/integration-guides/issues/146
If this issue is partially/completely implemented, feel free to let us know.
> Currently, the SDKs always send `Content-Type: application/json` to every request. Only the `POST` and `PUT` requests should send the `Content-Type: application/json`, not the `DELETE` and `GET` ones.
Hi @curquiza! I just created a PR to fix the first subtask of this issue.
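The rule in the quoted subtask boils down to attaching a `Content-Type` header only on the verbs that carry a payload. A hedged sketch of that logic (the helper name is hypothetical, not from any SDK in this thread):

```python
def headers_for(method: str, content_type: str = "application/json") -> dict:
    """Return request headers: only POST and PUT carry a payload,
    so only they get a Content-Type header; GET and DELETE get none."""
    if method.upper() in ("POST", "PUT"):
        return {"Content-Type": content_type}
    return {}
```

The `content_type` parameter lets the same helper serve the later CSV/NDJSON methods by passing `text/csv` or `application/x-ndjson`.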
Hi @curquiza!
If I understood correctly, the handling of the `batchSize` parameter is to be implemented on the client side. In the case of NDJSON this should be trivial, but for CSV it might be a bit more complicated.
A couple of questions regarding this:
- Is a header mandatory for CSV files?
  I assume it is, because otherwise it might not be possible to tell which value belongs to which field.
- If the CSV header is mandatory, is this requirement documented anywhere? Is it checked on the server side?
  I've only found mentions of the new supported formats in the OpenAPI spec.
If the server already makes sure that all CSVs have a header row, then I think the client can use the first line of the input CSV as header, split the rest of the lines according to the batchSize and prepend the header to each batch.
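The approach described above can be sketched as follows. This is an illustrative, hypothetical helper (assuming a header row is present and no fields contain embedded newlines; a real implementation should use a proper CSV parser for quoted multi-line fields):

```python
def split_csv_batches(docs: str, batch_size: int) -> list[str]:
    """Split a raw CSV payload into batches of at most batch_size rows,
    prepending the original header line to each batch."""
    lines = [line for line in docs.splitlines() if line.strip()]
    header, rows = lines[0], lines[1:]
    return [
        "\n".join([header] + rows[i:i + batch_size])
        for i in range(0, len(rows), batch_size)
    ]
```

Because every batch carries the header, each one is a self-contained CSV document that the server can parse independently.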
Hi @theag3nt, I'm sorry I didn't answer you sooner, I missed it.
> Is a header mandatory for CSV files?
Yes, every document should be formatted like a CSV file with a CSV header.
> If the CSV header is mandatory, is this requirement documented anywhere? Is it checked on the server side?
There are no specific requirements, except that the document/file must be in CSV format. An example of a request in curl:

```bash
curl \
  -X POST 'http://localhost:7700/indexes/movies/documents' \
  -H 'Content-Type: text/csv' \
  --data-binary '
"id","label","price:number","colors","description"
"1","hoodie","19.99","purple","Hey, you will rock at summer time."
'
```
> I think the client can use the first line of the input CSV as header, split the rest of the lines according to the batchSize and prepend the header to each batch.
I totally agree with you, it seems to be the best way.
@curquiza please update the issue: I think point 1 has been completed by #227, and points 2.2, 2.3, 2.4 and 2.5 have been completed by #235.