node-csv
Delimiter Discovery
Summary
The library should detect common delimiters such as `,` or `;`.
Motivation
Many international users work with commas or semicolons on a daily basis, depending on who generated their CSV. Currently we need to manually parse the first line to see which delimiter is used.
Alternative
```js
let delimiter = csvString.match(/[,;]/)?.[0];
let data = csv.parse(csvString, {delimiter});
```
Draft
```js
let data = csv.parse(csvString, {
  detect_delimiters: [',', ';'],
});
```
Additional context
Since it may be difficult to detect "just any" delimiter, the developer can pass an array of common delimiters which they expect.
Hey @cawoodm, is anyone working on this one? I can take it.
While I am not against the idea, I can't say that I fully support it either. However, if you come up with a clean `delimiter_auto` option, I'll probably merge it.
@wdavidw I looked on GitHub to see what other people were doing.
The node-csv-string `detect` function basically looks for the first occurrence of one of the delimiters, which I guess is fine for most cases.
Another, more advanced implementation is the detect-csv `determineMost` function, which looks at a sample and returns the delimiter with the highest occurrence count.
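For illustration, a minimal frequency-based detector in the spirit of `determineMost` (a sketch, not the actual detect-csv code; the function name and candidate list are made up) might look like:

```javascript
// Sketch of a "most frequent wins" detector, similar in spirit to
// detect-csv's determineMost. Name and candidates are illustrative.
function detectMostFrequent(sample, candidates = [',', ';', '\t', '|']) {
  let best = candidates[0];
  let bestCount = -1;
  for (const candidate of candidates) {
    // Count occurrences of the candidate in the sample.
    const count = sample.split(candidate).length - 1;
    if (count > bestCount) {
      best = candidate;
      bestCount = count;
    }
  }
  return best;
}
```

The first-occurrence approach of node-csv-string would instead return the first candidate found in the sample, which is cheaper but more easily fooled by a stray delimiter inside a quoted field.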
What do you think ?
I would tend to discover the character, like in the second method, after filtering out any character already used in options (e.g. quotes, row delimiters, ...) and general alphanumeric characters (`[a-zA-Z0-9]`, including accented characters).
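As an illustration of that filtering idea (a hypothetical sketch, not node-csv's implementation; the function name and option shape are assumed), one could count only characters that are neither letters nor digits nor already claimed by other options:

```javascript
// Hypothetical sketch: pick the most frequent character in the first
// line after excluding letters (accented ones included), digits, and
// characters already used by other options such as the quote and
// record delimiter characters. Names are illustrative.
function discoverDelimiter(firstLine, options = {}) {
  const { quote = '"', recordDelimiters = ['\n', '\r'] } = options;
  const excluded = new Set([quote, ...recordDelimiters]);
  const counts = new Map();
  for (const ch of firstLine) {
    if (excluded.has(ch)) continue;
    // \p{L} matches any letter (accented included), \p{N} any digit.
    if (/[\p{L}\p{N}]/u.test(ch)) continue;
    counts.set(ch, (counts.get(ch) || 0) + 1);
  }
  let best = null;
  let bestCount = 0;
  for (const [ch, count] of counts) {
    if (count > bestCount) {
      best = ch;
      bestCount = count;
    }
  }
  return best;
}
```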
@wdavidw
I created a small proof of concept for the auto_delimiter option.
https://github.com/adaltas/node-csv/compare/master...hadyrashwan:node-csv:patch-1
When running tests, the below happens, not sure why:
- All tests pass when I run the test script, except the encoding with BOM option.
- When I run the encoding tests (`packages/csv-parse/test/option.encoding.coffee`) on their own, they work.
- I added a small test for `\t` based on the delimiter tests to see how the logic runs; it did detect the delimiter successfully, however it did not pass the test.
Question:
- We are committing the `dist` files, is this expected?
Missing parts:
- Handling of characters coming from the escape, quote, and record delimiter options.
- Add more tests.
- Add a reference in the TS definitions.
- Add a new page about `auto_delimiter` in the docs.
Appreciate your feedback :)
I'll take some time to review later. In the meantime, what do you mean by "We are committing the dist files"?
A few notes for now:
- `delimiter_auto` and not `auto_delimiter`, `__discoverDelimiterAuto` and not `__autoDiscoverDelimiter`
- Disabled by default, default value is `false`
- When normalizing the option, add consistency checks, for example it cannot equal the values of `record_delimiter` (all those rules require tests)
- Don't convert to string, you shall compare values directly inside the buffer
- Write more unit tests, in particular one which writes data one byte at a time (see https://github.com/adaltas/node-csv/blob/master/packages/csv-parse/test/api.stream.events.coffee#L53 as an example)
- My strategy would be to discover the delimiter before any parsing is done; here is how I would start my experiment:
  - Work around `__needMoreData`: if `delimiter_auto` is activated, and only in the first line, you shall allocate a safe buffer size dedicated to discovery
  - Start discovery (your `__autoDiscoverDelimiter` function) just after BOM handling and before actual parsing (https://github.com/adaltas/node-csv/blob/master/packages/csv-parse/lib/api/index.js#L109)
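To make the one-byte-at-a-time concern concrete, here is a hypothetical incremental detector (a sketch, not node-csv code; the class and method names are made up) that accumulates counts across writes so it stays correct no matter how the input is chunked:

```javascript
// Hypothetical incremental discovery: bytes may arrive one at a time,
// so candidate counts are accumulated across writes and the decision
// is made when the first newline byte is seen. Names are illustrative.
class IncrementalDiscovery {
  constructor(candidates = [',', ';']) {
    this.candidates = candidates;
    this.counts = new Map();
    this.delimiter = null;
  }
  write(byte) {
    if (this.delimiter !== null) return; // already decided
    const ch = String.fromCharCode(byte);
    if (ch === '\n') {
      // End of first line: pick the most frequent candidate.
      let best = this.candidates[0];
      let bestCount = -1;
      for (const candidate of this.candidates) {
        const count = this.counts.get(candidate) || 0;
        if (count > bestCount) {
          best = candidate;
          bestCount = count;
        }
      }
      this.delimiter = best;
    } else if (this.candidates.includes(ch)) {
      this.counts.set(ch, (this.counts.get(ch) || 0) + 1);
    }
  }
}
```

Feeding `'a;b;c\n'` one byte at a time yields the same decision as feeding it in a single chunk, which is the property the byte-at-a-time unit test would pin down.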
> I'll take some time to review later. In the meantime, what do you mean by "We are committing the dist files"?
When I'm working, I always see the build files in the dist folders added to git and not ignored.
Some projects add those build files to their .gitignore.
I just want to make sure that I'm not adding those files by mistake.
A couple of comments on the method of detecting delimiters:
- We cannot safely assume that the most common character is THE CSV delimiter. The CSV delimiter is the character that consistently splits each row into the same number of columns.
- CSVs can safely store strings that contain the delimiter, so parsing has to be a little more intelligent (either considering quotes or allowing for a small degree of inconsistency in column count per row).
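A sketch of that consistency rule (assumed names, not a library API; it ignores quoting for brevity) could be:

```javascript
// Sketch: prefer the candidate that splits every sampled row into the
// same number of columns, with more than one column per row. This is
// illustrative only and does not handle quoted fields.
function detectByConsistency(sample, candidates = [',', ';', '\t', '|']) {
  const rows = sample.split(/\r?\n/).filter((row) => row.length > 0);
  for (const candidate of candidates) {
    const columnCounts = rows.map((row) => row.split(candidate).length);
    const consistent = columnCounts.every((n) => n === columnCounts[0]);
    if (consistent && columnCounts[0] > 1) return candidate;
  }
  return null; // no candidate splits the rows consistently
}
```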
Python has a great example of handling these in its implementation. The pandas library uses this implementation but only reads the first line of the file (here).
Are there any plans to open a PR for that? As far as I can see the current changes are only present on a branch.
I would definitely love to see that feature.
Here's another algorithm for detecting the delimiter that seems like a good idea: https://stackoverflow.com/a/19070276/2180570
I didn't have the time yet. The solution needs to deal with the streaming nature of the parser: extract a limited number of bytes from the stream, apply a detection algorithm such as the one proposed above, then replay the bytes stored on the side with the detected delimiter. I need some time to do it correctly.