world-portfolios
.js data to .csv
Migrate the way data is saved in the JS code to a more readable and simpler format, like .csv
EDIT:
Original idea from: https://x.com/ln_dev7/status/1701336511365496973?s=20
A json file is probably more suitable.
Also, regarding the roadmap of this app, are you also considering adding a backend or a data access layer? @ln-dev7
A json file is probably more suitable.
and why is that?
A json file is probably more suitable.
and why is that?
It's the most adopted format, I believe. Obviously, it's a matter of collective decision. Either way is fine IMHO.
It's the most adopted format I believe
This doesn't answer the question of why the .json format is more suitable, sorry! But yeah, of course it's a matter of collective decision, and this issue is a proposal after all.
To see where this issue originated, please check: https://x.com/ln_dev7/status/1701336511365496973?s=20
More suitable in the sense that most developers tend to use it (by convention) as storage format. Ultimately, it doesn't matter which format is chosen.
Going to add onto this issue as it seems nothing has changed yet: I believe the idea to migrate to a .csv data format would be interesting in the long term when this project gets a lot of contributions. However, like @gabin-dongmo stated, most developers are used to JSON, and it is a lot easier to set up and read/modify if you only want to add a few lines, whereas CSV usually calls for a desktop editor, as it tends to produce very large files with thousands of lines. Just my two cents on this 🤷
A .csv is a text file containing a "table": the columns are defined once at the top, and every following line is one entry. You don't need a "special editor" to open or read it.
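To make that concrete, here is a minimal sketch of reading such a file in JavaScript (a hypothetical helper, not part of the project; it handles no quoting or escaping):

```javascript
// Minimal CSV parser sketch: the header row defines the keys,
// every following line is one entry.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split("\n");
  const keys = header.split(",").map((k) => k.trim()).filter(Boolean);
  return rows.map((row) => {
    const values = row.split(",").map((v) => v.trim());
    return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
  });
}

const csv = "id, name, age, color\n1, sanix, 10, dark\n2, ask, 11, red";
console.log(parseCsv(csv)[0]);
// first entry: { id: "1", name: "sanix", age: "10", color: "dark" }
```

Note that a naive parser like this yields strings only; numeric fields would still need an explicit conversion.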
Yes, in the long run, a JSON file with >1000 entries is much larger than a CSV file with the same entries, and the reason is pretty obvious: in the .json file, you need to specify keys, brackets, commas, quotes... and it needs to be formatted to be human readable.
most developers are used to JSON and it is a lot easier to set up and read/modify
How is this CSV content:
id, name, age, color
1, sanix, 10, dark
2, ask, 11, red
3, doum, 8, yellow
harder to read/understand than:
[
  {
    "id": 1,
    "name": "sanix",
    "age": 10,
    "color": "dark"
  },
  {
    "id": 2,
    "name": "ask",
    "age": 11,
    "color": "red"
  },
  {
    "id": 3,
    "name": "doum",
    "age": 8,
    "color": "yellow"
  }
]
This issue is essentially about performance: even in this small example, compare how many more characters and spaces the .json takes than the .csv for the same amount of data. This means that when the web app runs, it will also take more time to load all of that into memory than it would with the CSV.
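The size difference is easy to verify mechanically; a quick sketch encoding the same three records both ways (illustrative data only, not the real dataset):

```javascript
// Compare the character count of the same records encoded as
// pretty-printed JSON vs. plain CSV.
const people = [
  { id: 1, name: "sanix", age: 10, color: "dark" },
  { id: 2, name: "ask", age: 11, color: "red" },
  { id: 3, name: "doum", age: 8, color: "yellow" },
];

const asJson = JSON.stringify(people, null, 2);

// Build the CSV: one header line, then one line per record.
const keys = Object.keys(people[0]);
const asCsv = [
  keys.join(","),
  ...people.map((p) => keys.map((k) => p[k]).join(",")),
].join("\n");

console.log(`JSON: ${asJson.length} chars, CSV: ${asCsv.length} chars`);
```

The CSV pays for the keys once in the header, while the JSON repeats them in every record, so the gap grows with the number of entries.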
And finally, as I said previously, this is just a proposal, so if everyone is okay with JSON, let's go with that 😎
Most people who edit large CSV files usually resort to a desktop or online editor to make it easier. Some .csv files can get absolutely huge and be very hard to read, especially if the spacing is inconsistent.
While I agree that a .csv file might be more efficient when loading, it poses some pretty substantial downsides in terms of scalability and re-utilization/export of data. Say, for example, another project wants to pull data from world-portfolios: the advantage of JSON is real, since JSON is a widely supported notation format across dozens of programming languages. CSV could be used, but again, JSON is more widely adopted. I think JSON also brings advantages in terms of scalability, as the data structures can get a lot more complex at every level, allowing better future expansion.
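To illustrate the scalability point, a nested record is natural in JSON but has no direct flat-row equivalent in CSV; it would have to be flattened into ad-hoc columns by hand. The entry shape and field names below are invented for the example:

```javascript
// A nested entry: straightforward in JSON, awkward as one CSV row.
const entry = {
  name: "sanix",
  country: "CM",
  links: { github: "https://github.com/sanix", site: null },
  tags: ["frontend", "react"],
};

// Flatten nested objects into dotted keys and join arrays with "|",
// so the record could become one CSV row.
function flatten(obj, prefix = "") {
  return Object.entries(obj).reduce((acc, [k, v]) => {
    const key = prefix ? `${prefix}.${k}` : k;
    if (v !== null && typeof v === "object" && !Array.isArray(v)) {
      Object.assign(acc, flatten(v, key));
    } else {
      acc[key] = Array.isArray(v) ? v.join("|") : v;
    }
    return acc;
  }, {});
}

console.log(Object.keys(flatten(entry)));
// → [ "name", "country", "links.github", "links.site", "tags" ]
```

Every new nesting level forces a new flattening convention in CSV, whereas JSON absorbs it for free.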
However, why not create a GitHub Actions script that takes the existing JSON data (easy to read and modify) and automatically converts it to a .csv, which is then picked up by the deployment flow? I believe this would solve the performance issue while keeping everything as developer-friendly as possible. Here's a related existing action: https://github.com/marketplace/actions/json-to-csv-action
# .github/workflows/csv.yml
name: JSON to CSV
on:
  workflow_dispatch:
jobs:
  run:
    name: Run Action
    runs-on: ubuntu-latest
    steps:
      - uses: austenstone/json-to-csv@main
        with:
          json-artifact-name: ${{ steps.export.outputs.input.json }}
      - uses: actions/download-artifact@v4
        with:
          name: ${{ steps.export.outputs.output.csv }}
It could also possibly be interesting to look into a database system which would allow partial data loading in case we get a lot of inputs and the files get very big. It's probably overkill for now but could be an idea for later.