@flyingzumwalt addressing your points. I am curating the benchmark (facts) not just for the sake of the maintainers, but also for archivists, institutions, etc. looking to store large datasets using the...
TL;DR: Given the storage-size ballooning due to pinning and the very specific data-saving condition of the chunking, the ipfs toolset is not ready for large-data archival (this statement is...
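To make the chunking condition concrete: deduplication only happens when two adds produce identical blocks, which breaks under the default fixed-size chunker as soon as bytes shift. A minimal sketch, assuming a go-ipfs build where the experimental `--chunker` option is available (file names here are placeholders):

```sh
# Identical files dedup regardless of chunker: same blocks, same root hash.
ipfs add --chunker=size-262144 dataset-v1.tar
ipfs add --chunker=size-262144 dataset-v1-copy.tar   # no new blocks stored

# Prepend a single byte and every fixed-size chunk boundary shifts,
# so v2 shares almost nothing with v1 in the blockstore:
ipfs add --chunker=size-262144 dataset-v2.tar

# Content-defined (rabin) chunking realigns boundaries after the edit
# and recovers most of the sharing between versions:
ipfs add --chunker=rabin dataset-v1.tar
ipfs add --chunker=rabin dataset-v2.tar
```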
To whom it may concern: I have already proposed that the repository should contain build scripts for public datasets being published to ipfs (https://github.com/ipfs/archives/issues/86), so that the archival method can...
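A minimal sketch of what such a build script could look like (the dataset URL, checksum, and paths are hypothetical placeholders):

```sh
#!/bin/sh
set -e

# Fetch the upstream dataset (placeholder URL).
wget -O dataset.tar.gz https://example.org/public-dataset.tar.gz

# Verify against a recorded upstream checksum before publishing.
echo "<expected-sha256>  dataset.tar.gz" | sha256sum -c -

# Unpack and add the tree; the printed root hash is the reproducible
# artifact the archives repo would record next to this script.
mkdir -p dataset && tar -xzf dataset.tar.gz -C dataset
ipfs add -r -Q dataset
```

The point is that anyone can re-run the script and check that the resulting root hash matches the recorded one.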
+1. Static indexes (and any objects computed out of the archives) should be versioned in the merkledag just like the archives themselves. Dependency: file metadata (https://github.com/ipfs/ipfs/issues/36).
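One way to do this today, as a sketch (names and hashes are placeholders, and `ipfs object patch` syntax has varied between go-ipfs versions):

```sh
# Add the archive and a separately computed index.
ROOT=$(ipfs add -r -Q archive/)
INDEX=$(ipfs add -Q index.sqlite)

# Graft the index into the archive's root node, yielding a new root
# that versions the archive and its index together in one merkledag.
NEWROOT=$(ipfs object patch add-link "$ROOT" index.sqlite "$INDEX")
echo "$NEWROOT"
```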
Since `github.com/cdnjs/cdnjs` is a code repo, #82 is re-added via `gx`.
The most effective way to find common ground, I think, is to package datasets that have already been packaged in these various standards and see which choice covers the...
@flyingzumwalt, since you're the captain of this repo, RFCR? Or is this not in the intended direction?
I put up 2 versions of the manifest file. One is the output of `ipfs add` (as shown in the example output in https://github.com/ipfs/notes/issues/205#issue-197357094); the other is in IPLD format inside the...
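For concreteness, the two shapes look roughly like this (hashes and paths are placeholders, and the IPLD layout is only a sketch of the format under discussion, not a settled schema):

```
added QmPlaceholderHash1 dataset/data.csv
added QmPlaceholderHash2 dataset/README.md
added QmPlaceholderHash3 dataset
```

versus, in IPLD, using the `{"/": ...}` merkle-link convention:

```json
{
  "name": "dataset",
  "files": [
    { "path": "data.csv",  "link": { "/": "QmPlaceholderHash1" } },
    { "path": "README.md", "link": { "/": "QmPlaceholderHash2" } }
  ]
}
```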
This PR packages data at a scale about 5 orders of magnitude (10^5×) smaller than data.gov. Delivering data.gov depends on having the manifest/datapackage.json/packfile implemented, which can be done in parallel...
The issues this PR closes basically list which datasets have been published to ipfs. The main concern is the datapackage.json format and the packmanifest format. I don't know how to...
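For reference, a minimal datapackage.json in the Frictionless Data style looks like this (names and paths are placeholders; how to embed the ipfs hashes into it is exactly the open question):

```json
{
  "name": "example-dataset",
  "title": "Example dataset",
  "licenses": [{ "name": "CC0-1.0" }],
  "resources": [
    { "name": "data", "path": "data/data.csv", "format": "csv" }
  ]
}
```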