Martin Heidegger
It should be part of the `dependencies` in `package.json`; using `isomorphic-fetch` without declaring it as a dependency is not enough.
A polyfill needs to be available anyway (since not all browsers implement `fetch`), and you could make a separate "no-fetch.js" entry point that can be deep-linked using `require('backlog-js/no-fetch')`.
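A rough sketch of that two-entry-point pattern (the module layout and function names here are hypothetical, not backlog-js's actual API):

```javascript
// no-fetch.js — the core API takes a fetch implementation explicitly,
// so no polyfill gets bundled for consumers that already have one.
function createClient (fetchImpl) {
  return {
    // builds the request URL and delegates to the supplied fetch
    // (endpoint is a placeholder, not a real backlog-js route)
    getIssue (id) {
      return fetchImpl('https://example.backlog.jp/api/v2/issues/' + id)
    }
  }
}

// index.js — the default entry would wire in the polyfill for convenience:
//   require('isomorphic-fetch') // installs a global fetch
//   module.exports = createClient(fetch)

module.exports = { createClient }
```

Consumers in modern browsers could then `require('backlog-js/no-fetch')` and skip the polyfill bytes, while `isomorphic-fetch` moves from an implicit assumption into the declared `dependencies` of `package.json`.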
I was looking through the code in dat-desktop. At every operation it called `encode` as a way to validate whether the API parameter was correct (even though it was usually...
Reference to the meeting notes: https://github.com/datprotocol/working-group/blob/master/meeting-notes/24-08May2019.md#meeting-notes

Take-aways:
1. URL compatible
2. Needs to work for a single-core
This is now theory land: another way to go about this would be more "low-level". Every DAT consists of 2 hypercores: metadata & content. What if it were to contain...
@pfrazee It is similar, but on a lower level. Hyperdrive → Hypercore. Also it uses hashes instead of direct file lookups for deletions.
Codename for this thing: `hyperfs`. If you build hyperfs, the way to look up files will be vastly different from how it currently works in hyperdrive (pre multi-writer). You need to...
@pfrazee That issue persists with both a hyperdrive and a hypercore solution; pulling it one abstraction level higher doesn't make it any quicker to adopt. @mitra42 I am seconding @pfrazee...
@pfrazee I don't think we can come up with a solution that doesn't require the entire ecosystem to update. I believe a case could be made that the next update...
Planned doesn't mean done ;)