ldeffenb
IIRC, uploads already stage the chunks locally and push them to the swarm without holding up the original requester by default. There is a new upload parameter to deliver the...
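To illustrate what I mean, here's a rough sketch (assuming a local node's API on port 1633, a placeholder postage batch ID, and the swarm-deferred-upload header; check your Bee version's API docs for the exact parameter names):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

// Minimal sketch: upload a file through a local Bee node's /bzz endpoint.
// With deferred uploads (assumed to be the default), the node stores chunks
// locally first and pushes them to the network in the background, so the
// HTTP request returns without waiting for network delivery.
func main() {
	data, err := os.ReadFile("example.txt") // hypothetical input file
	if err != nil {
		panic(err)
	}

	req, err := http.NewRequest(http.MethodPost, "http://localhost:1633/bzz", bytes.NewReader(data))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "text/plain")
	req.Header.Set("Swarm-Postage-Batch-Id", "<your-batch-id>") // placeholder
	req.Header.Set("Swarm-Deferred-Upload", "true")             // stage locally, push later

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```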
Is the node you are uploading through running on an SSD and a reasonably fast processor? Bee does need to split the file into chunks, store those chunks, build a...
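To give a feel for the amount of work involved, here's a simplified sketch of just the splitting step (the real node also hashes every chunk and builds intermediate chunks on top of them, which is where much of the CPU goes):

```go
package main

import (
	"fmt"
	"os"
)

// Rough illustration: split a file into 4 KiB data chunks, the way Bee does
// before hashing each chunk and linking them into a tree. The hashing and
// tree construction are omitted here.
const chunkSize = 4096

func main() {
	data, err := os.ReadFile("example.bin") // hypothetical input file
	if err != nil {
		panic(err)
	}

	var chunks [][]byte
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		chunks = append(chunks, data[off:end])
	}
	fmt.Printf("%d bytes -> %d chunks to hash, store, and link\n", len(data), len(chunks))
}
```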
I suspect you'll find the response better with a node using an SSD. Notice your disk utilization numbers (72-90%). When you take into account that swarm is likely doing computation...
I would guess that it may be due to #3037. I have proven that locally pinned files, either via the upload or pinning APIs, can have chunks unpinned when the...
You could just use 127.0.0.1 instead of localhost?
But any change should work with all of http://localhost:1633 and http://127.0.0.1:1633 and http://[::1]:1633, and not leave any particular use case in the dust.
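Something along these lines is what I'd hope for (just a sketch, not tied to any particular codebase): treat every spelling of the loopback address the same instead of string-matching one of them.

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// isLoopbackURL reports whether the URL's host is the local machine,
// whether it is written as localhost, 127.0.0.1, or [::1].
func isLoopbackURL(raw string) (bool, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return false, err
	}
	host := u.Hostname() // strips the brackets from an IPv6 literal
	if host == "localhost" {
		return true, nil
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback(), nil
}

func main() {
	for _, raw := range []string{
		"http://localhost:1633",
		"http://127.0.0.1:1633",
		"http://[::1]:1633",
	} {
		ok, err := isLoopbackURL(raw)
		fmt.Println(raw, ok, err)
	}
}
```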
My localstore is 434.8GB, 400.1GB of which is in sharky, and the statestore is 69.2MB. Which one would you need? Obviously the former would be hard to share.
Well, I restarted the node for other reasons, and then queried the pins from the restarted node. There are over 860,000 pins. But when I tried to delete the first...
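For reference, this is roughly how I was enumerating and deleting pins, assuming the node's API on localhost:1633 and the GET /pins and DELETE /pins/{reference} endpoints (verify against your Bee version's API docs):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// List all locally pinned references.
	resp, err := http.Get("http://localhost:1633/pins")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var pins struct {
		References []string `json:"references"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&pins); err != nil {
		panic(err)
	}
	fmt.Println("pin count:", len(pins.References))

	if len(pins.References) == 0 {
		return
	}

	// Try to delete the first pin and report what the node says.
	req, err := http.NewRequest(http.MethodDelete, "http://localhost:1633/pins/"+pins.References[0], nil)
	if err != nil {
		panic(err)
	}
	del, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer del.Body.Close()
	fmt.Println("delete status:", del.Status)
}
```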
I believe this really needs to be addressed, particularly for content that may be shared between multiple sites.
> I am curious about the feasibility of this, because currently the swarm network is not optimised for real time delivery. However I happened to participate in a hack project...