go-peerstream

p2p stream multi-multiplexing in Go (with https://github.com/docker/spdystream)

6 go-peerstream issues

This repo has been forked to libp2p/go-peerstream. There seem to be no significant differences, and the fork has been gx-released more recently. This repo should probably be removed or point...

This still needs some thought, but what I'm thinking about is having connections close automatically if there are no streams open over them. With this, we should be able to...
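A minimal sketch of what that could look like: a periodic sweep that closes any connection with no open streams. The `NumStreams`/`Close` method names and the ticker-based sweep are assumptions for illustration, not the repo's actual API.

```
package peerstream

import "time"

// conn is the minimal surface this sketch needs; the real peerstream
// connection type differs, and these method names are assumptions.
type conn interface {
	NumStreams() int
	Close() error
}

// closeIdleConns periodically closes any connection that has no streams
// open over it, which is roughly the behaviour proposed in the issue.
func closeIdleConns(conns func() []conn, every time.Duration, done <-chan struct{}) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-done:
			return
		case <-t.C:
			for _, c := range conns() {
				if c.NumStreams() == 0 {
					c.Close() // nothing multiplexed over this connection anymore
				}
			}
		}
	}
}
```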

I think that the stream creation methods here should take contexts. Some of our stream muxing libraries are capable of accepting contexts somewhere along the line; I think it would...
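A rough sketch of a context-aware variant. `NewStreamWithContext` and `openStream` are hypothetical names, not the current API; the point is only that the caller's context can cut the wait short.

```
package peerstream

import "context"

// Stream and Conn stand in for the real types; openStream represents
// whatever the underlying muxer does today.
type Stream struct{}
type Conn struct{}

func (c *Conn) openStream() (*Stream, error) { return &Stream{}, nil }

// NewStreamWithContext is a hypothetical context-taking variant: it opens
// the stream in the background and abandons the wait when ctx is done.
func NewStreamWithContext(ctx context.Context, c *Conn) (*Stream, error) {
	type result struct {
		s   *Stream
		err error
	}
	ch := make(chan result, 1)
	go func() {
		s, err := c.openStream()
		ch <- result{s, err} // buffered, so this send never blocks
	}()
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case r := <-ch:
		return r.s, r.err
	}
}
```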

`addConn` (https://github.com/jbenet/go-peerstream/blob/master/conn.go#L180) happens on the same thread as accept (https://github.com/jbenet/go-peerstream/blob/master/listener.go#L89). Calls to `addConn` should be parallelized, for handshakes.
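A sketch of the suggested change, with the signatures simplified (the real `listener.go`/`conn.go` code differs): each accepted connection is handed to `addConn` on its own goroutine so a slow handshake cannot stall the accept loop.

```
package peerstream

import "net"

// acceptLoop accepts connections and hands each one to addConn in its own
// goroutine, so a slow handshake with one peer cannot block the listener.
func acceptLoop(l net.Listener, addConn func(net.Conn) error) {
	for {
		c, err := l.Accept()
		if err != nil {
			return // listener closed or fatal accept error
		}
		go func(c net.Conn) {
			if err := addConn(c); err != nil {
				c.Close()
			}
		}(c)
	}
}
```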

https://build.protocol-dev.com/job/race/9225/console

```
WARNING: DATA RACE
Read by goroutine 6:
  github.com/jbenet/go-ipfs/Godeps/_workspace/src/github.com/jbenet/go-peerstream.(*Swarm).Conns()
      /workspace/race/src/github.com/jbenet/go-ipfs/Godeps/_workspace/src/github.com/jbenet/go-peerstream/swarm.go:113 +0x81
  github.com/jbenet/go-ipfs/net/swarm.(*Swarm).ConnectionsToPeer()
      /workspace/race/src/github.com/jbenet/go-ipfs/net/swarm/swarm.go:119 +0x116
  github.com/jbenet/go-ipfs/net/swarm.(*Swarm).NewStreamWithPeer()
      /workspace/race/src/github.com/jbenet/go-ipfs/net/swarm/swarm.go:91 +0x125
  github.com/jbenet/go-ipfs/net.(*network).NewStream()
      /workspace/race/src/github.com/jbenet/go-ipfs/net/net.go:201 +0x9a
  github.com/jbenet/go-ipfs/net/backpressure.TestStBackpressureStreamWrite()
      /workspace/race/src/github.com/jbenet/go-ipfs/net/backpressure/backpressure_test.go:308 +0x886
  testing.tRunner()
      /usr/local/go/src/pkg/testing/testing.go:422 +0x10f
Previous write by...
```
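One plausible shape of a fix, assuming the race is an unguarded read of the swarm's connection set in `Conns()`: have the reader take the same lock as the writers. The reduced `Swarm` below is illustrative, not the repo's actual struct.

```
package peerstream

import "sync"

type Conn struct{}

// Swarm is reduced to the relevant state: the reader in the trace above
// (Conns) must take the same lock as the writers that mutate the set.
type Swarm struct {
	connLock sync.RWMutex
	conns    map[*Conn]struct{}
}

// Conns returns a snapshot of the current connections under a read lock.
func (s *Swarm) Conns() []*Conn {
	s.connLock.RLock()
	defer s.connLock.RUnlock()
	out := make([]*Conn, 0, len(s.conns))
	for c := range s.conns {
		out = append(out, c)
	}
	return out
}

// addConn registers a connection under the write lock.
func (s *Swarm) addConn(c *Conn) {
	s.connLock.Lock()
	defer s.connLock.Unlock()
	if s.conns == nil {
		s.conns = make(map[*Conn]struct{})
	}
	s.conns[c] = struct{}{}
}
```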