Michael Pilquist
Might also be related to https://github.com/lampepfl/dotty/issues/14640
For your Kafka use case, you basically want to start each inner stream but then only pull up to `n` elements at a time? You could do this with `parJoinUnbounded`...
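If it helps, a sketch of one way to express that, assuming the fs2 3.x API (note `prefetchN` bounds run-ahead in chunks rather than single elements; the method name is just illustrative):

```scala
import cats.effect.IO
import fs2.Stream

// Run every inner stream concurrently, but let each one buffer at most `n`
// chunks ahead of the downstream consumer before it blocks.
def joinEagerlyBounded[A](inners: Stream[IO, Stream[IO, A]], n: Int): Stream[IO, A] =
  inners.map(_.prefetchN(n)).parJoinUnbounded
```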
@SystemFw What do you think about allowing configuration of the channel inside `parJoin`? Somehow letting folks optionally specify a bound for `output` here, instead of always using a sync...
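For illustration, roughly how a bounded output could be approximated today, by funnelling every inner stream into a `Channel.bounded` instead of the internal synchronous channel (fs2 3.x assumed; `parJoinWithBoundedOutput` is a made-up name, and error propagation / interruption are simplified for brevity):

```scala
import cats.effect.IO
import fs2.Stream
import fs2.concurrent.Channel

// Illustrative only: run all inner streams, sending their elements through a
// bounded channel whose capacity plays the role of the output bound.
def parJoinWithBoundedOutput[A](
    outer: Stream[IO, Stream[IO, A]],
    outputBound: Int
): Stream[IO, A] =
  Stream.eval(Channel.bounded[IO, A](outputBound)).flatMap { out =>
    val producers =
      outer
        .map(_.evalMap(a => out.send(a).void))
        .parJoinUnbounded
        .onFinalize(out.close.void)
    out.stream.concurrently(producers)
  }
```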
Initial guess is that this is an example of badly interacting features -- `concurrently` and `.compile.resource`. That is, `concurrently` introduces its own root scope which results in the scope from `Stream.resource`...
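For reference, the shape of the combination in question looks roughly like this (an illustrative sketch assuming the fs2 3.x API, not a confirmed repro):

```scala
import cats.effect.{IO, Resource}
import fs2.Stream

// A `Stream.resource` combined with `concurrently` (which introduces its own
// root scope) and then compiled via `.compile.resource`.
val r: Resource[IO, Unit] =
  Stream
    .resource(Resource.make(IO.println("acquire"))(_ => IO.println("release")))
    .concurrently(Stream.never[IO])
    .compile
    .resource
    .lastOrError
```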
Accumulated `cats.data.WriterT` data not preserved from function passed to `fs2.io.readOutputStream`
IIRC this is due to an issue with the Concurrent instance for WriterT. /cc @djspiewak
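A minimal repro sketch of the report as I understand it (assumes fs2 3.x and cats-effect 3's `Async` instance for `WriterT`; names are illustrative):

```scala
import cats.data.WriterT
import cats.effect.{IO, IOApp}
import cats.syntax.all._
import fs2.io.readOutputStream

object WriterTRepro extends IOApp.Simple {
  type W[A] = WriterT[IO, List[String], A]

  // The callback logs via `tell` while writing bytes.
  val bytes = readOutputStream[W](4096) { os =>
    WriterT.tell[IO, List[String]](List("wrote greeting")) *>
      WriterT.liftF(IO(os.write("hello".getBytes)))
  }

  val run: IO[Unit] =
    bytes.compile.drain.run.flatMap { case (log, _) =>
      // Per the report, the accumulated log comes back empty
      // instead of List("wrote greeting").
      IO.println(s"accumulated log: $log")
    }
}
```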
Looks good to me. I ran into the same issue when thinking about a new fs2 queue type. Re: `bufferThrough`, I'm a bit worried that it would be confused...
Some simple ones to get started:

Q: Create an infinite stream that starts at 0 and increments by 1, wrapping around at `Int.MaxValue`.

A:
```scala
Stream.iterate(0)(_ + 1)
```

Q: ...
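One subtlety: with plain `_ + 1`, the wraparound comes from `Int` overflow, so the value after `Int.MaxValue` is `Int.MinValue`, not 0. If wrapping back to 0 is the intent, an explicit check works:

```scala
// Wraps explicitly back to 0 after Int.MaxValue,
// instead of overflowing to Int.MinValue.
Stream.iterate(0)(i => if (i == Int.MaxValue) 0 else i + 1)
```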
@pchlupacek Do you think this is solvable in the next week or so? We'll have a minor API-breakage release in 1.1 where you could change the PubSub signature if needed.
This one is being worked on by a colleague of mine.
Is this the result of `merge` using this guard? https://github.com/typelevel/fs2/blob/de371a52b2403761f50d40483131f4cae0388a99/core/shared/src/main/scala/fs2/Stream.scala#L1888-L1889