Harendra Kumar
If we have a type class with toStream/fromStream operations, then we can write general functions that operate on any type supporting that type class.
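For illustration, here is a minimal sketch of what such a class could look like, using a plain list in place of a real stream type; the class name `Streamable` and everything else below are hypothetical, not streamly's actual API:

```haskell
-- Hypothetical class: any container that can round-trip through a stream
-- (a plain list stands in for the stream type here).
class Streamable t where
  toStream   :: t a -> [a]
  fromStream :: [a] -> t a

-- A general function written once against the class; it works for every
-- instance of Streamable.
mapStreamable :: Streamable t => (a -> b) -> t a -> t b
mapStreamable f = fromStream . map f . toStream

-- An example instance.
newtype Boxed a = Boxed { unBoxed :: [a] } deriving Show

instance Streamable Boxed where
  toStream   = unBoxed
  fromStream = Boxed

main :: IO ()
main = print (mapStreamable (+ 1) (Boxed [1, 2, 3 :: Int]))
```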
The issue was seen when testing with the `dev` build flag.
There are some benchmarks mentioned in the [monad-par paper](https://simonmar.github.io/bib/papers/monad-par.pdf); we can try those. In fact, we could put them in the concurrency-benchmarks package and compare streamly/monad-par/parallel etc. through that package....
For pure parallelism and for stateless monads (e.g. ReaderT) we should not need the monad-control (MonadBaseControl) constraints; those should be needed only for monadic streaming with stateful monads (e.g. StateT).
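To illustrate the intuition with a toy sketch (this is not streamly code, just plain `async` and transformers): a read-only environment can simply be handed to a worker thread as-is, whereas a stateful monad needs its state captured and threaded back into the parent, which is the capture-and-restore step that MonadBaseControl generalizes.

```haskell
import Control.Concurrent.Async (async, wait)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT (..), ask)
import Control.Monad.Trans.State (StateT (..), get, put)

-- Stateless (ReaderT): the environment is read-only, so a worker thread can
-- simply be given the same environment; no monad-control machinery needed.
forkReader :: ReaderT env IO a -> ReaderT env IO a
forkReader action = do
  env <- ask
  lift (async (runReaderT action env) >>= wait)

-- Stateful (StateT): the worker's final state has to be captured and threaded
-- back into the parent computation; MonadBaseControl generalizes exactly this.
forkState :: StateT s IO a -> StateT s IO a
forkState action = do
  s <- get
  (a, s') <- lift (async (runStateT action s) >>= wait)
  put s'
  return a

main :: IO ()
main = do
  r <- runReaderT (forkReader ask) (42 :: Int)
  print r
  (a, s) <- runStateT (forkState (get >>= \x -> put (x + 1) >> return x)) (0 :: Int)
  print (a, s)
```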
Thanks for reporting. There seems to be a bug in the ordered parConcatMap; I cannot reproduce it without `ordered`, though. I see hundreds of workers being dispatched in some cases....
We should check the worker limit in `pushWorker` when we are incrementing the count under CAS. We should not dispatch if the count has gone beyond the limit.
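A hypothetical sketch of that check (the names `tryReserveWorker`, `pushWorkerSketch`, and the use of `atomicModifyIORef'` in place of the actual CAS loop are illustrative only, not streamly's real `pushWorker`):

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef', readIORef)

-- Try to reserve a worker slot: increment the count only if it stays within
-- the limit, and report whether the reservation succeeded.
tryReserveWorker :: IORef Int -> Int -> IO Bool
tryReserveWorker workerCount workerLimit =
  atomicModifyIORef' workerCount $ \n ->
    if n >= workerLimit
      then (n, False)     -- over the limit: leave the count as is, do not dispatch
      else (n + 1, True)  -- under the limit: take a slot

-- Dispatch a worker only if a slot could be reserved.
pushWorkerSketch :: IORef Int -> Int -> IO () -> IO ()
pushWorkerSketch workerCount workerLimit dispatch = do
  ok <- tryReserveWorker workerCount workerLimit
  if ok then dispatch else return ()

main :: IO ()
main = do
  count <- newIORef 0
  mapM_ (\_ -> pushWorkerSketch count 4 (return ())) [1 .. 10 :: Int]
  readIORef count >>= print -- prints 4, never more than the limit
```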
This falls somewhat in the RPC domain. For example, comparing this with `pipeBytes` from streamly-process: just as with that, we could also pass an argument string when invoking...
```
insertAllSepBy: intersperse
insertEachEndBy, insertAfterEach: intersperseEndBy
insertEachBeginBy, insertBeforeEach: intersperseBeginBy
insertAfterN: intersperseEveryNEndBy
```
Having `unicode-data-core` as a separate package sounds like a good idea to me because `unicode-data` is too big and might grow with more stuff. But we also need to figure...
As of now, we lack the facilities to declaratively create arbitrary graphs of streams (with feedback cycles). Using a reference in IO, as you are doing, seems to be the...
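For reference, a rough sketch of that IORef-based feedback pattern, assuming streamly-core's `Streamly.Data.Stream` and `Streamly.Data.Fold` modules; the particular pipeline is made up for illustration:

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream

main :: IO ()
main = do
  -- The feedback cell: the downstream stage writes into it, the source reads it.
  ref <- newIORef (1 :: Int)
  let source  = Stream.repeatM (readIORef ref) -- read the current feedback value
      process = Stream.mapM $ \x -> do         -- transform and feed the result back
        let y = x * 2
        writeIORef ref y
        return y
  out <- Stream.fold Fold.toList (Stream.take 5 (process source))
  print out -- [2,4,8,16,32]
```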