Results 251 comments of l0rinc

This tokenizer is already an order of magnitude (10x) slower than tiktoken, which is itself almost an order of magnitude slower than jtokkit, see:
- https://github.com/openai/tiktoken/pulls?q=is%3Apr+author%3Apaplorinc
- https://github.com/knuddelsgmbh/jtokkit/pulls?q=+is%3Apr+author%3Apaplorinc+

Did you figure it out? I can't load any custom TensorFlow Lite model in Flutter...

I managed to add it by first making it work from Swift code: https://www.tensorflow.org/lite/guide/ios
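For context, the Swift side of that guide boils down to driving the `TensorFlowLiteSwift` interpreter directly. A minimal sketch (assuming the pod is installed and a bundled model file named `model.tflite` — both the file name and input shape here are hypothetical placeholders):

```swift
import TensorFlowLite  // from the TensorFlowLiteSwift CocoaPod

// Locate the bundled model; "model" is a placeholder name.
guard let modelPath = Bundle.main.path(forResource: "model", ofType: "tflite") else {
    fatalError("model.tflite not found in the app bundle")
}

do {
    // Create the interpreter and allocate its input/output tensors.
    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()

    // Fill the first input tensor with raw bytes matching the model's
    // expected shape/dtype (example: four Float32 values).
    let input: [Float32] = [0.1, 0.2, 0.3, 0.4]
    let inputData = input.withUnsafeBufferPointer { Data(buffer: $0) }
    try interpreter.copy(inputData, toInputAt: 0)

    // Run inference and read back the first output tensor.
    try interpreter.invoke()
    let outputTensor = try interpreter.output(at: 0)
    print("output bytes:", outputTensor.data.count)
} catch {
    print("TFLite error:", error)
}
```

Once this works natively, the same model can be exposed to Flutter through a platform channel or one of the community tflite plugins.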

@Sjors, this is the optimization I mentioned to you in person; I'd appreciate a review if you think it's worthwhile:

> I don't think this is used in a critical path, to warrant a speedup?

It's literally the example from the [benchmarking docs](https://github.com/bitcoin/bitcoin/blob/master/doc/benchmarking.md): which was specifically optimized in various other...

What's the point of the benchmarks in that case, exactly? Anyway, I can make the code slightly simpler and similarly performant; would that be welcome?

Thanks @sipa, I appreciate your comment! How do I find small changes that are needed, welcome, not too controversial, and don't require 3 years of up-front study? That's why I like...

@andrewtoth, that was the first thing I checked; I couldn't find any reasonable "good first issue", or any other issue that piqued my interest. The problem has to be personal, otherwise...

> Ok, the reason for https://github.com/bitcoin/bitcoin/pull/7656#issue-139438690 was an improvement in listunspent. Seems fine, if this is still the case. But this will need to be checked first.

I've created 10k...

I was following the benchmarks to decide what to work on, but since you've mentioned this is an important use case, I measured it as well. Is there anything else you'd...