LeeHao
> Don't use pipeline, it has a bug. For example, when the following statements run concurrently, they execute correctly on redis but the data ends up corrupted on pika.
>
> Go version:
>
> ```go
> pipe := client.Pipeline()
> pipe.LRange("key", 0, 200-1)
> pipe.LTrim("key", 200, -1)
> cmders, err := pipe.Exec()
> ```
>
> Python version:
>
> ...
> A single run works fine; the problem shows up when many goroutines execute the statements below at the same time. To test it, write the number strings 1 through 10000000 into a pika list, then have 1000 goroutines run the statements below concurrently, save the values they fetch to a txt file, and check whether the file contains exactly 1 through 10000000.
>
> ```go
> pipe := client.Pipeline()
> pipe.LRange("key", 0, 200-1)
> pipe.LTrim("key", 200, -1)
> cmders, err := pipe.Exec()
> ```

I tested it this way but still couldn't reproduce the problem. Could you take a look and tell me whether this test is correct? Thank you. https://github.com/ForestLH/goredisclient
It still doesn't seem to work for me :(
Does this test perhaps not cover the case where the number of DBs is greater than 1? I changed `databases` to 8, and running `./pikatests.sh basic` fails with an error:

> CONFIGURATION:
>
> ```
> port : 21212
> thread-num : 1
> thread-pool-size : 12
> sync-thread-num : 6
> log-path : ./log/
> db-path : ./db/
> write-buffer-size : 256M
> arena-block-size :
> timeout : 60...
> ```
If possible, I would like to give it a try. Maybe you could assign it to me?
> The conflicts need to be resolved.

OK :smile:
> Hello @ForestLH, thank you for adding support for S3's conditional write. However, the previous PR only covered simple writes. Would you be interested in extending this to multipart uploads...
> For Azblob, you can use https://github.com/Azure/Azurite. We have a fixture available at: https://github.com/apache/opendal/blob/main/fixtures/azblob/docker-compose-azurite.yml. > > For other services, we have only configured continuous integration so far. You can submit...
I'm sorry for not reporting progress for so long; I've had a lot on my plate recently. I'm sorry to say that I may not...
pls assign me :)