
[QUESTION] Stress testing redis++

dandyhuang opened this issue 1 year ago · 7 comments

I am benchmarking the async cluster with 64 AsyncRedisCluster instances on a 32-core, 62 GB machine. The command is GET, keys are 6 bytes, and values are 1 kB. At 130k QPS the machine CPU is only at 50%, but the async cluster starts to time out. How can I push the machine to 80% CPU without the async cluster timing out? What else can I adjust, such as the number of connections (currently 1000)?

dandyhuang, Sep 08 '22

Please try the following code to do a benchmark test:

#include <sw/redis++/async_redis++.h>

#include <atomic>
#include <chrono>
#include <iostream>
#include <random>
#include <string>
#include <thread>
#include <vector>

using namespace std;
using namespace sw::redis;

vector<string> prepare(AsyncRedisCluster &r, int key_num, int key_len, int value_len) {
    std::default_random_engine engine(std::random_device{}());
    std::uniform_int_distribution<int> uniform_dist(0, 25);
    vector<string> keys;
    string value(value_len, 'a');
    atomic<int> err_num{0};
    atomic<int> reply_cnt{0};
    for (auto idx = 0; idx < key_num; ++idx) {
        string key;
        for (auto i = 0; i < key_len; ++i) {
            key.push_back('a' + uniform_dist(engine));
        }
        r.set(key, value,
                [&err_num, &reply_cnt](Future<bool> &&fut) {
                    ++reply_cnt;
                    try {
                        fut.get();
                    } catch (const Error &e) {
                        err_num++;
                    }
                });
        keys.push_back(std::move(key));
    }

    // ensure all async commands have been sent to Redis and their replies received.
    while (reply_cnt != key_num) {
        this_thread::sleep_for(chrono::milliseconds(10));
    }

    cout << "set " << key_num << " keys, err num: " << err_num << endl;

    return keys;
}

int main() {
    try {
        ConnectionOptions opts;
        opts.host = "127.0.0.1";
        opts.port = 7000;
        ConnectionPoolOptions pool_opts;
        pool_opts.size = 20;        // you can set a larger pool size if you have many worker threads
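        // NOTE (assumption, not part of the original reply): if commands time out under
        // load, the connection-level timeouts can also be tuned before constructing the
        // cluster object, e.g.
        //     opts.connect_timeout = std::chrono::milliseconds(100);
        //     opts.socket_timeout = std::chrono::milliseconds(200);
        // and pool_opts.wait_timeout controls how long a caller waits for a free
        // connection from the pool.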
        auto r = AsyncRedisCluster(opts, pool_opts);

        // parameters you can tune
        auto key_num = 1000;
        auto key_len = 6;
        auto value_len = 1024;
        auto times = 100000;
        auto worker_num = 32;

        auto keys = prepare(r, key_num, key_len, value_len);

        auto start = chrono::steady_clock::now();
        vector<thread> workers;
        for (auto idx = 0; idx < worker_num; ++idx) {
            workers.emplace_back([&r, &keys, times]() {
            //workers.emplace_back([&opts, &pool_opts, &keys, times]() {         // Alternatively, you can create an AsyncRedisCluster instance for each thread:
                        //auto r = AsyncRedisCluster(opts, pool_opts);
                        auto start = std::random_device{}() % keys.size();
                        atomic<int> err_num{0};
                        atomic<int> reply_cnt{0};
                        for (auto idx = 0; idx < times; ++idx) {
                            auto index = (start + idx) % keys.size();
                            r.get(keys[index], [&err_num, &reply_cnt](Future<Optional<string>> &&fut) {
                                        ++reply_cnt;
                                        try {
                                            fut.get();
                                        } catch (const Error &e) {
                                            err_num++;
                                        }
                                    });
                        }

                        // ensure all async commands have been sent to Redis and their replies received.
                        while (reply_cnt != times) {
                            this_thread::sleep_for(chrono::milliseconds(1));
                        }
                    });
        }
        for (auto &worker : workers) {
            worker.join();
        }
        auto stop = chrono::steady_clock::now();
        auto elapse = chrono::duration_cast<chrono::milliseconds>(stop - start).count();
        auto qps = worker_num * times * 1.0 / elapse * 1000;
        cout << "qps: " << qps << endl;
    } catch (const Error &e) {
        cout << e.what() << endl;
    }
    return 0;
}

Check the comments in the code for details.

Regards

sewenew, Sep 08 '22

I tested as you suggested, except that the threads are replaced with coroutines (using bthread). I found that the benchmark result is also related to the number of cluster instances (auto r = AsyncRedisCluster(opts, pool_opts);).

// map from cluster id to a vector of AsyncRedisCluster instances
std::unordered_map<std::string, std::vector<std::shared_ptr<sw::redis::AsyncRedisCluster>>> async_cluster_;

// number of cluster instances (event loops) shared by the coroutines
FLAGS_even_uv = 64;

for (auto p : ctx->dp_req().param()) {
    auto start = butil::gettimeofday_us();
    std::string cluster_id = "cluster_press";
    // pick an instance by round-robin on the timestamp
    async_cluster_[cluster_id][start % FLAGS_even_uv]->get(
        p.key(), [=](sw::redis::Future<sw::redis::OptionalString>&& fut) {
            this->GetCmdCb(p.key(), ctx,
                           std::forward<sw::redis::Future<sw::redis::OptionalString>>(fut));
        });
}
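
For reference, a hypothetical sketch of how such a pool of instances could be built up front (async_cluster_, FLAGS_even_uv, and the cluster id come from the snippet above; InitClusterInstances and its signature are assumptions):

// Hypothetical helper: create FLAGS_even_uv AsyncRedisCluster instances for a
// cluster id, so requests can be round-robined across their event loops.
void InitClusterInstances(const std::string &cluster_id,
                          const sw::redis::ConnectionOptions &opts,
                          const sw::redis::ConnectionPoolOptions &pool_opts) {
    auto &instances = async_cluster_[cluster_id];
    instances.reserve(FLAGS_even_uv);
    for (int i = 0; i < FLAGS_even_uv; ++i) {
        instances.push_back(std::make_shared<sw::redis::AsyncRedisCluster>(opts, pool_opts));
    }
}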

dandyhuang, Sep 09 '22

I found that the benchmark result is also related to the number of cluster instances (auto r = AsyncRedisCluster(opts, pool_opts);)

Yes. By default, each AsyncRedisCluster instance has one thread running an event loop that handles read/write operations.

If you are under heavy load, you can create an AsyncRedisCluster instance for each worker thread.
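
A minimal sketch of that per-thread pattern (run_per_thread_clients and the worker body are placeholders for illustration, not code from this thread):

// Minimal sketch: each worker thread constructs its own AsyncRedisCluster,
// so each thread gets its own event loop for socket I/O.
#include <sw/redis++/async_redis++.h>
#include <thread>
#include <vector>

using namespace sw::redis;

void run_per_thread_clients(const ConnectionOptions &opts,
                            const ConnectionPoolOptions &pool_opts,
                            int worker_num) {
    std::vector<std::thread> workers;
    for (int i = 0; i < worker_num; ++i) {
        workers.emplace_back([opts, pool_opts]() {
            AsyncRedisCluster r(opts, pool_opts);   // one instance (and event loop) per thread
            // ... issue async commands with r.get()/r.set() here ...
        });
    }
    for (auto &w : workers) {
        w.join();
    }
}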

Regards

sewenew, Sep 09 '22

Can one AsyncRedisCluster instance run multiple threads, each with an event loop handling read/write operations? The current event loop source code seems to be a single-producer, single-consumer implementation.

dandyhuang, Sep 10 '22

No, you cannot do that. libuv's event loop is not thread-safe, so there's only one thread running the event loop, i.e. reading and writing the socket. If multiple threads share a single AsyncRedisCluster instance, those threads send commands to Redis by pushing tasks onto an internal task queue of the underlying event loop, and a single thread processes those tasks.
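
So it is safe for multiple threads to share one AsyncRedisCluster: each call only enqueues a task, and the instance's single event-loop thread does the socket I/O. A rough usage sketch (host, port, key, and counts are illustrative):

#include <sw/redis++/async_redis++.h>
#include <thread>
#include <vector>

using namespace sw::redis;

int main() {
    ConnectionOptions opts;
    opts.host = "127.0.0.1";
    opts.port = 7000;
    AsyncRedisCluster r(opts);   // one event-loop thread owned by this instance

    std::vector<std::thread> producers;
    for (int i = 0; i < 4; ++i) {
        // Multiple producer threads share the same instance: each get() just
        // pushes a task onto the event loop's internal queue.
        producers.emplace_back([&r]() {
            for (int j = 0; j < 1000; ++j) {
                r.get("key", [](Future<Optional<std::string>> &&fut) {
                    try {
                        fut.get();
                    } catch (const Error &) {
                        // handle error
                    }
                });
            }
        });
    }
    for (auto &t : producers) {
        t.join();
    }
    // In real code, wait until all callbacks have fired (e.g. with an atomic
    // counter, as in the benchmark above) before r goes out of scope.
    return 0;
}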

Regards

sewenew, Sep 10 '22

I see, thanks. I'm not sure how many AsyncRedisCluster instances to use.

dandyhuang, Sep 10 '22

Yeah, you need to do some benchmarking and tune those parameters.

Regards

sewenew, Sep 10 '22

Since there's no update, I'll close the issue.

Regards

sewenew, Sep 27 '22