
Fix read op is not reading more than 64K

Open ofriedma opened this issue 4 years ago • 6 comments

After several tests using hsbench I have seen that the bandwidth reported by hsbench does not match the observed network bandwidth. It looks like hsbench is closing the connection before reading the whole object for medium-sized objects and larger (bigger than 512K).

ofriedma avatar Apr 27 '20 12:04 ofriedma
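For context, a minimal, self-contained sketch of how the truncated reads can be checked: issue a GET, drain the body, and compare the bytes actually received against the object's ContentLength. This assumes the aws-sdk-go v1 API; the bucket and key names are placeholders, not values from hsbench.

package main

import (
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	// "mybucket" and "mykey" are placeholder names.
	out, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String("mybucket"),
		Key:    aws.String("mykey"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()

	// Drain the body completely; n is the number of bytes actually read.
	n, err := io.Copy(io.Discard, out.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d of %d bytes\n", n, aws.Int64Value(out.ContentLength))
}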

@markhpc Can you take a look?

ofriedma avatar Apr 30 '20 08:04 ofriedma

@markhpc ping

ofriedma avatar May 11 '20 06:05 ofriedma

Hi, sorry for the long delay! It's been a little while since I looked at this code. Is the basic gist here that we aren't actually doing the full read before calling close, and thus need your work-around to ensure that everything is transferred before closing?

Edit: looking into this more, I think I must have been confused by the wording in the SDK developer guide:

req, result := s3Svc.GetObjectRequest(&s3.GetObjectInput{...})
// result is a *s3.GetObjectOutput struct pointer, not populated until req.Send() returns
// req is a *aws.Request struct pointer. Used to Send request.
if err := req.Send(); err != nil {
    // process error
    return
}
// Process result

Would it be good enough to wait on the GetObjectAsync call with the request and then use StreamReader's ReadToEnd(), or would you suggest the original solution here?

markhpc avatar May 26 '20 18:05 markhpc
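In Go terms, a minimal sketch of that read-to-end idea, continuing the developer-guide snippet quoted above (it assumes the io package is imported and that result.Body should be fully drained before closing):

if err := req.Send(); err != nil {
	// process error
	return
}
// req.Send() has completed the request, but the object payload is still
// streamed through result.Body, so drain it fully before closing to make
// sure the whole object is transferred.
n, err := io.Copy(io.Discard, result.Body)
result.Body.Close()
if err != nil {
	// process error
	return
}
// n now holds the number of bytes actually received from the server.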

Your suggestion is better. In any case, I think we should change stats.addOp(thread_num, object_size, end-start) to use the body size we actually get from the server, because sometimes a bucket contains a mix of small and big objects, so this would give more accurate throughput reporting.

ofriedma avatar May 27 '20 07:05 ofriedma
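A sketch of that stats change, assuming the read path already has the *s3.GetObjectOutput as result and takes the start and end timestamps the same way the existing code does; bytesRead is a hypothetical local used to capture the byte count from draining the body:

// Drain the body so the full object is transferred, and capture the
// number of bytes actually received from the server.
bytesRead, err := io.Copy(io.Discard, result.Body)
result.Body.Close()
if err != nil {
	// handle the error as the surrounding hsbench code already does
	return
}
// Report the bytes actually transferred instead of the nominal object_size,
// so buckets with mixed object sizes yield accurate throughput numbers.
stats.addOp(thread_num, bytesRead, end-start)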

Ok, would you be willing to see if the ReadToEnd() idea works? I agree regarding the stats. Thanks!

markhpc avatar May 27 '20 14:05 markhpc

Is this still being investigated?

fritchie avatar Jul 20 '21 14:07 fritchie