hsbench
Fix: read op does not read more than 64K
After several tests using hsbench, I have seen that the bandwidth reported by hsbench does not match the network bandwidth. It looks like hsbench is closing the connection before fetching the whole object when the object is medium-sized or larger (bigger than 512K).
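For reference, the work-around I have been testing drains the body to EOF before closing it, roughly like this (a sketch only, not the actual hsbench code; svc, bucket, and key are placeholders, and it assumes aws-sdk-go v1):

package example

import (
	"io"
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// drainObject reads the whole body before Close. Closing a partially-read
// body tears down the connection, so for larger objects only the bytes
// already buffered locally (roughly 64K) ever cross the network.
func drainObject(svc *s3.S3, bucket, key string) (int64, error) {
	resp, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	// io.Copy to ioutil.Discard pulls the stream to EOF without holding the
	// object in memory; the returned count is the true transfer size.
	return io.Copy(ioutil.Discard, resp.Body)
}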
@markhpc Can you take a look?
@markhpc ping
Hi, sorry for the long delay! It's been a little while since I looked at this code. Is the basic gist that we aren't actually doing the full read before calling close, and thus need your work-around to ensure everything is transferred before closing?
Edit: looking into this more, I think I must have been confused by the wording in the SDK developer guide:
req, result := s3Svc.GetObjectRequest(&s3.GetObjectInput{...})
// result is a *s3.GetObjectOutput struct pointer, not populated until req.Send() returns
// req is a *aws.Request struct pointer. Used to Send request.
if err := req.Send(); err != nil {
// process error
return
}
// Process result
Would it be good enough to wait on the GetObjectAsync call with the request and then use StreamReader's ReadToEnd(), or would you suggest the original solution here?
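In Go terms I imagine something like this (a sketch, assuming the GetObjectRequest pattern quoted above; svc, bucket, and key are placeholders):

package example

import (
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// readAllObject issues the request, then reads the body to EOF (the Go
// analogue of ReadToEnd) so the whole object is actually transferred.
func readAllObject(svc *s3.S3, bucket, key string) (int64, error) {
	req, result := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err := req.Send(); err != nil {
		return 0, err
	}
	defer result.Body.Close()

	// Send() populates result, but result.Body is still a stream; nothing
	// past the transport's buffer is read until we consume it here.
	data, err := ioutil.ReadAll(result.Body)
	if err != nil {
		return 0, err
	}
	return int64(len(data)), nil
}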
Your suggestion is better.
In any case, I think we should change stats.addOp(thread_num, object_size, end-start) to use the body size we actually get from the server; buckets are sometimes a mix of small and big objects, so this would give more accurate throughput reporting.
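Something along these lines, for illustration (the addOp signature and nanosecond timing are only assumed from the call quoted above; the other names are placeholders, not the actual hsbench code):

package example

import (
	"io"
	"io/ioutil"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// opStats stands in for hsbench's stats collector; only the addOp shape
// quoted above is assumed here.
type opStats interface {
	addOp(threadNum int, bytes int64, elapsed int64)
}

// timedGet measures one GET and reports the bytes actually received
// instead of the nominal object size.
func timedGet(svc *s3.S3, stats opStats, threadNum int, bucket, key string) error {
	start := time.Now().UnixNano()
	resp, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	bytesRead, err := io.Copy(ioutil.Discard, resp.Body)
	resp.Body.Close()
	if err != nil {
		return err
	}
	end := time.Now().UnixNano()
	// Reporting measured bytes keeps throughput accurate even when the
	// bucket mixes small and big objects.
	stats.addOp(threadNum, bytesRead, end-start)
	return nil
}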
Ok, would you be willing to see if the ReadToEnd() idea works? I agree regarding the stats. Thanks!
Is this still being investigated?