
Write table blocks directly to bucket

Open thorfour opened this issue 2 years ago • 5 comments

Right now in frostdb, when we persist a table block, we first read the entirety of the table block into a set of merged row groups, and then into a buffer. This causes our memory to explode every time we persist a block, as can be seen in the screenshot from Parca below.

https://pprof.me/fcbc8f3


It would be ideal if the TableBlock could instead implement the io.Reader interface that the objstore.Bucket interface requires, so we could write it directly to the bucket and avoid the memory spikes.

thorfour avatar Aug 10 '22 20:08 thorfour

It's kind of a shame that objstore requires an io.Reader when all it does is copy that io.Reader to an io.Writer. Parquet files can be written in a single contiguous write, so if we could instead get the writer from objstore we could refactor Serialize to take an io.Writer instead of writing to this in-memory buffer first:

https://github.com/polarsignals/frostdb/blob/2f68e10c0065d36b9dc2b73dc32a824834b3c6c2/table.go#L1137

brancz avatar Aug 11 '22 07:08 brancz

Ok, after looking ever so slightly into the different providers, it seems that really only the GCS provider works the way I expected. I suspect the best way forward is to write the parquet file to disk and upload it from there. Feels really unnecessary, but I think it's the best we can do given the circumstances.

brancz avatar Aug 11 '22 07:08 brancz

If we want to go the extra mile and avoid writing to disk at all costs, we could propose an addition to the objstore API: an UploadWriter method that returns an io.WriteCloser. If a provider doesn't implement it, we fall back to the filesystem-based approach above; if it does, we write directly to object storage.

brancz avatar Aug 11 '22 07:08 brancz

So we can actually refactor the Serialize function today to take an io.Writer using io.Pipe:

    r, w := io.Pipe()
    go func() {
        // Close the write side with Serialize's result so any error
        // is propagated to the reading side of the pipe.
        w.CloseWithError(t.Serialize(w))
    }()
    defer r.Close()

    fileName := filepath.Join(t.table.name, t.ulid.String(), "data.parquet")
    if err := t.table.db.bucket.Upload(context.Background(), fileName, r); err != nil {
        return err
    }

However, trying this out, I am still seeing the same pattern, and it turns out it's coming from the ReadRows function: https://pprof.me/7d4c455

thorfour avatar Aug 11 '22 19:08 thorfour


I have a fix that's headed in the right direction.

thorfour avatar Aug 12 '22 21:08 thorfour