Ivan Rizzante
@ie-pham I think I spoke too early: I still see the same behavior, with the disk space usage in Minio always increasing. These errors: ``` error shipping block to backend, blockID 60b3c1ae-4494-485f-8486-8728e51de384:...
@ie-pham can you tell me what the exact metric name is?
Sorry for my late reply too: > Exact hash for the image that you are using as there are definitely some differences in the last few releases The image I'm using is maven-repo.sdb.it:18080/grafana/tempo@sha256:5e5ebabd9bf373779e2a4bdf0959c26545df35d6a79125aa0f64b7ad834be49f > Could...
Hi, any update?
Hi @ie-pham, the logs are from the session where the row_group_size was 100MB; I'm going to try increasing the row_group_size to 200MB and let you know.
Hi @ie-pham I'm now using 300MB as the row_group_size and I've also updated to the Tempo image grafana/tempo@sha256:87d49512e192b05d6c1896c34ac86f58fc9be6f5429921acb157659fc74628ea. So far so good, I don't see "Your proposed upload is smaller than...
Hi @joe-elliott, thanks for the updates. So you're suggesting we wait for https://github.com/grafana/tempo/pull/1873 to be merged and then start using `row_group_size_bytes`, right?
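For reference, this is roughly where I expect to set it once that lands. A minimal sketch only: the key name (`row_group_size_bytes`) and its placement under `storage.trace.block` are my assumptions from this thread and may not match the final config, so double-check the docs for your Tempo version before copying.

```yaml
# Sketch only: the row group size key name and placement are assumptions,
# not the final config shipped with the PR above.
storage:
  trace:
    backend: s3                          # Minio is reached through the s3 backend
    block:
      row_group_size_bytes: 314572800    # ~300MB, the value I'm currently testing with
```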
I'd like to understand this as well. We do have a JBoss AS hosting our web application, which has exactly the same problems. We switched to haproxy and we currently...
see https://github.com/grafana/loki/issues/4221
@kvrhdn thanks for the hint. Using the following tempo.yaml in the query frontend fixed the issue:
```yaml
tempo.yaml:
----
compactor: {}
distributor: {}
http_api_prefix: ""
ingester:
  lifecycler:
    ring:
      replication_factor: 3
memberlist: ...
```
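In case it helps anyone else: the `----` separator above suggests this is `kubectl describe` output, i.e. the file lives in a ConfigMap mounted into the query-frontend pod. A rough sketch of what that manifest would look like, with the resource name made up for the example and the memberlist section elided as above:

```yaml
# Hypothetical manifest; the resource name is invented for illustration,
# the tempo.yaml content mirrors the snippet above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tempo-query-frontend   # example name, adjust to your deployment
data:
  tempo.yaml: |
    compactor: {}
    distributor: {}
    http_api_prefix: ""
    ingester:
      lifecycler:
        ring:
          replication_factor: 3
    memberlist:
      # ... (elided, as in the snippet above)
```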