Fornax

56 comments by Fornax

I have a 63 GB file now. I assume there are credentials in the dump. Is there a secure way to transfer it to you? I can put it on...

Uploaded the core dump with UUID `a9e9a1f5-cd3c-4111-80be-6972e5bd97be`. I accidentally named the file `yourfile`, but it's a tar.gz.

Hey, were you able to discover anything useful with the core dump? I don't want to rush you, but I have not been able to run repairs on my cluster...

I removed the original file from my system; it was taking up too much space. The size checks out, though: 63 GB ≈ 59 GiB (63 × 10⁹ bytes / 2³⁰ ≈ 58.7 GiB). The process ID also matches...

I can make another core dump if you want. I can probably make it smaller by restarting Scylla before dumping the memory, while the caches are not full yet.

I have been deleting a lot of stale data lately with token range queries. In one table, `time_series_2`, about two thirds of the data is stale. I have been gradually...
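
For reference, a minimal sketch of the token-range deletion pattern I mean. CQL does not accept `token()` restrictions in `DELETE`, so the usual approach is to first list the partitions in a range and then delete each one by its partition key. The partition key and the token bounds below are illustrative assumptions, since my full schema is truncated elsewhere:

```sql
-- Find partitions in a token range (assumed partition key: "table_name", "date").
-- The bounds here are example values, not real ones from my cluster.
SELECT DISTINCT "table_name", "date", token("table_name", "date")
FROM "time_series_2"
WHERE token("table_name", "date") >= -9223372036854775808
  AND token("table_name", "date") <  -9200000000000000000;

-- Then delete each returned partition individually:
DELETE FROM "time_series_2"
WHERE "table_name" = 'some_table' AND "date" = '2020-01-01';
```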

My time series table looks like this:

```sql
CREATE TABLE "time_series_2" (
    "table_name" text,
    "date" date,
    "foreign_id" uuid,
    "granularity" text, -- minute, hour, day, all
    "statistic" text,
    "time" timestamp,
    "amount"...
```

After I restarted the node the shards did not get stuck until I started a new repair. I have uploaded a new dump with UUID `7330a0f6-bdca-4894-a590-4368e831ecf3`. The sha256sum is `d88261ceba725d6be0f606c8376e1e2fac6f623657d14febd95c6f6617b3b3f5`...

I just fixed the last linter warning. The remaining failures seem to be related to the S3 backend, so they are not caused by this commit.