Cezar Moise
Create 3 i4i.8xlarge nodes with rf=3, then repeat the following cycle:
1. Reach 90% utilization.
2. Add an i4i.large node under load.
3. Verify space utilization.
...
Create a 3-node (i4i.large) cluster with rf=3. After reaching 90% disk usage, wait until the cluster stabilizes, then perform scale-out by adding 3 nodes of a larger instance type (i4i.4xlarge).

## [Results](https://github.com/scylladb/scylla-cluster-tests/issues/9257#issuecomment-2483382654)
Create a 3-node (i4i.4xlarge) cluster with rf=3. After reaching 90% disk usage, wait until the cluster stabilizes, then perform scale-out by adding 3 nodes of a smaller instance type (i4i.large).

## [Results](https://github.com/scylladb/scylla-cluster-tests/issues/9256#issuecomment-2483389787)
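Both scale-out scenarios above trigger on the same condition: disk usage crossing 90%. A minimal sketch of that trigger check, in plain Python (the helper names and the 90% default are illustrative, not taken from the test code):

```python
import shutil


def disk_utilization(path: str = "/") -> float:
    """Return disk utilization of the filesystem holding `path` as a fraction in [0, 1]."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


def should_scale_out(used_bytes: int, total_bytes: int, threshold: float = 0.90) -> bool:
    """True once used space reaches the scale-out threshold (90% in these scenarios)."""
    return used_bytes / total_bytes >= threshold
```

For example, a 100 GB volume with 92 GB used has crossed the trigger, while one with 50 GB used has not.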
Existing snapshots are no longer compatible with the latest versions. This refers to the parameter `mgmt_reuse_backup_snapshot_name` in backup/restore tests. For example, for the snapshot named `1tb_2t_twcs`:
> 2025-02-25 09:46:43.179: (TestFrameworkEvent Severity.ERROR) period_type=one-time event_id=8e52bff6-33c9-4cb2-a1ba-97999a5a4ff8,...
At the time of the issue, the cluster had 3 i4i.xlarge nodes, 1 per rack. 3 i4i.large nodes were added, 1 per rack. There were multiple keyspaces/tables, some with active writes. All...
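The "1 per rack" placement above is a round-robin assignment of new nodes across racks. A minimal sketch of that placement logic (node and rack names here are hypothetical, purely for illustration):

```python
from itertools import cycle


def assign_racks(new_nodes: list[str], racks: list[str]) -> dict[str, str]:
    """Round-robin new nodes across racks so each rack gets an even share,
    mirroring the '1 node per rack' placement described above."""
    return {node: rack for node, rack in zip(new_nodes, cycle(racks))}


# Three new nodes across three racks: each rack receives exactly one node.
placement = assign_racks(["node4", "node5", "node6"], ["rack-a", "rack-b", "rack-c"])
```

With equal counts of nodes and racks, every rack ends up with exactly one of the new nodes.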
During `disrupt_nodetool_seed_decommission`:
1. Node4 is removed as a seed.
2. Node4 is decommissioned.
3. Node7 is added to the cluster.
4. Node7 is added as a seed.

Node4 log:
```
2025-06-14T23:10:38.933+00:00 longevity-100gb-4h-2025-2-db-node-20203587-1 !INFO | scylla[5778]: [shard...
```
Scenario: Start a 2-node cluster with Manager active. Add a 3rd node; it fails. Without Manager, it does not fail.
> WARN 2025-06-16 18:32:25,393 [shard 0: gms] raft_group_registry -...
Driver version: 3.29.5. The code is simple: create a session and run a query. The test was run with FIPS and encryption.
```python
with self.cluster.cql_connection_patient(self.target_node) as session:
    query_result = session.execute('SELECT keyspace_name...
```
Test `test_agent_check_location` ran for 45 minutes, then timed out. Using this manager build: https://jenkins.scylladb.com/view/scylla-manager/job/manager-master/job/manager-build/1117/artifact/00-Build.txt The fake config is `{"gcs": {"endpoint": "127.0.0.1:1", "anonymous": "true"}}`. I attached the logs from my local run...
Scenario: A cluster with 6 nodes, 3 racks, 2 nodes per rack. Cluster is filled to 90%. A repair is started. During the repair, 3 new (bigger) nodes are added....