scylla-manager
Restore task succeeds even after the restored table's schema has changed
In this scenario, we create a simple table with 2 text columns, like so:
CREATE COLUMNFAMILY ks.cf1 (key varchar PRIMARY KEY, c1 text, c2 text) WITH comment='test cf' AND compaction = {'class': 'SizeTieredCompactionStrategy', 'enabled': 'true'} AND compression = {} AND read_repair_chance=0.000000 AND dclocal_read_repair_chance='0.0' AND speculative_retry='NONE'
First, we back up the table's contents and truncate it.
Afterwards we drop the c2 column (ALTER TABLE ks.cf1 DROP c2) and start a restore task, expecting it to fail since the schema has changed.
The restore task, however, succeeds:
sctool restore -c 080d3240-19ee-4fc3-bd7c-1a17b4f4d150 --restore-tables --location s3:backup-bucket --snapshot-tag sm_20230118113647UTC
Run: 675f559c-9724-11ed-b98e-f4ee08c9cc47
Status: DONE - repair required (see restore docs)
Start time: 18 Jan 23 13:36:52 IST
End time: 18 Jan 23 13:36:55 IST
Duration: 2s
Progress: 100% | 100%
Snapshot Tag: sm_20230118113647UTC
+----------+-------------+---------+---------+------------+--------------+--------+
| Keyspace | Progress    | Size    | Success | Downloaded | Deduplicated | Failed |
+----------+-------------+---------+---------+------------+--------------+--------+
| ks       | 100% | 100% | 12.213k | 12.213k | 12.213k    | 0            | 0      |
+----------+-------------+---------+---------+------------+--------------+--------+
From what I understood of the restore feature, the task should fail if the schema was changed, should it not?
The same thing happens if I change the schema by adding a column, or even by replacing one of the columns with one of a different type. In such cases, the value of the new column is None.
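The behavior observed above can be modeled as a projection of the backed-up rows onto the current schema: columns dropped from the current schema are silently discarded, and columns added since the backup read as null. A minimal Python sketch of this model (the function name and the dict-based row representation are illustrative, not SM or Scylla code):

```python
def project_row(row, current_columns):
    """Project a backed-up row onto the current table schema.

    Columns missing from the backup read as None; columns dropped
    from the current schema are discarded. This mirrors the behavior
    observed after restoring into a changed schema.
    """
    return {col: row.get(col) for col in current_columns}


# Backup was taken with columns (key, c1, c2); c2 was later dropped
# and c3 was added.
backed_up = {"key": "k1", "c1": "v1", "c2": "v2"}
restored = project_row(backed_up, ["key", "c1", "c3"])
# c2 is gone, c3 reads as None
```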
@ShlomiBalalis @tzach I think that during some meeting we agreed that the user is responsible for providing the right schema and that SM will not try to verify it (it will try to download the backup data into the existing schema). The only validation that SM currently does is checking whether the keyspaces and tables chosen to be restored are present in the current schema, without checking their columns etc. Should this be changed?
Here is a fragment of Manager Planning Meeting google doc conversation: Karol's question:
Just to confirm. The assumption is that scylla-manager doesn't truncate the table(s), but relies on the end user having already done it. It's not scylla-manager's responsibility to either truncate or validate the table(s) at all.
Tzach's answer:
Yes
Oh, I misunderstood it then. I thought that the manager wouldn't confirm the schema in advance, yes, but that once the restore started it would fail. This is what I wrote in the test plan back then. @Michal-Leszczynski So should the expected result be that any column that does not match the backup will be null?
This is a difficult question for me, because it's more about Scylla core than SM (SM only delivers the SSTables from the backup; it does not interpret them).
From some experiments run locally I can say that the expected result you mentioned is correct, but from SM's point of view this is undefined behavior.
@tzach - can you please read the thread? what is the expected behaviour?
Agree with @Michal-Leszczynski here. There is no way I'm aware of to validate that the schema matches the data.
@asias ?
Sorry for the delay. I was out on vacation.
If the user is using Scylla Manager to back up and then restore, IMO we can at least warn the user that the schema has changed and reject the restore.
How about the following?
- Backup: save the DESCRIBE output to SM, e.g.:
cqlsh> DESCRIBE ks2.standard1
CREATE TABLE ks2.standard1 (
key blob PRIMARY KEY,
"C0" blob,
"C1" blob,
"C2" blob,
"C3" blob,
"C4" blob,
"C5" blob,
"C6" blob,
"C7" blob
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
AND comment = ''
AND compaction = {'class': 'SizeTieredCompactionStrategy'}
AND compression = {}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
- Restore: compare the current output of DESCRIBE with the saved one.
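The proposed check could be sketched as a comparison of normalized DESCRIBE outputs, one saved at backup time and one taken at restore time. This is a hypothetical helper, not existing SM code; collapsing whitespace is an assumption about how the comparison might tolerate formatting differences between cqlsh versions:

```python
def normalize_describe(desc: str) -> str:
    """Collapse all whitespace so that pure formatting differences
    between two DESCRIBE outputs do not cause false mismatches."""
    return " ".join(desc.split())


def schema_changed(saved_desc: str, current_desc: str) -> bool:
    """Return True if the current DESCRIBE output differs from the
    one saved at backup time."""
    return normalize_describe(saved_desc) != normalize_describe(current_desc)
```

As noted later in this thread, a textual DESCRIBE comparison cannot catch every schema change (e.g. lazily applied column drops), so at best this detects mismatches rather than proving compatibility.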
@asias I believe that we cannot 100% trust the output of describe schema that is uploaded to backup. Here is a comment from backup procedure:
// Always backup system_schema.
//
// Some schema changes, like dropping columns, are applied lazily to
// sstables during compaction. Information about those schema changes is
// recorded in the system schema tables, but not in the output of "desc schema".
// Using output of "desc schema" is not enough to restore all schema changes.
// As a result, writes in sstables may be incorrectly interpreted.
// For example, writes of deleted columns which were later recreated may be
// resurrected.
For this reason SM does not use the output of describe schema for restoring the schema to a cluster; it uses the system_schema.* SSTables for this purpose.
So I guess that rejecting a restore based only on the output of describe schema shouldn't be done.
A similar approach would be to add an additional flag (e.g. --force) which, if not set, would validate the described schema as you mentioned. Would this kind of flag be useful?
@tzach
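One way such a flag could behave, sketched in Python. The --force flag is only a proposal in this thread, and the function and exception names are hypothetical; given the caveats above about describe schema, a mismatch here would be a strong hint of an incompatible schema, not proof:

```python
class SchemaMismatchError(Exception):
    """Raised when the destination schema differs from the backup's."""


def pre_restore_check(saved_desc: str, current_desc: str, force: bool) -> None:
    """Reject the restore on a DESCRIBE mismatch unless --force is set.

    Compares whitespace-normalized DESCRIBE outputs; with force=True
    the check is skipped entirely, matching the proposed flag semantics.
    """
    if force:
        return  # user explicitly accepted the risk of a schema mismatch
    if " ".join(saved_desc.split()) != " ".join(current_desc.split()):
        raise SchemaMismatchError(
            "schema differs from the one saved at backup time; "
            "rerun with --force to restore anyway"
        )
```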
I do not understand how such a --force flag would work.
IMHO, there are two alternatives:
- restore the schema from backup (based on the schema from the system-tables SSTable backup)
- trust the user that the schema matches the data
For the second, we can (should) document a manual validation process that can use DESCRIBE, but as you mentioned above, it's not a 100% guaranteed solution.
@Michal-Leszczynski I understand this is the current behavior:
- restore the schema from backup (based on the schema from the system-tables SSTable backup)
- trust the user that the schema matches the data
There is also the information in the documentation saying:
Restoring the content of the tables assumes that the correct schema is already present in the destination cluster. Scylla Manager does NOT validate that, so it’s user responsibility to ensure it. The only form of validation that Scylla Manager performs is checking whether restored tables are present in destination cluster, but it does not validate their columns types nor other properties. In case destination cluster is missing correct schema, it should be restored first.
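The documented level of validation (restored tables must exist in the destination cluster; columns and other properties are deliberately not inspected) can be sketched as follows. This is illustrative Python, not SM source code, and the names are assumptions:

```python
def validate_restore_units(current_schema, units):
    """Check only that each keyspace.table chosen for restore exists
    in the destination cluster, as the documentation describes.
    Column types and other table properties are not inspected.

    current_schema: dict mapping keyspace name -> set of table names.
    units: iterable of (keyspace, table) pairs selected for restore.
    """
    missing = [(ks, tbl) for ks, tbl in units
               if tbl not in current_schema.get(ks, set())]
    if missing:
        raise ValueError(f"tables missing from destination cluster: {missing}")
```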
In my opinion it's enough.
@ShlomiBalalis can we close this ticket?