restore no longer possible after using computed fields
Despite the fact that the original table structure might look stupid, I defined it this way to have an easy way to reproduce the error. It was tested in FB3 and FB4 in UTF8 and the results were the same.
I know that each table has a limited record size, but since computed columns are not physically stored data columns, it is possible to define such a table based on function results. A backup also runs without problems. Perhaps restore needs a review regarding computed columns and should handle them differently while creating the table. Another option might be to raise an error during CREATE/ALTER TABLE if the resulting record size is no longer valid.
But having valid metadata that crashes only the restore is a dangerous situation, since I know too many users only do backups, and the restore is never done to make sure the backup is actually valid.
How to reproduce?
Step 1: define a function
create or alter function GETVAL (VAL bigint)
returns varchar(8000)
as
begin
  return 1;
end
Step 2: define a table
CREATE TABLE TEST (
  ID BIGINT NOT NULL PRIMARY KEY,
  TXT VARCHAR(80),
  TXT1 COMPUTED BY ((getval(id))),
  TXT2 COMPUTED BY ((getval(id))),
  TXT3 COMPUTED BY ((getval(id))),
  TXT4 COMPUTED BY ((getval(id))),
  TXT5 COMPUTED BY ((getval(id))),
  TXT6 COMPUTED BY ((getval(id))),
  TXT7 COMPUTED BY ((getval(id))),
  TXT8 COMPUTED BY ((getval(id))),
  TXT9 COMPUTED BY ((getval(id))),
  TXT10 COMPUTED BY ((getval(id)))
);
Step 3: do a backup; no problem.
Step 4: try a restore! It will always fail with this error:
IBE: Starting restore. Current time: 07:45:40
IBE: This operation is not defined for system tables.
unsuccessful metadata update.
new record size of 320358 bytes is too big.
TABLE TEST.
Exiting before completion due to errors.
------------------------------------------------
SQLCODE: -607
SQLSTATE: 54000
GDSCODE: 335544351
IBE: Restore completed. Current time: 07:45:40. Elapsed time: 00:00:00
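For context, the reported 320358 bytes matches a rough calculation, assuming UTF8 stores up to 4 bytes per character and each VARCHAR carries a 2-byte length word (the 8 bytes of record overhead are my assumption):

TXT1..TXT10:  10 x (8000 x 4 + 2) = 320020
TXT:                  80 x 4 + 2  =    322
ID (BIGINT):                      =      8
record overhead (assumed):        =      8
                                    ------
                                    320358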
Probably a simple fix: restore should not perform the size check on computed fields at all, i.e. not include them in the max record size calculation. In a SELECT, a user normally specifies e.g. only one of them anyway.
Computed fields are restored as regular ones at the first stage and only later are altered to be computed ones. This is done to prevent "unrestorable backup" issues when computed fields are used in procedures/triggers. But in this particular case the record size limit works against that approach.
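For illustration only, a minimal SQL sketch of how that two-stage restore could look (my assumption of what gbak effectively does; the real implementation updates the system tables from a privileged attachment and also fills RDB$COMPUTED_BLR, not just the source text):

-- stage 1: the field arrives as a plain, physically stored column
-- CREATE TABLE TEST (..., TXT1 VARCHAR(8000), ...);
-- stage 2: the computed expression is attached afterwards
UPDATE RDB$FIELDS F
SET F.RDB$COMPUTED_SOURCE = '(getval(id))'
WHERE F.RDB$FIELD_NAME = (SELECT RF.RDB$FIELD_SOURCE
                          FROM RDB$RELATION_FIELDS RF
                          WHERE RF.RDB$RELATION_NAME = 'TEST'
                            AND RF.RDB$FIELD_NAME = 'TXT1');

It is stage 1 that trips the check: at that moment all ten VARCHAR(8000) columns count as physically stored data.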
Perhaps they could be restored as a smaller VARCHAR/CHAR compared to the definition of the computed result.
When they are later altered to computed columns, they already get the correct type, and increasing the length of a CHAR/VARCHAR is by definition no problem as long as it stays below the maximum limit.
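A minimal sketch of the widening step this relies on (TEST2 is a hypothetical table used only for illustration; increasing a VARCHAR's declared length is an allowed ALTER in Firebird):

-- restore the computed field initially with a small placeholder length
CREATE TABLE TEST2 (TXT1 VARCHAR(1));
-- widening later is permitted and requires no data rewrite
ALTER TABLE TEST2 ALTER COLUMN TXT1 TYPE VARCHAR(8000);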
I love the possibility of using computed columns also for table structures that exceed the 64k record size limit, so enforcing the limit already when an admin adds computed columns to a table would not be the way I'd like.
We were considering getting rid of the 64KB record limit (or maybe raising it up to some sanity threshold, e.g. 1MB), so this problem may disappear automagically.
That would be the best solution, and not only for this issue.
Maybe (interesting for existing versions) it's worth skipping the record length check for (actually) computed fields in gbak attachments? This could be achieved, for example, by using a special value for RDB$SYSTEM_FLAG in RDB$RELATION_FIELDS.
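A sketch of that idea (the flag value 5 is an invented example, not an existing constant): the gbak attachment would tag the field while restoring, and the engine's record length check would skip tagged fields.

-- hypothetical marker written by the gbak attachment during restore
UPDATE RDB$RELATION_FIELDS
SET RDB$SYSTEM_FLAG = 5
WHERE RDB$RELATION_NAME = 'TEST'
  AND RDB$FIELD_NAME = 'TXT1';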