Improve restore-dump flexibility to allow simpler restores into a differently named database
Currently, the `restore-dump` subcommand assumes that you are always restoring into a database that has the same name as the `<DATABASE_NAME>` prefix added to the files generated by the `dump` subcommand.
The generated files have a structure that looks like this:

```
<DATABASE_NAME>.<TABLE_NAME>-schema.sql
<DATABASE_NAME>.<TABLE_NAME>.00001.sql
...
```
For example, let's say I have a database named `old-name`. The dump will generate files with that prefix:

```
old-name.example-table-schema.sql
old-name.example-table.00001.sql
```
This works perfectly fine for `restore-dump`, as long as the database name you are restoring into matches the filenames, which would be `old-name` in this case.
But let's say the old database name was actually wrong, or it needed to change over time, so now you want the database to be called `new-name`.
If you attempted to run a command like this:

```
pscale database restore-dump new-name main --dir "/planetscale-dumps/old-name" --org=example
```
You would run into the following error, since the `restore-dump` command looks at the filenames within the dump folder and sees the `old-name` prefix:

```
Error: failed to restore database: unknown database 'old-name' (errno 1105) (sqlstate HY000)
```
The current workaround is an extra step: rename the files in the dump folder so they carry the new database name, like this:

```
new-name.example-table-schema.sql
new-name.example-table.00001.sql
```
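For anyone hitting this in the meantime, the rename workaround can be scripted. This is a minimal sketch that strips the old prefix and re-adds the new one; it demonstrates the rename on a throwaway temp directory, so point `DUMP_DIR` at your real dump folder (and swap in your own prefixes) before using it:

```shell
# Demo setup: a throwaway directory standing in for the real dump folder.
DUMP_DIR=$(mktemp -d)
touch "$DUMP_DIR/old-name.example-table-schema.sql" \
      "$DUMP_DIR/old-name.example-table.00001.sql"

# Rename every file with the old <DATABASE_NAME> prefix to use the new one.
for f in "$DUMP_DIR"/old-name.*; do
  base=$(basename "$f")
  mv "$f" "$DUMP_DIR/new-name.${base#old-name.}"
done

ls "$DUMP_DIR"
```

After this, `restore-dump new-name ... --dir "$DUMP_DIR"` should find filenames whose prefix matches the target database.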
It would be nice not to have to do this, though, particularly because in the `restore-dump` example above we are already providing `new-name` as part of the input parameters.
An additional flag that tells `restore-dump` to ignore the `<DATABASE_NAME>` embedded in the filenames and instead use the database name passed as an input parameter would be helpful, and would make it easier to reuse the generated dump folders across differently named databases.
I had planned to look into this myself at some point and attempt a PR, but if someone tackles it more quickly then that's perfectly fine too.