
The Spanner migration tool helps you migrate from existing databases or dump files to Spanner.

Results: 86 spanner-migration-tool issues

Currently, we keep appending bad rows to conv until we hit the byte limit and then dump them to dropped.txt. When dealing with large tables, we usually end up storing...

p4

While analyzing the schema for a DynamoDB table, if there is no data in the table, a synthetic id is created because there is no sample data to analyze the...

p4

PostgreSQL supports an INHERITS clause: see https://www.postgresql.org/docs/9.5/ddl-inherit.html. HarbourBridge handles this correctly when using direct database access, i.e. when using driver=postgres. However, pg_dump support is broken. For...

p4

After data is written to Spanner, read it back and check that it matches the data from the pg_dump. At a minimum, we could check row counts for each table. We...

good first issue
p4
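The row-count check suggested above could be sketched as follows. This is a minimal illustration, not HarbourBridge code: the maps stand in for the results of running `SELECT COUNT(*)` against each table in the source database and in Spanner, and `compareRowCounts` is a hypothetical name.

```go
package main

import (
	"fmt"
	"sort"
)

// compareRowCounts reports tables whose Spanner row count differs from the
// source database's count. Inputs map table name to row count.
func compareRowCounts(source, spanner map[string]int64) []string {
	var mismatches []string
	for table, want := range source {
		if got := spanner[table]; got != want {
			mismatches = append(mismatches,
				fmt.Sprintf("%s: source=%d spanner=%d", table, want, got))
		}
	}
	sort.Strings(mismatches) // deterministic output order
	return mismatches
}

func main() {
	src := map[string]int64{"users": 100, "orders": 250}
	dst := map[string]int64{"users": 100, "orders": 249}
	for _, m := range compareRowCounts(src, dst) {
		fmt.Println(m)
	}
}
```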

In general, HarbourBridge provides a summary of errors and unusual conditions in the "Unexpected Conditions" section of report.txt. However, this section does not include errors encountered while writing...

p4

Issue: The number of rows scanned while performing the migration doesn't match the actual number of rows present in the DynamoDB database. Steps which led to the Problem...

p4

According to the manual, timestamps included in the CSV must be ISO 8601 compliant (https://github.com/cloudspannerecosystem/harbourbridge/tree/master/sources/csv): "The only supported timestamp format right now is **ISO 8601**." ISO...

We support a number of input configurations for CSV, which makes following the code a little tedious. Improve the documentation around it.

Currently, we create secondary indexes with the schema beforehand. We want to add a flag so that index creation can be deferred until after data migration is complete. This...

good first issue
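One way to support such a flag is to split the generated DDL into table statements and index statements, and apply the latter only after the data load. The sketch below is hypothetical: the `--defer-indexes` flag name, the `splitDDL` helper, and the prefix match (a simplification of real DDL parsing) are all assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// splitDDL separates CREATE INDEX statements from the rest of the schema so
// that index DDL can be applied after the data load finishes.
func splitDDL(stmts []string) (tables, indexes []string) {
	for _, s := range stmts {
		upper := strings.ToUpper(strings.TrimSpace(s))
		if strings.HasPrefix(upper, "CREATE INDEX") ||
			strings.HasPrefix(upper, "CREATE UNIQUE INDEX") {
			indexes = append(indexes, s)
		} else {
			tables = append(tables, s)
		}
	}
	return tables, indexes
}

func main() {
	tables, indexes := splitDDL([]string{
		"CREATE TABLE t (id INT64) PRIMARY KEY (id)",
		"CREATE INDEX idx_t ON t (id)",
	})
	fmt.Println(len(tables), len(indexes))
}
```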

Currently, we have a constant MaxWorkers denoting the number of parallel foreign key requests to be sent. There is scope for improving throughput while staying just below the...