
Use version from package.json

Open danielbayerlein opened this issue 9 years ago • 0 comments

The version in lib/matcha.js isn't up to date. Use the version from package.json - only one place for maintaining. 😉

danielbayerlein avatar Jul 02 '16 16:07 danielbayerlein

Is there any comment on this from the Amplify team? Or suggested steps for migrating DB information (are Data Pipeline or custom CSV functions our only options?)

blazinaj avatar Sep 16 '19 15:09 blazinaj

A migrations mechanism could also help with GSI update issues.

Ravirael avatar Dec 20 '19 07:12 Ravirael

Not sure if this helps anyone but I created a process for running migrations via an npm run command:

```js
const common = require('./common.js');
const AWS = require('aws-sdk');
const migrations = [
  // ensure migrations are in date order (oldest at the top)
  require('./migrations/20200201-lea-180'),
  require('./migrations/20200210-lea-184')
];
global.fetch = require('node-fetch');

/**
 * This file is used for data migrations and only data migrations. Schema
 * changes are handled by Amplify.
 * In order to run a migration:
 *   1. Add the file into the migrations folder (copy the template)
 *   2. Require the reference at the BOTTOM of the migrations array above
 * Best practice: make no changes to the schema that are going to cause
 * backwards-compatibility issues, e.g. no deleting columns/tables.
 * Yes, I realise this will create technical debt with rogue unused columns
 * everywhere, but Amplify is changing the schema itself. We can run a
 * clean-up at a later date when we know the data being migrated has changed.
 * NOTE: The schema only changes in AppSync, not DynamoDB itself, so do not
 * expect new columns to appear.
 */
const environmentName = common.getCurrentEnv();

(async () => {
  AWS.config.update({region: 'eu-west-2'});

  // if we have no CI vars then use the local creds
  if (process.argv.length === 2) {
    AWS.config.credentials = new AWS.SharedIniFileCredentials({profile: 'PROFILE NAME'});
  } else {
    // if CI then use env vars
    AWS.config.credentials = {
      accessKeyId: process.argv[2],
      secretAccessKey: process.argv[3]
    };
  }

  const dbConnection = new AWS.DynamoDB({apiVersion: '2012-08-10'});
  try {
    // Make sure there is a migrations table
    console.log('Getting migration table');
    let migrationTableName = await common.findTable(dbConnection, 'Migration-' + environmentName, null, true, true);

    // If it doesn't exist, create it
    if (!migrationTableName) {
      console.log('Migration table not found...creating');
      migrationTableName = await createMigrationTable(dbConnection, 'Migration-' + environmentName);
      console.log('Migration created');
    }

    // Get all migrations that have been run
    const previousMigrationsRaw = await common.getAllItems(dbConnection, migrationTableName);
    const previousMigrations = previousMigrationsRaw.map((migration) => migration.migrationName.S);
    const successfulMigrations = [];
    let rollBack = false;

    for (const migration of migrations) {
      // Do I run the migration?
      if (previousMigrations.some((m) => m === migration.name)) {
        console.log('Already ran migration: ' + migration.name);
      } else {
        console.log('Running migration: ' + migration.name);

        // Try to run migration
        try {
          await migration.up(dbConnection, environmentName);
          successfulMigrations.unshift(migration);
          console.log('Successfully ran: ', migration.name);
        } catch (e) {
          console.error('Up Error: ', migration.name, e);
          console.error('Breaking out of migration loop');
          // Push the failed migration so we can run the down
          successfulMigrations.unshift(migration);
          rollBack = true;
          break;
        }
      }
    }

    // Was there an error? If so, run all downs
    if (rollBack) {
      console.error('Attempting to revert ' + successfulMigrations.length + ' migrations');
      for (const migration of successfulMigrations) {
        console.error('Attempting to revert ' + migration.name);
        try {
          // Need to down all
          await migration.down(dbConnection, environmentName);
        } catch (e) {
          console.error('Down Error: ', migration.name, e);
        }
      }
    } else {
      // Save migration completion
      console.log('Saving migrations to server', successfulMigrations);
      for (const migration of successfulMigrations) {
        await common.putItem(dbConnection, migrationTableName, {
          'migrationName': {
            S: migration.name
          },
          'migrationDate': {
            S: new Date().toISOString()
          }
        });
      }
    }
  } catch (e) {
    throw e;
  }
})();

async function createMigrationTable (dbConnection, tableName) {
  const params = {
    AttributeDefinitions: [
      { AttributeName: 'migrationName', AttributeType: 'S' },
      { AttributeName: 'migrationDate', AttributeType: 'S' }
    ],
    KeySchema: [
      { AttributeName: 'migrationName', KeyType: 'HASH' },
      { AttributeName: 'migrationDate', KeyType: 'RANGE' }
    ],
    TableName: tableName,
    BillingMode: 'PAY_PER_REQUEST'
  };

  // Call DynamoDB to create the table
  await dbConnection.createTable(params).promise();
  return tableName;
}
```

Not the cleanest code, but now I just have a folder containing JS files that each export a name, an up function, and a down function, which talk to DynamoDB directly, as in the docs: https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/GettingStarted.JavaScript.html
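For reference, a migration file consumed by a runner like the one above might look like this sketch. All identifiers here (the migration name, table name, and attribute) are illustrative assumptions, not from the original post:

```javascript
// Hypothetical migration module: it exports a name plus async up/down
// functions that receive the DynamoDB connection and the environment name,
// matching how the runner above calls migration.up/migration.down.
const migration = {
  name: '20200301-backfill-status',

  // Apply the data change, e.g. backfill a new attribute on existing items
  async up (dbConnection, environmentName) {
    const tableName = 'Todo-' + environmentName;
    console.log('migrating ' + tableName);
    // await dbConnection.updateItem({ TableName: tableName, /* ... */ }).promise();
  },

  // Revert the change so the runner can roll back a failed batch
  async down (dbConnection, environmentName) {
    console.log('reverting Todo-' + environmentName);
  }
};

module.exports = migration;
```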

ghost avatar Feb 06 '20 14:02 ghost

Really?? No comment on this? I don't understand how you're supposed to make any changes if you have an app in production, other than completely ejecting Amplify and managing your stacks et al. yourself once you have live data and users in your app - which isn't a completely unreasonable idea, but I have not seen any mention of this being a purely development-stage tool.

lukeramsden avatar May 11 '20 11:05 lukeramsden

It's really a surprise that no Amplify team member has provided any useful information for this request. This is a MUST-HAVE feature for any data-related solution.

It seems data model evolution and data migration in Amplify have been completely forgotten.

ivenxu avatar Jun 09 '20 12:06 ivenxu

I've switched to using Postgraphile with graphile-migrate for my backend; once you get the hang of writing your schema (playing around with graphile-starter helped a lot) it's really very nice. Forward-only migrations seem to be working well for me, and a real relational database means I can offload most of the work from the client to the server - a core premise of GraphQL is supposed to be eliminating client data processing, as the client gets the data in exactly the format it wants. I still use Amplify to manage my Auth and S3, and for that purpose it works very well.

lukeramsden avatar Jun 09 '20 12:06 lukeramsden

No responses yet ?

luisenaguero avatar Nov 16 '20 14:11 luisenaguero

Trying.

cawfree avatar Nov 20 '20 16:11 cawfree

I have started to invest in the platform but an 18 month old issue like this, with no official comment, doesn't convince me that I would be able to manage a serious production application using amplify/appsync.

markau avatar Dec 12 '20 09:12 markau

Not by any means a scalable/robust migration system for a team, but FWIW I have been using an AWS::CloudFormation::CustomResource with a setupVersion parameter and a setup lambda function.

```json
"Version": {
  "Ref": "setupVersion"
},
"ServiceToken": {
  "Ref": "function..."
}
```

Then I've been making idempotent changes on version change via the lambda. It works OK for DynamoDB and the like, since you can't make substantial changes anyway, but it wouldn't be great for SQL changes.

cdunn avatar Dec 12 '20 16:12 cdunn

My approach has been the same as @cdunn. To elaborate a little, here are some more implementation details:

I have created a lambda called MigrationService. In the resources section of the template, I have the following custom resource:

```json
"CustomMigrationService": {
  "DependsOn": [
    "AmplifyResourcesPolicy",
    ...
  ],
  "Type": "Custom::MigrationService",
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": [
        "LambdaFunction",
        "Arn"
      ]
    },
    "TriggerVersion": 5
  }
}
```

The most important thing in this custom resource is the TriggerVersion. If it is incremented, then the lambda will be executed upon deployment. So if you deployed with version 1, then made changes to your code and redeployed without incrementing the TriggerVersion, your lambda will not be executed.
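As a sketch of how the lambda can detect the version bump: CloudFormation Update events carry both the new and the old resource properties, so a handler can compare them. This helper is hypothetical (the property name follows the custom resource above):

```javascript
// Decide whether migrations should run for this custom-resource event.
// CloudFormation sends ResourceProperties (new values) and, on Update,
// also OldResourceProperties, so a TriggerVersion change is easy to spot.
function shouldRunMigrations (event) {
  if (event.RequestType === 'Create') return true;
  if (event.RequestType !== 'Update') return false;
  return event.ResourceProperties.TriggerVersion !==
    event.OldResourceProperties.TriggerVersion;
}
```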

Be sure to give the lambda the necessary access so it can make all the necessary migrations. I have done that by editing the AmplifyResourcesPolicy section and adding statements to the AmplifyResourcesPolicy > Properties > PolicyDocument > Statement. E.g.:

```json
{
  "Effect": "Allow",
  "Action": [
    "cognito-idp:AddCustomAttributes",
    "cognito-idp:AdminAddUserToGroup",
    "cognito-idp:ListUsers"
  ],
  "Resource": [
    {
      "Fn::Join": [
        "",
        [
          "arn:aws:cognito-idp:",
          { "Ref": "AWS::Region" },
          ":",
          { "Ref": "AWS::AccountId" },
          ":userpool/",
          { "Ref": "authcognitoUserPoolId" }
        ]
      ]
    }
  ]
}
```

or

```json
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:Get*",
    "dynamodb:BatchGetItem",
    "dynamodb:List*",
    "dynamodb:Describe*",
    "dynamodb:Scan",
    "dynamodb:Query",
    "dynamodb:Update*",
    "dynamodb:RestoreTable*"
  ],
  "Resource": [
    { "Ref": "storageddbBlogArn" },
    {
      "Fn::Join": [
        "/",
        [
          { "Ref": "storageddbBlogArn" },
          "index/*"
        ]
      ]
    }
  ]
}
```

Next up, the handler of the lambda needs to account for the creation of the custom resource. Here's the skeleton of my code:

```js
exports.handler = async (event) => {
    const cfnCR = require('cfn-custom-resource');
    const physicalResourceId = 'physicalResourceId-MigrationService-112233';
    const { sendSuccess, sendFailure } = cfnCR;

    if (event.RequestType === 'Delete') {
        const result = await sendSuccess(physicalResourceId, {}, event);
        return result;
    }

    try {
        // your code here

        const result = await sendSuccess(physicalResourceId, {}, event);
        return result;
    } catch (err) {
        // your code here
        const result = await sendFailure(err, event);
        return result;
    }
};
```

Probably the most important thing here is to handle the Delete event. Your lambda will be executed if your stack is being rolled back, so if the stack is rolling back because the lambda errored out during deployment, invoking it again during rollback without responding will leave CloudFormation hanging.

Lastly, I've implemented versioning so I do not rerun migration scripts. (Keeping scripts idempotent and re-runnable is always a great idea; however, it could get expensive if you have a long list of migration scripts, so skipping the ones that have already executed comes in handy. If you have few re-runnable scripts you can potentially skip this.)

In my case, I have 3 environments, so I store the latest deployed version number in a DynamoDB table. When the lambda is triggered it will pull the latest deployed version number for that environment and then load and run the migration scripts that have a higher version.

My migration scripts folder structure is: migrationScripts/component/version.js

(I have separated the project into a few components that could be deployed independently but you might not need that)
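A minimal sketch of that version gating; the helper and the `<version>.js` naming scheme are assumptions for illustration, not from the original comment:

```javascript
// Given the file names found under migrationScripts/<component>/ and the
// last deployed version recorded in DynamoDB, return the versions that
// still need to run, oldest first.
function versionsToRun (fileNames, lastDeployedVersion) {
  return fileNames
    .filter((name) => name.endsWith('.js'))
    .map((name) => parseInt(name.replace(/\.js$/, ''), 10))
    .filter((v) => Number.isInteger(v) && v > lastDeployedVersion)
    .sort((a, b) => a - b);
}
```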

It would have been nice if there was a built-in feature to help with the migration but the good news is that this approach works (given adequate access) for any AWS resource change and not only data.

krikork avatar Dec 22 '20 16:12 krikork

@dabit3 any official statement on this? Is amplify a dev tool only? Please make it clear in the docs that amplify is not suitable for production apps. Many people spend a lot of time on this only to find out that most basic features are missing. Plus, no official statement for more than a year 👎

sasweb avatar Jan 19 '21 08:01 sasweb

Bumping this

jacobsapps avatar Jan 19 '21 11:01 jacobsapps

Yeah, this is critical.

simon-lanf avatar Mar 10 '21 05:03 simon-lanf

Yeah, I've been searching everywhere for an understandable way to do this.

MiladNazeri avatar Mar 30 '21 21:03 MiladNazeri

we're also having an issue with this... any direction in the official docs would be appreciated

rraczae avatar May 07 '21 16:05 rraczae

I would really like to understand what the Amplify team's recommendation is on this... what are best practices, etc.

treystudio3a avatar May 14 '21 17:05 treystudio3a

> @dabit3 any official statement on this? Is amplify a dev tool only? Please make it clear in the docs that amplify is not suitable for production apps. Many people spend a lot of time on this only to find out that most basic features are missing. Plus, no official statement for more than a year 👎

Totally agree with you. It's easy to set up projects from scratch, but in the long term, when changes are needed, we eventually end up in hell. Amplify hides a lot of implementation details, so it lacks production-grade features.

osddeitf avatar Jul 26 '21 05:07 osddeitf

Glad I ran into this early in my evaluation. It'd have been catastrophic to hit a wall like this in production.

khalibloo avatar Aug 05 '21 18:08 khalibloo

This is also an issue for me. One key requirement is to have rollback support. Our dev team uses multiple independent environments and we often push other branches during code reviews, then push another branch, effectively removing previously added resources.

pagameba avatar Aug 30 '21 13:08 pagameba

No response to this for so long really sucks. @josefaidt I see you've added this to your project board recently... perhaps a quick reply to at least give us some info would be nice?

markymc avatar Sep 03 '21 16:09 markymc

Hey - wanted to drop a note in from the Amplify team. We're looking into some data / schema migration workflows right now, though because this space is really large, we won't address every single use case initially. Soon, we'll launch a mechanism to explicitly opt in to breaking changes during push. After that we'll look into more sophisticated migration workflows.

Question to the community, is this feature valuable already if we enforce to only allow data migrations if the schemas between the environments are exactly the same?

One of our core design challenges right now is to provide a smooth migration experience when it's not so obvious. For example, renamed models or fields, changed field types and nullability all within one "deployment step".

renebrandel avatar Sep 03 '21 18:09 renebrandel

I wonder if there could be some schema markup to help with this... where you make use of temporary @was or @isNow directives.

Old Schema

```graphql
type Dog @model {
  id: ID!
  name: String!
  breed: String!
  favoriteToy: String!
}
```

New Schema

```graphql
type Animal @model @was("Dog") {
  id: ID!
  name: String!
  type: String! @isNow("Dog")
  breed: String!
  favoriteObject: String! @was("favoriteToy")
}
```

@isNow basically fills in the field with a value... maybe it could be hooked up to a lambda or simple logic. @was basically renames the object.

Both of these would only work when the field didn't exist...so the migration only happens the first time it is encountered...and after all environments are migrated, you can safely remove them...

GeorgeBellTMH avatar Sep 03 '21 18:09 GeorgeBellTMH

> Hey - wanted to drop a note in from the Amplify team. We're looking into some data / schema migration workflows right now, though because this space is really large, we won't address every single use case initially. Soon, we'll launch a mechanism to explicitly opt in to breaking changes during push. After that we'll look into more sophisticated migration workflows.
>
> Question to the community, is this feature valuable already if we enforce to only allow data migrations if the schemas between the environments are exactly the same?
>
> One of our core design challenges right now is to provide a smooth migration experience when it's not so obvious. For example, renamed models or fields, changed field types and nullability all within one "deployment step".

One of my use cases is that I need to make a change to the schema that involves a breaking change to the data that is already in the schema. For instance, a field that was previously not required becomes required and we need to backfill some data into existing records in order for AppSync to not complain.

What I am looking for is the capability to execute a series of migration scripts during or after the amplify deployment, where the scripts have an 'up' and a 'down' capability in case of rollback. The ideal solution would keep track of which scripts have been executed, execute the 'up' method during migration events, and have some way of rolling back a migration and triggering the 'down' event in the event that the deploy fails for some reason.

Ideally Amplify would provide the infrastructure and scaffolding for this and all I would need to do would be to run an amplify command to create a new migration script and then fill in the details of the up and down.

pagameba avatar Sep 07 '21 12:09 pagameba

@renebrandel This is also related to aws-amplify/amplify-category-api#180

On top of the actual implementation that you might undertake (hopefully), given that lots of people are implementing their own custom approaches, I think it would also be very useful to provide guidance and feedback on the best route for a custom approach.

Some tricky aspects upon schema update / migration:

  • how to integrate the approach into the build so that it is possible to roll back if there are errors. I get the general idea, but some guidance / best practices on this would be very helpful.
  • how to handle existing user sessions and DataStore conflicts (might force a DataStore clear).

I'll share some of our notes; it's just a draft: [image attachment]

In general, what about adding an entry in the amplify docs about data migration, mentioning the plans for implementation and alternative best practices for custom approaches?

Please keep us posted about your implementation schedule.

cfbo avatar Sep 08 '21 07:09 cfbo

@renebrandel is this being worked on in some form or fashion still? If so could you possibly link a branch?

crazyzelot avatar Oct 21 '21 03:10 crazyzelot

@crazyzelot https://github.com/aws-amplify/amplify-cli/pull/8425

v-raja avatar Oct 24 '21 00:10 v-raja

@renebrandel
Any update on this? Or can someone point me to a best-practices guide on this issue? I can't find anything in the docs and I often run into issues after schema updates (e.g. simply adding a non-nullable field that doesn't currently exist in a DB table). I'm hoping to start testing my application with live users and I'm certain migrations are necessary for that.

Am I going to have to write my own custom migration mechanism or has the team got something in the works?

Taylor-S avatar Jan 15 '22 23:01 Taylor-S

Hi @Taylor-S, for your particular use case you should be able to use the @default directive on your new field: https://docs.amplify.aws/cli/graphql/data-modeling/#assign-default-values-for-fields
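For example, a sketch of such a schema (the type and field names here are illustrative, not from the thread); per the linked docs, the default is passed as a string and applies when a create mutation omits the field:

```graphql
type Todo @model {
  content: String!
  # new required field; create mutations that omit it get "low"
  priority: String! @default(value: "low")
}
```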

But migration use cases are obviously much larger than just that. We're currently working on a @mapsTo directive that allows you to rename an existing field/model to a new name.

renebrandel avatar Jan 15 '22 23:01 renebrandel

@renebrandel , Awesome! That directive will definitely help me out. Obviously I'm very new to graphql and amplify. :) Glad to hear the team has something in the works. I'll keep an eye out for the update. Thanks for the quick reply

Taylor-S avatar Jan 16 '22 00:01 Taylor-S

Hi everyone, while we haven't yet addressed all of the concerns mentioned in this thread, we are excited to announce a new @mapsTo directive to help with certain scenarios. It is available in the latest version of the CLI (7.6.14) as a developer preview. To try it out, you don't need to do anything except start using the directive in your schema.

This directive can be used to rename a GraphQL type but retain the original table and data. Usage looks like:

```graphql
type Article @model @mapsTo(name: "Blog") {
  id: ID!
  title: String!
}
```

Where "Blog" is the original name of the "Article" type that contains data you want to retain. For more details, check out the docs PR here: https://github.com/aws-amplify/docs/pull/3890/files

edwardfoyle avatar Jan 31 '22 23:01 edwardfoyle