amazon-personalize-automated-retraining
Handling of multiple solutions and multiple campaigns
Add the possibility to update multiple solutions (e.g., create solution versions for them).
- [ ] Handling for multiple solutions
- [ ] Handling for multiple campaigns
Hi Till, we have been using your code to retrain our solutions on Personalize for a while now. Very easy to use and works right out of the box. Now we are training a second dataset. We were wondering: if we run make again with the new values, would it override the old setup?
Hey Keyvan,
thanks for the feedback, highly appreciated.
If you run make again with new values, it'll override the deployed stack. It does so based on the name of the stack, which is configured here: https://github.com/aws-samples/amazon-personalize-automated-retraining/blob/master/Makefile#L14
My recommendation for doing this right now would be: check out this repo twice (once for each dataset), adjust the values for each dataset, and use a distinct --stack-name per dataset.
This is a bit cumbersome, but unfortunately this sample currently has no other way of generalizing over the dataset.
Out of curiosity: in your use case, are the rest of the parameters the same (e.g., retrain rate, roles, ...)?
--stack-name would do the job! Glad it's possible.
In our use case, most of the parameters would remain the same. We are looking to train two different datasets with different metadata to see which one performs better. I think the params that would change are things like S3 paths and Campaign ARNs.
Perhaps to avoid checking out the repo twice, you could add a feature to determine which params.cfg file to use from an environment variable? If you think this is a useful feature, I can gladly make a pull request.
I tried running make with the second stack name. While the stack in CloudFormation gets created under the new name, it seems like the Lambda functions retain their names, which causes an error during the deploy process.
Oh, sorry to hear. Hope it didn't break any of your automation.
It seems that setting FunctionName in https://github.com/aws-samples/amazon-personalize-automated-retraining/blob/master/stack.template#L125 (there is one occurrence for each function) is unnecessary; CloudFormation will generate a random, unique name if it is not set. So removing all occurrences of FunctionName in the template should fix the issue. I didn't test it though, and will only be able to do so at the beginning of next week.
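For illustration, a rough sketch of what one such resource could look like with FunctionName removed (the logical ID and property values below are made up for the example, not copied from stack.template):

```yaml
# Illustrative sketch only -- not the repo's actual resource definitions.
Resources:
  RetrainingFunction:
    Type: AWS::Lambda::Function
    Properties:
      # FunctionName intentionally omitted: CloudFormation then generates a
      # unique physical name such as <stack-name>-RetrainingFunction-<random>.
      Handler: index.handler
      Runtime: python3.12
      Role: !GetAtt RetrainingFunctionRole.Arn   # role assumed to be defined elsewhere in the template
      Code:
        S3Bucket: my-artifact-bucket             # hypothetical bucket
        S3Key: retraining.zip                    # hypothetical key
```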
> Perhaps to avoid checking out the repo twice, you could add a feature to determine which params.cfg file to use from an environment variable? If you think this is a useful feature, I can gladly make a pull request.
Absolutely, I think it's best to extract the config file into a default parameter in the Makefile (e.g., like https://github.com/aws-samples/amazon-personalize-automated-retraining/blob/master/Makefile#L4), which defaults to parameters.cfg, and you can then override it like PARAMETER_FILE=myfile.cfg make all.
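A minimal sketch of the idea (the variable name and target below are illustrative, not the repo's actual Makefile):

```make
# Sketch only: `?=` assigns a default, so a value from the environment wins,
# e.g. `PARAMETER_FILE=myfile.cfg make all`.
PARAMETER_FILE ?= parameters.cfg

all:
	@echo "Deploying with parameters from $(PARAMETER_FILE)"
	# pass $(PARAMETER_FILE) to the deploy command here
```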
> It seems that setting FunctionName in https://github.com/aws-samples/amazon-personalize-automated-retraining/blob/master/stack.template#L125 (there is one occurrence for each function) is unnecessary; CloudFormation will generate a random, unique name if it is not set. So removing all occurrences of FunctionName in the template should fix the issue. I didn't test it though, and will only be able to do so at the beginning of next week.
That worked, thanks! The generated function names are actually decently readable, which was my biggest worry.

Do you know of any way to have the template file explicitly generate names like stackname-functionname?
@keyvanm I added https://github.com/aws-samples/amazon-personalize-automated-retraining/pull/5 to include the stack name into the function name, please let me know if this is what you intended.
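For reference, the general CloudFormation pattern is to build the name from the AWS::StackName pseudo parameter; a rough sketch (illustrative only, not necessarily the exact diff in the PR):

```yaml
RetrainingFunction:
  Type: AWS::Lambda::Function
  Properties:
    # Fn::Sub prefixes the stack name, so e.g. stack "dataset-a" yields "dataset-a-retraining".
    FunctionName: !Sub "${AWS::StackName}-retraining"
    # ...remaining function properties unchanged...
```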
@Till--H Wow, yes, that's what I was thinking. Amazing that it's possible!