troposphere
PyCharm autocompletion for properties
Because the classes' fields for AWS resource properties are generated dynamically, IDEs like PyCharm don't suggest those fields during autocompletion.
For example:
from troposphere import Template, apigateway
t = Template()
apiResource = apigateway.Resource("contactcloud")
apiResource.ParentId  # <-- won't be suggested by autocompletion!
This could be remedied in two ways:
1. Add type hints to the beginning of classes, e.g.:

   class Resource(AWSObject):
       ParentId: basestring  # Python 3 type hints for fields
       PathPart: basestring
       RestApiId: basestring

       resource_type = "AWS::ApiGateway::Resource"

       props = {
           "ParentId": (basestring, True),
           "PathPart": (basestring, True),
           "RestApiId": (basestring, True)
       }
2. Generate stub files
This is outlined by PEP 484#stub-files
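To make the comparison concrete, here is a self-contained sketch of how alternative 1 helps (the AWSObject base class below is a simplified stand-in for troposphere's real implementation, not its actual code):

```python
class AWSObject:
    """Simplified stand-in: stores properties passed via **kwargs at runtime."""
    def __init__(self, title, **kwargs):
        self.title = title
        for key, value in kwargs.items():
            setattr(self, key, value)

class Resource(AWSObject):
    # Class-level annotations carry no runtime values, but IDEs like
    # PyCharm read them and offer these names during autocompletion.
    ParentId: str
    PathPart: str
    RestApiId: str

api_resource = Resource("contactcloud", ParentId="root-id", PathPart="contact")
print(api_resource.ParentId)  # -> root-id
```

The attributes still only exist at runtime, but the annotations give static tooling something to index.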
I've tried to implement alternative 1, which outputs classes like this:
class Resource(AWSObject):
    resource_type = "AWS::ApiGateway::Resource"

    ParentId = None  # type: basestring
    PathPart = None  # type: basestring
    RestApiId = None  # type: basestring

    props = {
        'ParentId': (basestring, True),
        'PathPart': (basestring, True),
        'RestApiId': (basestring, True),
    }
The problem now is that 2to3 completely ignores the type hints. Maybe a better course of action would be to directly generate Python 3 code and have that converted down to Python 2?
Does the stub file approach work for both python2 and python3? Those likely could be autogenerated similar to the way I'm generating code from the resource specifications.
@markpeek Stubs should work for both p2 and p3. The problem is that p2 and p3 have different syntaxes for type hinting (p2 doesn't really have one at all), so you can't use the exact same file for both. Conversion seems to be out of the question too.
Would having two extra scripts for generating p2/p3 stubs be an acceptable solution?
I've created a PR #1079
Any further comments on this? Does the PR need more work?
Sorry for the delay, I've been contemplating the right way to handle it. Here are some thoughts:
- The main issue around Python 2/3 is the use of basestring in troposphere. I'm debating making that more uniform by introducing a string_type based on the version of Python (similar to how six deals with it). This would be a broad change touching all of the files (best done by me in a single commit).
- Really there should only be 1 generator script for both troposphere code and the autocompletion.
- The autocompletion code should output both 2 and 3 into the same file since type checkers are supposed to honor version and platform checking. Question: does PyCharm support this?
- The autocompletion code could be emitted during packaging rather than checking in these files. Need to think more about which is better.
- This is tricky: using an abstraction class might confuse some people, especially if they're new to Python. Maybe I'm just overthinking it though. At any rate, this seems unavoidable if the same stubs are to be used for both Python 2 and 3.
- I'll try to get that done by Sunday.
- Neat, I didn't know that. PyCharm does support it (I also just tested it).
- You could argue that the code should also be emitted during packaging then. I think it's more obvious where it came from if it's also committed.
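For reference, the version and platform checking discussed above is the PEP 484 convention that type checkers statically evaluate sys.version_info guards, so a single file can declare different types for Python 2 and 3. A minimal sketch (string_type is the name floated earlier, used here as an assumption):

```python
import sys

# Type checkers (and PyCharm) statically evaluate this guard per PEP 484,
# so one stub file can serve both Python 2 and Python 3.
if sys.version_info >= (3, 0):
    string_type = str
else:
    string_type = basestring  # noqa: F821 -- only reached on Python 2

print(string_type("hello"))  # -> hello (on Python 3, string_type is str)
```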
Cheers for the feedback!
Changes are done. See PR #1079
Any update on this?
Would really love to avoid the cmd-click for props every time.
I looked at it today and I think we need to make a few changes. I'll follow up in PR #1079. But I hope to get it merged and then fix it up real soon to get it out.
I saw the PR has already been merged, but I don't get autocompletion. Do I need to do something else? I tried searching the docs but couldn't find anything. Thanks!
The generator was merged, but sadly the generated files never were added to the repo/package.
Are there any plans to add those files to the package? In the meantime I'll see if I can generate them myself and have PyCharm detect the stub files. Thanks!
That is up to @markpeek! I'd love to see my labour bear fruit :)
@DrLuke , @xyvark , @grkking, @markpeek Please take a look at my project https://github.com/MacHu-GWU/troposphere_mate-project
It builds a thin wrapper layer on top of troposphere using attrs. It is a pythonic implementation with a nicer API, and type hints are enabled using the type hinting markup described at https://www.jetbrains.com/help/pycharm/type-hinting-in-product.html.
And it comes with a feature called auto-reference/associate. You don't need to remember the detailed properties or the syntax for resources to reference each other; you just use this, and it automatically does the work for you:
associate(lambda_func, iam_role)
associate(lambda_func, subnet1)
associate(lambda_func, subnet2)
associate(lambda_func, security_group)
So it gives you auto-suggestions for types and property names.
If my work is helpful, I am happy to see you star my project :)
Thank you @MacHu-GWU
Check how Pydantic does this: it does validation with type hints alone. Also, you get autocomplete for constructor keyword arguments with the "pydantic" PyCharm plugin.
https://pydantic-docs.helpmanual.io/usage/models/
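Pydantic itself aside, the underlying idea (validation driven entirely by type annotations) can be sketched with just the standard library; this toy validator is illustrative and is not how pydantic is actually implemented:

```python
def validated_init(self, **kwargs):
    # Check each keyword argument against the class's annotations,
    # the same information pydantic reads for its validation.
    for name, value in kwargs.items():
        expected = self.__class__.__annotations__.get(name)
        if expected is None:
            raise TypeError(f"unexpected property: {name}")
        if not isinstance(value, expected):
            raise TypeError(f"{name} must be {expected.__name__}")
        setattr(self, name, value)

class Resource:
    ParentId: str
    PathPart: str
    __init__ = validated_init

r = Resource(ParentId="root-id")
print(r.ParentId)           # -> root-id
# Resource(ParentId=123)    # would raise TypeError: ParentId must be str
```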
Troposphere no longer supports Python 2, which makes this problem easier now.
I have a bit of time through the end of this year that I can put towards some work on implementing this if we can decide on a path forward.
I put together a PoC rewriting troposphere.__init__ to use pydantic models as the base class for Template and BaseAWSObject with the goal of making as few changes as possible. I rewrote 2 simple cloudformation resource classes to use this new format to give an idea of how they will look/function. The project can be found here: https://github.com/ITProKyle/troposphere-pydantic. It also includes an example template to show usage.
So far, the biggest breaking change I see for this route is that the title and template arguments are no longer positional and must be provided as keyword arguments.
The least-effort path forward, if this route is chosen, would likely be to manually update all classes to use the new format. We can then look at automatically building the classes using Jinja as a future step, unless someone else has time in the next few weeks to get that started, as it may burn too much of the time I have available right now.
Open to any suggestions/feedback on my PoC. I'll hold off on working on it further until a maintainer chimes in.
@ITProKyle there are spec files that AWS maintains regularly with all resource and property information. I created an open source project that automatically keeps the resource model classes up to date along with the spec file.
https://cottonformation.readthedocs.io/en/latest/
It gives you "Required Property", "Optional Property", and "Return Value" hints, plus autocompletion, with built-in type hints and a built-in validator.
Also, more design patterns such as these are included:
- deploy from python
- nested stack and template
- group your resources for incremental debugging
I wish troposphere could also provide features like these to improve productivity. Feel free to borrow my work that automatically generates source code based on the spec file.
@MacHu-GWU Yeah, this is neat (cottonformation) and will be useful to pull the schemas with it.
User feedback on cottonformation, though (and typically the reason why I really dislike CDK as well): you renamed every single attribute from the CFN properties, e.g. rp_AssumeRolePolicyDocument instead of simply AssumeRolePolicyDocument. That breaks the integration that is in troposphere today, which makes it so easy to import CFN resources and get them as Python objects. Is there a particular reason the properties were renamed and prefixed with rp_?
@JohnPreston yes. Whenever you declare a resource, some options are Required and some are Optional. The rp_ prefix tells you it is a Required Property, the p_ prefix marks a regular optional property, and the rv_ prefix marks a Return Value.
In my experience, this fully leverages the power of a modern IDE: it frees you from looking up documents and lets you focus on logic.
I get that. But my point is that one cannot then just import property name for property name. The renaming of the attributes, although it makes sense, is really annoying IMHO. With troposphere following the names of the properties strictly, and still being able to define validation and required/optional, it works just fine.
And as a CFN engineer you should know that required / not required alone is not nearly enough to tell whether two options are compatible.
In my experience, reading the docs saves kittens. And there is no shortcut for it, however hard you try
@JohnPreston I totally agree with you. Type prefixes like rp_ and p_ are really annoying to a CFN engineer. But Python is a dynamically typed language, and this enables before-runtime checks in the IDE if you write the wrong code.
My aim is to allow less experienced engineers to develop CFT more easily, with less of a learning curve. The trade-off is using magic prefixes to trigger the autocomplete and hints. I have to admit that it is not perfect though :)
You don't need those prefixes for IDE support, see e.g. PyCharm Pydantic plugin
@MacHu-GWU - Just want to echo the feedback that everyone else has provided. Changing the names of the properties kills its usability for me and my teams. I get that you are trying to differentiate required vs optional parameters, but the appropriate way to do that would be by making the type annotation of the property Optional[type], not by changing the documented names of the properties. This is something that all modern IDEs display and provide feedback for before runtime (type checkers like pyright can also be used). In my opinion and that of others I work with, changing the names of the properties adds more complexity, especially for the "low experience engineers" you are targeting.
That's not to say cottonformation isn't a nice library. It's just not a replacement for Troposphere for a company with hundreds of Troposphere templates (which would take a significant amount of time to update EVERY property) and is not a solution to this problem.
I've been busy but keeping an eye on this thread. Here's some comments:
- First off, I will be making a series of changes to the layout of troposphere to keep backward compatibility but move the validators into a separate directory. This is needed to allow better code generation from the resource specification file.
- With the generator changes for the above, I will get the stub files generated (thanks @DrLuke) alongside the code files.
- In looking at the stub files, it appears they allow matching the parameter names but not their types. This appears to be due to the use of kwargs in troposphere, and there isn't a great solution for typing each kwarg.
- With this I want to move incrementally and preserve as much backward compatibility as possible, given the amount of existing code in production, while improving the ease of use.
- I will have to look harder at pydantic, but initially I'm not a fan of the "title" change. Also, it seems PyCharm has a pydantic plugin, but I don't believe VS Code has one, so there is still a gap.
Anyway, wanted to provide a quick update from my viewpoint. The key for me is to rely more on the generator to keep the library up to date with the resource spec, ensure reproducible results (due to renames that are required), and to generate the corresponding stubs accurately.
To expand on what @ITProKyle said and to give my 2cts on the whole pydantic story:
I got introduced to pydantic mostly through these conversations, and went off and created projects where I would lazily use the JSON schema to create my classes, just because it then feels easy to deal with objects and their properties instead of dealing with dictionaries / lists etc. Very cool, very familiar to using the CFN CLI tool to create resources/modules.
However, although the principle of generating the troposphere properties from the JSON schema exposed by AWS CFN sounds amazing, I think that moving from the current object model, which is superbly easy to read and understand, to auto-generated classes, even with the ability to override the base class, feels like a giant step back and would break so much code logic that has been implemented so far.
I rely on the current structure of troposphere (i.e. issubclass(x, y)) so much that I would publish my own fork of troposphere to PyPI to keep compatibility with the current model going.
Plus the logic to go over a root AWS Resource down to its nested AWS Property is fairly straightforward. I might have already mentioned that, with the current model, you can pass a whole AWS Resource from an existing template and get its Troposphere version in a flash. For my usage, the compatibility is such that I can take the example for a given resource type from the AWS docs, and it works without a miss (modulo Intrinsic functions)
Where I do like what cottonformation does for example, which I implemented my own way for my own purpose, is providing a way to know what the valid return values are (from Ref / GetAtt etc.) but that is then again something that is fairly straightforward to implement.
So, in my opinion, going the pydantic way instead of good old **kwargs is going to be a nightmare for anyone using Troposphere as a pure library.
However, for the JSON definition of the CFN resources as exposed by the CFN team, I would much rather see it used purely for JSON schema validation in the validate() process.
If, for example, we had a Python library with all the CFN schemas in it (it would probably be huge, but pydantic is not light anyway), we could use that as a source for jsonschema validation very easily (that's what I do for compose-spec, among many other things).
To your point @markpeek, the one advantage I see with the generator is avoiding human error (which I know I introduced in one of my PRs), but I find the community has been pretty great at catching just that. So if we combined JSON validation from the official CFN models with validation of PR changes, we could keep up with the changes very quickly and with a high degree of confidence.
Thanks for reading. Go Troposphere!
PS: If someone knows a way to open PRs to the AWS CFN team to improve the JSON definitions, can you point me in the right direction?
@MacHu-GWU - Just want to echo the feedback that everyone else has provided. Changing the names of the properties kills its usability for me and my teams. I get that you are trying to differentiate required vs optional parameters, but the appropriate way to do that would be by making the type annotation of the property Optional[type], not by changing the documented names of the properties. This is something that all modern IDEs display and provide feedback for before runtime (type checkers like pyright can also be used). In my opinion and that of others I work with, changing the names of the properties adds more complexity, especially for the "low experience engineers" you are targeting.
@ITProKyle Thank you for the feedback! Great to hear different sounds with very solid reasons.
On the topic of validators, there's much less need for maintaining basic validation functions with the use of pydantic, as it's all done automatically by the library. It even handles converting a dict into an object if the user provides a dict instead.
...appears due to the use of kwargs in troposphere and there isn't a great solution for typing on each kwarg
Adding types for kwargs is quite easy, but I'm not sure how we would go about auto-generating them for troposphere stubs. To do it, we would need to change **kwargs to something like *, Kwarg1: str, Kwarg2: Optional[str] = None for each property, where Kwarg1 is required and Kwarg2 is optional. (PEP 3102)
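A quick illustration of that keyword-only pattern (the function and parameter names here are hypothetical, not troposphere's actual API):

```python
from typing import Optional

# Everything after the bare `*` must be passed by keyword (PEP 3102),
# which lets IDEs and type checkers see each property's name and type.
def make_resource(*, ParentId: str, PathPart: Optional[str] = None) -> dict:
    return {"ParentId": ParentId, "PathPart": PathPart}

print(make_resource(ParentId="root-id", PathPart="contact"))
# make_resource("root-id")  # would raise TypeError: positional args not allowed
```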
I will have to look harder at pydantic but initially I'm not a fan of the "title" change. Also, it seems PyCharms has a pydantic module but I don't believe VS Code has one so there is still a gap.
The "title" change is partially pydantic and partially a new PEP being worked on to unify how type checkers treat custom classes that work like dataclasses (discussion).
The __init__ method of a pydantic model can be overridden to accept "title" as a positional argument, but it would break the native autocomplete support provided by the PEP being drafted, as the model would no longer work like a dataclass. I can't say I'm particularly happy about this change either, though specificity when supplying arguments is never a bad thing. There is also precedent baked into Python for instantiating objects using only explicit kwargs (dataclasses).
As for VS Code support, it is supported natively with the next release of pydantic (currently available from the master branch), which implements the PEP being worked on by pyright. Pyright/Pylance is used by default by VS Code's Python extension.
I rely on the current structure of troposphere (i.e. issubclass(x, y)) so much that I would publish my own fork version of troposphere to PyPI to keep compatibility of the current model going.
This is something that would remain viable through the use of pydantic. Looking at my PoC, the only base classes being changed are those of troposphere.BaseAWSObject and troposphere.Template, from object to pydantic.BaseModel. All other classes would have exactly the same base classes.
So, in my opinion, going the pydantic way instead of good old **kwargs is going to be a nightmare for anyone using Troposphere as a pure library.
How would it be a nightmare? It would work exactly the same, but actually define the kwargs it wants instead of letting the user guess. **kwargs is great for low-effort code to just get something working, but it's a horrible user experience not to have explicitly defined keyword arguments.
As nice as it would be from a maintainer standpoint to have troposphere auto-generated from the CFN spec, it adds little value to the end user for the large time investment it would require. Not to say that it doesn't need to be done; there are just more impactful projects for the users of troposphere, such as this one.
A possible stopgap solution would be to validate troposphere against the CFN spec instead of generating from it. For example: for each resource in the spec, check that a class exists and that its properties match. This would ensure that troposphere matches the spec (at least loosely) now, and that it continues to match the spec as full auto-generation is worked towards.
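That stopgap check could be sketched roughly like this (the spec fragment and class registry below are toy stand-ins, not the real CFN spec format or troposphere internals):

```python
# Toy fragment shaped loosely like the CFN resource specification.
spec = {
    "AWS::ApiGateway::Resource": {"ParentId", "PathPart", "RestApiId"},
}

class Resource:
    props = {"ParentId": (str, True), "PathPart": (str, True), "RestApiId": (str, True)}

# Hypothetical mapping of resource type names to troposphere-style classes.
registry = {"AWS::ApiGateway::Resource": Resource}

def check_against_spec(spec, registry):
    """Return a list of mismatches between the spec and the class registry."""
    problems = []
    for resource_type, expected_props in spec.items():
        cls = registry.get(resource_type)
        if cls is None:
            problems.append(f"missing class for {resource_type}")
        elif set(cls.props) != expected_props:
            problems.append(f"property mismatch for {resource_type}")
    return problems

print(check_against_spec(spec, registry))  # -> [] (everything matches)
```

Run as part of CI, a check like this would flag drift from the spec without requiring full code generation.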