Version increments should distinguish discovery file updates from feature updates
Today the minor version number is incremented for both true code changes (bug fixes, new features) and routine updates to the API discovery docs.
Ideally the version increment would reflect which of these is changing. My suggestion would be to increment the patch number (the 0 in 1.86.0) for discovery-file-only updates and reserve minor version increments for true code changes.
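To make the proposal concrete, here is a hypothetical sketch (not part of the library's actual release tooling) of how the bump rule would work: discovery-doc refreshes bump the patch component, true code changes bump the minor component, and breaking changes bump the major one.

```python
def next_version(current: str, change: str) -> str:
    """Return the next version string under the proposed scheme.

    change is one of:
      "discovery" - regenerated discovery docs only -> bump patch
      "feature"   - true code change               -> bump minor
      "breaking"  - backwards-incompatible change   -> bump major
    """
    major, minor, patch = (int(part) for part in current.split("."))
    if change == "discovery":
        return f"{major}.{minor}.{patch + 1}"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    if change == "breaking":
        return f"{major + 1}.0.0"
    raise ValueError(f"unknown change kind: {change!r}")

print(next_version("1.86.0", "discovery"))  # → 1.86.1
print(next_version("1.86.0", "feature"))    # → 1.87.0
print(next_version("1.86.0", "breaking"))   # → 2.0.0
```

This matches standard SemVer semantics, just with an explicit mapping from "what kind of change landed" to "which component to bump".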
Hello @jay0lee,
This is a great suggestion. I'll pass it along to my team to get their thoughts as well.
We have context-aware commits in our Cloud Client Libraries, where we provide better release notes. Unfortunately, we don't have this infrastructure for clients that are built dynamically from the discovery docs. Can you clarify whether anything is blocking you from migrating to Cloud Client Libraries? As an example, see the release notes for google-cloud-speech here.
Feel free to reach out to me at jayhlee@ if you want to discuss further.
I primarily use this library for Workspace APIs, which do not have Cloud client libraries. Frankly, I really like the discovery approach of this library, and while it does present some stability issues, I prefer it because new API features are available in the library on day one.
Ideally we would make this versioning change in the next major version. There are a few open issues that may require bumping the major version:
- https://github.com/googleapis/google-api-python-client/issues/1118
- https://github.com/googleapis/google-api-python-client/issues/1490
- https://github.com/googleapis/google-api-python-client/issues/2100
> Can you clarify whether anything is blocking you from migrating to Cloud Client Libraries?
This is off-topic for this issue, but I see this question ("Why can't you just use our newer libraries?") asked a lot. One reason not to use them is their higher memory requirements: https://github.com/googleapis/python-logging/issues/623#issuecomment-1532913477
Another reason I've seen others bring up is batch operations, which we use extensively.
Until the memory issue is addressed and the client libraries all offer batched operations, there will always be use cases where this library is the better (edit: or even the only reasonable) choice. There could very well be other good reasons.