open-subsonic-api
# Add OpenAPI json file
This PR attempts to add an openapi.json to the project for client library/documentation generation. This is a big one.
The file should already be usable; I will try generating a Python client library soon to see how well it works, but review of the PR can start even now. Ideally all extensions should be added to the schema. So far I have added extension endpoints, parameters, responses, and HTTP POST endpoints.
Since this is GitHub, there is a live URL to the JSON file, so technically it can be used for stuff straight away...
Everything below will need to be resolved one way or another; each item needs a decision from the maintainers of the API specification.
## Required work to be done before merge
- [x] Split out all endpoints, responses and component schemas to files matching the folder structure, using relative path refs. All files should resolve properly in editors for ease of editing.
- [x] Add build command that resolves all relative refs (and only those), properly handling nested refs
  - If top-level ref: dump contents into the final `openapi.json`
  - If nested ref: convert into `#/...` format (the final place in the `openapi.json` should be inferred from the file path alone, e.g. `content/en/docs/Responses/ArtistID3.json` goes to `#/components/schemas/ArtistID3`)
- [x] Add step to validate the final `openapi.json` during build and for PRs
- [x] Integrate `openapi.json` schema building into Hugo somehow
- [x] OpenAPI online (Swagger/Redoc) documentation generated along with the API spec
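The nested-ref rule above (final location inferred from the file path alone) can be sketched as follows. `ref_pointer` is a hypothetical helper, not code from this PR, and the assumption that everything lands under `#/components/schemas/` only matches the `Responses` example given:

```python
from pathlib import PurePosixPath

def ref_pointer(path: str) -> str:
    # Infer the final location in the bundled openapi.json from the
    # file path alone, e.g.
    # content/en/docs/Responses/ArtistID3.json -> #/components/schemas/ArtistID3
    return f"#/components/schemas/{PurePosixPath(path).stem}"

print(ref_pointer("content/en/docs/Responses/ArtistID3.json"))
# -> #/components/schemas/ArtistID3
```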
## Work checklist(s)
### Decide to Approve/Change/Fix every difference between `openapi.json` and the OpenSubsonic API spec
- [ ] Some typo fixes I failed to document :( I will need to find them again and add them to the section below
- [ ] Added `minimum: 0` to integer types where it's implied (`count`, `offset`, unix timestamps, `position`)
- [ ] Forces `json` format on every endpoint. `xml` response formats should not be a big hassle, but client library generation would probably become problematic.
- [ ] All extensions are added to the schema and tagged as "Extension", and also have a 404 return type described as "Not Implemented"
- [ ] Excluded `examples`, opted to use `externalDocs` only, because they made the Swagger and Redoc generated documents harder to read. I might add them back if they make sense for generated client code.
- [ ] Query parameters that only exist via extensions are always added and marked in their `description` field
- [ ] `HTTP form POST` extension support might be stricter than what's allowed (global params - auth and format params - only allowed as query params; endpoint-specific params are the only ones allowed in the request body)
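The stricter form-POST split described above can be illustrated with a hypothetical request. The endpoint and parameter names come from the Subsonic API; the query/body split itself is the assumption the checklist item questions:

```python
import urllib.parse

# Global auth/format params stay in the query string...
query = urllib.parse.urlencode({
    "u": "demo", "t": "token", "s": "salt",
    "v": "1.16.1", "c": "myapp", "f": "json",
})
# ...while endpoint-specific params go into the form-encoded POST body.
body = urllib.parse.urlencode({"id": "300", "current": "300", "position": "1000"})

url = f"https://demo.example.com/rest/savePlayQueue?{query}"
print(url)
print(body)  # -> id=300&current=300&position=1000
```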
### Decide to Schedule/Postpone/Fix/Resolve all issues/inconsistencies found in the spec
They might need to be resolved before/along with this PR or they can be resolved at a later time. It would be nice if all of them could be answered.
- [ ] `ItemDate` description and spec don't match (in required fields)
- [ ] Disc number must be a number, but obviously it can be other things
- [ ] `getAlbumInfo2` return type incorrect (based on `AlbumList`)?
- [ ] `downloadPodcastEpisode` return type incorrect?
- [ ] `deleteBookmark` summary and description are wrong
- [ ] `download` error payload schema missing completely
- [ ] `getArtists` page's first `artists` link is broken
- [ ] `getAvatar` `username` parameter is "required" and has a "default" as well; it should be one or the other
- [ ] `getCaptions` return payload not clear
- [ ] `getCoverArt` `size` param type not specified, I assume `int` as pixels
- [ ] `Indexes` `ignoredArticles` pattern is unclear, it was a comma-separated list as a string before
- [ ] `internetRadioStation` `homePageUrl` has wrong details
- [ ] `getLyrics` `artist` and `title` query parameters are really optional?
- [ ] Does `getLyrics` return only one element?
- [ ] Extension-not-supported error is unclear; assumed servers will 404 on the request.
- [ ] `PodcastEpisode` description says "PodcastEntry extends Child". Wrong name used?
- [ ] `NowPlayingEntry` `minutesAgo` format is unclear
- [ ] `PlayQueue` description incorrect
- [ ] `PodcastChannel` `errorMessage` existence could depend on `status` in documentation?
- [ ] `GetSimilarSongs` and `GetSimilarSongs2` response type is the same in the spec (not the names)?
- [ ] `User` example uses booleans and integers as strings, but they are typed as booleans and integers.
- [ ] `VideoInfo` is not documented
- [ ] `Videos` is not documented
- [ ] `hls` endpoint summary matches `download`
- [ ] Typo in `search` `album` parameter's comment
- [ ] `search` `any` parameter type unclear
- [ ] `SearchResult` is not documented
- [ ] `SearchResult2` all field details are incorrect
- [ ] `stream` `timeOffset` param's description typo, "Transcode Offet"
- [ ] `tokenInfo` data is too deeply nested in docs?
- [ ] `tokenInfo` says "A subsonic-response element with a nested tokenInfo on success, or error 44 on invalid token.". Does that mean it cannot return any other error code on error?
- [ ] `unstar` summary is incorrect
- [ ] `updateUser` is missing the `playlistRole` parameter?
- [ ] `createUser` is missing the `maxBitRate` parameter?
### Schedule/Require/Postpone/Request future improvements for `openapi.json`
- [ ] Could use inheritance for `ArtistInfo` and `ArtistInfo2`
- [ ] `getPodcasts` payload depends on query param `includeEpisodes`, but the schema doesn't indicate that structurally.
- [ ] `jukeboxControl` payload depends on query param `action: get`, but the schema doesn't indicate that structurally.
- [ ] `getAlbumList` and `getAlbumList2` POST params can be a tagged union
- [x] `jukeboxControl` POST params can be a tagged union
- [ ] Add version info for each endpoint and add 404 responses where the endpoint didn't exist since v1
- [ ] Add (back) examples
### Review generated OpenAPI documentation to verify it is OK
- [x] SwaggerUI
- [x] ReDoc
### Verify this OpenAPI schema against servers
~Using at least one of the tools from here, the openapi.json should be tested against all endpoint request/response payloads on the servers below. Any mismatches/errors should be fixed if openapi.json doesn't follow the spec. Ideally there will be no behaviors implemented in servers that differ from the official API spec.~
Way out of scope.
### Client libraries generated from this `openapi.json` are OK
#### Python
- [x] synchronous library
- [x] async library
- [x] library using `pydantic` for response payload validation - no such library found
Proposed client generation libraries for testing
- https://github.com/openapi-generators/openapi-python-client
- https://openapi-generator.tech/docs/generators/python/
#### TypeScript
- [x] Working async library
- [x] Working library using `zod` for response payload validation
Proposed client generation libraries for testing
- https://www.npmjs.com/package/openapi-typescript
- https://www.npmjs.com/package/@openapitools/openapi-generator-cli (not really good but official)
- https://www.npmjs.com/package/openapi-zod-client
#### Kotlin
- [ ] Working library
- [ ] Use library against Navidrome and test every endpoint
- [ ] Use library against any server requested by maintainers and test every endpoint
Proposed client generation libraries for testing
- https://github.com/Aallam/openai-kotlin
- https://openapi-generator.tech/docs/generators/kotlin/
#### Any other language required by maintainers
- Dart?
- C#?
## Major Future Work
- Documentation could be generated from the `openapi.json` (in its current form)
Deploy Preview for opensubsonic ready!
| Name | Link |
|---|---|
| Latest commit | f4d0c8185f3e328c41b42af8ea024f8c5964fdf8 |
| Latest deploy log | https://app.netlify.com/projects/opensubsonic/deploys/682ce4895f11760008d3e9ee |
| Deploy Preview | https://deploy-preview-137--opensubsonic.netlify.app |
As explained before, IMO:
Break up `openapi.json` into smaller chunks, and add a build step to merge and rebuild it into one file that can be served.
This should be done first; PR merging, rebasing and everything else will be a pain with such a single large file.
There are tons of tools to rebuild the large file from the split version.
> This should be done first; PR merging, rebasing and everything else will be a pain with such a single large file.
For the purposes of this PR it will probably be done last; breaking it up now would slow down making sweeping changes (if requested) and testing its validity.
Your future work section was not clear on whether it was tied to this PR or meant for a future one. If the split is made before it's ready to merge, then no problem.
> Your future work section was not clear on whether it was tied to this PR or meant for a future one. If the split is made before it's ready to merge, then no problem.
It's a PR, it's meant to be reviewed and changed to fit the needs of the project.
Although yes, I hoped I wouldn't need to write GitHub Actions and a build step for the file for the first version, as it probably won't be that simple if we want client (and server) generation to work nicely (which is my priority).
> (which is my priority).
The priority is actually that anything that is merged is stable and maintainable, this is a public documentation of a public API used by many servers and clients, breaking changes must be avoided at all costs.
If we go single file but can't split it because that would trigger some small incompatible changes, we are doomed.
One solution could be to publish the OpenAPI spec as alpha / unsupported until we are ready to commit to it, but the priority is stability not rushing.
The generation does not need to be a GitHub Action; the project already ships with Docker, and it can be a simple command in it to generate the JSON. This comes with the added benefit of content validation at that step to avoid PRs with errors in the files; it also avoids all formatting issues, greatly simplifies future reviews, ...
You missed the part where I changed the PR description to require your requested changes.
I can reiterate that I wrote before.
> It's a PR, it's meant to be reviewed and changed to fit the needs of the project.
My motivation and priorities are not something that will change (for this API to be easier to use and adopt) but it also won't affect this PR.
> The generation does not need to be a GitHub Action; the project already ships with Docker, and it can be a simple command in it to generate the JSON. This comes with the added benefit of content validation at that step to avoid PRs with errors in the files; it also avoids all formatting issues, greatly simplifies future reviews, ...
I simply meant some form of pipeline step is needed on every PR to preserve the structural validity of the final generated JSON.
> I simply meant some form of pipeline step is needed on every PR to preserve the structural validity of the final generated JSON.
In all cases we need a way to validate the file(s); whether there's one or many does not change that need. Otherwise, how do you review the PRs? Humans can't spot a typo or a missing comma in a large commit.
@Gr3q Do you intend to still work on this ?
Yes, when I have time.
What about the huge checklists below the "Required work to be done before merge" section?
Can I take the lack of comments on them to mean they are not important to you for this PR?
Most don't care about the doc part details so it's hard to get engagement until it's finished.
For my part, this work was planned to be done by me to unlock the new JSON endpoints, so it's very important; your list of required work corresponds to what I requested here and would have done, so there's not much to comment.
Edit: Sorry for the close misclick.
@Tolriq which folder structure and target folder do you prefer for the broken out schemas?
Contextual requirements/options:
- Target folder options: `content/en/docs`, `openapi`, or anywhere you want really
- The base `openapi.json` should be above the target folder so path traversals are not needed for its references (no `"$ref": "../{schemapath}"`)
- All broken-out schema filenames (responses, endpoints etc.) should match the original documentation filename in `content/en/docs`
Options
The options below use `Subsonic-response` as the example schema, `content/en/docs` as the target folder, and `openapi.json` placed in the root folder.
Option 1: flat list placed directly into the target folder/schema type

```
openapi.json
content/en/docs/Responses/
    ...
    Subsonic-baseResponse.json
    Subsonic-failureResponse.json
    Subsonic-response.json
    Subsonic-response.md
    Subsonic-successResponse.json
    ...
```
Option 2: supplementary schemas go into a subfolder of the same name

```
openapi.json
content/en/docs/Responses/
    ...
    Subsonic-response/
        Subsonic-baseResponse.json
        Subsonic-failureResponse.json
        Subsonic-successResponse.json
    Subsonic-response.json
    Subsonic-response.md
    ...
```
Option 3: all JSON schemas go into a subfolder of the same name

```
openapi.json
content/en/docs/Responses/
    ...
    Subsonic-response/
        Subsonic-baseResponse.json
        Subsonic-failureResponse.json
        Subsonic-successResponse.json
        Subsonic-response.json
    Subsonic-response.md
    ...
```
IMO it should be under an openapi folder to not mix with the .md files. Because for now it's beta stuff and in the far future as you said we could generate the md files from the openapi files and so it's better not to mix them in the hierarchy.
For the folders under openapi I have no strong opinion except that it's better if we have some kind of minimal organization.
I don't know the best practices here, as I still have not found time to fully investigate this stuff.
Thanks for all the hard and tedious work here.
I've searched in the OpenAPI specs but could not find an easy way to indicate that some fields are OS-only. Not a big deal, but if you could confirm.
Can you also add a simple documentation page to state, that this is a WIP, is only relevant to OpenSubsonic servers due to above point but that new PR, additions, ... should also update the OpenApi part.
And about the OpenApi tags, do you think it's relevant to also add the specific OS tags too (Addition, change, ...) or that's not really how OpenApi tags should be used for the documentation part ?
> I've searched in the OpenAPI specs but could not find an easy way to indicate that some fields are OS-only. Not a big deal, but if you could confirm.
Unfortunately, the only way to do that is in the description. I didn't do it because at the time I thought this would be an OS-only schema; I thought maybe there would be new OS fields that are required and I wanted to mark them as such. It's not hard to also maintain a pure Subsonic version of openapi.json with a little bit of work (and inheritance), but that probably wasn't the question.
> Can you also add a simple documentation page to state that this is a WIP, is only relevant to OpenSubsonic servers due to the above point, but that new PRs, additions, ... should also update the OpenAPI part.
I'll add that. I was writing a readme last night in the openapi folder, but I'll see whether there's a better place for it.
> And about the OpenAPI tags, do you think it's relevant to also add the specific OS tags (Addition, change, ...), or is that not really how OpenAPI tags should be used for the documentation part?
It's kind of a layered question (and so is the answer).
In normal APIs, (Addition, change, ...) tags are not needed because that is denoted by the HTTP method used (GET, PUT, DELETE, POST, etc.), so it's always clear. Here that's not the case: while we can have multiple tags per endpoint in the OpenAPI spec, in generated documentation the endpoints will then show up multiple times (in multiple sections). So in practice, almost every spec uses one tag per endpoint. I wish tags could be just tags, for better searchability.
Plus I guess the Netlify build command needs to be updated with the command to bundle the schema for serving.
This I cannot do.
> In normal APIs, (Addition, change, ...) tags are not needed because that is denoted by the HTTP method used (GET, PUT, DELETE, POST, etc.), so it's always clear.
In the case of OS, this is not related to what the command does but to the changes from the original spec: Addition, Change, Clarification, Extension.
While they are not necessary in the scope of the main purpose of the OpenAPI, they are important for the documentation part if one day we generate docs from this. The documentation should help clients and servers to easily see changes needed to be OpenSubsonic compliant when they are currently only Subsonic compliant.
This is not needed to move on with this, but it is something we need to keep in mind for the future, to help other servers/clients easily join OpenSubsonic.
> Plus I guess the Netlify build command needs to be updated with the command to bundle the schema for serving. This I cannot do.
Pretty sure @deluan will be able to help for that small part.
> In the case of OS, this is not related to what the command does but to the changes from the original spec: Addition, Change, Clarification, Extension.
I see; in that case I recommend adding those tags, even if the generated Swagger/Redoc interactive documentation shows endpoints twice.
By the way, swagger-cli is not good enough at validating the output; redocly is better, but it has other issues (although it has a nice command to split a JSON file into separate files... would've come in handy a few days ago...).
So I recommend vacuum, with the command `vacuum lint static/openapi.json -d -e`, for validation instead. On the other hand, it's not OK at bundling at all...
I cannot do the Build+Validation changes because the netlify build process is not part of the repo.
@deluan Since you're back, can you delegate me the permissions on Netlify to make the changes ?
> @deluan Since you're back, can you delegate me the permissions on Netlify to make the changes ?
Later this weekend, I'll check if we can add multiple members in a Netlify team without cost, and also evaluate the migration to GitHub Pages, so all members of the organization can manage it.
Anyway, AFAICT, there's no need to change anything in the Netlify dashboard. All build configuration is done in the netlify.toml file for this project. @Gr3q, please check the docs here: https://docs.netlify.com/configure-builds/overview/
Also note that our website can show the SwaggerUI for the OpenAPI spec: https://www.docsy.dev/docs/adding-content/shortcodes/#swaggerui
It all works now.
If you put a working demo server URL that accepts CORS requests into the server block, the Swagger and Redoc pages will be fully functional. Currently they can only be used for browsing.
Discuss whether the location of the output openapi.json is OK; after it's out it might be harder to change in case other people automate against its URL in the future (I guess a redirect can always be added).
This looks great!
> If you put a working demo server URL that accepts CORS requests into the server block, the Swagger and Redoc pages will be fully functional. Currently they can only be used for browsing.
Not sure if I understand correctly what you are saying, but both Redoc and SwaggerUI are fully functional right now. Check the preview URL provided by Netlify (https://deploy-preview-137--opensubsonic.netlify.app)
What I meant is that e.g. in Swagger the "Try Out" -> "Execute" flow fails.
If you point it to a server as described above, it could actually make real requests against it, in case someone wants to look at real data.
We should just have the server part be editable (https://swagger.io/docs/specification/v3_0/api-host-and-base-path/) so that people can test against their own server, considering there are many different implementations and servers.
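For reference, an editable server could be expressed with OpenAPI server variables, which SwaggerUI renders as editable fields. This is only a sketch; the URL template and variable names below are placeholders, not what the PR ships:

```json
{
  "servers": [
    {
      "url": "{scheme}://{host}/rest",
      "description": "User-supplied server (placeholder values)",
      "variables": {
        "scheme": { "enum": ["http", "https"], "default": "https" },
        "host": { "default": "demo.example.com" }
      }
    }
  ]
}
```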
Just checked and all looks nice, thanks again for the work. Only issue I see is that swagger does not show the example answers while redoc does.
> We should just have the server part be editable (https://swagger.io/docs/specification/v3_0/api-host-and-base-path/) so that people can test against their own server.
Done, although they still can't test against their server if CORS is not allowed on it, which should be all servers.
> Only issue I see is that swagger does not show the example answers while redoc does.
I think this is a shortcoming of the Swagger shortcode in the docs, my local preview works fine. Time to actually test building the docs locally...and install the ancient version of hugo used for it.
I will see if explicitly adding examples makes them show up in the docs
> I will see if explicitly adding examples makes them show up in the docs
The answer is no, there were already some examples added to the success response and they don't show up.