readthedocs.org
Upload pre-built docs
I wonder if that would be possible? It would be extremely useful, since I'm building and deploying my packages with Travis, and could perform the doc building during these runs as well.
You may ask why this is necessary, since it's not obvious that I can't use your build environment for documentation. I have to convert some IPython notebooks with cabal to rst and include them in my Sphinx build. This isn't possible on RTD and probably never will be.
Thanks for your effort!
We agree this would be an awesome feature. We've already talked about how this might work, and it's vaguely on our roadmap already.
What is the status of API v2.1?
:+1: to this. I could really use a "dumb file upload" API, where I can just provide a zip/tar/whatever of HTML+CSS files, along with some basic metadata like what version it's for.
I'm also very interested in this. Since I try to generate as much documentation as possible from code comments, it would be very easy to generate the pages during CI and upload them to RTD, without needing to check in these generated files!
IIRC this used to work, or at least, there was a file upload option but it was removed?
What needs to be done to overcome the original problems with this feature? It would be nice to have again, and could even allow us to offload some build overhead from RTD.
Maybe it has been removed, so people don't misuse rtfd as a free hosting service?
I believe it was once a feature of RTD, but it was removed. We'd like to have this supported with an overhaul of the API -- a task we haven't added to our roadmap yet.
The more pressing issue we need to take a stance on is what we are accepting via HTML upload. RTD doesn't aim to be just a file host -- you could output and host documentation almost anywhere on the internet. Our goal is making the documentation authoring and viewing experience great, with Sphinx being our primary focus. The documentation experience would go south if we just took any old HTML and hosted it. We don't have the resources or community support to improve the experience of all the various documentation outputs folks use.
We're still shaping our opinion here, but we aim to support some form of integration with externally built Sphinx documentation, via some local command wrappers that you could run on external CI services, etc. We don't have an ETA on when we might put that on our roadmap, but it is on our radar.
It seems the biggest issue is then just ensuring a certain level of quality, and not allowing spam and/or non-docs to be hosted. Perhaps the feature could be enabled only for validated users. Or maybe even paying (?) users? I can see how it would be difficult to handle that resource-wise, though.
This feature would be great, even if it were turned off on the public RTD server but available for corporate-hosted ones, where the concerns over hosting generic HTML would not be an issue. The builds of our repos are just too contorted to expect an RTD server to check out and build, whereas our Jenkins CI server does all of that already.
I can't get RTD to compile my project because of external dependencies that require OS modules. I'd love to be able to upload my compiled docs.
Since this issue isn't moving, I have to rely on pythonhosted and kill the RTD entry for my project, losing some SEO love....
@mistercrunch if you don't use these dependencies within your documentation at all, you can mock them out or use conda.
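For reference, mocking dependencies can be done directly in Sphinx's conf.py via sphinx.ext.autodoc. A minimal sketch, where the module names are placeholders for whatever compiled or OS-level dependencies your project imports:

```python
# conf.py (sketch) -- the module names below are placeholders for
# whatever OS-level or compiled dependencies your project imports.
extensions = ["sphinx.ext.autodoc"]

# autodoc replaces these modules with mock objects at import time,
# so your code can be imported without the real dependencies installed.
autodoc_mock_imports = ["numpy", "scipy", "my_c_extension"]
```

With this in place, RTD only needs to install Sphinx and your pure-Python code, not the heavy dependencies themselves.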
I would love to see this too. We are having a hard time mocking our deps on RTD, and we already build on Travis. Conda may help here, but it still seems like duplicated effort given that we already have everything in Travis...
Alternatively, if we could run in RTD the same docker containers that we already use in Travis, that would also be great.
Hi all! This topic was raised again, and we started discussing it and brainstorming ideas. However, we don't have a clear direction yet on what to support and/or how and/or where (community/corporate site).
We have some things to consider for anything that we implement:
- how to deal with spam/non-docs projects
- upload PDF/epub/singlehtml versions
- once a project uploads docs, what to do with "building docs on RTD"
- ensuring the output is valid for RTD (contains our custom JS files that serve ads, or the flyout menu, for example)
- generate search index
- support multiple versions for uploaded docs
- support translations
On the other hand, we need to consider other things around the workflow:
- write the logic as a Sphinx extension, to be able to get metadata from inside, and rely on the user to install all the dependencies and deal with setup, LaTeX, etc., or
- give users a Docker image (dealing with root permissions of output files) with everything pre-installed to build the docs, plus a guide to extend it, or
- ... another option
Please, if you have any ideas to share about this topic, write them down here to help us choose the right direction. We still don't know where this feature fits best, though.
The Docker option also sounds very interesting, as it would make mocking C extensions completely obsolete.
Personally, we already moved our doc generation to a travis+github pages based solution (using doctr, see https://github.com/taurus-org/taurus/pull/572).
Please do not take it as if I were saying "Too late - I don't care", but in the sense that, IMHO, the best option is to support something that can be integrated into a Travis (and/or GitLab CI) build.
In this way you do not need to provide a special API for building (nor spend CPU time on builds yourselves). Just provide a Docker image that can be used to easily build the docs (according to your specifications) in Travis/GitLab CI and deploy them to RTD.
Here is another use case for this.
Currently non-setuptools builds (like Poetry) are supported via PEP 517. Unfortunately, these builds are not deterministic because lock files (like poetry.lock) are not taken into account. Also, you need to manage the documentation build dependencies via extras, rather than development dependencies.
(Of course, an alternative way to support this would be to extend the python.install configuration option to support tools like Poetry directly.)
@cjolowicz is completely right (IMO). I specifically want support for Poetry, but judging from this thread, a generic pre-built option might be best. There should also be support for automated deployment to ReadTheDocs.org from the /docs folder, similar to GitHub Pages.
By the way, we already support Poetry using pip. This is thanks to PEP 517: https://python-poetry.org/docs/pyproject/#poetry-and-pep-517
It'd be convenient if RTD could automatically install dev dependencies relating to click. For example, I have sphinx-click in my dev dependencies, but I still need to add a requirements.txt with a sphinx-click entry so that RTD knows to install it.
I have sphinx-click in my dev dependencies,
Do you mean you have them in your setup.py file? Then you should use the setup method to install your package https://docs.readthedocs.io/en/stable/config-file/v2.html#packages
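For reference, that install method is declared in the v2 config file. A minimal sketch, where the "docs" extra is a hypothetical name for your documentation dependencies:

```yaml
# .readthedocs.yaml (sketch): install the project itself with pip so
# its declared dependencies are available during the docs build.
# "docs" is a hypothetical extra grouping the doc-build dependencies.
version: 2
python:
  install:
    - method: pip
      path: .
      extra_requirements:
        - docs
```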
sphinx-click is in the dev-dependencies section of my pyproject.toml (I don't have a setup.py). I think RTD could detect whether it's installing from a pyproject.toml and, if so, install dev dependencies that depend on Sphinx.
@sumanthratna I think a current workaround is using extras https://github.com/readthedocs/readthedocs.org/issues/4912#issuecomment-457219385 https://python-poetry.org/docs/pyproject/#extras
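The extras workaround looks roughly like this in Poetry. A sketch, with hypothetical version constraints and a "docs" extra name:

```toml
# pyproject.toml fragment (sketch): declare doc tools as optional
# dependencies and group them into a "docs" extra, which RTD can then
# install via python.install.extra_requirements in .readthedocs.yaml.
[tool.poetry.dependencies]
python = "^3.8"
sphinx = { version = "^4.0", optional = true }
sphinx-click = { version = "^3.0", optional = true }

[tool.poetry.extras]
docs = ["sphinx", "sphinx-click"]
```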
Hi @sumanthratna! If you have a suggestion that isn't related to "Upload pre-built docs", please open a new issue. There are people subscribed here waiting for this issue and receiving notifications of an unrelated feature right now. Appreciated :)
We are generating documentation input from executables built from C++ sources in an expensive build process. Since we are a subproject of a larger project using readthedocs, being able to use readthedocs ourselves would be great. It looks like our only option, with this unimplemented, is to redirect to some externally hosted documentation output.
We are generating documentation input from executables built from C++ sources in an expensive build process.
@bbannier If it's only a problem about resources, please open a new issue so we can take a deeper look and maybe we can assign the resources your project needs. Thanks!
My project requires building the docs from a C++ library that needs GCC 10 or LLVM + libc++ to build, so it can't be built on the RTD Docker images. This would be nice to see.
What about allowing certain Docker images for building (on RTD, of course), validating the build output outside of that, and then, if a "stack" reviewer accepts it, releasing it into the wild wild internet? I guess there must be a duplicate already suggesting this.
:wave: So, just an update here for anyone following along:
This is loosely going on our roadmap for sometime this year. However, we have a number of technical pieces we need to handle first. There are some big question marks here still.
That said, a number of the issues in comments here already have solutions available, without requiring upload of pre-built documentation. In the better part of a decade this issue has been open, we've addressed a number of build and dependency issues with new features:
My project requires special OS packages
You can use build.apt_packages to hopefully install what you need now:
- https://docs.readthedocs.io/en/stable/config-file/v2.html#build-apt-packages
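A sketch of what that looks like in the v2 config file; the OS/tool versions and package names here are examples only:

```yaml
# .readthedocs.yaml (sketch); package names are examples only.
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
  apt_packages:
    - graphviz
    - libgraphviz-dev
```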
My project needs to execute special commands before build
We'll have a beta in the next couple of weeks for the build.jobs configuration option, enabling commands in pre/post build steps. This should solve a number of issues users have with building their documentation outside their local dev environment.
- https://github.com/readthedocs/readthedocs.org/pull/9016
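A sketch of the build.jobs option; the commands shown are hypothetical examples:

```yaml
# .readthedocs.yaml (sketch) using build.jobs; the script name and
# commands below are hypothetical examples.
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  jobs:
    pre_build:
      - python scripts/generate_api_docs.py
    post_build:
      - echo "Docs build finished"
```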
My project executes different commands than Read the Docs
We are gathering feedback for this feature. Perhaps build.jobs implemented in #9016 works for most projects already.
- #9062
If you have other use cases that are not well covered by our build infrastructure now, feel free to open a separate issue.
I'll keep this issue open, but we'll be putting discrete issues on our roadmap. I'll remove the design decision designation; this isn't to communicate that we have a concrete plan here, but we'll have discussion on individual issues as they come up. There are certainly a lot of questions here still.
We haven't had any serious progress here yet this year, but I am currently experimenting with a solution for this using build.jobs and magic-wormhole for file transfer towards RTD. So far, a script starts a new build and then sends files to the build process directly, while the build process waits for these files.
Noting for later, my proof of concept is at: https://github.com/readthedocs/test-builds/tree/wormhole
I wouldn't recommend this for use yet; there are a few things that will likely change here. I think a one-time codeword would be a good addition, and wormhole relays seem unstable. A drop-in replacement would be croc, or maybe even running a private relay.
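For anyone following along, the rough shape of such a transfer looks like this. A sketch only: the wormhole commands are shown as comments because they require the magic-wormhole CLI on both ends, and the paths and dummy page here are placeholders:

```shell
# Package locally built HTML so it can be sent in a single transfer.
# (A dummy page is created here just so the sketch is self-contained.)
mkdir -p _build/html
echo "<html><body>docs</body></html>" > _build/html/index.html
tar czf docs.tar.gz -C _build/html .

# On CI (assumes the magic-wormhole CLI is installed):
#   wormhole send docs.tar.gz      # prints a one-time code
# Inside the RTD build job (e.g. a build.jobs step):
#   wormhole receive <code>        # waits for and fetches the archive
```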
I'm also very interested in this feature. Our build processes are quite complex and involve a substantial amount of work to complete. Documentation is extracted partly from the compiled code (notably Python extension modules built using C++ and Boost.Python), so running those build processes within RTD is not really an option. The best solution would really be to have a (REST ?) API to push the generated docs directly into RTD, rather than building them on-site.