Reduce number of tutorial rebuilds that CI performs
When dojo.io is published, it currently verifies every tutorial demo by installing its dependencies and building it. This makes deployment times very long and ties up multiple Travis build jobs while the work is done. With the coming of the web-editor (ref #93) this situation promises to get worse, since the web-editor needs each tutorial to have its dependencies installed (again) in order to export it in the format that the editor needs.
The build should be made more intelligent so that unchanged tutorial demos are not re-tested and exported.
One approach that could be used is to create a hash for each demo project and store it every time a demo is processed. A task could then be added to `grunt ci` that would compare each demo's current hash to the previously calculated one. A demo would be processed only if its hash has changed.
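As a rough sketch of that comparison step (the manifest path, demo layout, and `hashDirectory` helper are assumptions for illustration, not existing dojo.io build code; `hashDirectory` is sketched at the end of this issue):

```ts
// Hypothetical sketch: decide which demos need rebuilding by comparing a
// stored hash manifest (e.g. hashes.json) against freshly computed hashes.
import * as fs from 'fs';
import * as path from 'path';
import { hashDirectory } from './hashDirectory';

interface HashManifest {
	[demoName: string]: string;
}

export function demosToProcess(demosRoot: string, manifestPath: string): string[] {
	// Previously recorded hashes; an empty manifest means everything is processed.
	const stored: HashManifest = fs.existsSync(manifestPath)
		? JSON.parse(fs.readFileSync(manifestPath, 'utf8'))
		: {};

	// Only demos whose current hash differs from the stored one need processing.
	return fs.readdirSync(demosRoot)
		.filter((name) => fs.statSync(path.join(demosRoot, name)).isDirectory())
		.filter((name) => hashDirectory(path.join(demosRoot, name)) !== stored[name]);
}
```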
I see two options on how to do this:
1. Have Travis track and update the hashes, or
2. Give developers a task that processes the demos and updates the hashes for the ones that changed.
While option (1) would be preferable from an automation standpoint, it seems that it would necessitate allowing Travis to update the master branch in order to update the hashes for the processed demos. I'm not very comfortable with this. This would not be required if Travis had some other way of storing this kind of metadata, but I'm not aware of one. This should be researched and, if such a mechanism is available, this option would be my recommended path forward.
The second option requires contributors to remember to run the update task and, therefore, will be prone to human error. The impact of failing to do this would be small, however, since Travis would just conduct unnecessary processing. While this does waste time, it does not introduce the possibility of publishing the site with out-of-date artifacts. This option should be pursued if no acceptable way is found to fully automate the verification and update process.
If the processing and hash updates are done as part of a contributor-driven task, it could be preceded by the task that zips up each demo to simplify the creation of the hashes.
Using the .zip files won't work since the archive changes on every build, presumably due to metadata that is stored in the zip file (time of archiving, etc.). Instead, I am going to use a directory walker that concatenates the hash of each file and hashes the final result. This should be just about as fast and doesn't add too much complexity.
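A minimal sketch of that walker might look like the following (function names and the decision to skip node_modules are illustrative assumptions):

```ts
// Hypothetical sketch of the directory walker: hash every file, concatenate
// the per-file hashes in a stable (sorted) order, and hash the concatenation,
// so the result does not depend on archive metadata like timestamps.
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as path from 'path';

function hashFile(filePath: string): string {
	return crypto.createHash('sha256').update(fs.readFileSync(filePath)).digest('hex');
}

function walk(dir: string): string[] {
	// Sort entries so the walk order (and therefore the final hash) is deterministic;
	// skipping node_modules is an assumption, since dependencies are installed separately.
	return fs.readdirSync(dir).sort()
		.filter((entry) => entry !== 'node_modules')
		.reduce<string[]>((files, entry) => {
			const fullPath = path.join(dir, entry);
			return fs.statSync(fullPath).isDirectory()
				? files.concat(walk(fullPath))
				: files.concat(fullPath);
		}, []);
}

export function hashDirectory(dir: string): string {
	const combined = walk(dir).map(hashFile).join('');
	return crypto.createHash('sha256').update(combined).digest('hex');
}
```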