DeepSpeech
Moving away from TaskCluster
TaskCluster is a CI service provided by Mozilla, available both to Firefox development (the Firefox-CI instance) and to community projects on GitHub (the Community TaskCluster instance). It is widely used across Mozilla projects and has its own advantages. In our case, control over tasks, over workers for specific needs, and over long build times was easier to achieve by working with the TaskCluster team than by relying on other CI services.
However, this has led to CI code that is very specific to the project, and it has become a source of frustration for non-employees trying to send patches and get involved in the project: some of the CI parts were "hand-crafted", and triggering builds and tests requires being a "collaborator" on the GitHub project, which has other implications that make it complicated to enable for just anyone. In the end, this creates an artificial barrier to contributing to this project; even though we happily run PRs manually, it is still frustrating for everyone. The issue https://github.com/mozilla/DeepSpeech/issues/3228 was an attempt to fix that, but we came to the conclusion that it would be more beneficial for everyone to switch to a well-known CI service and a setup that is less intimidating. While TaskCluster is a great tool and has helped us a lot, we feel its limitations now make it inappropriate for a project that wants to stimulate and enable external contributions.
We would also like to take this opportunity to enable more contributors to hack on and own the CI-related code, so discussion is open.
Issues for GitHub Actions:
- [x] Basic macOS #3573
- [ ] Basic Linux #3574
- [ ] Basic Windows #3575
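As a very rough sketch of where those could start (hypothetical, not the actual workflows from the issues above), a single matrix build covering the three OSes:

```yaml
# .github/workflows/build.yml -- hypothetical starting point,
# not the actual workflows tracked in #3573/#3574/#3575
name: build
on: [push, pull_request]

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-20.04, macos-10.15, windows-2019]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      # placeholder for the real per-platform build steps
      - run: ./ci_scripts/build.sh   # hypothetical script name
        shell: bash
```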
What do you think about GitLab's built-in CI features?
I'm using it for my Jaco-Assistant project and I'm quite happy with it, because it currently supports almost all my requirements. The pipeline does linting checks and some code statistics calculation, and I'm using it to provide prebuilt container images (you could build and provide the training images from there, for example). See my CI setup file here.
There is also an official tutorial for usage with GitHub: https://about.gitlab.com/solutions/github/ And it's free for open source projects.
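For a flavor of the syntax, a minimal `.gitlab-ci.yml` along those lines could look like this (illustrative only, not my actual config):

```yaml
# illustrative .gitlab-ci.yml, not the real Jaco-Assistant config
stages:
  - lint
  - build

lint:
  stage: lint
  image: python:3.8
  script:
    - pip install pylint
    - pylint training/   # hypothetical target directory

# build a container image (e.g. a prebuilt training image) and
# push it to the project's registry
build-image:
  stage: build
  image: docker:19.03
  services:
    - docker:19.03-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/train:latest" .
    - docker push "$CI_REGISTRY_IMAGE/train:latest"
```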
What do you think about GitLab's built-in CI features?
That would mean moving to GitLab, which raises other questions. I don't have experience with their CI, even though I use GitLab for some personal projects (from gitorious.org).
Maybe I should post a detailed explanation of our usage of TaskCluster to help there?
That would mean moving to GitLab
No, you can use it with GitHub too.
From: https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/
Instead of moving your entire project to GitLab, you can connect your external repository to get the benefits of GitLab CI/CD.
Connecting an external repository will set up repository mirroring and create a lightweight project with issues, merge requests, wiki, and snippets disabled. These features can be re-enabled later.
To connect to an external repository:
1. From your GitLab dashboard, click New project.
2. Switch to the CI/CD for external repo tab.
3. Choose GitHub or Repo by URL.
4. The next steps are similar to the import flow.
Maybe I should post a detailed explanation of our usage of TaskCluster to help there?
I think this is a good idea. But you should be able to do everything on GitLab CI, as long as you can run it in a Docker container without special flags.
in a Docker container
We also need support for Windows, macOS and iOS, which cannot be covered by Docker.
Our current usage of TaskCluster:
We leverage the current features:
- building a graph of tasks with dependencies: https://github.com/mozilla/DeepSpeech/blob/master/taskcluster/tc-decision.py
- artifact with indexes: https://community-tc.services.mozilla.com/tasks/index/project.deepspeech
- building multiple archs:
- linux/amd64 (via docker-worker)
- linux/aarch64 (cross-compilation, docker-worker)
- linux/rpi3 (cross-compilation, docker-worker)
- android/armv7 (cross-compilation, docker-worker)
- android/aarch64 (cross-compilation, docker-worker)
- macOS/amd64 (native, generic-worker, deepspeech-specific hardware deployment)
- iOS/x86_64 (native, reusing the macOS infra)
- iOS/aarch64 (native, reusing the macOS infra)
- Windows/amd64 (native, generic-worker, deepspeech pool managed by the TaskCluster team)
- testing on multiple archs:
- linux/amd64 (docker-worker)
- linux/aarch64 (native, deepspeech specific hardware, docker-worker)
- linux/rpi3 (native, deepspeech specific hardware, docker-worker)
- android/armv7 (docker-worker + nested virt)
- android/aarch64 (docker-worker + nested virt)
- macOS/amd64 (native, deepspeech specific hardware deployment, generic-worker)
- iOS/x86_64 (native, reusing macOS infra)
- Windows/amd64 (native, generic-worker, deepspeech pool managed by the TaskCluster team)
- Windows/CUDA (native, generic-worker with NVIDIA GPU, deepspeech pool managed by the TaskCluster team)
- Documentation on ReadTheDocs + Github webhook to generate on PR/push/tag
- Pushing to repos:
- Docker Hub via CircleCI
- Everything else via scriptworker instance running on Heroku:
- NPM
- Pypi
- Nuget
- JCenter
- Github
Hardware:
- Set of GCP VMs for Linux+Android builds/tests
- Set of AWS VMs for Windows builds/tests
- 4x MacBook Pro for macOS setups, with VMware Fusion and sets of build/test VMs configured
- ARM hardware self-hosted:
- 6x LePotato boards for Linux/Aarch64 tests
- 6x RPi3 boards for Linux/ARMv7 tests
- `tc-decision.py` is in charge of building the whole graph of tasks describing a PR or a push/tag:
  - PRs run tests
  - Pushes run builds
  - Tags run builds + uploads to repositories
- YAML description files in `taskcluster/*.yml` describe the tasks
- dependencies between tasks are based on the `.yml` filename (without `.yml`)
- the decision task is created by `.taskcluster.yml` (canonical entry point of the TaskCluster / GitHub integration) + `taskcluster/tc-schedule.sh`
- https://community-tc.services.mozilla.com/docs
- Dry-run of the decision task for a PR: `LC_ALL=C GITHUB_EVENT="pull_request.synchronize" TASK_ID="aa" GITHUB_HEAD_BRANCHORTAG="branchName" GITHUB_HEAD_REF="refs/heads/branchName" GITHUB_HEAD_BRANCH="branchName" GITHUB_HEAD_REPO_URL="aa" GITHUB_HEAD_SHA="a" GITHUB_HEAD_USER="a" GITHUB_HEAD_USER_EMAIL="a" python3 taskcluster/tc-decision.py --dry`
- Dry-run for a tag: `LC_ALL=C GITHUB_EVENT="tag" TASK_ID="aa" GITHUB_HEAD_BRANCHORTAG="branchName" GITHUB_HEAD_REF="refs/heads/branchName" GITHUB_HEAD_BRANCH="branchName" GITHUB_HEAD_REPO_URL="aa" GITHUB_HEAD_SHA="a" GITHUB_HEAD_USER="a" GITHUB_HEAD_USER_EMAIL="a" python3 taskcluster/tc-decision.py --dry`
Execution is encapsulated within bash scripts:
- Only bash, for ease of hacking
- Re-usable across all platforms (Linux, macOS, Windows), whereas Docker would cover only Linux
- TensorFlow build:
  - `tf_tc-setup.sh`: perform setup steps for TensorFlow builds (install Bazel, CUDA, etc.)
  - `tf_tc-build.sh`: perform the build of TensorFlow
  - `tf_tc-package.sh`: package the TensorFlow build dir as `home.tar.xz` for re-use
  - exact re-use of the TensorFlow build dir is required for Bazel to properly re-use its caching
- DeepSpeech build:
  - same architecture, spread over:
    - `taskcluster/tc-all-utils.sh`
    - `taskcluster/tc-all-vars.sh`
    - `taskcluster/tc-android-utils.sh`
    - `taskcluster/tc-asserts.sh`
    - `taskcluster/tc-build-utils.sh`
    - `taskcluster/tc-dotnet-utils.sh`
    - `taskcluster/tc-node-utils.sh`
    - `taskcluster/tc-package.sh`
    - `taskcluster/tc-py-utils.sh`
I have been using GitLab CI (the on-prem community edition) for about three years at my workplace, and so far I have been very happy with it. @lissyx I believe GitLab CI supports all the requirements you listed above - I've personally used most of those features.
The thing I really like about GitLab CI is that it seems to be a very important feature for the company: they release updates frequently.
@lissyx I believe GitLab CI supports all the requirements you listed above - I've personally used most of those features.
Don't hesitate if you want to; I'd be happy to see how you can do macOS or Windows builds / tests.
Windows builds might be covered with some of their beta features: https://about.gitlab.com/blog/2020/01/21/windows-shared-runner-beta/
For iOS I think you would need to create your own runners on the macbooks and link them to the CI. They made a blog post for this: https://about.gitlab.com/blog/2016/03/10/setting-up-gitlab-ci-for-ios-projects/
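From the blog post, jobs opt into the shared Windows runners via tags, roughly like this (tag names as given in the beta announcement; they may have changed since):

```yaml
# sketch based on the Windows shared runners beta announcement
windows-build:
  tags:
    - shared-windows
    - windows
    - windows-1809
  script:
    - echo "native Windows build steps go here"
```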
Windows builds might be covered with some of their beta features: https://about.gitlab.com/blog/2020/01/21/windows-shared-runner-beta/
For iOS I think you would need to create your own runners on the macbooks and link them to the CI. They made a blog post for this: https://about.gitlab.com/blog/2016/03/10/setting-up-gitlab-ci-for-ios-projects/
I have no time to take a look at that, sadly.
@DanBmh @opensorceror Let me be super-clear: what you shared looks very interesting, but I have no time to dig into that myself. If you guys are willing, please go ahead. One thing I should add is that for macOS, we would really need something to be hosted: the biggest pain was maintaining those machines ourselves. If we move to GitLab CI but there is still a need to babysit them, it's not really worth the effort.
Personally I'm a bit hesitant to work on this by myself, because the CI config of this repo seems too complex for a lone newcomer to tackle.
FWIW, I did a test connecting a GitHub repo with GitLab CI...works pretty well.
I'm not sure where we would find hosted macOS options though.
Personally I'm a bit hesitant to work on this by myself, because the CI config of this repo seems too complex for a lone newcomer to tackle.
Of course
FWIW, I did a test connecting a GitHub repo with GitLab CI...works pretty well.
That's nice, I will have a look.
I'm not sure where we would find hosted macOS options though.
That might be the biggest pain point.
FWIW, I did a test connecting a GitHub repo with GitLab CI...works pretty well.
Can it do something like we do with TC, i.e., precompile bits and fetch them as needed? This is super-important, because when you have to rebuild TensorFlow with CUDA, we're talking about hours even on decent systems.
So to overcome this, we have https://github.com/mozilla/DeepSpeech/blob/master/taskcluster/generic_tc_caching-linux-opt-base.tyml + e.g., https://github.com/mozilla/DeepSpeech/blob/master/taskcluster/tf_linux-amd64-cpu-opt.yml
It basically:
- does a setup + bazel build step on TensorFlow with the parameters we need
- produces a tar we can re-use later
- stores it on the TaskCluster index infrastructure
Which allows us to have caching we can periodically update, as you can see there: https://github.com/mozilla/DeepSpeech/blob/master/taskcluster/.shared.yml#L186-L260
We use the same mechanism for many components (SWIG, pyenv, homebrew, etc.) to make sure we keep build times decent on PRs (~10-20 min of build more or less, ~2 min for tests) so that a PR can complete in under 30-60 mins.
That would be possible; it's also called artifacts in GitLab. You should be able to run the job periodically, or only if certain files changed in the repo.
I'm doing something similar here, saving the following image, which I later use in my readme.
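A rough, untested sketch of how that could look, reusing your existing `taskcluster/` script names (`./ci/build.sh` is a made-up entry point): a scheduled job rebuilds the TensorFlow tarball, and later jobs reuse it via a cache keyed on the file that defines the build:

```yaml
# untested sketch; a scheduled pipeline refreshes the prebuilt tarball,
# regular pipelines just pull it from the cache
stages:
  - prebuild
  - build

build-tensorflow:
  stage: prebuild
  # rebuild only on the scheduled pipeline or when the build recipe changes
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
    - changes:
        - taskcluster/tf_tc-build.sh
  script:
    - ./taskcluster/tf_tc-setup.sh
    - ./taskcluster/tf_tc-build.sh
    - ./taskcluster/tf_tc-package.sh   # produces home.tar.xz
  cache:
    key:
      files:
        - taskcluster/tf_tc-build.sh   # cache invalidated when this changes
    paths:
      - home.tar.xz
    policy: push

build-deepspeech:
  stage: build
  cache:
    key:
      files:
        - taskcluster/tf_tc-build.sh
    paths:
      - home.tar.xz
    policy: pull    # reuse the prebuilt tarball, never re-upload it
  script:
    - tar xf home.tar.xz
    - ./ci/build.sh   # hypothetical build entry point
```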
That would be possible; it's also called artifacts in GitLab. You should be able to run the job periodically, or only if certain files changed in the repo.
I'm doing something similar here, saving the following image, which I later use in my readme.
Nice, and can those be indexed like what TaskCluster has?
can those be indexed like what TaskCluster has?
Not sure what you mean by this. You can give them custom names or save folders depending on your branch names for example, if this is what you mean.
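For example (illustrative):

```yaml
# illustrative job: one artifact archive per branch
package:
  script:
    - ./taskcluster/tc-package.sh
  artifacts:
    name: "deepspeech-$CI_COMMIT_REF_NAME"   # named after the branch/tag
    paths:
      - artifacts/
```

As far as I know, GitLab also serves the latest artifact of a given branch and job under a stable URL, which is probably the closest thing to TaskCluster's index.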
can those be indexed like what TaskCluster has?
Not sure what you mean by this. You can give them custom names or save folders depending on your branch names for example, if this is what you mean.
Ok, I think I will try to use GitLab CI for a pet project of mine that lacks CI :), that will help me get a grasp of the landscape.
@DanBmh @opensorceror I have been able to play with a small project of mine on GitLab CI, and I have to admit that after scratching the surface, it seems nice. I'm pretty sure we can replicate the same things, but obviously it requires reworking the CI handling.
However, I doubt this can work well on a "free tier" plan, so I think if there's a move in that direction it will require some investment, including to get support for Windows and macOS. We have been able to get access to our current TaskCluster costs, and thanks to the optimizations we landed back in August, we can run the same workload as before for a fairly small amount of money.
I guess it's mostly a question of people stepping up and doing, at some point :)
@lissyx you can also look into Azure Pipelines; it has a free tier, and self-hosted agents that can run locally.
@lissyx you can also look into Azure Pipelines; it has a free tier, and self-hosted agents that can run locally.
Thanks, but I'm sorry, I can't spend more time than I already have; I'm not 100% on DeepSpeech anymore, and I have been spending too much time on it in the past weeks.
@DanBmh @opensorceror @stepkillah Do you know of something that would allow us to have beefy managed macOS (and Windows) instances on GitLab CI? After a few weeks of hacking over there, I'm afraid we'd be exactly in the same position as we are today with TaskCluster, with the big difference that we know TaskCluster, and we are still in direct contact with the people managing it, so fixing issues is quite simple for us.
I insist on beefy, because building TensorFlow on the machines we have (MacBook Pro circa 2017, running several VMs), even on bare metal, already takes hours. Now we have some caching in place everywhere to limit the impact, but even `brew` needs it.
@DanBmh @opensorceror @stepkillah Do you know of something that would allow us to have beefy managed macOS (and Windows) instances on GitLab CI? After a few weeks of hacking over there, I'm afraid we'd be exactly in the same position as we are today with TaskCluster, with the big difference that we know TaskCluster, and we are still in direct contact with the people managing it, so fixing issues is quite simple for us.
I insist on beefy, because building TensorFlow on the machines we have (MacBook Pro circa 2017, running several VMs), even on bare metal, already takes hours. Now we have some caching in place everywhere to limit the impact, but even `brew` needs it.
Define "beefy".
Could you please remind me, what was the reason for building TensorFlow ourselves?
If building TensorFlow is really that complicated and time-consuming, wouldn't using a prebuilt version for all GPU devices, and the TFLite runtime (optionally with a non-quantized model) for all other devices, be an easier option?
@DanBmh @opensorceror @stepkillah Do you know of something that would allow us to have beefy managed macOS (and Windows) instances on GitLab CI? After a few weeks of hacking over there, I'm afraid we'd be exactly in the same position as we are today with TaskCluster, with the big difference that we know TaskCluster, and we are still in direct contact with the people managing it, so fixing issues is quite simple for us. I insist on beefy, because building TensorFlow on the machines we have (MacBook Pro circa 2017, running several VMs), even on bare metal, already takes hours. Now we have some caching in place everywhere to limit the impact, but even `brew` needs it.
Define "beefy".
At least 8GB of RAM, preferably 16GB, and at least 8 CPUs.
Could you please remind me, what was the reason for building TensorFlow ourselves?
If building TensorFlow is really that complicated and time-consuming, wouldn't using a prebuilt version for all GPU devices, and the TFLite runtime (optionally with a non-quantized model) for all other devices, be an easier option?
`libdeepspeech.so` statically links TensorFlow, plus we need to carry some patches.
On TaskCluster we already have some prebuilding in place, but producing this artifact takes varying amounts of time:
- ~3h on our current macOS infra
- ~20min on our current Linux builders
So each time we work on TensorFlow (upgrading to newer releases, etc.), it's "complicated". Currently, what we achieve is "sustainable", although painful. However, given the performance of what I could test on GitLab CI / AppVeyor, it's not impossible that our build times would skyrocket, which would significantly slow things down.
One thing we could do is simply drop support for the full TF runtime on macOS. I can't think of a reason to use the TF runtime on anything that isn't a server-based deployment, where everyone will be running Linux.
The TFLite setup is much simpler to support and also much more useful for everyone deploying DeepSpeech client-side/on-device.
The TFLite setup is much simpler to support and also much more useful for everyone deploying DeepSpeech client-side/on-device.
That being said, the macOS "free tier" infra that I could test on AppVeyor only allows one parallel build, so even if we limit the amount of things, it'd be complicated.
One thing we could do is simply drop support for the full TF runtime on macOS. I can't think of a reason to use the TF runtime on anything that isn't a server-based deployment, where everyone will be running Linux.
Wouldn't the same then also be true for Windows?
Maybe another option would be to use the official TensorFlow builds, and let users install those if they really want to use the extra GPU power?
Wouldn't the same then also be true for Windows?
Yes. But Windows servers, despite being rare, are at least a thing, and the platform is not as hard to support as macOS. Official TensorFlow builds on macOS don't have GPU support anymore, so I don't see how doing the work to move to them would be beneficial.