
Put the playground UI inside docker

aidanhs opened this issue 7 years ago • 4 comments

I've not yet updated the dev workflow since doing that inside docker isn't ideal (one issue is cargo redownloading packages every time).
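For the cargo re-download problem specifically, one common general-purpose workaround (not something from this repo) is to keep the cargo registry in a named Docker volume so it survives container rebuilds. A minimal sketch, assuming the official rust image and its default CARGO_HOME:

    # Keep the crate registry in a named volume so cargo doesn't re-download
    # everything on each run (paths assume the official rust image, where
    # CARGO_HOME=/usr/local/cargo).
    docker run --rm \
        -v "$PWD":/src -w /src \
        -v cargo-registry:/usr/local/cargo/registry \
        rust:latest \
        cargo build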

aidanhs avatar Jul 28 '17 17:07 aidanhs

When you asked for the Docker version for the official server, I realized that you don't have access to the current setup. It's different from what's documented in the README (that's the i32 deployment). I've been holding off on documenting it because I was under the impression that we were going to follow erickt's path, in which case we'd very likely need to be building an AMI anyway.

That said, here's the structure as I remember it, off the top of my head:

  1. We run the complete build on every commit to master, as well as at least daily using Travis' cron feature (example build). This is broken up into multiple stages to allow more things to happen in parallel, reducing the total wall-clock time.

  2. The build produces a number of artifacts (a rough sketch of this flow follows the list):

    • containers for each of the 3 channels — uploaded to Docker Hub
    • containers for rustfmt and clippy — uploaded to Docker Hub
    • The server binary proper — uploaded to S3
    • The frontend files — uploaded to S3
  3. The server has a crontab entry, configured with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, that runs hourly:

    #!/bin/bash
    
    set -euv -o pipefail
    
    root=/home/ec2-user
    
    # Get new docker images
    $root/fetch.sh
    
    # Clean old docker images
    docker system prune -f || true
    
    # Get new artifacts
    aws s3 sync s3://playground-artifacts $root/playground-artifacts
    # These artifacts don't change names and might stay the same size
    # https://github.com/aws/aws-cli/issues/1074
    aws s3 sync \
        --exclude='*' \
        --include=ui \
        --include=build/index.html \
        --include=build/robots.txt \
        --exact-timestamps \
        s3://playground-artifacts $root/playground-artifacts
    chmod +x $root/playground-artifacts/ui
    
    # Restart to get new server binary
    sudo stop playground || true
    sudo start playground
    
  4. The server has an Upstart service to ensure the binary keeps running

    description "The Rust Playground"
    author      "Jake Goulding"
    
    start on filesystem or runlevel [2345]
    stop on shutdown
    
    env TMPDIR=/mnt/playground
    env RUST_LOG=info
    env PLAYGROUND_UI_ADDRESS=0.0.0.0
    env PLAYGROUND_UI_PORT=8080
    env PLAYGROUND_UI_ROOT=/home/ec2-user/playground-artifacts/build
    env PLAYGROUND_CORS_ENABLED=1
    
    chdir /home/ec2-user/playground-artifacts
    script
        exec >>/var/log/upstart/playground.log 2>&1
        ./ui
    end script
    

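To make steps 1 and 2 concrete, here is a rough bash sketch of what the per-stage uploads amount to. The image names, directory layout, and exact aws invocations are assumptions; the real logic lives in the .travis/*.sh scripts.

    # Sketch only -- image names and paths are assumptions.

    # Stage: channel containers -> Docker Hub
    for channel in stable beta nightly; do
        docker build -t "shepmaster/rust-$channel" "compiler/$channel"
        docker push "shepmaster/rust-$channel"
    done

    # Stage: tool containers -> Docker Hub
    for tool in rustfmt clippy; do
        docker build -t "shepmaster/$tool" "$tool"
        docker push "shepmaster/$tool"
    done

    # Stage: backend and frontend -> S3 (the cron job in step 3 syncs this bucket)
    aws s3 cp ui/target/x86_64-unknown-linux-musl/release/ui s3://playground-artifacts/ui
    aws s3 sync ui/frontend/build s3://playground-artifacts/build
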
Pieces of this that I find important:

  • Continuous deployment — I don't want to SSH into the server except for troubleshooting.
  • The build in Travis is parallelized and optimized as much as possible — pull requests have to go through a similar process, but I skip building the containers and uploading the artifacts. At some future point, I want to actually run the Ruby integration tests in CI.

Pieces that may or may not be important:

  • The playground assets are cached reasonably heavily. The index page is cached for an hour and the assets proper are cached for a year. To date, I've just been letting old assets pile up because they aren't that big. Since they should be cached client-side, this is probably not a big worry.
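If it helps to see what that policy implies, here's one way to spot-check it; the host and the exact max-age values below are illustrative, derived from the "hour" / "year" figures above rather than read from the real configuration.

    # Illustrative only: PLAYGROUND_HOST and the asset path are placeholders.
    curl -sI "https://$PLAYGROUND_HOST/" | grep -i cache-control
    # expect something like: Cache-Control: public, max-age=3600        (index page, one hour)
    curl -sI "https://$PLAYGROUND_HOST/assets/$SOME_HASHED_ASSET" | grep -i cache-control
    # expect something like: Cache-Control: public, max-age=31536000    (fingerprinted assets, one year)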

Pieces that probably aren't important:

  • I've been cross-compiling to musl simply so I could run the binary directly on Amazon Linux without installing a bunch of gunk. If we are going to be running inside a container where we have control over the runtime, this can probably be removed.

shepmaster avatar Jul 29 '17 14:07 shepmaster

Now that I've had a chance to read through, I'm thinking we might be able to do this:

  1. Reuse the existing build and upload to S3 phases.
  2. Add a new phase at the end that downloads the artifacts from the previous phase, builds the combined Docker image from them, and uploads it to Docker Hub (a rough sketch follows).
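A minimal sketch of what that final phase could look like; the bucket name matches the existing cron script, but the image name and the Dockerfile it implies don't exist yet and are purely illustrative.

    #!/bin/bash
    # Hypothetical final Travis stage: fetch the artifacts built earlier, bake the
    # server binary and the frontend into a single image, and push it.
    set -euv -o pipefail

    aws s3 sync s3://playground-artifacts ./artifacts
    chmod +x ./artifacts/ui

    docker build -t shepmaster/rust-playground .
    docker push shepmaster/rust-playground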

This ensures that we are using a slightly more optimized build (see .travis/compile-frontend.sh). There's also some nuance that I don't currently remember around the specific musl environment to build in — it needs some C dependencies compiled for musl that our own nightly doesn't have (see the chosen Docker image in .travis/compile-backend.sh).
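For reference, that musl step boils down to something like this; ekidd/rust-musl-builder is one commonly used image that ships C dependencies pre-built for musl, though whether it's the exact image compile-backend.sh picks isn't confirmed here, and the ui/ mount path is an assumption.

    # Build the backend in a container that already has C deps compiled for musl.
    docker run --rm \
        -v "$PWD/ui":/home/rust/src \
        ekidd/rust-musl-builder \
        cargo build --release --target x86_64-unknown-linux-musl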

The easiest thing to do would be to copy .travis/build-containers.sh to a new name and have it only call ui/build.sh.

What do you think?

shepmaster avatar Jul 29 '17 15:07 shepmaster

[compiling to musl] can probably be removed.

To be clear, I don't think we should do that during this PR; I'd rather have fewer moving parts.

shepmaster avatar Jul 29 '17 15:07 shepmaster

(It looks like you get parallel builds from Travis, so I'll continue based on that assumption.)

Just a thought: since the playground frontend and backend don't actually depend on rustfmt or clippy, you can cut ~6min from your build time by just moving them into the tools stage.

If you were then to combine frontend and backend, you'd still be saving 5min 30s due to how long clippy takes to compile. This lets you eliminate S3.

I think this is a nice result both because of fewer moving parts and because it lets people run the exact same build scripts themselves with no changes.

aidanhs avatar Aug 01 '17 16:08 aidanhs