Containerized Builds with Tilt
I'm trying to set up a monorepo with multiple microservices for local development using Tilt, and I’ve hit a few roadblocks with live reloading and syncing that I’m hoping you can help with.
Project setup
- Monorepo: Multiple microservices are located in first-level subdirectories, each written in different languages (e.g., Rust, Go, Node.js).
- Tiltfile: There's a single Tiltfile at the root of the monorepo that describes how these microservices work together in the development environment.
- Local Kubernetes Cluster: All microservices, along with their dependencies (e.g., Postgres, Kafka, Redis), run on a local Kubernetes cluster.
- Containerized Builds: Importantly, each microservice is built inside a container, so developers don’t need to install language-specific build tooling on their host machines. Each service comes with its own Dockerfile.dev and Deployment.yaml.
Example Dockerfile for a Rust Microservice:
FROM rust:1.81
# lldb is included so we can debug inside the container
RUN apt-get update && apt-get install -y lldb
WORKDIR /app
# Build dependencies against a dummy main.rs first, so the dependency layer is
# cached independently of source changes
COPY ./Cargo.toml ./Cargo.lock ./
RUN mkdir -p src && echo "fn main() {}" > src/main.rs && cargo build
COPY ./src ./src
ENV RUST_LOG=debug
CMD ["cargo", "run"]
The Challenge
While building inside containers is fantastic for keeping the developer's machine clean, I’m running into issues with Tilt’s live reload and syncing features, which seem more geared towards builds happening on the host machine. Here are some of the problems I’ve encountered:
- Building inside the container with cargo watch: I run cargo watch inside the container to automatically detect source code changes, and I use sync() in live_update() to transfer source files into the container. However:
  - Problem 1: If I only sync source files, build artifacts (like Rust's target/ directory) are generated inside the container, and rust-analyzer on the host machine doesn't work, because it requires access to target/ to provide its code analysis features.
- Syncing the target/ directory: To resolve this, I tried sync()ing the target/ directory back to the host, but:
  - Problem 2: Syncing the target/ directory causes any change in target/ to trigger a live reload, which leads to a continuous rebuild loop and defeats the purpose of efficient live reloading.
- Mounting target/ with Kubernetes: I could theoretically mount the target/ directory using a Kubernetes volume (hostPath), but:
  - Problem 3: Kubernetes requires an absolute path on the host for mounting, which isn't ideal because I want to keep the setup agnostic of each engineer's specific directory structure on their dev machine.
The Ideal Setup
- Build inside the container: All build artifacts, such as target/, should be generated inside the container.
- Sync source code and target/: Source files should be synced into the container, and target/ should be synced back to the host for tools like rust-analyzer to function correctly.
- Avoid reload triggers on target/ changes: I need to sync() target/ back to the host without triggering live reloads whenever the target/ directory is updated.
Question
How can I approach this setup using Tilt to achieve:
- Containerized builds without needing local tooling on the host.
- Working rust-analyzer on the host with access to the target/ directory.
- Syncing of the target/ directory from container to host, without triggering live reload on target/ changes.
- Avoiding Kubernetes hostPath absolute-path requirements, to keep the setup agnostic of engineers' local directory structures.
Am I thinking about this the wrong way altogether?
Looking forward to your thoughts!
I'd probably try something like:
- syncback to sync the target back
- .tiltignore / watch_settings(ignore=) to ignore target on the host
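Untested, but roughly what I have in mind (the syncback parameters below are from memory, so double-check them against the extension's README):

# Host side: stop changes under target/ from retriggering builds.
# Either put **/target in a .tiltignore at the repo root, or in the Tiltfile:
watch_settings(ignore=['**/target'])

# Container -> host: pull the build artifacts back out with the syncback extension.
load('ext://syncback', 'syncback')
syncback(
    'rust-service-target-sync',
    'deployment/rust-service',
    '/app/target/',
    target_dir='./rust-service',
)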
Came to say the same thing. Here's the syncback extension.
Ah, but syncback says:
creates a manually-triggered local resource to sync files from a Kubernetes container back to your local filesystem
This means that every time the developer makes changes to the source code, once cargo watch inside the container picks up the changes and rebuilds the project, they would still need to manually trigger the syncback via the Tilt UI to sync the target/ directory.
This extra manual step makes syncback impractical for any reasonable workflow.
Is there a workaround for this, or have we hit a dead-end?
Noodling on this a bit. Have you played around at all with Buildkit exporters? https://docs.docker.com/build/exporters/#multiple-exporters
I might try setting it up like this:
- Use Tilt's custom_build to invoke image builds with a shell script
- Write a script that runs the build with multiple exporters
- The first exporter creates the image
- The second exporter loads the target directory to local disk
What do you think? (Multiple exporters only came out earlier this year and we haven't added native Tilt support yet, but they might work really well for this use case.)
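Very rough, untested sketch (script inlined for brevity; assumes a Buildx recent enough to accept multiple --output flags, and all names/paths are made up):

# Have BuildKit produce the image and also export the build's artifacts to the host.
custom_build(
    'rust-service',
    'docker buildx build -f rust-service/Dockerfile.dev' +
    ' --output type=docker,name=$EXPECTED_REF' +
    ' --output type=local,dest=rust-service/.build-out' +
    ' rust-service',
    deps=['rust-service/src', 'rust-service/Cargo.toml', 'rust-service/Cargo.lock'],
)

Caveat: type=local dumps the final stage's whole filesystem, so you'd probably either fish target/ out of the export dir or run a second (cached) build with --target pointed at an artifacts-only stage, and you'd still want to ignore the export dir so it doesn't retrigger builds.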