Documentation Request: Running FrankenPHP behind Caddy/Load Balancer
Hey,
I'm having a hard time finding any documentation about running FrankenPHP behind a (Caddy) load balancer.
I'm assuming that I'd need a container running plain Caddy with Vulcain, Mercure and Souin, and then reverse proxy to multiple FrankenPHP instances built without vulcain, mercure and souin?
Or should I just run multiple FrankenPHP+Vulcain+Mercure+Souin containers and put something like an AWS ALB in front of them?
The beauty of FrankenPHP is that you can run it at the edge (no need for Caddy → PHP-FPM or nginx → PHP-FPM). IIRC, Souin now supports FrankenPHP, so you can use it in tandem with everything you need in one Caddy process. The resulting process is lightweight enough to scale horizontally pretty well.
Note that if you are going to put a load balancer (NLB in TCP/UDP mode, to get that HTTP/3 support) in front of Caddy/FrankenPHP and have Caddy do TLS via ACME, you may need an S3 storage plugin for certificates (otherwise it is likely you'll run into rate limits).
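For reference, a hedged sketch of what shared certificate storage could look like in the Caddyfile global options. The exact directive and field names vary between the available S3 storage plugins (there are several community ones), and the bucket/host values here are placeholders, not a recommendation:

```caddyfile
{
    # Shared certificate storage so every FrankenPHP instance behind
    # the NLB sees the same ACME certificates instead of each one
    # requesting its own (which is what triggers the rate limits).
    # Check the README of the S3 storage plugin you compile in for
    # its exact option names.
    storage s3 {
        host s3.amazonaws.com
        bucket my-caddy-certs
        prefix caddy
        access_id {env.AWS_ACCESS_KEY_ID}
        secret_key {env.AWS_SECRET_ACCESS_KEY}
    }
}
```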
So the only thing I'd have to do is put Mercure into a separate container, and then Vulcain/Souin will "just work"?
I'm assuming I'd have to run Souin with something like Olric Embedded or Redis?
Mainly asking because Vulcain currently doesn't seem to work when it's used on the same instance.
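For anyone landing here later: running Souin against a shared Redis might look roughly like the sketch below. This follows the style of the Souin Caddy module's Caddyfile options, but verify it against the version you build; the Redis URL (e.g. an ElastiCache endpoint) and TTL are assumptions:

```caddyfile
{
    # Global cache options for the Souin module. With a shared Redis
    # backend, all FrankenPHP instances behind the load balancer share
    # one cache instead of each keeping its own.
    cache {
        redis {
            url my-elasticache-endpoint:6379
        }
        ttl 120s
    }
}

:80 {
    # Enable caching for this site.
    cache
    php_server
}
```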
Out of curiosity: where are you deploying containers on the edge @withinboredom, is it a distributed system?
We're also running FrankenPHP behind a load balancer (and/or CDN), and it sadly makes some features redundant.
@AlliBalliBaba how are you running it behind the load balancer?
One application we manage runs on AWS ECS (Fargate) behind a load balancer, with multiple FrankenPHP containers that scale horizontally. If it's an application load balancer, TLS needs to be handled by the LB, and Caddy just runs on port 80. Additionally, you can put CloudFront/Cloudflare in front to do some caching via headers. Works pretty well and can handle a ton of load; the only thing I'm not happy about is how much AWS costs in general (and how complex it is).
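A hedged sketch of a Caddyfile for that ALB setup, with TLS terminated upstream so Caddy serves plain HTTP only (the document root and compression settings are assumptions for illustration):

```caddyfile
{
    frankenphp
    # TLS terminates at the ALB, so disable Caddy's automatic HTTPS
    # and listen on plain HTTP only.
    auto_https off
}

:80 {
    root * /app/public
    encode zstd gzip
    php_server
}
```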
Another application I once helped set up is just a single instance on a DigitalOcean droplet. Not much traffic and not distributed, but just $6 a month, hard to beat that.
So, which Caddy/FrankenPHP features became redundant then?
In that setup, the redundant features are: automatic TLS (handled by the load balancer), Souin (if the caching isn't too fancy, it can happen in CloudFront/Cloudflare via headers), and potentially early hints due to cached responses. I haven't tried Vulcain and Mercure in that setup; I assume they'd work fine with some tweaks if you need them.
Fair, I'm using a Network Load Balancer (layer 4) instead of an ALB, as per the suggestion of @withinboredom.
So Caddy/FrankenPHP should be able to handle certificate stuff and all the other goodies without issue.
I'm using the S3 storage plugin so I won't hit Let's Encrypt's rate limits, along with Souin's Redis storage plugin (leveraging ElastiCache for that).
Deploying it in AWS ECS Fargate, using the AWS ADOT container as a sidecar to forward Caddy/FrankenPHP OpenTelemetry metrics to CloudWatch so I can autoscale on those metrics instead of just CPU/memory usage.
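For context, Caddy serves Prometheus metrics from its admin endpoint once metrics are enabled; a hedged sketch of the global options (the admin address shown is Caddy's default, and the scrape arrangement with the sidecar is an assumption about this setup):

```caddyfile
{
    frankenphp
    servers {
        # Enable per-server Prometheus metrics. They are exposed on the
        # admin endpoint (http://localhost:2019/metrics by default),
        # which a sidecar such as the ADOT collector can scrape.
        metrics
    }
}
```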
Autoscaling based on metrics is actually pretty smart 👍 which metrics are you using?
Probably some combination of:
- frankenphp_queue_depth
- frankenphp_busy_threads
- frankenphp_total_threads
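A minimal sketch of how those gauges could be combined into a single scaling signal. The metric names come from the list above; the weighting and the 0.8 target are illustrative assumptions, not anything FrankenPHP prescribes:

```python
def thread_utilization(busy_threads: int, total_threads: int,
                       queue_depth: int) -> float:
    """Combine FrankenPHP thread gauges into one utilization value.

    Values above 1.0 mean requests are queueing, i.e. demand exceeds
    the currently available threads. The formula is an assumption
    chosen for illustration.
    """
    if total_threads == 0:
        return 0.0
    # Queued requests count as extra demand on top of busy threads.
    return (busy_threads + queue_depth) / total_threads


def should_scale_out(busy: int, total: int, queue: int,
                     target: float = 0.8) -> bool:
    # Scale out when utilization exceeds the (assumed) target.
    return thread_utilization(busy, total, queue) > target
```

In CloudWatch terms, this maps naturally onto a target-tracking policy: publish the ratio as a custom metric and track it toward the chosen target value.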
Hi @AlliBalliBaba, this issue is quite old, but I'm planning a first ECS deployment for a FrankenPHP app. As has already been stated, TLS is handled by the LB.
I haven't yet tried worker mode for the deployment. I was wondering how you manage FrankenPHP as a Messenger consumer in your ECS setup (if you have any). Do you use supervisor or something else?
Thanks
Yeah, I'm currently just using supervisor to start the scripts, but it might become possible in the future to also configure consumers via the Caddyfile, depending on where #1883 goes.
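For reference, a supervisor program section for Messenger consumers might look roughly like this. The `async` transport name, process count, and limits are assumptions for illustration:

```ini
[program:messenger-consume]
; Restart consumers periodically via --time-limit to avoid leaks in
; long-running PHP processes.
command=php /app/bin/console messenger:consume async --time-limit=3600
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
startsecs=0
; Give in-flight messages time to finish before SIGKILL.
stopwaitsecs=20
```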
I personally just run a FrankenPHP container in its own service with an overridden command (`php bin/console messenger:consume`).
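Sketched in docker-compose style (in ECS, the equivalent is a separate service whose task definition overrides the container's `command`). The image name and the `async` transport are hypothetical:

```yaml
services:
  web:
    image: my-frankenphp-app:latest   # hypothetical image name
    ports:
      - "80:80"
  worker:
    # Same image, but the command override turns it into a Messenger
    # consumer instead of an HTTP server.
    image: my-frankenphp-app:latest
    command: ["php", "bin/console", "messenger:consume", "async"]
    restart: unless-stopped
```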