No ability to deploy without downtime
Octane Version
2.8.1
Laravel Version
10.48.28
PHP Version
8.3.16
What server type are you using?
Swoole
Server Version
6.0.1
Database Driver & Version
No response
Description
I'm using Envoyer to deploy a Laravel Octane application, taking advantage of its zero-downtime deployment features.
However, Octane currently does not support zero-downtime deployment because it does not follow symlinked directories. It stays pinned to the real directory in which the Octane server was initially started, instead of re-resolving the symlink. When the outdated release directory is deleted, Octane keeps running from that location, causing errors on every request and resulting in 500 response codes on the live application.
Error thrown:
Warning: require(/var/www/domain.com/releases/202502010200023/vendor/laravel/octane/bin/bootstrap.php): Failed to open stream: No such file or directory in /var/www/domain.com/releases/202502010200023/vendor/laravel/octane/bin/swoole-server on line 18
Fatal error: Uncaught Error: Failed opening required '/var/www/domain.com/releases/202502010200023/vendor/laravel/octane/bin/bootstrap.php' (include_path='.:/usr/bin/php@8.3/8.3.16/share/php@8.3/pear') in /var/www/domain.com/releases/202502010200023/vendor/laravel/octane/bin/swoole-server:18
Stack trace:
#0 /var/www/domain.com/releases/202502010200023/vendor/laravel/octane/bin/swoole-server(95): {closure}(Array)
#1 [internal function]: {closure}(Object(Swoole\Http\Server), 0)
#2 /var/www/domain.com/releases/202502010200023/vendor/laravel/octane/bin/swoole-server(170): Swoole\Server->start()
#3 {main}
thrown in /var/www/domain.com/releases/202502010200023/vendor/laravel/octane/bin/swoole-server on line 18
#1 [internal function]: {closure}(Object(Swoole\Http\Server), 1)
#1 [internal function]: {closure}(Object(Swoole\Http\Server), 3)
#1 [internal function]: {closure}(Object(Swoole\Http\Server), 2)
#1 [internal function]: {closure}(Object(Swoole\Http\Server), 4)
#1 [internal function]: {closure}(Object(Swoole\Http\Server), 5)
Steps To Reproduce
Use any zero-downtime deployment tool, or test it manually using the following instructions (a consolidated script follows the list):
- Use the Swoole driver as an example.
- cd one directory up from the project's base path.
- Create a current symlink directory for your project using the command: ln -nsf ./octane-project-test ./current
- Start the Octane server: php ./current/artisan octane:start
- Copy your project to another directory: cp -R ./octane-project-test ./octane-project-test-new
- Activate the new release: ln -nsf ./octane-project-test-new ./current
- Reload the Octane server: php ./current/artisan octane:reload
- Remove the original project directory: rm -rf ./octane-project-test
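For convenience, here is the same sequence as a single script. This is a minimal sketch: Octane's default port 8000, the sleep, and the trailing curl are assumptions for illustration, and ./octane-project-test is assumed to be an existing Octane project with the Swoole driver installed.

#!/usr/bin/env bash
# Create the "current" symlink and start Octane through it.
ln -nsf ./octane-project-test ./current
php ./current/artisan octane:start &
sleep 5

# Stage a new release, activate it, and reload Octane.
cp -R ./octane-project-test ./octane-project-test-new
ln -nsf ./octane-project-test-new ./current
php ./current/artisan octane:reload

# Delete the old release. From here on, every request returns a 500,
# because the running Swoole server still resolves paths inside the
# deleted ./octane-project-test directory.
rm -rf ./octane-project-test
curl -i http://127.0.0.1:8000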
Thank you for reporting this issue!
As Laravel is an open source project, we rely on the community to help us diagnose and fix issues as it is not possible to research and fix every issue reported to us via GitHub.
If possible, please make a pull request fixing the issue you have described, along with corresponding tests. All pull requests are promptly reviewed by the Laravel team.
Thank you!
Hello! We are experiencing the same issue. Every time we deploy (a very small project), there is brief downtime, and open connections are abruptly closed, returning 500 errors.
drwxr-sr-x 16 www-data www-data 4096 Mar 4 06:09 api-backend-build-1606
drwxr-sr-x 16 www-data www-data 4096 Mar 4 06:15 api-backend-build-1607
drwxr-sr-x 16 www-data www-data 4096 Mar 4 06:25 api-backend-build-1608
lrwxrwxrwx 1 www-data www-data 40 Mar 4 06:26 lastbuild -> /var/www/releases/api-backend-build-1608
It is not possible to run php artisan octane:reload because Octane does not detect that it is running in a directory reached via the symbolic link. So we reload Supervisor instead (command shown below), but this closes connections and returns some errors.
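For reference, the fallback looks like this (the Supervisor program name is an assumption):

# Hard restart via Supervisor: unlike octane:reload, this kills the
# worker processes and drops any open connections.
supervisorctl restart octane:*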
We haven't found a better way to handle this at the moment. We deploy to a server in a very simple way and do not consider using load balancers or deploying servers in Docker for this client.
@EduardoMateos It's important to mention that in Laravel, you should keep the storage directory outside the release directory and symlink it to the release directory to avoid various issues. However, it still won't work with Swoole until a fix is released.
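For anyone unfamiliar with that layout, a minimal sketch; the shared and release paths are hypothetical, modeled on the error log above:

# One shared storage directory that survives across releases.
mkdir -p /var/www/domain.com/shared/storage

# In each new release, replace the bundled storage dir with a symlink.
rm -rf /var/www/domain.com/releases/202502010200023/storage
ln -nsf /var/www/domain.com/shared/storage /var/www/domain.com/releases/202502010200023/storage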
@crynobone @taylorotwell The fix is in PR #1009. I manually tested all scenarios, including those without the zero-downtime deployment strategy, and all tests passed.
Verified on both macOS and Linux.
Anyone interested in using this fix right now, before the official Octane release, can add the following extra lines to composer.json before the "require" directive.
composer.json
...
"repositories": [
{
"type": "vcs",
"url": "https://github.com/mikkpokk/octane"
}
],
...
I don't know about you guys, but I have around 2k requests per second and am using Octane. I made my own docker-compose file and Dockerfile. Every time, after development, I push the Docker image to my registry, then on my VPS I just pull it and run it; the downtime is extremely minimal.
At a volume of 2,000 req/s, that downtime (ranging from 5s to nearly 30s with Docker) is already considered a disaster.
With health checks and Nginx using Docker DNS, whenever the new container is up and healthy, the traffic is forwarded to it; the downtime is <1s. For some projects it might be a disaster, and of course it would be even better without any downtime, but for me, for now, it's not critical and it works (updates usually happen at night).
Hello! In case it's helpful, when I deploy a new version to avoid downtime, I do the following:
- First, I run the new version of Laravel Octane on a random port.
- I update the Nginx configuration to point to this random port.
- I keep the old Laravel Octane service alive for 60 seconds.
- I terminate the old service.
I have this process automated with a script that runs in the pipeline after the tests (a rough sketch follows below).
In my case, my project is a simple API and doesn't use storage.
As other users have mentioned, it might be more interesting to do it with Docker.
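A rough sketch of what those four steps could look like; the ports, paths, and the Nginx upstream file are all assumptions, and the main Nginx config is assumed to proxy_pass to the octane upstream:

#!/usr/bin/env bash
# 1. Start the new release on a random high port.
NEW_PORT=$(( (RANDOM % 1000) + 9000 ))
php /var/www/releases/new/artisan octane:start --port="$NEW_PORT" &

# 2. Point the Nginx upstream at the new port and reload gracefully.
printf 'upstream octane { server 127.0.0.1:%s; }\n' "$NEW_PORT" \
    > /etc/nginx/conf.d/octane-upstream.conf
nginx -s reload

# 3. Keep the old Octane service alive for 60 seconds to drain connections.
sleep 60

# 4. Terminate the old service.
php /var/www/releases/old/artisan octane:stop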
That's a well-known, complex workaround on the internet - complex because you have to dynamically modify and reload both the Nginx and Supervisor configurations. Not to mention the temporary overhead: for one minute, your server tries to handle double the number of workers. If your server is under heavy load, this may cause it to freeze.
Swoole itself is like PM2 and a Node.js server combined - two in one. However, Swoole is not responsible for autostart or autorestarts; that must be handled via Supervisor.
Docker doesn't reduce real server downtime - in fact, it does the opposite. If your Docker setup runs across multiple servers (load balancers), visitors will encounter fewer 500 errors because a percentage of visitors (including automated load-balancer health checks) will trigger an "unhealthy" signal when hitting an unresponsive server. The load balancer will then stop directing traffic to that server until it becomes healthy again. This may create a false impression that it's a bulletproof deployment strategy - but it is not. Moreover, it can still cause resource shortages due to deployment.
Other than that, I don't see why people spent a couple of years using complex workarounds instead of simply reporting and fixing the issue directly in the Octane configuration. At least now it's resolved, and no workarounds are needed.
EDIT: Even if you don't use storage yourself, the Laravel framework and Octane do. That's why it's important to keep it symlinked. You lose state and log files otherwise.
Hey @mikkpokk, I already reported this 2 months ago (https://github.com/laravel/octane/issues/996), but I am using RoadRunner. Do you think your PR (https://github.com/laravel/octane/pull/1009) can be ported to RoadRunner (and FrankenPHP)?
OK, just for the record, I solved this in a hacky way in the meantime, until the PRs are done:
STEP 1: Create a normal directory (not a symbolic link).
Let's say that directory is: /home/aiku/aiku/anchor/
As part of your deployment actions, copy your project sources there. E.g., I'm using Deployer; just find the equivalent in Envoy or whatever you're using.
desc('Sync octane anchor');
task('deploy:sync-octane-anchor', function () {
run("rsync -avhH --delete {{release_path}}/ {{deploy_path}}/anchor/octane");
});
STEP 2: Then, in your Octane Supervisor conf file, the octane:start command should be run from that directory:
;etc/supervisor/aiku-production-octane.conf
[program:aiku-octane-production_boro]
process_name=%(program_name)s
command=/usr/bin/php8.3 /home/aiku/aiku/anchor/octane/artisan octane:start -q --workers=32
...
STEP 3: In your config/octane.php, you must add this (in my case it's RoadRunner; you need to find the correct config for Swoole and FrankenPHP):
'roadrunner' => [
    'command' => env('OCTANE_ROADRUNNER_WORKER_PATH', base_path('vendor/bin/roadrunner-worker')),
],
Then put this in your .env:
OCTANE_ROADRUNNER_WORKER_PATH=../../../../../home/aiku/aiku/anchor/octane/vendor/bin/roadrunner-worker
now you can run octane:reload with no downtime 🥳
@inikoo Hey, you can try implementing fixes for RoadRunner and/or FrankenPHP using ideas from my PR and push a new PR for FrankenPHP and/or RoadRunner if you have the knowledge to develop and test those servers.
I don't know much about RoadRunner or FrankenPHP, and I don't have the ability or knowledge to test them thoroughly.
I could test it on RoadRunner, but I can't implement the fixes for RoadRunner.
IMHO, this issue is not related to Octane or Laravel, but rather to the deployment strategy itself. I would recommend using containers to minimize downtime, especially considering how the underlying Octane runtime works. For small projects, a Docker stack can be used to achieve zero-downtime rolling updates (see the sketch below).
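As a concrete illustration, a rolling update with Docker Swarm; the service name and image tag are hypothetical:

# With start-first ordering, the new task must be running (and passing
# its healthcheck, if one is defined) before the old task is stopped.
docker service update \
    --image registry.example.com/app:v2 \
    --update-order start-first \
    myapp_octane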
@MatusBoa You're describing a deployment downtime solution. The strategy you've described has already been discussed in this issue - please read above.
The question isn't how to minimize downtime; the question is how to avoid downtime caused by deployment. That is the problem Octane is meant to solve. I've already solved it for Swoole. The solution just hasn't been merged into Octane's repository, and no reasonable explanation has been provided for that.
Docker with load balancers does cause downtime. Someone must hit an HTTP 503 before the load balancer detects the failure and reroutes traffic to a healthy instance. If you deploy simultaneously to all instances, then all of them may return 503 errors for several seconds - or even minutes.
Without a load balancer, you experience 100% downtime for all requests during deployment.
the question is how to avoid downtime caused by deployment. That is the problem Octane is meant to solve
No, it isn't. Octane is meant to bring PHP up-to-par with other languages and frameworks in terms of latency and RPS. It has nothing to do with deployments.
Docker with load balancers does cause downtime. Someone must hit an HTTP 503 before the load balancer detects the failure and reroutes traffic to a healthy instance.
Neither Docker nor load balancers cause downtime. When deploying, you have both the old and new applications running. When you're ready, you just forward the traffic on your load balancer from the old port to the new one. Both versions of the application run until you're certain that the old one is no longer needed and isn't handling any requests.
That's a basic blue green deployment. It really isn't hard to implement and will be much more reliable and safe than trying to do weird symlinking magic with live reloads on production. You're literally blindly reloading Octane hoping that it doesn't crash and works as expected.
No, it isn't. Octane is meant to bring PHP up-to-par with other languages and frameworks in terms of latency and RPS. It has nothing to do with deployments.
Yes, it is. Octane, by its nature, acts as a deployment engine between Swoole/FrankenPHP/etc., Laravel, and PHP. That means it is responsible for deployment as well. There's no straightforward way to manage Octane's deployment strategy without hacks, because Octane handles it internally. To draw a parallel: saying Octane isn't responsible for deployment would be like saying a medicine producer only needs to provide the raw chemicals, and each consumer should measure and mix them themselves. In reality, the producer delivers the medicine in a usable form (pill, liquid, etc.), just as Octane delivers a usable deployment layer (which is buggy, unfortunately).
Neither Docker nor load balancers cause downtime. When deploying, you have both the old and new applications running.
As for Docker and load balancers: it's only half correct to say they don't cause downtime. While old and new applications do briefly run at the same time, they are not running on the same port or production domain. To switch the new container to the production port or domain, the old version must be stopped. That switch itself is downtime. And for this purpose, you don't need Docker at all - you could simply restart the Octane server, and the same downtime occurs.
As already mentioned above, this Docker approach can also lead to system overload. In fact, all of these points have already been discussed earlier in the thread.
There's no need to continue this topic with comments like "huh, actually I don't need it because I run my service for 100 customers, I don't have a deep understanding of system administration, and I haven't seen downtime because I usually check my production app 30 minutes after deployment." The issue itself is practically solved (#1009), but it hasn't been merged by Octane's author for reasons that don't make much sense. You're free to use the forked fix if needed.
acts as a deployment engine
No, it acts as an integration layer between Laravel and long-running HTTP servers - Swoole, Franken, and RoadRunner. You won't find any mention of deployments in any of those three, nor will you find any in the Octane docs - because none of them are built to handle deployments.
There's no straightforward way to manage Octane's deployment
Exactly, because it's not built for that. There's no straightforward way to manage deployments with php-fpm either, or with Swoole/Franken/RoadRunner directly. That's expected.
To switch the new container to the production port or domain, the old version must be stopped.
All major services that you can think of run zero-downtime blue-green deployments, or versioned deployments, or a mix of both. In either case, there are two versions of the same service running on different ports (or same port, different IPs/hostnames - doesn't matter), and then traffic is gradually switched between the two using a load balancer. The old version is only stopped after it stops receiving any traffic and is completely idle.
With any modern load balancer, even something as simple as Caddy, that switch between load balancer's destinations (two versions of the service) doesn't require physically restarting Caddy and causes 0 downtime. Of course, all major cloud platforms also offer solutions for that (AWS ECS/Fargate for example).
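For example, with Caddy that switch can go through its admin API; a minimal sketch, assuming the admin endpoint is on the default :2019 and caddy-green.json is a full config whose reverse_proxy upstream points at the new (green) version:

# Load the new config gracefully: no restart, no dropped in-flight requests.
curl -sS localhost:2019/load \
    -H "Content-Type: application/json" \
    -d @caddy-green.json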
As already mentioned above, this Docker approach can also lead to system overload. In fact, all of these points have already been discussed earlier in the thread.
No, it cannot, because you're still splitting traffic between two versions of the service. 10% is handled by the new version and 90% by the old, then 50%/50%, then 100%/0%. At no point in time can it be 100%/100%. The only overhead is additional memory use, but that's negligible when SWAP exists and of course isn't even applicable to any horizontally scalable system, where you would simply deploy new version on a different machine altogether.
I don't have a deep understanding of system administration, and I haven't seen downtime because I usually check my production app 30 minutes after deployment.
Each month we have ~10 production deployments (none of which cause downtime) and handle 100s of millions of API requests. I'm pretty sure I understand how to do deployments. You don't seem to.
The issue itself is practically solved (https://github.com/laravel/octane/pull/1009), but it hasn't been merged by Octane’s author for reasons that don't make much sense.
The reason is clear: you're using the wrong tool for the job. Next you'll ask to add health checks for workers, running migrations, installing dependencies, and whatnot, because you seem to believe that Octane handles deployments. It doesn't. It has one tool for that - octane:reload, which is just hot reload. It's as if, instead of building your frontend, you were to run build --watch in production - you could do that, but it's a terrible idea. And so is using octane:reload in production.
You're free to use forked fix however if needed.
We'll keep using proper blue green deployments in containers, thanks :)
usually check my production app 30 minutes after deployment.
Don't you see the irony? With octane:reload, you can't even health check your application until after you already reload it and after it's already broken.
Dear Oleksandr Prypkhan (@autaut03),
Do you realize that you're contradicting yourself? The fact that you mention octane:reload means you're aware of commands like octane:start, octane:stop, and octane:reload. These commands are meant for deployment, and they're developed in this repository (laravel/octane).
With octane:reload, you can perform a health check on your application because it never stops the app. The command simply clears the file cache without ever stopping the application itself. Or at least, it should: currently, without using the fixed fork, it only partially clears the file cache, and that's the bug we're encountering in this issue.
It's clear you don't understand how things work under the hood. Please keep this space clean and stop posting irrelevant or misleading comments. Leave this discussion for people who understand the system and need these features to work properly.
That said, you're free to use containers and Docker old-fashioned way. As long as your application requires multiple servers, there's nothing wrong with that approach. You obviously accept the performance overhead, higher infrastructure costs, and you're not aiming for zero downtime caused by developers pushing their work to production server(s).
Do you realize that you're contradicting yourself? The fact that you mention octane:reload means you're aware of commands like octane:start, octane:stop, and octane:reload. These commands are meant for deployment, and they're developed in this repository (laravel/octane).
There are also migrate:fresh and seed commands. The fact that they exist in laravel/framework doesn't mean they're suitable for all environments. octane:start is the only way to start an Octane server, so of course it is suitable for production. The other two aren't.
With octane:reload, you can perform a health check on your application because it never stops the app.
Which means you'll be performing a health check after you reload your application. When it may already be broken, and it's already too late.
https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/introduction.html#benefits-of-bluegreen
Traditional deployments with in-place upgrades make it difficult to validate your new application version in a production deployment while also continuing to run the earlier version of the application. After you deploy the green environment, you have the opportunity to validate it. You might do that with test traffic before sending production traffic to the green environment, or by using a very small fraction of production traffic, to better reflect real user traffic.
performance overhead
https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/introduction.html#benefits-of-bluegreen
Blue/green deployments provide a level of isolation between your blue and green application environments. This helps ensure spinning up a parallel green environment does not affect resources underpinning your blue environment.
higher infrastructure costs
https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/introduction.html#benefits-of-bluegreen
Blue/green deployments conducted in AWS also provide cost optimization benefits. You’re not tied to the same underlying resources. So, if the performance envelope of the application changes from one version to another, you simply launch the new environment with optimized resources, whether that means fewer resources or just different compute resources. You also don’t have to run an overprovisioned architecture for an extended period of time. During the deployment, you can scale out the green environment as more traffic gets sent to it and scale the blue environment back in as it receives less traffic. Once the deployment succeeds, you decommission the blue environment and stop paying for the resources it was using.
zero downtime caused by developers pushing their work to production server(s)
https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/introduction.html#bluegreen-deployment-methodology
Blue/green deployments provide releases with near zero-downtime and rollback capabilities.
There are also migrate:fresh and seed commands. The fact that they exist in laravel/framework doesn't mean they're suitable for all environments. octane:start is the only way to start an Octane server, so of course it is suitable for production. The other two aren't.
The other two are exclusively meant for production environments. In a development environment, you use CTRL + C to shut down the server. In a production environment, you're required to call octane:stop to exit. Moreover, in development, you use octane:start with the --watch flag, which provides FPM-like performance, but you don't need to reload the server manually when making code changes. In production, you do need octane:reload for that.
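For illustration, the two usages side by side (note that --watch additionally requires Node and the chokidar package in the project):

# Development: restart workers automatically on file changes.
php artisan octane:start --watch

# Production: start under Supervisor, then reload workers after each deploy.
php artisan octane:start --workers=32
php artisan octane:reload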
Which means you'll be performing a health check after you reload your application. When it may already be broken, and it's already too late.
You're describing a rollback mechanism that triggers when untested code is pushed to production. There are two ways to prevent that:
- Test the code on a staging server, without letting real users suffer, before deploying it to production.
- Use a rollback deployment strategy. This means you deploy and release the code, then perform a health check on the service. If the service is down, your deployment system automatically rolls back to the previous release (see the sketch after this list). The number of servers you use doesn't really matter, and this approach doesn't require containers or a load balancer. Load balancers just provide a more graceful transition period.
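A minimal sketch of the second option; the release paths and the health endpoint are assumptions:

#!/usr/bin/env bash
HEALTH_URL="http://127.0.0.1:8000/health"

# Release the new version.
ln -nsf ./releases/new ./current
php ./current/artisan octane:reload

# Health check; roll back to the previous release on failure.
if ! curl -fsS --max-time 5 "$HEALTH_URL" > /dev/null; then
    ln -nsf ./releases/old ./current
    php ./current/artisan octane:reload
fi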
Technically, what blue/green deployment means is that you spin up another server (or environment) and verify the new version with a small portion of test traffic (for example, 1% of total traffic). If things go wrong, that version never goes live for the remaining 99% of users. However, that 1% of traffic still experiences downtime during the testing period.
Of course, you pay extra for that - after all you are consuming 2x server resources during that time.
This is not the question or issue in this (#1004) topic.
Blue/green deployments provide releases with near zero-downtime and rollback capabilities.
We're not talking about near zero-downtime solutions here — we're talking about zero-downtime solutions. We're also not discussing new features or fancy extras. The point is to fix octane:reload so it actually works as intended (and I've already fixed it, so no need to worry about that). Deployment scripts are an entirely different topic.
And while it's not really the focus here, yes - having automatic rollback capability in any deployment is basic stuff. You don't need AWS load balancers for that.
Your argument essentially says, "we don't need octane:reload to work properly because we can just pay for double the resources during deployment." That just shows how disconnected you are from what's actually happening under the hood. You're just used to relying on easy click-and-pay services, and I really don't know what you're trying to prove here.
The other two are exclusively meant for production environments.
Please quote the documentation or Octane's code when making such claims. Anything "hot reload"-like is de facto for local development.
Test the code on a staging server without letting real users to suffer before deploying it to production.
Successful staging deployment doesn't guarantee a successful production deployment. They are not identical environments - the API keys for external services may be different, or the configuration, resource limits, etc.
Use a rollback deployment strategy. This means you deploy and release the code, then perform a health check on the service. If the service is down, your deployment system automatically rolls back to the previous release.
If the service is down after you did an octane:reload, it means that a 100% of your users are already unable to access your service.
If things go wrong, that version never goes live for the remaining 99% of users. However, that 1% of traffic still experiences downtime during the testing period.
Exactly, and your solution with octane:reload doesn't allow that. It's either 100% on the old version, or 100% on the new version. If things go wrong, you're screwed.
Of course, you pay extra for that - after all you are consuming 2x server resources during that time.
Yes, you pay double for the whole 5 minutes that a deployment lasts.
We're not talking about near zero-downtime solutions here
It is zero downtime. The only downtime you could get is if your new environment passes health checks, but can't actually handle real traffic for some reason - which is the case with your solution as well.
And while it's not really the focus here, yes - having automatic rollback capability in any deployment is basic stuff. You don't need AWS load balancers for that.
You don't need AWS for that. You DO need blue-green OR versioned deployments. You can't have reliable rollbacks without running both the old and the new version of your service concurrently, and being able to instantly switch traffic between them. Having two versions and the ability to switch traffic is the definition of blue green. Unless you're doing blue green, no, you don't have rollback capabilities. octane:reload to an older version is not reliable - your old service may fail to start for whatever reason.
Your argument essentially says, "we don't need octane:reload to work properly because we can just pay for double the resources during deployment."
No, my argument is you are trying to hammer in a screw. You are using the wrong tool for the job. I'm not arguing that you should migrate to a different deployment strategy. I'm arguing that your changes should never be part of octane:reload.
You’re just used to relying on easy click-and-pay services and I really don't know what you try to prove here.
Is extra $2-3 per month really worth all the risks? Additionally, what stops you from implementing blue green using free self-hosted tools, for example using Traefik/Kubernetes?
Blue-green is one of many strategies you can choose, but that doesn't mean you have to. I've been using basic rolling updates for a while, which also provide true zero-downtime deployments (depending on your health checks) and don't require a whole new environment. Rolling updates start creating new containers, and once they pass health checks, traffic is routed to the new version.
@autaut03 is trying to help you, but I guess it's pointless - you clearly already know best.
@MatusBoa I don't see any help there or here. I raised the issue and solved the issue. You're just wisecracking about things that aren't related at all. And believe me, most DevOps engineers know about these expensive strategies - that's not the solution to this issue whatsoever.
Quite the opposite, I've tried to educate you both multiple times about how things really work under the hood, but you keep arguing and promoting unrelated stuff here, looking for your own truth where there is none.
EDIT: The issue is open only because the maintainer decided to stall the merge into the laravel/octane repository.