Upstream prematurely closed connection while reading response header from upstream
What happened?
Need help: I'm seeing this in the Nginx error log: "...upstream prematurely closed connection while reading response header from upstream, client: 112.215.65.47, server: xxxxxxxxxx, request: "POST /livewire/update HTTP/2.0", upstream: "http://127.0.0.1:8000/livewire/update"
- Deployed using PLOI
- Ubuntu 22.04
- Laravel Octane (FrankenPHP)
- Cloudflare
- HTTPS only
Has anyone experienced this, or can anyone point me to where to look for a resolution?
Build Type
Custom (tell us more in the description)
Worker Mode
Yes
Operating System
GNU/Linux
CPU Architecture
x86_64
Relevant log output
...upstream prematurely closed connection while reading response header from upstream, client: 112.215.65.47, server: xxxxxxxxxx, request: "POST /livewire/update HTTP/2.0", upstream: "http://127.0.0.1:8000/livewire/update
Do you have logs from FrankenPHP?
Do you know where the FrankenPHP log is located? I've googled and didn't find any clue...
It's usually in the regular output, so it largely depends on how you are running it. If you are running it via systemd, then it will be with all your other logs (journalctl). If you are running it in a Docker container, then it will be where the logs for Docker containers are.
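For example, to tail the logs in either setup (the unit and container names below are placeholders, not taken from this thread):

# systemd: follow the unit's journal (replace "frankenphp" with your unit name)
journalctl -u frankenphp -f

# Docker: follow the container's output (replace "my-app" with your container name)
docker logs -f my-app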
When I ran Laravel Octane (FrankenPHP) with --log-level=debug I stumbled on this info. Is there any important information in there?
I have the same error with Laravel Forge and Octane (FrankenPHP).
Is this endpoint a long-running connection (SSE or something like that)? If yes, this is likely a timeout issue. Is the PHP timeout disabled? The Caddy timeout should be disabled by default.
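One quick way to check the PHP-side limit. Note that the CLI SAPI forces max_execution_time to 0, so you need to inspect the ini files the server actually loads (the grep path below assumes a typical Ubuntu layout; adjust it to your install):

# List the configuration files PHP loads
php --ini

# Look for the execution time limit in them; 0 means the limit is disabled
grep -R "max_execution_time" /etc/php/ 2>/dev/null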
It is not a long-running process/request. We just request a basic login page, then submit a login, and then the error occurs.
According to the logs, the error is on POST /livewire/update; it doesn't look like a login page.
Yes, the error is on 'POST /livewire/update'. It's a Laravel app using Livewire, but I'm not really sure what that entails... I've googled the error but haven't found anyone with the same problem. Locally I installed the app on Laradock, which runs inside Docker, and FrankenPHP runs well there, but not on the server (PLOI). I hope this information is useful for you...
The only error I could find was the one described in the relevant log output above.
I wrote to the Forge support team. They tested my application on the server, and this is their answer:
It appears like the FrankenPHP server crashes whenever it receives a request. It does so without writing anything to the log files. You'll need to follow the issue reported on the FrankenPHP repo.
I also use Livewire
Would you be able to give me access to the crashing code, even privately?
I also encountered this problem when deploying my app using the frankenphp image as a base image in Kubernetes. The pod keeps restarting after receiving a request, but it works fine when I run the image using docker-compose on my server.
Here is the log:
2024-03-06 16:43:34.136 ICT
{level: info, msg: FrankenPHP started 🐘, php_version: 8.2.16, ts: 1709718214.1368062}
2024-03-06 16:43:34.137 ICT
{address: [::]:80, http3: false, level: debug, logger: http, msg: starting server loop, tls: false, ts: 1709718214.1371646}
2024-03-06 16:43:34.140 ICT
{level: info, logger: http.log, msg: server running, name: srv0, protocols: […], ts: 1709718214.140442}
2024-03-06 16:43:34.142 ICT
{file: /config/caddy/autosave.json, level: info, msg: autosaved config (load with --resume flag), ts: 1709718214.1424348}
2024-03-06 16:43:34.142 ICT
{level: info, msg: serving initial configuration, ts: 1709718214.142536}
2024-03-06 16:43:34.142 ICT
{level: info, logger: tls, msg: cleaning storage unit, storage: FileStorage:/data/caddy, ts: 1709718214.1426692}
2024-03-06 16:43:34.143 ICT
{level: info, logger: tls, msg: finished cleaning storage units, ts: 1709718214.1429553}
2024-03-06 16:50:15.402 ICT
{config_adapter: caddyfile, config_file: /etc/caddy/Caddyfile, level: info, msg: using provided configuration, ts: 1709718615.4026613}
2024-03-06 16:50:15.405 ICT
{adapter: caddyfile, file: /etc/caddy/Caddyfile, level: warn, line: 16, msg: Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies, ts: 1709718615.4050312}
2024-03-06 16:50:15.406 ICT
{address: localhost:2019, enforce_origin: false, level: info, logger: admin, msg: admin endpoint started, origins: […], ts: 1709718615.4067924}
2024-03-06 16:50:15.407 ICT
{http_port: 80, level: warn, logger: http.auto_https, msg: server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server, server_name: srv0, ts: 1709718615.407489}
2024-03-06 16:50:15.408 ICT
{http: {…}, level: debug, logger: http.auto_https, msg: adjusted config, tls: {…}, ts: 1709718615.4078734}
2024-03-06 16:50:15.408 ICT
{level: info, msg: FrankenPHP started 🐘, php_version: 8.2.16, ts: 1709718615.4087079}
2024-03-06 16:50:15.409 ICT
{address: [::]:80, http3: false, level: debug, logger: http, msg: starting server loop, tls: false, ts: 1709718615.409097}
2024-03-06 16:50:15.410 ICT
{level: info, logger: http.log, msg: server running, name: srv0, protocols: […], ts: 1709718615.4101481}
2024-03-06 16:50:15.410 ICT
{file: /config/caddy/autosave.json, level: info, msg: autosaved config (load with --resume flag), ts: 1709718615.4107764}
2024-03-06 16:50:15.411 ICT
{level: info, msg: serving initial configuration, ts: 1709718615.4109845}
2024-03-06 16:50:15.411 ICT
{cache: 0xc00045f100, level: info, logger: tls.cache.maintenance, msg: started background certificate maintenance, ts: 1709718615.4077792}
2024-03-06 16:50:15.413 ICT
{level: info, logger: tls, msg: cleaning storage unit, storage: FileStorage:/data/caddy, ts: 1709718615.413479}
2024-03-06 16:50:15.413 ICT
{level: info, logger: tls, msg: finished cleaning storage units, ts: 1709718615.4138448}
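When a pod is crash-looping like this, the logs above only cover the freshly restarted container; the output of the instance that actually crashed can be retrieved with kubectl's standard --previous flag (pod name and namespace are placeholders):

kubectl logs <pod-name> -n <namespace> --previous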
This could be happening for any number of reasons, but your pod security policy is the most likely one. What does your deployment yaml file look like?
I solved it... well, not really. I created a Docker image for my project with FrankenPHP in it, and then combined it with Nginx Proxy Manager.
@withinboredom here's my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  name: api
  namespace: staging-api
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: api
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-03-07T07:30:36Z"
      creationTimestamp: null
      labels:
        app: api
    spec:
      automountServiceAccountToken: false
      containers:
      - envFrom:
        - configMapRef:
            name: api-config
            optional: false
        - secretRef:
            name: api-secret
            optional: false
        image: <PRIVATE_REPO>/api-franken:staging
        imagePullPolicy: Always
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - cp -Lr /api-storage-secret/* /app/storage/
        name: api
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /api-storage-secret
          mountPropagation: None
          name: api-storage-secret
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: false
      nodeSelector:
        environment: development
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      shareProcessNamespace: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: api-storage-secret
        secret:
          defaultMode: 420
          optional: false
          secretName: api-storage-secret
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2024-03-06T09:43:35Z"
    lastUpdateTime: "2024-03-06T09:43:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-02-15T09:19:08Z"
    lastUpdateTime: "2024-03-07T07:30:40Z"
    message: ReplicaSet "api-597bbd9578" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 65
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Having the same issue. It works with the same image in docker-compose, but on Kubernetes it fails.
@withinboredom could you maybe elaborate on how a pod security policy could affect this?
If you are running in k8s as a non-root user, you have to give it CAP_NET_BIND_SERVICE, even if you aren't opening a lower port. I have no idea why.
Oh, got it. I already tried that and got the same issue. I'm going to work to make a reproducible proof of concept in another repo so anyone can try it.
Here's a policy that works on a production cluster:
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE
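To confirm the capability actually reached the container, the kernel exposes the capability sets of PID 1 in /proc (standard Linux; the pod name is a placeholder):

kubectl exec <pod-name> -- grep Cap /proc/1/status
# Decode the CapEff hex mask with `capsh --decode=<mask>` on any machine with libcap tools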
Yup, that is a thing I have in my cluster, and after 1 or 2 requests the pod errors out and restarts. Nothing usable in the logs with debug on, though.
OK, apologies, my bug is not directly related to this: as soon as I disabled the latest Datadog PHP tracer (0.99.1), everything works.
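For anyone hitting the same interaction on Kubernetes, dd-trace-php's documented kill switch lets you turn the tracer off without rebuilding the image (the deployment name is a placeholder):

# DD_TRACE_ENABLED=false disables the Datadog PHP tracer at runtime
kubectl set env deployment/<name> DD_TRACE_ENABLED=false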
I think it is related to this: https://github.com/dunglas/frankenphp/issues/458#issuecomment-1973318107
Closing as can't reproduce (and maybe already reported or even fixed). Feel free to reopen if you have more information.