
Upstream prematurely closed connection while reading response header from upstream

Open TC74 opened this issue 1 year ago • 22 comments

What happened?

Need help. I'm seeing this in the Nginx error log: "...upstream prematurely closed connection while reading response header from upstream, client: 112.215.65.47, server: xxxxxxxxxx, request: "POST /livewire/update HTTP/2.0", upstream: "http://127.0.0.1:8000/livewire/update"

  • Deployed using PLOI
  • Ubuntu 22.04
  • Laravel Octane (FrankenPHP)
  • Cloudflare
  • Https only

Has anyone experienced this? Or can anyone point me to where to look for a resolution...

Build Type

Custom (tell us more in the description)

Worker Mode

Yes

Operating System

GNU/Linux

CPU Architecture

x86_64

Relevant log output

...upstream prematurely closed connection while reading response header from upstream, client: 112.215.65.47, server: xxxxxxxxxx, request: "POST /livewire/update HTTP/2.0", upstream: "http://127.0.0.1:8000/livewire/update

TC74 avatar Feb 06 '24 13:02 TC74

Do you have logs from frankenphp?

withinboredom avatar Feb 06 '24 13:02 withinboredom

Do you have logs from frankenphp?

Do you know where the FrankenPHP log is located? I've googled and didn't find any clue...

meeftah avatar Feb 07 '24 15:02 meeftah

It's usually in regular output, so it largely depends on how you are running it. If you are running it via systemd, then it will be with all your other logs (journalctl). If you are running it in a docker container, then it will be where logs are for docker containers.
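
For example (a sketch; the unit, container, and service names here are assumptions, substitute your own):

# systemd: output lands in the journal
journalctl -u frankenphp -f

# Docker: output goes to the container's stdout/stderr
docker logs -f frankenphp

# Docker Compose
docker compose logs -f frankenphp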

withinboredom avatar Feb 07 '24 17:02 withinboredom

[screenshot: photo_2024-02-08_10-08-03]

When I ran Laravel Octane (FrankenPHP) with --log-level=debug, I stumbled on this info. Is there any important information in there?

meeftah avatar Feb 08 '24 03:02 meeftah

I have the same error with Laravel Forge and Octane (FrankenPHP).

indigoram89 avatar Feb 11 '24 15:02 indigoram89

Is this endpoint a long-running connection (SSE or something like that)? If yes, this is likely a timeout issue. Is the PHP timeout disabled? The Caddy timeout should be disabled by default.
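
A quick way to check the PHP side (a sketch; note that the CLI and the server SAPI can load different ini files, so also check which ones are loaded):

# 0 means the PHP execution timeout is disabled
php -r 'echo ini_get("max_execution_time"), PHP_EOL;'

# lists which php.ini files are actually loaded
php --ini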

dunglas avatar Feb 11 '24 20:02 dunglas

It is not a long-running process/request. We just request a basic login page, then submit a login, and then the error occurs.


TC74 avatar Feb 11 '24 22:02 TC74

According to the logs, the error is on POST /livewire/update, it doesn't look like a login page.

dunglas avatar Feb 11 '24 23:02 dunglas

According to the logs, the error is on POST /livewire/update, it doesn't look like a login page.

Yes, the error is on 'POST /livewire/update'. It's a Laravel app using Livewire, but I'm not really sure what that entails. I've googled the error but couldn't find anyone with the same problem. Locally I've installed the app on Laradock, which runs inside Docker, and FrankenPHP runs well there, but not on the server (PLOI). I hope this information is useful for you...

The only error I could find was the one shown in the relevant log output above.

meeftah avatar Feb 14 '24 05:02 meeftah

I wrote to the Forge support team. They tested my application on the server, and this is their answer:

It appears like the FrankenPHP server crashes whenever it receives a request. It does so without writing anything to the log files. You'll need to follow the issue reported on the FrankenPHP repo.

indigoram89 avatar Feb 14 '24 05:02 indigoram89

I also use Livewire

indigoram89 avatar Feb 14 '24 05:02 indigoram89

Would you be able to give me access to the crashing code, even privately?

dunglas avatar Feb 27 '24 21:02 dunglas

I also encounter this problem when deploying my app using the frankenphp image as a base image in Kubernetes. The pod keeps restarting after receiving a request, but it works fine when I run the same image using docker-compose on my server.

Here is the log:

2024-03-06 16:43:34.136 ICT
{level: info, msg: FrankenPHP started 🐘, php_version: 8.2.16, ts: 1709718214.1368062}
2024-03-06 16:43:34.137 ICT
{address: [::]:80, http3: false, level: debug, logger: http, msg: starting server loop, tls: false, ts: 1709718214.1371646}
2024-03-06 16:43:34.140 ICT
{level: info, logger: http.log, msg: server running, name: srv0, protocols: […], ts: 1709718214.140442}
2024-03-06 16:43:34.142 ICT
{file: /config/caddy/autosave.json, level: info, msg: autosaved config (load with --resume flag), ts: 1709718214.1424348}
2024-03-06 16:43:34.142 ICT
{level: info, msg: serving initial configuration, ts: 1709718214.142536}
2024-03-06 16:43:34.142 ICT
{level: info, logger: tls, msg: cleaning storage unit, storage: FileStorage:/data/caddy, ts: 1709718214.1426692}
2024-03-06 16:43:34.143 ICT
{level: info, logger: tls, msg: finished cleaning storage units, ts: 1709718214.1429553}
2024-03-06 16:50:15.402 ICT
{config_adapter: caddyfile, config_file: /etc/caddy/Caddyfile, level: info, msg: using provided configuration, ts: 1709718615.4026613}
2024-03-06 16:50:15.405 ICT
{adapter: caddyfile, file: /etc/caddy/Caddyfile, level: warn, line: 16, msg: Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies, ts: 1709718615.4050312}
2024-03-06 16:50:15.406 ICT
{address: localhost:2019, enforce_origin: false, level: info, logger: admin, msg: admin endpoint started, origins: […], ts: 1709718615.4067924}
2024-03-06 16:50:15.407 ICT
{http_port: 80, level: warn, logger: http.auto_https, msg: server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server, server_name: srv0, ts: 1709718615.407489}
2024-03-06 16:50:15.408 ICT
{http: {…}, level: debug, logger: http.auto_https, msg: adjusted config, tls: {…}, ts: 1709718615.4078734}
2024-03-06 16:50:15.408 ICT
{level: info, msg: FrankenPHP started 🐘, php_version: 8.2.16, ts: 1709718615.4087079}
2024-03-06 16:50:15.409 ICT
{address: [::]:80, http3: false, level: debug, logger: http, msg: starting server loop, tls: false, ts: 1709718615.409097}
2024-03-06 16:50:15.410 ICT
{level: info, logger: http.log, msg: server running, name: srv0, protocols: […], ts: 1709718615.4101481}
2024-03-06 16:50:15.410 ICT
{file: /config/caddy/autosave.json, level: info, msg: autosaved config (load with --resume flag), ts: 1709718615.4107764}
2024-03-06 16:50:15.411 ICT
{level: info, msg: serving initial configuration, ts: 1709718615.4109845}
2024-03-06 16:50:15.411 ICT
{cache: 0xc00045f100, level: info, logger: tls.cache.maintenance, msg: started background certificate maintenance, ts: 1709718615.4077792}
2024-03-06 16:50:15.413 ICT
{level: info, logger: tls, msg: cleaning storage unit, storage: FileStorage:/data/caddy, ts: 1709718615.413479}
2024-03-06 16:50:15.413 ICT
{level: info, logger: tls, msg: finished cleaning storage units, ts: 1709718615.4138448}

garinichsan avatar Mar 07 '24 01:03 garinichsan

I also encounter this problem when deploying my app using the frankenphp image as a base image in Kubernetes. The pod keeps restarting after receiving a request, but it works fine when I run the same image using docker-compose on my server.

This could be happening for any number of reasons, but your pod security policy is the most likely one. What does your deployment yaml file look like?
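
If you can't share the whole file, the effective security context alone would already help. A quick way to dump it (namespace and pod name are placeholders):

kubectl -n <namespace> get pod <pod-name> -o jsonpath='{.spec.securityContext}'
kubectl -n <namespace> get pod <pod-name> -o jsonpath='{.spec.containers[*].securityContext}'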

withinboredom avatar Mar 08 '24 13:03 withinboredom

I solved it... well, not really. I created a Docker image containing my project and FrankenPHP, and then put it behind Nginx Proxy Manager.

meeftah avatar Mar 10 '24 05:03 meeftah

@withinboredom

here's my deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  name: api
  namespace: staging-api
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: api
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-03-07T07:30:36Z"
      creationTimestamp: null
      labels:
        app: api
    spec:
      automountServiceAccountToken: false
      containers:
      - envFrom:
        - configMapRef:
            name: api-config
            optional: false
        - secretRef:
            name: api-secret
            optional: false
        image: <PRIVATE_REPO>/api-franken:staging
        imagePullPolicy: Always
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - cp -Lr /api-storage-secret/* /app/storage/
        name: api
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /api-storage-secret
          mountPropagation: None
          name: api-storage-secret
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: false
      nodeSelector:
        environment: development
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      shareProcessNamespace: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: api-storage-secret
        secret:
          defaultMode: 420
          optional: false
          secretName: api-storage-secret
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2024-03-06T09:43:35Z"
    lastUpdateTime: "2024-03-06T09:43:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-02-15T09:19:08Z"
    lastUpdateTime: "2024-03-07T07:30:40Z"
    message: ReplicaSet "api-597bbd9578" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 65
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

garinichsan avatar Mar 12 '24 06:03 garinichsan

I also encounter this problem when deploying my app using the frankenphp image as a base image in Kubernetes. The pod keeps restarting after receiving a request, but it works fine when I run the same image using docker-compose on my server.

I'm having the same issue. It works with the same image in docker-compose, but on Kubernetes it fails.

@withinboredom could you maybe elaborate on how the pod security policy could affect this?

nklmilojevic avatar Apr 06 '24 14:04 nklmilojevic

If you are running in k8s as a non-root user, you have to give it CAP_NET_BIND_SERVICE, even if you aren't opening a low port. I have no idea why.
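
Also, when the pod crashes without writing anything to the live logs, the previous container's output and the recorded termination reason are usually more telling (pod name is a placeholder):

kubectl logs <pod-name> --previous    # stdout/stderr of the crashed container
kubectl describe pod <pod-name>       # check Last State for the exit code or OOMKilled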

withinboredom avatar Apr 06 '24 15:04 withinboredom

Oh, got it. I already tried that and got the same issue. I'll put together a reproducible proof of concept in another repo so anyone can try it.

nklmilojevic avatar Apr 06 '24 15:04 nklmilojevic

Here's a security context that works on a production cluster:

securityContext:
    runAsNonRoot: true
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
      add:
        - NET_BIND_SERVICE

withinboredom avatar Apr 06 '24 16:04 withinboredom

Yup, that is a thing I already have in my cluster, and after one or two requests the pod errors out and restarts. Nothing usable in the logs with debug on, though.

nklmilojevic avatar Apr 06 '24 16:04 nklmilojevic

OK, apologies: my bug is not directly related to this. As soon as I disabled the latest Datadog PHP tracer (0.99.1), everything works.

I think it is related to this https://github.com/dunglas/frankenphp/issues/458#issuecomment-1973318107

nklmilojevic avatar Apr 06 '24 17:04 nklmilojevic

Closing as can't reproduce (and maybe already reported or even fixed). Feel free to reopen if you have more information.

dunglas avatar Apr 15 '24 17:04 dunglas