feedanal


I tried pulling the `latest` PH container and it works fine! Thanks @xrdt for the prompt fix, we were really stuck here ![image](https://github.com/PostHog/posthog/assets/111871756/fb452262-48a5-463b-bc3c-d5ade7189dd8)

So, what I did today:
- waited for the offset to reach the watermark value to ensure that session recordings start flowing in;
- waited for several hours and updated (rebuilt) posthog...

restarted containers upon update:
```
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS                                                                                 NAMES
293a2541b3ba   caddy:2.6.1   "caddy run --config "   9 minutes ago   Up 9 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 443/udp,...
```

Within 3-4 days ~~SpaceHog~~ PostHog ate ALL available server space (0.5 TB). Luckily it's a fixed-cost bare-metal server, not some auto-expanding and auto-billed cloud instance. Moving it to a 2 TB server now...

I believe session recording blobs can also be a problem: https://posthog.com/docs/session-replay/data-retention#:~:text=Recordings%20are%20automatically%20deleted%20after,manually%20deleted%20via%20the%20UI.

Thanks for the suggestion, I've limited logfiles to 10 MB in `docker-compose.yml` and now it's sorta under control, but hell, there should be a way to set the log level...
```
plugins:
  extends:...
```
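For anyone else hitting this, a minimal sketch of what capping container log size can look like in `docker-compose.yml` (the `plugins` service is just one example; the same stanza works for any service, and the exact values are up to you):

```yaml
services:
  plugins:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate each log file once it reaches 10 MB
        max-file: "3"     # keep at most 3 rotated files per container
```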

> i _think_ that logically the presence of this log means either we're trapped in `destroying` state for a recording that's receiving traffic or your events don't have timestamps 🤯...

> you can set log level using the LOG_LEVEL environment variable
>
> https://github.com/PostHog/posthog/blob/37a08e808ca198f2e26916bf9294069f6080819f/plugin-server/README.md?plain=1#L133
>
> with supported values here
>
> https://github.com/PostHog/posthog/blob/224a5d5d0c07f880b19dbc02cce2f07b965023c0/plugin-server/src/types.ts#L39-L46
>
> the default level if not...
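Following that pointer, a minimal sketch of passing `LOG_LEVEL` to the plugin-server via `docker-compose.yml` (assuming `warn` is among the values listed in the linked `types.ts`):

```yaml
services:
  plugins:
    environment:
      LOG_LEVEL: warn   # see plugin-server/src/types.ts for the supported values
```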

Another problem is Kafka logging:
```sh
Every 60.0s: du -h --max-depth=1 /var/lib/docker/overlay2 | sort -rh | head -15    t.xfeed.com: Mon May 13 02:15:13 2024

238G    /var/lib/docker/overlay2
207G    /var/lib/docker/overlay2/8f83ff9dce79104e554c7e76c7805cd77af31cd15eb183f1ac6a518dadfaa389
5.0G    /var/lib/docker/overlay2/f80e07831c327082c849d0839efadda5d280e1858b782028594347aeec75b7d7...
```

I think this can be closed, as setting `LOG_LEVEL`, limiting logging and adjusting Kafka log retention in `docker-compose.yml` completely fixed this for us: now we accumulate just a couple of gigs...
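In case it helps anyone else, a sketch of the Kafka retention knobs we mean, assuming a Bitnami-style Kafka image where broker settings map to `KAFKA_CFG_*` environment variables (service name and values here are illustrative, not our exact config):

```yaml
services:
  kafka:
    environment:
      KAFKA_CFG_LOG_RETENTION_HOURS: "24"          # keep log segments for a day
      KAFKA_CFG_LOG_RETENTION_BYTES: "1073741824"  # and cap each partition at ~1 GB
```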