Benchmark Supabase (fly.io)
Chore
- Running the database on Supabase
- Attached PostgREST directly to the Supabase database (see the config sketch below)
- Running benchmarks directly against PostgREST (no Kong)
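Attaching PostgREST directly to an existing Postgres database comes down to a handful of settings. A minimal sketch in PostgREST's config-file format, with placeholder values rather than the real Supabase connection details (newer PostgREST releases spell the schema option db-schemas):

```toml
# postgrest.conf: minimal sketch, placeholder values only
db-uri       = "postgres://<user>:<password>@<supabase-db-host>:5432/postgres"
db-schema    = "public"   # schema exposed over the API
db-anon-role = "anon"     # role used for unauthenticated requests
server-port  = 3000       # matches internal_port in the Fly config below
```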
Results
| Instance | vCPU | RAM | Price/mo | Read | Write | Comments |
|---|---|---|---|---|---|---|
| AWS t3a.micro | 2 | 1GB | $6.80 | 303/s | 307/s | |
| Fly micro-1x | shared | 128MB | $2.67 | 171/s | 235/s | Reached request duration threshold |
| Fly micro-2x | shared | 512MB | $8 | 206/s | 277/s | |
Fly config
```toml
app = "postgrest"

[build]
  image = "postgrest/postgrest"

[[services]]
  internal_port = 3000
  protocol = "tcp"

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20

  [[services.ports]]
    handlers = ["http"]
    port = "80"

  [[services.ports]]
    handlers = ["tls", "http"]
    port = "443"

  [[services.tcp_checks]]
    interval = 10000
    timeout = 2000
```
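The config above doesn't show how PostgREST finds the database. A sketch of the general approach on Fly (not necessarily the exact setup used here): put the non-secret PostgREST settings in an [env] section and keep the connection string out of the file via fly secrets. The PGRST_* names are PostgREST's standard environment variables; the values are placeholders.

```toml
# Hypothetical addition to the fly.toml above; values are placeholders.
[env]
  PGRST_DB_SCHEMA = "public"      # PGRST_DB_SCHEMAS on newer PostgREST releases
  PGRST_DB_ANON_ROLE = "anon"

# The connection string carries credentials, so it would be stored as a Fly
# secret rather than committed to the file:
#   fly secrets set PGRST_DB_URI="postgres://<user>:<password>@<supabase-db-host>:5432/postgres"
```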
Read benchmarks

Write benchmarks

Just to make sure I am running with the same amount of data in the benchmark database, and on the same hardware, here is my run for Supabase:
Read

Write

TLDR
Setup:
- AWS: t3a.micro (Kong -> PostgREST) -> database (EC2 t3a.micro)
- Fly: (PostgREST) -> database (EC2 t3a.micro)
Read
- AWS: 303/s
- Fly: 206/s

Write
- AWS: 307/s
- Fly: 277/s
Notes
- Running from my local machine, so network latency is included
- The AWS setup goes through Cloudflare (DNS only), then Kong, then PostgREST
- Strange that reads are slower than writes (?)
- All systems deployed in Singapore
Why is your AWS run so much slower than my results?
I was wondering the same actually. I just ran this code against our Benchmarks project. Is that how you did it?
Benchmarks on Fly micro-1x: not a huge difference in throughput, but some requests did drop out after hitting the duration threshold


I'll add a comparison table to the top comment.