cache-handler
Using cache-handler in production
Hi! I fell in love with Caddy after years of using Nginx Proxy Manager. I mostly use it for reverse proxies, and caching of responses is sometimes a must when serving content that takes some computation to generate but is otherwise the same for every user.
I run Caddy in Docker and build a new image according to the docs to include the caching module. It works with minimal setup from the README, but there is an issue of high RAM consumption and eventually a crash of the Caddy container. I would like to avoid caching in memory, but the documentation on how to use cache providers is somewhat unclear to me. Some issues are mentioned with Badger and it's suggested NutsDB should be used instead, but there are no basic steps to achieve this, so I'd like to ask for some pointers:
- Is this module production ready?
- Is NutsDB/Badger cache provider production ready?
- When using NutsDB/Badger do I have to deploy it or is it included in the module or docker container of Caddy already?
- Are there any other steps I should take to keep the cache on disk only?
Thank you in advance for any help.
Hello @RobsonMi, thank you for your feedback.
First, this module is production ready; some people are already using it in production.
Second, NutsDB is production ready, but Badger is not (because of concurrency issues and bad writes).
Third, all in-memory providers are embedded in the module, so you don't have to deploy anything else. If you want distributed storage, you can use Redis, but you have to deploy either the Redis service or the Redis cluster yourself.
If you want to use the disk only, you can configure NutsDB to do that (in the configuration block, set EntryIdxMode to HintBPTSparseIdxMode). By default it keeps the key indexes in RAM while the values are stored on disk.
You can also have a look at the documentation website (e.g. https://docs.souin.io/docs/middlewares/caddy/) and tell me if it's not clear enough, so I can add more examples or be more explicit about some parts.
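*(Editor's note: a minimal sketch of the disk-only setup described above, assuming the `nuts` provider accepts these configuration keys; a fuller example appears later in this thread.)*

```caddyfile
cache {
	nuts {
		configuration {
			# Store values on disk; keep only key indexes in RAM.
			Dir /tmp/nuts
			EntryIdxMode HintBPTSparseIdxMode
		}
	}
}
```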
@darkweak It might be worth adding some more high level configuration options. For example, I want to set Nuts to in-memory only and reduce the cache size to 128MB.
To do that, I think I need to set EntryIdxMode to "HintKeyValAndRAMIdxMode" and SegmentSize to 134217728, but this isn't clear. Do I have it right? Because when I set this, I still get a 256MB 0.dat file in /tmp/souin-nuts.
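*(Editor's note: a quick sanity check of the byte count used above — 128 MB expressed in bytes is 128 × 1024 × 1024; whether SegmentSize actually caps the `.dat` file size is the open question in this thread.)*

```go
package main

import "fmt"

func main() {
	// SegmentSize is specified in bytes; 128 MB = 128 * 1024 * 1024.
	const segmentSize = 128 * 1024 * 1024
	fmt.Println(segmentSize) // 134217728
}
```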
@darkweak Thank you for the information. So a proper configuration for NutsDB, using disk storage for data and RAM only for the index, would look like this:
```caddyfile
{
	order cache before rewrite
	cache {
		ttl 1200s
		stale 3h
		default_cache_control public, s-maxage=1200
		api {
			debug
			prometheus
			souin
		}
		nuts {
			configuration {
				Dir /tmp/nuts
				EntryIdxMode HintBPTSparseIdxMode
				RWMode 0
				SegmentSize 1024
				NodeNum 1
				SyncEnable true
				StartFileLoadingMode 1
			}
		}
	}
}
```
~~Will this work inside the docker? I suppose I have to create a volume for the /tmp/nuts directory, right?~~
I have figured it out and added a Docker volume for the tmp directory, so the data is stored on the host properly.
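*(Editor's note: for anyone landing here, a compose fragment along these lines persists the NutsDB files; the service and volume names are illustrative, and the image name is a placeholder for your custom Caddy build.)*

```yaml
services:
  caddy:
    image: my-caddy-with-cache-handler  # hypothetical custom image built with the cache module
    volumes:
      - nuts-data:/tmp/nuts             # persist the NutsDB .dat files across restarts

volumes:
  nuts-data:
```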
I confirm what @DzigaV observed that SegmentSize does not seem to affect .dat file size, it's still 256MB.
I am not sure I understand correctly that a Segment is just a chunk, and that if the cache keeps growing it will eventually mean multiple .dat files in /tmp/nuts. Am I correct? Is the only limitation that a single cache item must be under 256MB?
I still have a problem: initially everything works fine, but after some time I can see "Incoming request" in the log and a response is never given. The page loads forever... I've had this before even with in-memory caching. Is my configuration wrong?
EDIT:
I am also observing an interesting issue. When I create a simple server that just returns the current time and use mode bypass_request to ignore no-cache, it works as expected: for the duration of the TTL the time remains the same and the response is served from cache. However, when the same simple server returns a PNG image, Caddy caches it again and again with every request, never serving it from cache. I have no idea what is going on and would like to solve it, as PNG caching is what I need most.
@RobsonMi Thank you for the possibly reproducible scenario. I will investigate it tomorrow.
@darkweak let me know if you need any logs. I'll share what I can.
> By default it keeps the key indexes in RAM while the values are stored on disk.
FYI @darkweak, it looks like you sanitize the options passed to NutsDB, so the only valid value for EntryIdxMode is 1, which sets it to HintKeyAndRAMIdxMode. Otherwise, the default is HintKeyValAndRAMIdxMode.
This is the behaviour I want, and reflects NutsDB's default, so please don't change that 🙂
@darkweak Hi, is there any news? Any way I can help debug?
@RobsonMi I tried to reproduce without any success.
With this Caddyfile:
```caddyfile
{
	order cache before rewrite
	cache {
		ttl 10s
		stale 3h
		default_cache_control public, s-maxage=10
		api {
			debug
			prometheus
			souin
		}
		nuts {
			configuration {
				Dir /tmp/nuts
				EntryIdxMode HintBPTSparseIdxMode
				RWMode 0
				SegmentSize 1024
				NodeNum 1
				SyncEnable true
				StartFileLoadingMode 1
			}
		}
	}
}

localhost

route {
	cache {
		ttl 10s
		mode bypass_request
	}
	reverse_proxy http://127.0.0.1:8080
}
```
And with this "upstream" server that serves a PNG:
```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	http.HandleFunc("/image", handleRequest)
	fmt.Println("Server started at port 8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

func handleRequest(w http.ResponseWriter, r *http.Request) {
	buf, err := os.ReadFile("image.png")
	if err != nil {
		// Report the error instead of killing the whole server.
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Println("Serve from server")
	w.Header().Set("Content-Type", "image/png")
	w.Write(buf)
}
```
@DzigaV I'm writing a patch for the sanitization issue.