logspout
Who's using Logspout?
If your company or project is using Logspout or some variation, please list yourself below!
Yup - we're using it within our start-up. Nice low overhead API. Watching feature additions closely (DNS TTL and health check in particular!)
Yep...we (Signiant) are using it on Amazon ECS (with a small fork change) and all of our internal Swarm nodes. Very cool solution.
We have one small issue with containers getting disconnected but looks like that was just fixed in master so waiting on the next release.
Yup, been using it for about 2 months now; very easy integration with Kibana.
Yep, we've been using it to collect all of our logs at our five-years-in startup and it had been working great until recently; we've noticed that a number of containers seem to arbitrarily stop collecting logs at some point...
@326TimesBetter have you resolved your issue with logs stopping?
Have been using it in production on a fairly large project for close to a year. In one word: indispensable! Using my own fork which fixes duplicate logs on container restart, at least until the PR is merged. Thanks for creating this!
@markine @326TimesBetter I'm also noticing logs stopping being collected
This particular issue probably isn't the place to comment on that. If you're seeing an issue in the latest master/stable release, please open up a different issue.
On the off-chance that you have a patch for issues, I'd be open to reviewing/merging once multiple users also confirm your patch. If you don't, that's cool too, more information is always better.
Thanks for understanding, and hopefully we can square away any issues users have with logspout!
Side-Joke: On the off chance the issue has been fixed for you in master and you're just waiting on a release, I take bribes for pushing the release button :P
@rosskukulinski @326TimesBetter @josegonzalez The fix that has been working for us is here: https://github.com/gliderlabs/logspout/pull/204
At Blendle, we're using Logspout to route all our Kubernetes-hosted application logs to Papertrail. Works brilliantly, especially with fun little "hacks" like this:
SYSLOG_HOSTNAME='{{ range $i, $e := .Container.Config.Env }}{{if gt (len $e) 9}}{{if and (and (and (and (and (and (and (eq (index $e 0) 65) (eq (index $e 1) 80)) (eq (index $e 2) 80)) (eq (index $e 3) 95)) (eq (index $e 4) 78)) (eq (index $e 5) 65)) (eq (index $e 6) 77)) (eq (index $e 7) 69)}}{{$e}}{{end}}{{end}}{{end}}{{ range $i, $e := .Container.Config.Env }}{{if gt (len $e) 14}}{{if and (and (and (and (and (and (and (and (and (and (and (and (eq (index $e 0) 65) (eq (index $e 1) 80)) (eq (index $e 2) 80)) (eq (index $e 3) 95)) (eq (index $e 4) 67)) (eq (index $e 5) 79)) (eq (index $e 6) 77)) (eq (index $e 7) 80)) (eq (index $e 8) 79)) (eq (index $e 9) 78)) (eq (index $e 10) 69)) (eq (index $e 11) 78)) (eq (index $e 12) 84)}}-{{$e}}{{end}}{{end}}{{end}}'
SYSLOG_TAG='{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}'
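To make the simpler SYSLOG_TAG template above concrete: logspout-style templates are ordinary Go text/template strings evaluated against the container's metadata. The sketch below is a minimal stand-in that mirrors the .Container.Config.Labels path used in the thread; the struct layout is an assumption for illustration, not logspout's actual types.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderTag evaluates a logspout-style template against a fake container
// whose only populated field is the Kubernetes label map. The nested
// anonymous struct imitates the .Container.Config.Labels access path.
func renderTag(tmpl string, labels map[string]string) (string, error) {
	data := struct {
		Container struct {
			Config struct{ Labels map[string]string }
		}
	}{}
	data.Container.Config.Labels = labels

	t, err := template.New("tag").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Same template as the SYSLOG_TAG setting quoted above, minus the
	// container-name suffix.
	tag, err := renderTag(
		`{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}`,
		map[string]string{"io.kubernetes.pod.namespace": "production"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(tag) // production
}
```

The `index` builtin is what does the work here: it looks a key up in the label map, which is why the simple case (labels) is so much shorter than the environment-variable case discussed below.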
Yip, using it with Docker Cloud and Loggly.
Yup we are using it here at XM, in conjunction with Logentries.
@JeanMertz I am curious about that monstrous template for syslog_hostname, what does it do? :P
Haha @itajaja, "monstrous" indeed.
Basically, we set the environment variables APP_NAME and APP_COMPONENT for all our services on Kubernetes. Unfortunately, Logspout has no way (yet) to get the value of a specific environment variable. What you do get is an array of KEY=VALUE strings. So our developer @koenbollen came up with an ingenious solution that works around this issue and makes use of Golang's limited templating language:
Traverse the array, find a string that has the right minimum length, and check each character of the string to verify that it actually reads APP_NAME=... or APP_COMPONENT=..., then use those two to generate the hostname to be sent to our logging service (which means the hostname ends up as APP_NAME=hello-APP_COMPONENT=world).
It gets the job done, and that's what matters in the end 😉
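The workaround described above can be run on its own. The sketch below uses the APP_NAME half of the SYSLOG_HOSTNAME template quoted earlier, fed a plain `.Env` slice instead of logspout's real `.Container.Config.Env`: the byte comparisons 65, 80, 80, 95, 78, 65, 77, 69 are the ASCII codes for A, P, P, _, N, A, M, E.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The first half of the SYSLOG_HOSTNAME template from the thread: range
// over the env strings, check the length, then compare the first eight
// bytes against the ASCII codes of "APP_NAME" one at a time.
const appNameTmpl = `{{ range $i, $e := .Env }}{{if gt (len $e) 9}}` +
	`{{if and (and (and (and (and (and (and ` +
	`(eq (index $e 0) 65) (eq (index $e 1) 80)) ` +
	`(eq (index $e 2) 80)) (eq (index $e 3) 95)) ` +
	`(eq (index $e 4) 78)) (eq (index $e 5) 65)) ` +
	`(eq (index $e 6) 77)) (eq (index $e 7) 69)}}` +
	`{{$e}}{{end}}{{end}}{{end}}`

// matchAppName executes the template against a sample environment and
// returns whatever APP_NAME=... entry it finds ("" if none).
func matchAppName(env []string) string {
	t := template.Must(template.New("hostname").Parse(appNameTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, struct{ Env []string }{env}); err != nil {
		return ""
	}
	return buf.String()
}

func main() {
	fmt.Println(matchAppName([]string{"PATH=/usr/bin", "APP_NAME=hello", "HOME=/root"}))
	// prints: APP_NAME=hello
}
```

It works because `index` on a string yields a byte, and the template `eq` builtin can compare that byte against an integer literal; `gt (len $e) 9` guards the indexing so short entries never cause an out-of-range error.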
@JeanMertz @koenbollen:

Using logspout at GS Shop in our Mesos infrastructure. Logs are shipped to on-prem Logstash and Elasticsearch cluster.
Using it at healfies.com. GKE environment, sending logs to Papertrail.
@ardigo I'm sometimes having problems with logspout not working anymore. My use case is the same as yours. I use the following DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: logspout
  labels:
    tier: monitoring
    app: logspout
    version: v1
spec:
  template:
    metadata:
      labels:
        name: logspout
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - resources:
          requests:
            cpu: 0.2
            memory: 200Mi
        securityContext:
          privileged: true
        env:
        - name: SYSLOG_TAG
          value: '{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}[{{ index .Container.Config.Labels "io.kubernetes.container.name" }}]'
        - name: SYSLOG_HOSTNAME
          value: '{{ index .Container.Config.Labels "io.kubernetes.pod.name" }}-{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}'
        - name: ROUTE_URIS
          value: syslog+tls://logs5.papertrailapp.com:xxxxxx
        image: gliderlabs/logspout
        name: logspout
        volumeMounts:
        - name: log
          mountPath: /var/run/docker.sock
      volumes:
      - name: log
        hostPath:
          path: /var/run/docker.sock
After about 10 hours the containers stop sending their logs to papertrail but the pods are still running :(
@thecodeassassin I'm experiencing the same issue here, but it seems to be intermittent. Sometimes the pods stop reporting to Papertrail many times a day; sometimes we run for weeks with no sign of problems. In Kubernetes, the pods report as healthy and running, so I need to restart them manually.
The logspout image hasn't been updated for a long time, so this doesn't seem to be update-related.
My DaemonSet is practically the same, except I'm not limiting memory and cpu is set to `0.15`.
Any ideas on how to debug are highly appreciated.
@ardigo @thecodeassassin As I mentioned above:
This particular issue probably isn't the place to comment on that. If you're seeing an issue in the latest master/stable release, please open up a different issue.
On the off-chance that you have a patch for issues, I'd be open to reviewing/merging once multiple users also confirm your patch. If you don't, that's cool too, more information is always better.
Thanks for understanding, and hopefully we can square away any issues users have with logspout!
Side-Joke: On the off chance the issue has been fixed for you in master and you're just waiting on a release, I take bribes for pushing the release button :P
@thecodeassassin I came across this while browsing the repo:
https://github.com/gliderlabs/logspout/pull/204 https://github.com/gliderlabs/logspout/pull/204/commits/c38b7f7ef02e87dc1ecddb596b0449b8019c70e6
Will give it a try.
@ardigo @josegonzalez my apologies. I opened a separate ticket: https://github.com/gliderlabs/logspout/issues/298
We are using logspout at Options Cafe. Love it. Thanks!!!