logzio-es2graphite
Getting InvalidURL exception
I'm trying to run this in Kubernetes and getting an InvalidURL exception.
I started with this configuration:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: es2graphite
  labels:
    component: elasticsearch
    role: graphiteexporter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: graphiteexporter
    spec:
      containers:
      - name: es2graphite
        securityContext:
        image: logzio/es2graphite
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "4"
            memory: 4Gi
          requests:
            cpu: "100m"
            memory: 20Mi
        env:
        - name: "ELASTICSEARCH_ADDR"
          value: "elasticsearch.infrastructure.svc.xxx.xxx.com"
        - name: "GRAPHITE"
          value: "xxx.xxx.com"
        - name: "GRAPHITE_PORT"
          value: "2003"
And I get this error:
Traceback (most recent call last):
  File "/root/go.py", line 79, in <module>
    auth=(elasticsearch_user_name, elasticsearch_password)).json()
  File "/usr/lib/python2.7/site-packages/requests/api.py", line 65, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python2.7/site-packages/requests/api.py", line 49, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 447, in request
    prep = self.prepare_request(req)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 378, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/usr/lib/python2.7/site-packages/requests/models.py", line 303, in prepare
    self.prepare_url(url, params)
  File "/usr/lib/python2.7/site-packages/requests/models.py", line 356, in prepare_url
    raise InvalidURL(*e.args)
requests.exceptions.InvalidURL: Failed to parse: elasticsearch.infrastructure.svc.xxx.xxx.com:tcp:
The Elasticsearch URL is valid. From a container running in Kubernetes, I can curl the Elasticsearch cluster:
$ curl -v http://elasticsearch.infrastructure.svc.xxx.xxx.com:9200
* Rebuilt URL to: http://elasticsearch.infrastructure.svc.xxx.xxx.com:9200/
* Trying 172.31.253.218...
* Connected to elasticsearch.infrastructure.svc.xxx.xxx.com (172.31.253.218) port 9200 (#0)
> GET / HTTP/1.1
> Host: elasticsearch.infrastructure.svc.xxx.xxx.com:9200
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 354
<
From the stacktrace, this is the URL it is trying to use: elasticsearch.infrastructure.svc.xxx.xxx.com:tcp:
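For reference, the failure can be reproduced with plain requests, independent of es2graphite: once a Kubernetes-injected value like tcp://172.31.253.218:9200 lands in the port position of the URL, urllib3 cannot parse the non-numeric port and requests raises InvalidURL before anything is sent over the network. This is only a sketch; the exact way go.py assembles the URL is an assumption here.

```python
import requests

# Hypothetical values standing in for what Kubernetes injects; how go.py
# actually builds the URL is assumed, not confirmed.
elasticsearch_addr = "elasticsearch.infrastructure.svc.xxx.xxx.com"
elasticsearch_port = "tcp://172.31.253.218:9200"  # injected by Kubernetes instead of "9200"

url = "http://{0}:{1}".format(elasticsearch_addr, elasticsearch_port)

try:
    requests.get(url, timeout=5)
except requests.exceptions.InvalidURL as e:
    # URL preparation fails on the non-numeric "port" before any request is made.
    print("InvalidURL: %s" % e)
```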
Can you please try to run the container without Kubernetes, just so we can rule it out?
I forked this and added some debugging code. The problem is that Kubernetes provides service discovery in two ways. The first (and preferred) is an internal DNS service. The second is that it injects environment variables into every running container, one of which happens to be
ELASTICSEARCH_PORT=tcp://172.31.253.218:9200
I can certainly hack this on my fork to make it work, but I'd prefer to do something that you are willing to merge upstream. Any suggestions on how you would like to see this fixed? The easiest and best way, IMO, would be to prefix all of the expected environment variables with es2graphite (see the sketch below).
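A minimal sketch of that prefixing idea (the ES2GRAPHITE_-prefixed names and the env helper are hypothetical, not part of the current code):

```python
import os

def env(name, default=None):
    """Prefer ES2GRAPHITE_<NAME>, falling back to the legacy unprefixed name."""
    return os.environ.get("ES2GRAPHITE_" + name, os.environ.get(name, default))

# With the prefix, Kubernetes' injected ELASTICSEARCH_PORT (tcp://...) no longer
# shadows the exporter's own setting unless no prefixed variable is provided.
elasticsearch_addr = env("ELASTICSEARCH_ADDR", "localhost")
elasticsearch_port = env("ELASTICSEARCH_PORT", "9200")
graphite_host = env("GRAPHITE")
graphite_port = int(env("GRAPHITE_PORT", "2003"))
```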
Answered in #6