Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Helm: 2.11.0
Kubernetes: 1.11.2
Environment: Azure AKS
Which chart: stable/grafana
What happened:
The grafana pod's grafana-sc-datasources container has restarted more than 20 times in the 10 hours since I installed the chart.
This is the error in the logs:
Working on configmap <one of my project configmap>
Working on configmap <one of my project configmap>
Working on configmap <one of my project configmap>
Working on configmap <one of my project configmap>
Working on configmap monitoring-prometheus-oper-grafana-datasource
Configmap with label found
File in configmap datasource.yaml ADDED
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 543, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 302, in _error_catcher
yield
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 598, in read_chunked
self._update_chunk_length()
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 547, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/sidecar.py", line 58, in <module>
main()
File "/app/sidecar.py", line 54, in main
watchForChanges(label, targetFolder)
File "/app/sidecar.py", line 23, in watchForChanges
for event in w.stream(v1.list_config_map_for_all_namespaces):
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 124, in stream
for line in iter_resp_lines(resp):
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 45, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 626, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 320, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
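From the traceback, the sidecar's watch loop presumably looks roughly like the following (a minimal reconstruction for context, not the actual /app/sidecar.py; everything beyond the names visible in the traceback is assumed). The ProtocolError raised while reading the chunked watch stream is never caught, so the process exits and the container restarts:
```python
# Minimal reconstruction of the sidecar's watch loop (assumed, for context only;
# not the real /app/sidecar.py). The ProtocolError raised while the chunked watch
# response is being read is never caught, so the process exits with a traceback
# and Kubernetes restarts the container.
import os

from kubernetes import client, config, watch


def watch_for_changes(label, target_folder):
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Watching ConfigMaps across *all* namespaces, as the traceback shows.
    for event in w.stream(v1.list_config_map_for_all_namespaces):
        cm = event["object"]
        print(f"Working on configmap {cm.metadata.name}")
        if cm.metadata.labels and label in cm.metadata.labels:
            print("Configmap with label found")
            for filename, content in (cm.data or {}).items():
                print(f"File in configmap {filename} {event['type']}")
                with open(os.path.join(target_folder, filename), "w") as f:
                    f.write(content)


if __name__ == "__main__":
    config.load_incluster_config()
    watch_for_changes(os.environ["LABEL"], os.environ["FOLDER"])
```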
When I look into the configmap monitoring-prometheus-oper-grafana-datasource, I see this content:
Name: monitoring-prometheus-oper-grafana-datasource
Namespace: monitoring
Labels: app=prometheus-operator-grafana
chart=prometheus-operator-0.1.16
grafana_datasource=1
heritage=Tiller
release=monitoring
Annotations: <none>
Data
====
datasource.yaml:
----
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://monitoring-prometheus-oper-prometheus:9090/
access: proxy
isDefault: true
And a curl to this URL returns:
root@my-shell-796b6f7d5b-qbvbz:/# curl 'http://monitoring-prometheus-oper-prometheus:9090/'
<a href="/graph">Found</a>.
root@my-shell-796b6f7d5b-qbvbz:/#
What you expected to happen:
The container should not error constantly.
How to reproduce it (as minimally and precisely as possible):
I assume a standard installation of the chart would reproduce this error.
Note: in my case, I installed the grafana chart through the stable/prometheus-operator chart.
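For reference, this is roughly how I installed it (Helm 2 syntax; the release and namespace names match the labels shown above, though my exact flags and custom values may have differed):
```shell
# Roughly how the chart was installed (Helm 2 syntax; release/namespace names match
# the labels shown above, but the exact flags and values used may have differed).
helm install stable/prometheus-operator --name monitoring --namespace monitoring
```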
It seems that the container grafana-sc-dashboard in the pod grafana has the same type of issue:
Working on configmap monitoring-prometheus-oper-grafana-dashboard-k8s-resources-pod
Configmap with label found
File in configmap grafana-dashboard-k8s-resources-pod.json ADDED
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 543, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 302, in _error_catcher
yield
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 598, in read_chunked
self._update_chunk_length()
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 547, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/sidecar.py", line 58, in <module>
main()
File "/app/sidecar.py", line 54, in main
watchForChanges(label, targetFolder)
File "/app/sidecar.py", line 23, in watchForChanges
for event in w.stream(v1.list_config_map_for_all_namespaces):
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 124, in stream
for line in iter_resp_lines(resp):
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 45, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 626, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 320, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
However, this time the configmap monitoring-prometheus-oper-grafana-dashboard-k8s-resources-pod contains something different: a Grafana dashboard.
@jbouzekri The Grafana used by this chart is its own separate chart pulled in as a dependency. I suggest you update the issue so that the maintainers of stable/grafana can find it more easily; they are more likely to understand the behaviour you are seeing.
There is a potential solution you may want to try here: https://github.com/kiwigrid/k8s-sidecar/pull/6#issuecomment-437158647
@vsliouniaev: thanks, issue updated.
@jbouzekri We have seen this issue as well. It seems the current Python code has a problem: it collects ConfigMaps from all existing namespaces. We fixed it by switching to the latest image version for the dashboard sidecar, kiwigrid/k8s-sidecar:0.0.6.
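To illustrate the difference with the kubernetes Python client (just a sketch, not the sidecar's actual code), the newer images can scope the watch to a single namespace instead of streaming ConfigMaps from every namespace:
```python
# Sketch only (not the sidecar's actual code): scoping the ConfigMap watch to a
# single namespace instead of streaming ConfigMaps from every namespace.
from kubernetes import client, config, watch

config.load_incluster_config()
v1 = client.CoreV1Api()
w = watch.Watch()

# Old behaviour seen in the first tracebacks (all namespaces):
#   for event in w.stream(v1.list_config_map_for_all_namespaces): ...

# Namespace-scoped watch, as used by the newer sidecar images:
for event in w.stream(v1.list_namespaced_config_map, namespace="monitoring"):
    print(event["type"], event["object"].metadata.name)
```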
Hi, I'm still having this issue with k8s-sidecar:0.0.6.
Helm Chart Version: stable/grafana 1.19.0
Grafana appVersion: 5.3.4
SideCar: kiwigrid/k8s-sidecar:0.0.6
grafana-sc-datasources logs
Working on configmap monitoring/prometheus-operator-grafana-dashboard-k8s-resources-cluster
Working on configmap monitoring/prometheus-prometheus-operator-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-operator-grafana-dashboard-k8s-resources-pod
Working on configmap monitoring/prometheus-operator-grafana-datasource
Configmap with label found
File in configmap datasource.yaml ADDED
Traceback (most recent call last):
File "/app/sidecar.py", line 99, in <module>
main()
File "/app/sidecar.py", line 95, in main
watchForChanges(label, targetFolder, url, method, payload, namespace)
File "/app/sidecar.py", line 53, in watchForChanges
for event in stream:
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 122, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11650, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11753, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Fri, 07 Dec 2018 06:35:39 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
grafana-sc-dashboard log
Working on configmap monitoring/prometheus-operator-grafana-coredns-k8s
Configmap with label found
File in configmap grafana-coredns-k8s.json ADDED
Working on configmap monitoring/prometheus-operator-grafana-config-dashboards
Working on configmap monitoring/prometheus-operator-grafana-dashboard-nodes
Configmap with label found
File in configmap grafana-dashboard-nodes.json ADDED
Traceback (most recent call last):
File "/app/sidecar.py", line 99, in <module>
main()
File "/app/sidecar.py", line 95, in main
watchForChanges(label, targetFolder, url, method, payload, namespace)
File "/app/sidecar.py", line 53, in watchForChanges
for event in stream:
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 122, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11650, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11753, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Fri, 07 Dec 2018 06:47:51 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
Grafana keeps restarting; is there anything I can do to fix this?
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-operator-alertmanager-0 2/2 Running 0 45m
prometheus-operator-grafana-849685dbd7-zq7mg 3/3 Running 12 28m
Hello, we also experience the same issue on the latest chart version.
prometheus-operator-0.1.29
chart: grafana-1.19.0
release: prometheus
image: kiwigrid/k8s-sidecar:0.0.6
Grafana keeps crashing:
Working on configmap monitoring/prometheus-prometheus-prometheus-oper-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-prometheus-prometheus-oper-prometheus-rulefiles-0
Traceback (most recent call last):
File "/app/sidecar.py", line 99, in <module>
main()
File "/app/sidecar.py", line 95, in main
watchForChanges(label, targetFolder, url, method, payload, namespace)
File "/app/sidecar.py", line 53, in watchForChanges
for event in stream:
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 122, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11650, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11753, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Sun, 09 Dec 2018 20:58:09 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
Shall we reopen the ticket?
@jbouzekri, please reopen this ticket ^^^
@MIllgner: I don't have permission to do it, so I will just see if the bot reopens it when I comment ;)
Since I've got Grafana installed as a dependency of prometheus-operator, the sidecar didn't have the "NAMESPACE" env var set to force /app/sidecar.py to look for ConfigMaps in a specific namespace only.
But I'm pretty sure that's not the root cause. It looks like strconv.ParseUint is unable to parse the ConfigMap due to invalid characters in it.
UPD: setting it didn't help. Same problem:
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Mon, 10 Dec 2018 10:22:35 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
@savealive I get the same output in the logs. Grafana sidecars constantly restart and trigger Alertmanager.
Exactly the same as mine. Hope it gets fixed.
@jbouzekri do you know how to reopen this issue?
@MIllgner: no. When I commented back, I thought it would reopen it, but it didn't. I think you can open a new issue with a reference to this one.
Same here; Grafana restarts occasionally with the error message above in the grafana-sc-datasources container.
Same here, this issue is still ongoing for me. The new sidecar doesn't handle 500 errors from the API server, and restarts occur.
Traceback (most recent call last):
File "/app/sidecar.py", line 99, in <module>
main()
File "/app/sidecar.py", line 95, in main
watchForChanges(label, targetFolder, url, method, payload, namespace)
File "/app/sidecar.py", line 53, in watchForChanges
for event in stream:
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 122, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11650, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11753, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Audit-Id': '3cb62b1c-3b60-4f15-bab8-3b0233346fb0', 'Content-Type': 'application/json', 'Date': 'Mon, 28 Jan 2019 16:20:44 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
Same error for me too, using the latest Helm chart with sidecar v0.0.6:
Traceback (most recent call last):
File "/app/sidecar.py", line 99, in <module>
main()
File "/app/sidecar.py", line 95, in main
watchForChanges(label, targetFolder, url, method, payload, namespace)
File "/app/sidecar.py", line 53, in watchForChanges
for event in stream:
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 122, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11650, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11753, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Audit-Id': '67e02e13-ce7a-4890-9512-0a974ae3cee6', 'Content-Type': 'application/json', 'Date': 'Tue, 12 Feb 2019 14:25:57 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
Same issue here with sidecar v0.0.6. We have multiple crashes a day. Any workaround?
Traceback (most recent call last):
File "/app/sidecar.py", line 99, in
main()
File "/app/sidecar.py", line 95, in main
watchForChanges(label, targetFolder, url, method, payload, namespace)
File "/app/sidecar.py", line 53, in watchForChanges
for event in stream:
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 122, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11650, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11753, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Audit-Id': '1cf66c77-bf9b-4fd1-9f25-b04e7770c5ea', 'Content-Type': 'application/json', 'Date': 'Thu, 14 Feb 2019 06:15:09 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \"None\": strconv.ParseUint: parsing \"None\": invalid syntax","code":500}n'
Had the same issue: updating the sidecar version to v0.0.13 seems to have fixed it.
File in configmap nodes.json ADDED
Working on configmap monitoring/prometheus-prometheus-oper-k8s-cluster-rsrc-use
Configmap with label found
File in configmap k8s-cluster-rsrc-use.json ADDED
Traceback (most recent call last):
File "/app/sidecar.py", line 147, in <module>
main()
File "/app/sidecar.py", line 143, in main
watchForChanges(label, targetFolder, url, method, payload, namespace)
File "/app/sidecar.py", line 84, in watchForChanges
for event in stream:
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 128, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11854, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11957, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Thu, 28 Mar 2019 16:39:23 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
We are still having the same issue on v0.0.18. Please look into it.
```kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'caaa9879-0ee0-4a07-ae08-a54a29d2be8c', 'Content-Type': 'application/json', 'Date': 'Thu, 08 Aug 2019 19:57:26 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \"None\": strconv.ParseUint: parsing \"None\": invalid syntax","code":500}n'
Process for configmap died. Stopping and exiting
Traceback (most recent call last):
File "/app/sidecar.py", line 54, in
main()
File "/app/sidecar.py", line 50, in main
payload, namespace, folderAnnotation, resources)
File "/app/resources.py", line 163, in watchForChanges
raise Exception("Loop died")
Exception: Loop died
```
I observe the same issue on the latest Grafana chart.
August 9th 2019, 07:31:01.857 | Process Process-2:
August 9th 2019, 07:31:01.858 | Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/app/resources.py", line 120, in _watch_resource_loop
_watch_resource_iterator(*args)
File "/app/resources.py", line 82, in _watch_resource_iterator
for event in stream:
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 128, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11854, in list_namespaced_config_map
(data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11957, in list_namespaced_config_map_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
August 9th 2019, 07:31:01.859 | Reason: Internal Server Error
August 9th 2019, 07:31:01.859 | HTTP response headers: HTTPHeaderDict({'Audit-Id': '230466cc-1555-4f30-849b-80e16e42965c', 'Content-Type': 'application/json', 'Date': 'Fri, 09 Aug 2019 04:31:01 GMT', 'Content-Length': '186'})
August 9th 2019, 07:31:01.860 | HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
August 9th 2019, 07:31:02.857 | Process for configmap died. Stopping and exiting
August 9th 2019, 07:31:02.857 | Traceback (most recent call last):
File "/app/sidecar.py", line 54, in <module>
main()
File "/app/sidecar.py", line 50, in main
payload, namespace, folderAnnotation, resources)
File "/app/resources.py", line 163, in watchForChanges
raise Exception("Loop died")
Exception: Loop died
On the container:
Containers:
grafana-sc-dashboard:
Container ID: docker://bc94b6f018aa6e956e2ec7091a1443496c53310559e3f6a04079b2d6fd6190b4
Image: kiwigrid/k8s-sidecar:0.0.18
Image ID: docker-pullable://kiwigrid/k8s-sidecar@sha256:6eb52513d59efcbbb37999f494bd0d647571c76362166d8bb9081f3a067c09e8
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 09 Aug 2019 10:20:02 +0300
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 09 Aug 2019 09:40:58 +0300
Finished: Fri, 09 Aug 2019 10:20:01 +0300
Ready: True
Restart Count: 94
Environment:
LABEL: grafana_dashboard
FOLDER: /tmp/dashboards
RESOURCE: both
Mounts:
/tmp/dashboards from sc-dashboard-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from monitoring-grafana-token-2xzbb (ro)
I believe the error comes from strconv.ParseUint failing to parse "None":
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'
I see exactly the same issue on the grafana pod (container grafana-sc-datasources),
helm chart prometheus-operator-6.6.1:
Traceback (most recent call last):
File "/app/sidecar.py", line 54, in <module>
main()
File "/app/sidecar.py", line 50, in main
payload, namespace, folderAnnotation, resources)
File "/app/resources.py", line 155, in watchForChanges
raise Exception("Loop died")
Exception: Loop died
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/app/resources.py", line 120, in _watch_resource_loop
_watch_resource_iterator(*args)
File "/app/resources.py", line 82, in _watch_resource_iterator
for event in stream:
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 128, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 13042, in list_namespaced_secret
(data) = self.list_namespaced_secret_with_http_info(namespace, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 13145, in list_namespaced_secret_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
@mjagielka try version 6.7.1
Thanks @dmyerscough. It's working fine on 6.7.2! The problem has been fixed.
I had the same problem because I forgot to update the k8s-sidecar image version to k8s-sidecar:0.1.20 in my Prometheus Operator YAML file. Don't forget to keep a compatible version of your sidecar image if you upgrade your Grafana Docker image from a non-default Docker registry repository.
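For example, when Grafana comes in through stable/prometheus-operator, the sidecar tag can be pinned via values (a sketch; the exact key layout depends on your chart versions, so check the chart's values.yaml):
```yaml
# Sketch: pinning a compatible sidecar image via prometheus-operator values.
# The exact key layout varies between chart versions; check your chart's values.yaml.
grafana:
  sidecar:
    image: kiwigrid/k8s-sidecar:0.1.20
```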