What change do you think needs making?
Please give examples of your use case, e.g. when would you use this.
How do you think this should be implemented?
Message from the maintainers:
If you wish to see this enhancement implemented please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.
The annotations could be built based on the metrics configuration in the configmap
We currently add scrape annotations on the yaml for the pod template (in our Argo helm chart).
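For context, scrape annotations on a pod template typically look like the following. This is a minimal sketch; the port and path values here are assumptions and would need to match the metrics configuration in the configmap:

```yaml
# Hypothetical excerpt from the controller Deployment's pod template.
# The port and path values are illustrative and must match the metrics config.
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/metrics"
```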
Once we move to Prometheus operator we plan to discontinue using the annotation and instead use the servicemonitor crd. Though at that point having the annotation shouldn't hurt.
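Under the Prometheus Operator, the equivalent ServiceMonitor might be sketched as below. The names, labels, and port are illustrative assumptions, not the chart's actual values:

```yaml
# Hypothetical ServiceMonitor for the Prometheus Operator;
# selector labels and endpoint port are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: controller-metrics
spec:
  selector:
    matchLabels:
      app: controller
  endpoints:
    - port: metrics
      path: /metrics
```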
I'm hesitant to add these annotations by default: they seem like something that should be managed by the user, since they may be user-specific. Furthermore, it seems like an anti-pattern for a Pod (i.e., the controller) to manage the annotations of its own definition (i.e., the controller's Deployment). I'm unsure how K8s would handle this. Let me know if I'm wrong about any of this.
Ok. I added the annotations to my pod in the manifest as well and that works alright.
I felt that the annotations should match the metrics configuration in the configMap and saw an opportunity to manage them by default (add them when metrics are enabled, remove them when metrics are disabled), but I'll leave it to your better judgment.
Please feel free to close the issue
Thanks @sfc-gh-pkrishnamurthy