At present, the logs viewer only supports pulling data from the ES cluster that Kibana itself is configured to use.
To facilitate the use of the logs viewer by other applications, such as Stack Monitoring, we need to give the logs viewer the ability to pull data from other sources — such as a dedicated monitoring cluster.
For example, Kibana could be pulling monitoring data from a different cluster by virtue of having xpack.monitoring.elasticsearch.hosts configured. Since the Stack Monitoring application will be linking out to the logs viewer, we need to either tell it where to look or tell it to respect the value of this configuration setting.
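For illustration, a deployment like that might carry something along these lines in kibana.yml (a sketch; the hostnames are placeholders):

```yaml
# kibana.yml (sketch; hostnames are placeholders)
# Kibana's own queries go to the production cluster...
elasticsearch.hosts: ["https://production-es.example.com:9200"]
# ...while Stack Monitoring reads from a dedicated monitoring cluster.
xpack.monitoring.elasticsearch.hosts: ["https://monitoring-es.example.com:9200"]
```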
This can be a very simple solution. We don't need anything particularly elegant right now. In fact, just being able to pass monitoring=true in the request to the log viewer and have it use the monitoring cluster for the data would be fine for our needs.
cc @elastic/infrastructure-ui
@chrisronline I think we are addressing this issue in https://github.com/elastic/kibana/issues/30792
Do you see anything we are missing there?
Thanks
@alvarolobato Yes that looks accurate. Thanks!
Thanks, @alvarolobato. Do you have any sense of whether or not the team will be able to address that issue this week? It's our final blocker for a major feature on the Stack Monitoring side of things. Cheers!
This is scheduled for 7.1 for us. Right now we are testing and fixing 7.0-related issues. Your project board suggests this is 7.1 for you as well?
@weltenwort That's correct. We're just trying to estimate when we might be able to move forward on our side from a development perspective, though, which is why we're hoping for a sense of when you might be able to address this. Thanks!
Friendly bump, @weltenwort. Do you have an estimate of when you might be able to start on this? Will it be after the 7.0 release? We're just trying to sort out our own resource scheduling so any estimation you can provide would be helpful. Thanks!
I must have missed your previous mention, sorry. It is on our 7.1 roadmap, but not at the head of the queue. If forced to give an estimate, I would say it might be four weeks until I actively start working on #30792. I hope to prepare some of the technical groundwork in the meantime.
I've been talking to @kobelb about how adding the ability for a log source to refer to the monitoring cluster would interact with authentication and the upcoming feature controls. The implications did not sound great: it might introduce significant complexity into the implementation, which would have to ensure that the security guarantees provided by the feature controls are not broken. Since complexity and security don't mesh well, I was wondering if we could instead access the monitoring cluster via CCS and thereby avoid any special treatment of the monitoring cluster connection. Any thoughts on that?
Hmm.
My main concern with this is configuration. We already have a config setting in place that lets users point their Kibana instance at a dedicated monitoring cluster, but I don't think it's a common configuration for a production cluster to have CCS set up against its dedicated monitoring cluster. Assuming that's true, we'd require users to perform some CCS setup in order for the Logs UI to show the appropriate logs. (This also assumes their Kibana instance isn't talking directly to their dedicated monitoring cluster, which is probably somewhat common.)
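(For concreteness, the kind of CCS setup this would require is roughly the sketch below: register the monitoring cluster as a remote on the production cluster, then query its indices with a prefixed pattern. The `monitoring` alias and hostname are just placeholders.)

```yaml
# elasticsearch.yml on the production cluster (sketch; alias and host are placeholders)
cluster:
  remote:
    monitoring:
      seeds:
        - monitoring-es.example.com:9300
# With this in place, indices on the monitoring cluster can be queried from the
# production cluster as e.g. monitoring:filebeat-* via cross-cluster search.
```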
I'd like to hear what the other @elastic/stack-monitoring folks think too.
For my own education, can you elaborate on the newly raised concerns? I'm assuming it's related to the upcoming ability to toggle access to a plugin (infra ui in this case) and how kibana can't control if users have access to certain monitoring indices?
@chrisronline having Kibana directly hit the remote monitoring cluster requires that the Native realm be used and that the user's passwords be synchronized across both clusters. It doesn't work with SAML, nor with the token auth provider which we've implemented, and it won't work with the other auth providers which we're implementing.
Using CCS to access the remote monitoring cluster lets us get past this limitation.
Regarding CCS: one of the reasons we recommend that users set up a dedicated/remote monitoring cluster is so they can access that data without burdening the production cluster. Requiring the user to access data on the monitoring cluster via the production cluster using CCS defeats this purpose to some extent. Imagine a scenario in which the user's production cluster is on fire for some reason. The user would want to look at the monitoring data for this cluster to investigate further, and routing those queries through the production cluster at that point would just add to its workload.
Regarding auth: agreed on the SAML and other non-Native realm problem here. However, just want to point out that this problem isn't specific to the log viewer's connection to a remote monitoring cluster; it's a problem we're going to need to solve for the current Stack Monitoring UI in Kibana as well.
Perhaps one solution could be to recommend that users run a dedicated Kibana instance for the dedicated/remote monitoring cluster? In concrete terms, we would recommend that users set up a dedicated Kibana instance for the Stack Monitoring and (Stack) Logs UIs, and point its elasticsearch.* settings to the dedicated monitoring cluster. Perhaps in the long term we could even deprecate and then remove the xpack.monitoring.elasticsearch.* settings altogether, and recommend the dedicated-Kibana-instance-for-dedicated-monitoring-cluster architecture instead?
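Under that recommendation, the monitoring-only Kibana instance would be configured roughly like this (a sketch; the hostname is a placeholder):

```yaml
# kibana.yml for the dedicated monitoring Kibana instance (sketch; hostname is a placeholder)
elasticsearch.hosts: ["https://monitoring-es.example.com:9200"]
# No xpack.monitoring.elasticsearch.* override would be needed here: Stack Monitoring
# and the Logs UI would read from the same cluster Kibana itself is connected to.
```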
I do agree that this problem isn't isolated to the Logs UI; it's also a problem with the existing Stack Monitoring UI. It just came up during a discussion with Felix.
Does viewing monitoring data place that large of a burden on the coordinating node that CCS isn't an option here? This was something that @pickypg suggested using during an informal discussion in Dublin. There are additional high-effort solutions we could consider if CCS isn't something we want to use and we'd like for the user to use the same instance of Kibana to access the remote cluster.
> Does viewing monitoring data place that large of a burden on the coordinating node that CCS isn't an option here?
Sorry, to be clear, I think CCS could work. I'm just not sure it's ideal, especially in an adverse scenario like the one I described in my comment. I have no idea how much load it actually adds, but I have to imagine it's non-zero.
There's also the additional configuration we'd have to ask users to do to set up CCS (as @chrisronline alluded to as well), but that's probably six of one, half a dozen of the other compared with asking them to set up a dedicated Kibana instance. Personally, I like the dedicated Kibana instance because it completely isolates the monitoring stack. And from a development point of view, it's obviously not high effort at all (all the pieces are already there).
We had some offline discussions about this and will be proposing a larger discussion that will affect this work, but for now, feel free to pause any effort on this investigation @weltenwort
> We had some offline discussions about this and will be proposing a larger discussion that will affect this work...
That larger discussion is happening here: https://github.com/elastic/stack-monitoring/issues/37.
Hey folks.
We're having a discussion around the merits of recommending an isolated Kibana connected to a dedicated monitoring cluster, and while that's related to this particular issue, we've decided to test out whether using CCS will solve this problem for us.
I'm currently doing some research and testing around this and will update the issue once I know whether or not it will work.
Assuming it does work, we will move forward with this approach (for now at least), as we feel confident we can build a UI/UX to assist the user in setting up the necessary CCS configuration (and possibly just do it programmatically).
@weltenwort Is there work necessary on your end to support CCS in the log ui?
yes, but it would be quite a bit simpler
@weltenwort FWIW, this is working great once I update the default log indices to:
`*:filebeat*,filebeat-*,kibana_sample_data_logs*`
Yes, the work would be around letting the monitoring ui inject a custom source configuration like that without messing with what the user configured in the logs ui.
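(For reference, the user-facing piece of that source configuration can be set in kibana.yml today; assuming the xpack.infra.sources.default.logAlias setting, a CCS-aware default mirroring the pattern above might look like this sketch.)

```yaml
# kibana.yml (sketch; assumes the xpack.infra.sources.default.logAlias setting)
# Match cross-cluster (*:filebeat*) indices as well as local ones.
xpack.infra.sources.default.logAlias: "*:filebeat*,filebeat-*,kibana_sample_data_logs*"
```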
@weltenwort Do you have a ticket to track this work? Or should we use this one?
yes, it has been mentioned above: #30792
Thanks @weltenwort.
I'm going to close this for now. We don't need this support right now, nor is it needed for anything we currently have planned.
Is that because the separate monitoring cluster will be the recommended setup?
Not necessarily. For now, we're going to solve our integration problem by using CCS (https://github.com/elastic/kibana/issues/30792) which we think will work for the near and medium term future.
We might move to a separate Kibana for the dedicated monitoring cluster, but I imagine we still might want CCS support in that scenario too.
makes sense, thanks for the update
Quick clarification. If we do move to a separate Kibana, we probably _won't_ use CCS support, so it's possible that, in the future, we won't need that either.