Lens provides an easy way for users to explore their data sets. It makes it easy to switch between views such as bar chart, pie chart, data table, and so on. There have been requests to explore the possibility of a map view. This would allow users to drag a geospatial field into the editor and see a map, or switch chart types to view a map.
I tried hacking around to see if Lens could use Elastic Maps for geo; the branch can be found at https://github.com/nreese/kibana/tree/lens_maps.
Roadblocks
1) Elastic Maps is configuration-driven: the configuration defines layers, layer configuration defines source configuration, and source configuration is used by Elastic Maps to fetch data internally. Elastic Maps does not accept a table of results at the moment.
2) Lens separates the data source from the renderer. The renderer receives a table that has been decoupled from the data source. For example, the table contains a column, but the renderer does not know that the column came from an index pattern, which aggregation was used to generate the column values, or which field name was used to generate them (see the sketch after this list). There is no way for Lens to pass data source configuration directly to the renderer.
3) The only existing data source for Lens is Index Pattern, which would be compatible with Elastic Maps. Lens is planning future data sources that are not based on Index Patterns. Future data sources may not be compatible with Elastic Maps.
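To make roadblock (2) concrete, here is a rough TypeScript sketch of the kind of table a renderer receives. The names are illustrative only, not the actual Lens or expression types; the point is that the columns carry no trace of the index pattern, aggregation, or field that produced them.

```ts
// Hypothetical shapes, loosely modeled on what a Lens renderer receives.
// The real Lens/expression types differ; this only illustrates the decoupling.
interface DatatableColumn {
  id: string;   // e.g. 'col-0'
  name: string; // display name only
  // Note: no index pattern id, no aggregation type, no field name.
}

interface Datatable {
  columns: DatatableColumn[];
  rows: Array<Record<string, unknown>>;
}

// A renderer can draw this table, but it cannot reconstruct the query
// (index pattern, aggregation, field) that produced it.
function renderTable(table: Datatable): void {
  for (const row of table.rows) {
    console.log(table.columns.map((c) => `${c.name}=${String(row[c.id])}`).join(' '));
  }
}
```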
Pinging @elastic/kibana-gis (Team:Geo)
Pinging @elastic/kibana-app (Team:KibanaApp)
cc @wylieconlon @chrisdavies
I spoke with Nathan about adding simple maps visualizations into Lens yesterday. He looked into it a bit.
Lens is planning future data sources that are not based on Index Patterns. Future data sources may not be compatible with Elastic Maps.
We could possibly disable the maps visualization (or not provide a suggestion) when Lens is using a data source that is not an index pattern. I do think the majority of usage will be based on index patterns.
@thomasneirynck has suggested that we add a table layer to Elastic Maps that accepts table data.
There are two issues that need to be resolved with a table layer:
1) Re-fetching data. Lens would need a way to know when to re-fetch data when the map is zoomed and panned. We already have an open issue, https://github.com/elastic/kibana/issues/49236, to allow embeddable consumers to register callbacks for onLayerLoad and onLayerLoadEnd. We could add another callback for onMapMoveEnd that Lens could use to know when to re-fetch data.
2) Pass new updated table data back into Elastic Maps. This part will be a little tricky if Lens uses the map embeddable. Maybe table data could be passed in with the embeddable input?
Let's try implementing these in the POC and see what other problems are encountered.
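As a rough sketch of how those two pieces could fit together in such a POC (every name below is hypothetical; this is not an existing Maps embeddable API): the embeddable input would carry the table data for the layer plus an onMapMoveEnd callback, and Lens would re-run its query and push updated data back in when the callback fires.

```ts
// Hypothetical input shape for a Maps embeddable with a table-backed layer.
// Nothing here is the real MapEmbeddableInput; it only sketches the idea.
interface MapBounds {
  minLon: number;
  minLat: number;
  maxLon: number;
  maxLat: number;
}

interface TableLayerInput {
  layerId: string;
  // Table data produced by Lens (decoupled from its data source).
  table: { columns: Array<{ id: string; name: string }>; rows: Array<Record<string, unknown>> };
}

interface HypotheticalMapEmbeddableInput {
  tableLayers: TableLayerInput[];
  // Fired after zoom/pan settles, so the consumer can re-fetch for the new viewport.
  onMapMoveEnd?: (bounds: MapBounds) => void;
}

// Lens side: re-fetch when the viewport changes, then push new data back in
// through updateInput (assuming the embeddable exposes such a method).
declare function fetchTableForBounds(bounds: MapBounds): Promise<TableLayerInput['table']>;
declare const mapEmbeddable: { updateInput(input: Partial<HypotheticalMapEmbeddableInput>): void };

const input: HypotheticalMapEmbeddableInput = {
  tableLayers: [{ layerId: 'lens-layer', table: { columns: [], rows: [] } }],
  onMapMoveEnd: async (bounds) => {
    const table = await fetchTableForBounds(bounds);
    mapEmbeddable.updateInput({ tableLayers: [{ layerId: 'lens-layer', table }] });
  },
};
```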
Would a table data layer be something that would only exist behind the scenes then, for Lens integration, or is it a more widely applicable concept? If we wanted to support SQL queries for map layers, would we do it this way?
If we wanted to support SQL queries for map layers, would we do it this way?
Probably not. The Lens table layer will be managed by Lens, meaning that Lens will be in charge of re-fetching data.
For a SQL source, Elastic Maps would still be in charge of re-fetching data. The existing ES documents source could be enhanced to accept a SQL statement. In the end, the source just returns GeoJSON, so it does not really matter whether the GeoJSON came from the existing _search request or a SQL request.
There is no way for Lens to pass data source configuration directly to the renderer.
Mostly yes, but the data table can carry JSON-serializable meta information about its columns. This is currently used to transport formatting information from the data source to the visualization, because the formatting is tied to the data source in this case (due to field formatters being part of index patterns). If it makes sense we could extend the column meta information, but it shouldn't get too use-case specific.
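For illustration, a minimal sketch of what such column meta information could look like. The field names are assumptions, not the actual Lens table types; the idea is that the table stays decoupled from the data source while each column can carry serializable hints such as a format.

```ts
// Hypothetical column shape with JSON-serializable meta information.
// The real Lens/kibana_datatable column types are richer; this is illustrative only.
interface ColumnFormatHint {
  id: string;                       // e.g. 'number', 'bytes'
  params?: Record<string, unknown>; // e.g. { pattern: '0,0.00' }
}

interface ColumnWithMeta {
  id: string;
  name: string;
  meta?: {
    formatHint?: ColumnFormatHint;
    // Further hints could be added here, but as argued above they should
    // stay generic rather than become map-specific.
  };
}
```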
Re-fetching data. Lens would need a way to know when to re-fetch data when the map is zoomed and panned. We already have an open issue, #49236, to allow embeddable consumers to register callbacks for onLayerLoad and onLayerLoadEnd. We could add another callback for onMapMoveEnd that Lens could use to know when to re-fetch data.
From the Lens side this is not possible at the moment, but we need to implement some mechanism for the rendered chart to pass events back to the editor/embeddable anyway - e.g. clicking a pie slice to apply a filter, as the current Visualize app does. We can probably use the same mechanism to handle this scenario.
Pass new updated table data back into Elastic Maps. This part will be a little tricky if Lens uses the map embeddable. Maybe table data could be passed in with the embeddable input?
Sounds reasonable. Lens translates everything into an expression under the hood, so we could have a Maps expression renderer function that manages the actual Maps instance and passes data in - this _could_ be the embeddable interface or something below it if there is an abstraction layer that fits better - e.g. by exposing the dispatch function of the store to update the state of the table layer.
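A very rough sketch of that idea (the mount function and the dispatch-based update are assumptions, not the real expressions or Maps APIs): the renderer keeps a single Maps instance alive per DOM node and, on every re-render with a new table, dispatches an update for the table layer instead of recreating the map.

```ts
// Hypothetical expression renderer that keeps a single Maps instance alive
// and pushes new table data into it. None of these interfaces are the real
// Kibana expressions or Maps APIs; they only sketch the integration shape.
interface LensTable {
  columns: Array<{ id: string; name: string }>;
  rows: Array<Record<string, unknown>>;
}

interface MapsInstance {
  // Stand-in for "the dispatch function of the store" mentioned above.
  dispatch(action: { type: 'UPDATE_TABLE_LAYER'; table: LensTable }): void;
  destroy(): void;
}

declare function mountMaps(domNode: HTMLElement): MapsInstance;

const instances = new WeakMap<HTMLElement, MapsInstance>();

function renderMapsExpression(domNode: HTMLElement, table: LensTable): void {
  let maps = instances.get(domNode);
  if (!maps) {
    maps = mountMaps(domNode);
    instances.set(domNode, maps);
  }
  // Update only the table layer state; the map viewport etc. stays untouched.
  maps.dispatch({ type: 'UPDATE_TABLE_LAYER', table });
}
```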
For a SQL source, Elastic Maps would still be in charge of re-fetching data. The existing ES documents source could be enhanced to accept a SQL statement. In the end, the source just returns GeoJSON, so it does not really matter whether the GeoJSON came from the existing _search request or a SQL request.
If there is a Lens SQL datasource, it could power a Maps visualization with SQL data, but just within Lens - for the Maps instance the data will always be a static table layer. That said, it still makes sense to have a SQL layer within the Maps app, so it's just two different integration points.
This is how I imagine the MVP
Goal:
cc: @timroes
thx @AlonaNadler!
I would add bullet point (4) to your list.
Just to give some context for the integration here: if the integration goal is as Alona described and we want to render maps inside Lens (and not just jump to the Maps application), we'll require Maps to be rendered using expressions (an embeddable won't be enough). More specifically, if it should work together with the index pattern data source (drag'n'drop fields), you'll need to build it in a way that it can consume esaggs output in your map expression function.
We also need some changes in the Lens infrastructure regarding how the visualization panels on the right side are configured, so that you have a way to hide them (or replace them with your own implementation).
More specifically, if it should work together with the index pattern data source (drag'n'drop fields), you'll need to build it in a way that it can consume esaggs output in your map expression function.
We could add a new source to the Maps embeddable that renders esaggs output. This will not provide the best user experience though. Users expect maps to be zoomable and pannable. The current Lens model of passing static data to be visualized is very limiting. If users zoom or pan the map to new areas, they expect the map to fetch new content for the current view location. In the current Lens workflow, how would the map let Lens know that it needs new data because the viewport location changed? It would be best if Lens could just pass a configuration to the Maps embeddable and let Maps be in control of fetching its own data.
We could add a new source to the Maps embeddable that renders esaggs output.
Just for clarification: the "maps visualization" in Lens would need to provide an expression function, so you cannot make it render an embeddable directly. Of course you could potentially render an embeddable inside your function's renderer implementation, though I am not sure if there might be larger issues when we try to render embeddables inside expression renderers.
The current Lens model of passing static data to be visualized is very limiting. If users zoom or pan the map to new areas, they expect the map to fetch new content for the current view location. In the current Lens workflow, how would the map let Lens know that it needs new data because the viewport location changed?
Totally agree on that. We know (from the old maps visualization implementations) that maps are rather different in their needs than most other visualizations. We have done a lot of hacking in the Visualize infrastructure to cater for maps (e.g. pre-rendering the visualization once without data so we know the viewport for maps, and injecting aggregations into the request after rendering to filter down to the viewport). Lens was never built to have maps inside it (since we have the Maps application), so there is currently no way of letting a potential Maps-Lens visualization modify the requested data. We had some ideas for how to get something like that working in the long run (via using variables in expressions), but that is FAR from ready, since it will require a lot of cleanup on the expression functions beforehand, and we'd still need to make significant refactorings in Lens to make data sources work for that.
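To illustrate what restricting a request to the viewport means in practice, here is a minimal sketch that adds an Elasticsearch geo_bounding_box filter for the current map bounds to a search body. It is a simplification of the aggregation-injection hack described above; the request shape and the helper are assumptions, not Kibana code.

```ts
// Hypothetical helper that narrows a search request to the visible map area
// by adding a geo_bounding_box filter. The surrounding request-building code
// in Kibana is more involved; this only shows the filter itself.
interface ViewportBounds {
  top: number;    // max latitude
  bottom: number; // min latitude
  left: number;   // min longitude
  right: number;  // max longitude
}

function addViewportFilter(
  searchBody: { query?: { bool?: { filter?: unknown[] } } },
  geoField: string,
  bounds: ViewportBounds
) {
  const viewportFilter = {
    geo_bounding_box: {
      [geoField]: {
        top_left: { lat: bounds.top, lon: bounds.left },
        bottom_right: { lat: bounds.bottom, lon: bounds.right },
      },
    },
  };
  if (!searchBody.query) searchBody.query = {};
  if (!searchBody.query.bool) searchBody.query.bool = {};
  if (!searchBody.query.bool.filter) searchBody.query.bool.filter = [];
  searchBody.query.bool.filter.push(viewportFilter);
  return searchBody;
}
```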
It would be best if Lens could just pass a configuration to the Maps embeddable and let Maps be in control of fetching its own data.
This is simply not how the Lens infrastructure was built or meant to work. We designed it around rendering expressions, since those are meant to be the backing infrastructure for all visualizations in Kibana (or already are for most of them), and so there is currently no way to make Lens "just pass its configuration" on (also, that configuration is not a publicly consumable API and could change at any time). So if we wanted to pass configuration on, instead of having the data source create a data-fetching expression and the visualization contribute a rendering function to that expression, this would basically mean a complete re-architecture of Lens.
In summary, I agree with your base statement "This will not provide the best user experience though." I don't see how we could currently build an integration where maps actually render inside Lens without needing to re-architect large parts of at least one of the apps (most likely larger parts of both). Lens was never designed to work for the maps use case (which, as mentioned above, is a bit different from what all other visualizations need). I don't think an integration beyond simply linking between the apps is desirable at the moment, unless we are prepared to re-architect/rewrite both apps.
cc @rayafratkina @jensallen I think you both should be subscribed to this issue, and read through it :-)
If it's just about a suggestion that links to the Maps app, I think it would be relatively simple to add. However, in that case we somehow have to make sure you can also go back to Lens from the Maps app without losing your context.
I agree, a suggestion that links sounds like a more or less minor enhancement to Lens (similar to the visAliasTypes we have). Maps would just need to provide us an API with which we can jump into Maps and pass on the data we already have.
Also, for the navigation back we'd basically need the reverse thing: an API to jump into a configured editor (which is currently tracked via https://github.com/elastic/kibana/issues/59845), giving Maps a way to navigate back to Lens and somehow pass a reasonable state back.
Maps would just need to provide us an API with which we can jump into Maps and pass on the data we already have.
We created such an API for discover to link to Maps. Here is the code in discover that creates the URL, https://github.com/elastic/kibana/blob/7.7/src/legacy/core_plugins/kibana/public/discover/np_ready/components/field_chooser/lib/visualize_url_utils.ts#L57.
The API is just adding an initialLayers parameter to the Maps URL. initialLayers is a rison-encoded array of https://github.com/elastic/kibana/blob/7.7/x-pack/legacy/plugins/maps/common/descriptor_types.d.ts#L98 and https://github.com/elastic/kibana/blob/7.7/x-pack/legacy/plugins/maps/common/descriptor_types.d.ts#L112. We are still working on fully typing the layer descriptor, but those links should contain enough to get you started.
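For reference, a condensed sketch of that approach: rison-encode an array of layer descriptors and append it as the initialLayers parameter on the Maps app URL. The descriptor fields and the exact app path below are illustrative; the linked descriptor_types.d.ts and visualize_url_utils.ts show the real shapes.

```ts
// Condensed sketch of building a Maps app URL with an initialLayers parameter,
// assuming the rison-node package is available for encoding (as used elsewhere
// in Kibana). The descriptor fields and URL path are illustrative only.
import rison from 'rison-node';

interface HypotheticalLayerDescriptor {
  id: string;
  type: string;
  visible: boolean;
  sourceDescriptor: Record<string, unknown>;
}

function createMapsUrl(basePath: string, layers: HypotheticalLayerDescriptor[]): string {
  const initialLayers = encodeURIComponent(rison.encode(layers));
  // The exact app path/hash may differ between Kibana versions.
  return `${basePath}/app/maps#/map?initialLayers=${initialLayers}`;
}
```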
Regarding Maps in Lens: how does the scope of the kind of maps we would like impact the technical requirements?
I suspect the Lens user is not really looking for a highly interactive, updatable map - more something like a map infographic, similar to what we have in Uptime (which also uses the MapsEmbeddable).
But could we restrict it to the following use-cases:
Would that limitation relieve the refactoring requirements on the Lens-end?
The technical requirements are mainly impacted by the interactivity of the map. The actual map type, or type of layer, should not really have an impact from the Lens side. So as long as the map does not need to filter down the data it's showing to the visible viewport, most of those Lens technical challenges will vanish. The larger effort would then really be on the Maps team to write a maps expression function that can work with esaggs (or rather the kibana_datatable as an input format), and Lens would need some way of rendering custom React components for the visualizations (instead of using the configuration of the Lens panels), though that should only be a medium-complexity change.
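As an illustration of what working with the kibana_datatable as an input format could involve (column ids and roles below are assumptions), a maps expression function would essentially need to turn table rows into GeoJSON features that a layer can render:

```ts
// Hypothetical conversion from a kibana_datatable-like structure into GeoJSON,
// which is what a Maps layer ultimately renders. Column ids/roles are assumed.
interface Datatable {
  columns: Array<{ id: string; name: string }>;
  rows: Array<Record<string, unknown>>;
}

interface PointFeature {
  type: 'Feature';
  geometry: { type: 'Point'; coordinates: [number, number] };
  properties: Record<string, unknown>;
}

function tableToGeoJson(table: Datatable, latColumnId: string, lonColumnId: string) {
  const features: PointFeature[] = [];
  for (const row of table.rows) {
    const lat = row[latColumnId];
    const lon = row[lonColumnId];
    // Skip rows without usable coordinates.
    if (typeof lat !== 'number' || typeof lon !== 'number') continue;
    features.push({
      type: 'Feature',
      geometry: { type: 'Point', coordinates: [lon, lat] },
      properties: row,
    });
  }
  return { type: 'FeatureCollection' as const, features };
}
```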
Also, I've created https://github.com/elastic/kibana/issues/61841 to track the API in the maps plugin to link into it (it could potentially use the uiActions system). This code should not live within Discover or, in the future, Lens, but should be consumed there.
After technical discussions we are concerned that the proposed flow would require extensive re-architecture in Lens and would result in a poor user experience, since maps would not be interactive. Instead, we'd like to propose a two-part approach to better facilitate user discovery of the Maps application from the existing apps:
1. In Lens: give users an indication that geo fields can be visualized using the Maps app, and provide an easy way to do it.
2. On Dashboard: integrate Maps better into the Dashboard to improve time to visualize geo data.