Logstash: Logstash modules

Created on 29 Mar 2017 · 40 Comments · Source: elastic/logstash

Introduction

The idea is to explore _modules_ for Logstash, similar to the Filebeat modules feature released in 5.3.0. Modules contain packaged Logstash configuration, Kibana dashboards, and other meta files to ease setting up the Elastic Stack for certain use cases or data sources. The goal of these modules is to provide an end-to-end, 5-minute getting-started experience for a user exploring a data source, without having to learn the different parts of the stack (initially).

Data sources

The initial goal is to focus on data sources connecting over the network, to complement the modules in Beats.

Behavior

Users interact with modules in the following ways:

Command Line

bin/logstash --modules netflow

This will instruct Logstash to use a module and be ready to accept data or start pulling from a data source. Internally, the modules subcommand should:

  1. Load the Kibana dashboard into the Elasticsearch .kibana index. If a dashboard with the same name already exists, it should be overwritten.
  2. Load the ES template for this module if it does not already exist.
  3. Load other configuration files for the Stack components into their respective ES indexes or directly via the API.
  4. Start Logstash with the module's config. In this mode the configuration file is not persisted to disk and is loaded into memory (as with the -e option).
  5. When modules are used, -f (loading an arbitrary configuration from a file) is disabled. This avoids accidental inclusion of other configurations which could cause the modules to not work correctly.
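A rough sketch of steps 1 and 2 above. The class, method, and client names here are hypothetical, for illustration only, not the actual Logstash implementation:

```ruby
# Hypothetical sketch of the module bootstrap sequence: import the dashboard
# (overwriting any existing one with the same name), then install the ES
# template only if it is not already present.
class ModuleBootstrap
  def initialize(name, client)
    @name = name
    @client = client
  end

  def setup
    # Step 1: always overwrite the dashboard with the module's name.
    @client.put("/.kibana/dashboard/#{@name}", dashboard_json)
    # Step 2: install the template only when missing.
    template_path = "/_template/#{@name}"
    @client.put(template_path, template_json) unless @client.exists?(template_path)
    true
  end

  private

  def dashboard_json; { "title" => @name }; end
  def template_json;  { "template" => "#{@name}-*" }; end
end

# A fake client standing in for a real ES client; records the paths written.
class RecordingClient
  attr_reader :paths
  def initialize; @paths = []; end
  def put(path, _body); @paths << path; end
  def exists?(_path); false; end
end

client = RecordingClient.new
ModuleBootstrap.new("netflow", client).setup
```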

You can load multiple modules as such:

bin/logstash --modules netflow, foo

File based configuration

If you prefer to use file based configuration for enabling modules (in lieu of the CLI), you can use logstash.yml.

modules:
- name: netflow
- name: foo

Overriding options

The out of the box module configuration assumes Elasticsearch is installed on the same host as Logstash. It also assumes the data sources are local. You can customize such _stock_ configuration for your environment.

For example, you can point to a remote ES host when running the module:

bin/logstash --modules netflow -M "netflow.var.elasticsearch.host=es.mycloud.com"
bin/logstash --modules netflow -M "netflow.var.tcp.port=5606"

Each module will define its own variables that users can override. These are lightweight overrides; we wouldn't expose the entire LS pipeline to be overridden.
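As a sketch, a `-M "module.var.key=value"` string could be parsed into a per-module override map like this. The exact parsing rules are an assumption based on the examples above:

```ruby
# Sketch: split a -M override on the first "=", then peel the leading module
# name off the dotted key, keeping the rest as the variable name.
def parse_module_override(arg)
  key, value = arg.split("=", 2)
  module_name, *rest = key.split(".")
  { module_name => { rest.join(".") => value } }
end

parse_module_override("netflow.var.elasticsearch.host=es.mycloud.com")
# => {"netflow"=>{"var.elasticsearch.host"=>"es.mycloud.com"}}
```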

In the logstash.yml

modules:
- name: netflow
  var.output.elasticsearch.host: "es.mycloud.com"
  var.output.elasticsearch.user: "foo"
  var.output.elasticsearch.password: "password"
  var.input.tcp.port: 5606

The variables will then be injected into the Logstash pipeline.
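A minimal sketch of that injection, assuming variables reach the module's ERB template through a `setting(key, default)` helper (the helper shape is an assumption based on the ERB examples later in this thread):

```ruby
require "erb"

# Sketch of variable injection: expose a `setting` helper to the ERB template
# that returns the user's override when present, otherwise the default.
def render_pipeline(template, vars)
  context = Object.new
  context.define_singleton_method(:setting) { |key, default| vars.fetch(key, default) }
  ERB.new(template).result(context.instance_eval { binding })
end

template = 'input { tcp { port => <%= setting("var.input.tcp.port", 5044) %> } }'
render_pipeline(template, "var.input.tcp.port" => 5606)
# => "input { tcp { port => 5606 } }"
```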

Persisting configuration

For v2, we can expose an additional option which can persist the Logstash pipeline to a file. Users can then use this as a template and extend it to their needs.

bin/logstash --modules netflow -M "netflow.var.tcp.port=5606" --save-configs

Internal implementation details

Modules will be implemented as universal plugins which provide access to core functionality directly. A new base Module class will be created which will contain logic to upload the Kibana dashboards, templates, and other configuration files. New modules will extend from this base class and get most of the bootstrapping for free. Module plugins will be installed OOTB with LS artifacts.

The file structure of modules:

├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   └── netflow.json
│   │   ├── searches
│   │   └── vizualization
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   ├── logstash
│   │   └── modules
│   │       └── netflow.rb
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec

This structure also allows new modules to be created and packaged outside of Logstash core. They can then be installed like any other plugin:

bin/logstash-plugin install logstash-module-amazing

@untergeek @ph @acchen97 collaborated on this design.

Progress

Module

  • [x] Add CLI flags to support modules
  • [x] Add modules definition in logstash.yml
  • [x] Add variables and support for overriding variables.
  • [x] Importer for shipping Kibana dashboards and mapping to ES
  • [x] ES mapping template - review template and also remove index settings (i.e. shard allocation) at the top
  • [x] Integrate demo dashboards and mapping.
  • [ ] Documentation changes to support modules (post feature freeze). Mark as beta.

Code Delivery

  • [x] Merge feature/module branch to master
  • [ ] backport to 5.5
  • [x] Add CEF module to master and 5.5
  • [ ] Change default_index_content_id = @settings.fetch("index_pattern.kibana_version", "5.4.0") to use 5.5.0
  • [x] Resolve issue whether to use module name prefix for fields.

Dashboards

  • [x] Network and firewall - overview dashboard (Nic)
  • [x] Network and firewall - suspicious activity dashboard (Nic)
  • [x] Endpoint - overview dashboard (Samir)
  • [x] Endpoint - Windows specific dashboard (Samir)
  • [x] DNS - overview dashboard (Nic)
  • [x] Ensure the navigation pane and top row of overview metrics are consistent across all dashboards
  • [ ] Per dashboard use case summary documentation (post feature freeze)

All 40 comments

@suyograo

There is a small typo in the yaml configuration: the vars.* keys need to be at the same level as the name. It should be something like this:

modules:
- name: netflow
  var.elasticsearch.host: "es.mycloud.com"
  var.tcp.port: 5606

Also, should we be more explicit in the variables, in case we use multiple plugins of the same kind? This might also help with the generation of the pipeline template:

 var.output.elasticsearch.host: "es.mycloud.com"
 var.input.tcp.port: 5606

I did another pass on the plugin structure and, to be consistent with existing plugins and rubygems, I think we should go with this structure:

├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   └── netflow.json
│   │   ├── searches
│   │   └── vizualization
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   ├── logstash
│   │   └── modules
│   │       └── netflow.rb
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec

Note: the gemspec needs to be changed to make sure we include the files in configuration/*

Edited: since we want to leverage the universal plugin, we could have multiple different modules (apache, netflow) in a single gem.

One thing I would like to clarify: when I see the following in the description, does that mean the plugin gets installed in $LOGSTASH_HOME and not in the vendor/bundle directory like any other plugin?

logstash/
  modules/

Really high level todo of required tasks:

  • Add a new plugin type module
  • Add internal hook to make them available in LS, so we can validate them from the CLI or the configuration.
  • Create a subclass of LogStash::UniversalPlugin (see this file) that will take care of the boilerplate for the module: register the right hooks, read the ERB, create the pipeline. (This is just for easy development)
  • Maybe some changes are needed in the Settings to make the user experience and the validation better.
  • Update the plugin generator with this new type?

I kind of like the idea, because it helps new users to get started quickly and to have a good end-2-end experience (including Kibana) right from the start.

But for someone who has been working with Logstash for some time, other needs come to mind. In the last few months I integrated proper log processing for several daemons we use in our setup (e.g. consul, bosh director, mongodb, etc.). One day a colleague who helped me implement the LS config asked whether there is such a thing as a "market place" where LS config snippets are exchanged, which would help LS users ramp up their LS config much quicker. (Does everyone need to reinvent the wheel?)

In our setup we use Filebeat to ship the logs to Logstash (via a MQ), which does the heavy lifting (in terms of parsing), and finally the logs are stored in Elasticsearch and viewed with Kibana. For this use case it would be very beneficial if we could use some kind of "modules" or "config snippets" in combination with standard Kibana dashboards. But we would still need the possibility to modify the LS config according to our needs (like forwarding some logs to other systems).

Additional notes to the proposal:

  • Modules should somehow be compatible with Beats, so that a chain like Filebeat -> Redis -> Logstash -> Elasticsearch is possible and the automatic deployment of the Kibana dashboards still works.
  • Where are the tests for the module? How would it be tested?
  • How to act on the overlap between Beats and Logstash (e.g. Apache log file is possible with both), is it possible to share the Kibana dashboards?

Also, should we be more explicit in the variables, in case we use multiple plugins of the same kind? This might also help with the generation of the pipeline template:

@ph good idea! will update the example.

One thing I would like to clarify, when I see the following in the description does that mean the plugin get installed in the $LOGSTASH_HOME and not in the vendor/bundle directory like any other plugin?

This should get installed like any other plugin.

The other thing I forgot to add here is to make sure users can create modules without knowing too much about Ruby or gem structure. Any ideas @ph? They should just deal with configurations and everything else should be magic. One idea is to add this to the plugin generator, but they would still have to hack some Ruby to put a module together. Note, this is not required for v1, but something we should keep in mind.

@breml +1 for testing. I will give it some thought; we need to make it reusable in other places. :)

The other thing I forgot to add here is to make sure users can create modules without knowing too much about Ruby or gem structure. Any ideas @ph? They should just deal with configurations and everything else should be magic. One idea is to add this to the plugin generator, but they would still have to hack some Ruby to put a module together. Note, this is not required for v1, but something we should keep in mind.

That's a good question, and I think it depends how flexible we want to be. We could discuss again whether we really need to make them actual gems rather than simple directories that we drop into a Logstash instance.

one day one of my colleagues, which helped me in implementing the LS config, approached me and asked if there is no such thing like a "market place", where LS config snippets are exchanged, which then would help LS users to ramp up with the LS config much quicker.

@breml yes, I think of modules as a foundation to get to a "market place" eventually. Indeed, this is a common request and I agree on the concept of config sharing. This is more than configs though, with Kibana dashboard and other stack related configuration.

Where are the tests for the module? How would it be tested?

Each module should have a test that takes the LS config, feeds it a sample input, and asserts against an expected JSON. Something like filter-verifier would work really well here.
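A minimal sketch of that test idea, with a trivial stub standing in for the real module pipeline (the harness shape and names are assumptions, not filter-verifier's actual API):

```ruby
require "json"

# Run a sample input line through a pipeline (stubbed here as a block) and
# compare the produced event with an expected-JSON fixture. A real version
# would drive the module's Logstash config instead of a Ruby block.
def assert_module_output(input_line, expected_json)
  actual = yield(input_line)
  expected = JSON.parse(expected_json)
  raise "event mismatch: #{actual.inspect} != #{expected.inspect}" unless actual == expected
  true
end

# Example: a toy key=value "pipeline" standing in for the module.
assert_module_output("src=1.2.3.4 dst=5.6.7.8", '{"src":"1.2.3.4","dst":"5.6.7.8"}') do |line|
  line.split(" ").map { |kv| kv.split("=", 2) }.to_h
end
# => true
```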

How to act on the overlap between Beats and Logstash (e.g. Apache log file is possible with both), is it possible to share the Kibana dashboards?

Right, LS would optimize for data sources that Beats don't address. This is where services that can push to TCP/syslog etc. come into the picture. For sharing Kibana dashboards, we should be able to migrate from ingest to Logstash config. That is something we are brainstorming currently.

@untergeek and myself discussed that on zoom:

  • Find a way to define configuration of the module in the template file. (ERB or something else.)
  • Make sure people don't have to write ruby code. (MAIN GOAL)
  • The module class will make some assumptions about the structure of the file on disk to do the right call.
  • Think about integration tests (rats)
  • We will use the Agent's register_pipeline as the way to create live configuration.
  • Need to find a way to make the logstash.yml more flexible to support settings from modules.
  • Add an ID to the module config block in the logstash.yml.

Other discussions included allowing the generator to take arguments for the template/dashboard and the pipeline config.

Notes from our discussion and action items.

Open questions to clarify:

  • Discuss limitation or obstacles for creating a 5.x version of this feature.
  • If we support 5, maybe only able to have 1 module enabled?

Actions:

@untergeek

  • Experiment with the universal plugin
  • Create the actual module source that works with the plugins and settings
  • Modify the settings to allow the module settings

@ph:

We have discussed a crazy idea so people don't have to deal with an external settings file. It consists of using the ERB file as the source of configuration.

# cef.conf.erb
input {
  tcp {
    port => <%= setting("tcp.port", 45) %>
    host => <%= setting("tcp.host", "localhost") %>
    type => <%= setting("tcp.type", "server", ["server", "firewall"]) %>
  }
}
#...

With that configuration we could use something like this. (untested code)

class SettingsExtractor
  def initialize(template)
    @template = File.read(template)
    @configs = []
  end

  # Called from the ERB template via <%= setting(...) %>
  def setting(key, value)
    # convert the key / value into a LogStash::Setting,
    # e.g. LogStash::Setting::Boolean.new("ssl", value)
    @configs << a_setting
  end

  def add_setting(setting)
    @configs << setting
  end

  def settings
    ERB.new(@template).result(binding)
    @configs
  end
end

SettingsExtractor.new("cef.conf.erb").settings

We can also allow more advanced users to use the settings class directly.

add_setting(LogStash::Setting::String.new("log.level", "info", true, ["fatal", "error", "warn", "debug", "info", "trace"]))

I've proposed to create a general module that we could use to reduce the boilerplate required in the gems since all plugins will basically have the same structure.

# lib/logstash_registry.rb
LogStash::PLUGIN_REGISTRY.add(:modules, "newmod", LogStash::Modules::General.new("newmod", File.join(File.dirname(__FILE__), "..")))

Using this strategy, as a nice side effect we could allow people to create modules in a specific Logstash directory, and at boot time loop through the module dir and add them to the registry.

```ruby
Dir.glob("modules/*") do |directory|
  module_name = File.basename(directory)
  LogStash::PLUGIN_REGISTRY.add(:modules, module_name, LogStash::Modules::General.new(module_name, directory))
end
```
If we support 5, maybe only able to have 1 module enabled?

Based on our discussion, we are only supporting a single module to be run at a time on Logstash. In other words, 1 module is equivalent to 1 pipeline, which is equivalent to running bin/logstash -f cef.conf.

After talking with @untergeek, to keep the first version simple we will drop the settings extractor class and just pass a config string to the pipeline; the pipeline will validate the settings like a normal pipeline.

@untergeek @suyograo @ph
What do we think about adding a dashboards section to logstash.yml like Beats? Maybe some of the info below. I would like to not hard-code the kibana index string.

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# The directory from where to read the dashboards. It is used instead of the URL
# when it has a value.
#setup.dashboards.directory:

# The file archive (zip file) from where to read the dashboards. It is used instead
# of the URL when it has a value.
#setup.dashboards.file:

# If this option is enabled, the snapshot URL is used instead of the default URL.
#setup.dashboards.snapshot: false

# The URL from where to download the snapshot version of the dashboards. By default
# this has a value which is computed based on the Beat name and version.
#setup.dashboards.snapshot_url

# In case the archive contains the dashboards from multiple Beats, this lets you
# select which one to load. You can load all the dashboards in the archive by
# setting this to the empty string.
#setup.dashboards.beat: beatname

# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
#setup.dashboards.kibana_index: .kibana

# The Elasticsearch index name. This overwrites the index name defined in the
# dashboards and index pattern. Example: testbeat-*
#setup.dashboards.index:

@guyboertje I think it makes perfect sense to not hard-code anything, or at least provide a way to override it. The only thing I am not sure about: these look like global settings, right? Should they be under modules?

They should be nested in the array under modules:

modules:
  - name: example
    var.plugintype.pluginname.key: value
  - name: foo
    var.plugintype.pluginname.key: value

Somewhere with the "vars"

My code gets all this and puts the entire array in a @settings key

And my code modifies yours :-)

Please check my most recent PR for some changes

Chaps, while checking the Security Analytics examples, I noticed that there can be more than one dashboard.
So I have coded for a file called dashboard/.json - it will contain this:

["Dashboard-File-1", "Dashboard-File-2"]

Then there should be those two files in the dashboards folder.
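A sketch of reading that manifest scheme, assuming the per-module manifest file holds an array of dashboard file names that live in the same folder (method name and layout are illustrative assumptions):

```ruby
require "json"
require "tmpdir"

# Read the per-module manifest and resolve each named dashboard to its file.
def dashboard_files(dir, module_name)
  manifest = JSON.parse(File.read(File.join(dir, "#{module_name}.json")))
  manifest.map { |name| File.join(dir, "#{name}.json") }
end

# Demo against a throwaway directory:
Dir.mktmpdir do |dir|
  File.write(File.join(dir, "netflow.json"), '["Dashboard-File-1", "Dashboard-File-2"]')
  dashboard_files(dir, "netflow") # one path per dashboard listed in the manifest
end
```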

Need to discuss version pinning WRT LS Kibana 5 vs 6 where dashboards may change (API)

@guyboertje yes multiple dashboards should be expected. If there are Kibana API changes, we might need a different set of dashboards per Kibana major.

@acchen97 - I get the possible need for multiple via versioning but I'm talking about multiple active dashboards e.g. Apache Access vs Apache Errors.

@acchen97 - Please confirm this module feature will not need to work with Kibana < 5.5

@suyograo, @untergeek - Filebeat modules had to do a Kibana index hack to overcome this issue https://github.com/elastic/beats-dashboards/issues/94, I coded for it but it failed to be applied in Kibana 5.4.
With this in mind, do you think we should plan for a folder in LS config that contains patches for .kibana index by versions?

Also, this bug https://github.com/elastic/kibana/issues/9571 makes modules unusable ATM.
With Samir I also got it when exporting all saved objects from their demo Kibana (5.4.0) and then importing them into a cleaned-up elasticsearch.

I talked with PH, we discussed the Modules namespace and folder for the classes of this feature.
The current file modules.rb needs to be renamed. I am going with Scaffold for now.

@guyboertje Beats module dashboards are compatible across Kibana 5.x, so I think we should strive for similar compatibility. Is it a significant amount of work to make it work across Kibana versions?

Also, this bug elastic/kibana#9571 makes modules unusable ATM.

Is this a blocker?

@guyboertje I'm curious how Beats modules handle the importing of dashboards with this Kibana issue? Is it only that we run into this because we used a dashboard JSON from an older version?

I would say let's focus on >=v5.5 of the stack for now. We can deal with < 5.4 kibana issue later on.

@suyograo @acchen97
It's a blocker for usable modules in both filebeat and us. We can still ship it but users will complain.
I could not get any workarounds suggested in the kibana issue to work for me but Samir or Dale might. The fault is seen when working kibana state is exported from one 5.4 instance and imported to a shiny new kibana 5.4 instance.

My tests used the exported cef demo kibana state from kibana 5.1 (So Samir tells me)

@acchen97
To support multiple Kibana dashboards per version is not hard; we can do it with small modifications. We need a new file structure, with dashboard/[module].json holding a hash instead of an array, and a means for the system (or the user via logstash.yml) to tell us which version to import.
We don't have to cater for this now, but using plugin-api pinning we can implement it later.
FROM

GEM File structure
logstash-module-netflow
├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   ├── netflow.json (contains '["dash1", "dash2"]')
│   │   │   ├── dash1.json ("panelJSON" contains refs to visualization panels 1,2 and search 1)
│   │   │   └── dash2.json ("panelJSON" contains refs to visualization panel 3 and search 2)
│   │   ├── search
│   │   │   ├── search1.json
│   │   │   └── search2.json
│   │   └── vizualization
│   │       ├── panel1.json
│   │       ├── panel2.json
│   │       └── panel3.json
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec

TO

GEM File structure
logstash-module-netflow
├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   ├── netflow.json (contains '{"v5": ["dash1", "dash2"], "v6": ["dash1", "dash2"]}')
│   │   │   ├── v5
│   │   │   │   ├── dash1.json ("panelJSON" contains refs to visualization panels 1,2 and search 1)
│   │   │   │   └── dash2.json ("panelJSON" contains refs to visualization panel 3 and search 2)
│   │   │   └── v6
│   │   │       ├── dash1.json ("panelJSON" contains refs to visualization panels 1,2 and search 1)
│   │   │       └── dash2.json ("panelJSON" contains refs to visualization panel 3 and search 2)
│   │   ├── search
│   │   │   ├── v5
│   │   │   │   ├── search1.json
│   │   │   │   └── search2.json
│   │   │   └── v6
│   │   │       ├── search1.json
│   │   │       └── search2.json
│   │   └── vizualization
│   │       ├── v5
│   │       │   ├── panel1.json
│   │       │   ├── panel2.json
│   │       │   └── panel3.json
│   │       └── v6
│   │           ├── panel1.json
│   │           ├── panel2.json
│   │           └── panel3.json
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec

@guyboertje So v5.5 and v6 are different enough that we need to support multiple JSON structures? Okay. I'm cool with that. We should plan for that, since we're targeting v5.5 anyway, and v6 and v7 might also have different structures, requiring multiple version support as well.

My worry is that potential _minor_ version incompatibilities in Kibana might make it even more complex :worried:

@untergeek - I'm not saying that v5.5 and 6.0 will be different - just that we can easily accommodate it if it must be done.

After a bit of thought, maybe the module gemspec should not reference the plugin API, because the plugin API is not a contract that these gems agree to.

It's actually the Scaffold class that determines what file structure it expects the gem to support, implying that we need a modules_api meta gem.

The fault is seen when working kibana state is exported from one 5.4 instance and imported to a shiny new kibana 5.4 instance

@guyboertje can we create a new state for 5.5? We are only creating a module for 5.5, so why bother importing from 5.4 or 5.1? This way users will get a clean experience with 5.5 (until we fix the blocker in Kibana) -- WDYT?

It would suck if we release this for 5.5 and users complain immediately.

@guyboertje can we create a new state for 5.5? We are only creating a module for 5.5, so why bother importing from 5.4 or 5.1? This way users will get a clean experience with 5.5 (until we fix the blocker in Kibana) -- WDYT?

It would suck if we release this for 5.5 and users complain immediately.

+1 on just focusing on Kibana 5.5 for this initial release, we can work on 5.4 afterwards. FYI - as we're planning to leverage time series visualizations in the dashboards, the farthest back we can go regarding full compatibility is 5.4 since that's when the feature was introduced.

@suyograo @acchen97
I don't believe there is anything specific about Kibana 5.5 regarding dashboard and visualization formats. I am checking in Kibana Slack whether 5.1 or 5.4 dashboards are expected to work with 5.5.

We also need to import an index-pattern. The mechanism is different from Beats, where the index-pattern JSON is dynamically created from a list of fields; we will use a static file provided by the "module maintainer".
We also need to set the defaultIndex in .kibana/config/%{index_pattern.kibana_version} - this is a pain because we need to do it per version until the Kibana team implements the "pick a default index pattern" feature that is on the cards.
NOTE: this is done locally.

Implement a dynamic search file import based on any references to the field "savedSearchId" in any found visualization JSON files.
Done.
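The scan described above can be sketched like this; the function name is hypothetical, and only the "savedSearchId" field name comes from the comment itself:

```ruby
require "json"

# Scan visualization JSON documents for "savedSearchId" references and
# collect the unique search ids whose files need to be imported too.
def referenced_search_ids(visualization_docs)
  visualization_docs.map { |doc| JSON.parse(doc)["savedSearchId"] }.compact.uniq
end

vizzes = ['{"title":"panel1","savedSearchId":"search1"}', '{"title":"panel3"}']
referenced_search_ids(vizzes)
# => ["search1"]
```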
Then submit final branch PR before branch to master PR.

Done - for now
