Happening in master, using an imported packetbeat dashboard (imported via packetbeat 5.0)
Investigating...
There is at least one issue in that the packetbeat import_dashboards script is creating a packetbeat index in kibana when that index doesn't actually exist. I don't know if there is anything we can do about this, since anyone can use the es api directly to inject an index into .kibana.
There may be an additional issue in that the fields that are being imported are missing their types.
I'm hoping this is simply an issue from importing 5.0 packetbeat dashboards into Kibana master.
Once I started tracking data and refreshed the fields, type is assigned and the dashboards work as expected.
I suppose this could be considered working as expected, though it isn't a great user experience.
Re-opening, as we should first verify whether this was working in 5.1. If it was, then this is a regression and something broke.
If something did break, I am not sure yet if the bug is on the beats side or the kibana side.
**Update:** Not a regression. This exists in 5.1.1.
I think this is due to https://github.com/elastic/beats/pull/3147 (i.e. a bug on the Packetbeat side).
I also add index-patterns, visualizations and dashboards manually to the .kibana index, and I've hit this problem too.
So the problem is that the wrong data is put into the fields property of the index-pattern type:
...
"index-pattern": {
"properties": {
"fieldFormatMap": {
"type": "text"
},
"fields": {
"type": "text"
},
"intervalName": {
"type": "text"
},
"notExpandable": {
"type": "boolean"
},
"sourceFilters": {
"type": "text"
},
"timeFieldName": {
"type": "text"
},
"title": {
"type": "text"
}
}
},
...
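For anyone debugging documents like the one above: the fields property of an index-pattern document is stored as a JSON-encoded string of field descriptors, and Kibana expects each descriptor to carry attributes like type, searchable, and aggregatable. A minimal sketch for spotting incomplete entries — the helper name and the exact required-key set are my own assumptions, not Kibana internals:

```python
import json

# Assumed minimal attribute set; real Kibana versions may require more.
REQUIRED_KEYS = {"name", "type", "searchable", "aggregatable"}

def invalid_field_entries(fields_json):
    """Return names of field descriptors missing any required attribute.

    `fields_json` is the JSON-encoded string stored in the index-pattern
    document's `fields` property (a list of descriptor objects).
    """
    entries = json.loads(fields_json)
    return [entry.get("name", "<unnamed>")
            for entry in entries
            if not REQUIRED_KEYS <= entry.keys()]

fields = json.dumps([
    {"name": "@timestamp", "type": "date",
     "searchable": True, "aggregatable": True},
    {"name": "flow_id"},  # missing type/searchable/aggregatable
])
print(invalid_field_entries(fields))  # ['flow_id']
```

Descriptors shaped like the second entry are roughly what an import of old dashboards can leave behind, and they trigger the "field is a required parameter" error.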
And for the Kibana devs: an API for adding index-patterns, visualizations, and dashboards and for generating embed links from external programs would be good, and for me it's highly needed.
Or at least add some documentation about what needs to go into the .kibana index and how.
Note that another user is running into this error with 5.1.1 and metricbeat dashboards, but I can't reproduce it.
It also looks like the user who reported the issue with packetbeat dashboards is not able to resolve the issue, even after getting data and refreshing the field list (he actually notes that refreshing the field list caused the dashboard to subsequently fail).
https://discuss.elastic.co/t/visualize-field-is-a-required-parameter-error/69536/7
Another case at https://discuss.elastic.co/t/5-1-1-discover-error/70940, on first glance not using beats data.
And another report on discuss with a very simple setup: https://discuss.elastic.co/t/saved-field-parameter-is-now-invalid-please-select-a-new-field/71044
In the last two cases the error is being thrown on the Discover tab, so it might be being triggered by something else. I think this error appears for a variety of reasons.
The reason it was happening on the visualize and dashboard tab is that a visualization was using a field that was not marked as searchable or aggregatable. I suspect something similar (perhaps yet another issue with the field_stats api not returning data), but I do wonder why it's being triggered on the discover tab.
@Stacey-Gammon I could imagine that it happens for users who are upgrading to 5.x. E.g. they used a field that was valid in 4.x (before field stats were used) and is now no longer "valid".
The field agg param is set as the default time field in the histogram at the top of the Discover results, so for users seeing this in Discover their timestamp field is probably marked as not-aggregatable.
This likely won't be fixed by ES anytime soon: https://github.com/elastic/elasticsearch/issues/22438
So we should come up with a workaround in Kibana because this is obviously affecting a lot of users in different ways.
@critix would you mind elaborating on your use case here? What's the scenario in which you need to build visualizations on fields with no data?
One possible solution: I'm thinking for fields that are missing from field_stats (in other words, missing data), we mark them with a "maybe" instead of true/false for searchable and aggregatable. For "maybe" fields we remove the early failure from the Field Agg Param. We could then display them with a note in the agg config editor's field dropdown saying something like "these fields may be aggregatable, use at your own risk".
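The "maybe" idea above could be sketched as a tri-state resolver. This is a hypothetical helper illustrating the proposal, not existing Kibana code:

```python
def field_flags(field_name, mapping_flags, stats_fields):
    """Resolve (searchable, aggregatable) for one field.

    mapping_flags: the (searchable, aggregatable) booleans derived from
    the mapping; stats_fields: the set of field names field_stats
    returned data for. Fields absent from the stats get "maybe" instead
    of a hard False, so the UI can warn instead of failing early.
    """
    if field_name in stats_fields:
        return mapping_flags
    return ("maybe", "maybe")

stats = {"@timestamp", "bytes"}          # fields with indexed data
print(field_flags("bytes", (True, True), stats))     # (True, True)
print(field_flags("geo.src", (True, False), stats))  # ('maybe', 'maybe')
```

"Maybe" fields would then skip the early failure in the Field Agg Param and get a warning note in the agg config editor's field dropdown.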
Discussed in our Monday meeting and confirmed this is a problem and we want to prioritize a solution. Ideally the solution would come from the elasticsearch side and we'd like to push forward on improving the field stats API. @spalger is going to follow up on elastic/elasticsearch#22438.
I'm facing the same issue since I added a nested field to a mapping.
Is there any workaround?
// EDIT: I can see/search documents again, by deleting and recreating Index Patterns (not Refreshing).
@cehrig is your field mapped as an actual nested type? If so, Kibana does not currently support nested aggregations: https://github.com/elastic/kibana/issues/1084
Just ran into this when exporting Kibana objects from one 5.2 instance, and importing them into another one.
I was able to work around it. Once I had some valid data in my indexes, I went to Management > Index Patterns, and refreshed the field list. That marked some of the fields as "searchable", which weren't that way before. Then I deleted and re-imported my objects.
Just ran into this when creating an Index pattern before data has been written to our index. We use the ARM template which is running ES 5.1.2
Can anyone tell me how to re-mark some fields as "aggregatable"? I also want to know why this field is not "searchable" and not "aggregatable". Thanks!
@cutd elasticsearch decides what makes a field aggregatable/searchable, and I'm not certain about the entire criteria. Can you post your mappings to https://discuss.elastic.co/c/kibana and someone will help you figure out what needs to change to get it working.
@spalger I resolved this problem. It was caused by one field having two number types in my mapping: long in one index and float in another. I deleted the index that had the second type and it's working now. But I don't know why one field can have two types.
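The situation described above can be detected mechanically: walk the mappings of every index an index pattern matches and flag fields whose type differs between indices. A hedged sketch — the function and data shapes are illustrative only:

```python
from collections import defaultdict

def conflicting_fields(mappings_by_index):
    """Find fields mapped to different types in different indices.

    mappings_by_index: {index_name: {field_name: es_type}}. A field
    that is `long` in one index and `float` in another cannot be
    treated consistently by an index pattern spanning both.
    """
    types = defaultdict(set)
    for fields in mappings_by_index.values():
        for name, es_type in fields.items():
            types[name].add(es_type)
    return {name: sorted(t) for name, t in types.items() if len(t) > 1}

mappings = {
    "logstash-2017.01.01": {"bytes": "long", "host": "keyword"},
    "logstash-2017.01.02": {"bytes": "float", "host": "keyword"},
}
print(conflicting_fields(mappings))  # {'bytes': ['float', 'long']}
```

In practice you would feed this from the GET <index-pattern>/_mapping response rather than hand-built dicts.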
the ES issue (https://github.com/elastic/elasticsearch/issues/22438) seems to be fixed
Yep @hex2a, and we will be implementing the new field capabilities API as part of https://github.com/elastic/kibana/issues/11011
Is this issue sorted out in the new version of Kibana and elasticsearch?
I am facing this error in version 5.1.1 for both Kibana and elastic.
An update can find the solution?
Thanks in advance
Yesterday I upgraded from 2.3.3 to 5.3 and also suffer from this issue I believe.
None of my fields are searchable and I get the 'Saved "field" parameter is now invalid. Please select a new field.' / 'Discover: "field" is a required parameter' errors, which makes Kibana unusable at the moment.
Is there anything at all I can do to overcome the problem and not wait for the new field capabilities API?
@apetrov88 I've got a brute force method that basically sets the searchable and aggregatable flags to true for all fields except chosen meta fields. It fixed most of my problems as a temporary measure. As a bonus it also means you can create filters on _type and _index if you choose.
https://gist.github.com/pemontto/b817a89f34675e6b9f5bad08e30bc52d
@apetrov88 if you already have data indexed, you simply need to refresh your index pattern. If you don't have any data, try indexing some and then refreshing.
If you can't get to that page due to errors, you could delete the index pattern manually and then recreate it after indexing some data.
curl -XDELETE <elasticsearch-host:port>/.kibana/index-pattern/<pattern-name>
I wouldn't recommend marking all fields as searchable and aggregatable because we use these flags to show you the correct fields in different contexts. I'd only go for that solution over the above if you have a lot of scripted fields you don't want to lose by deleting your index pattern.
@pemontto thanks for your gist. I'll give it a try.
@Bargs I tried refreshing and also recreating, but nothing changes.
@apetrov88 have you already indexed data into indices that match your index pattern? If so, would you mind sharing the response from the following query?
GET <index-pattern>/_field_stats?fields=*&level=cluster&allow_no_indices=false
The _field_stats api won't return any info if you don't have data, which is the usual cause of this bug. If you do have data already, this might be something else we need to look into.
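To illustrate the "no data, no stats" behavior: the per-field stats are nested under the response's indices object, so an empty indices object means no field will end up marked searchable or aggregatable. A small sketch of the check (the helper name is mine):

```python
def fields_with_stats(field_stats_response):
    """Collect the set of fields a _field_stats response has data for.

    With no documents indexed, `indices` comes back empty, so the
    result is the empty set and every field in the index pattern gets
    flagged as non-searchable and non-aggregatable.
    """
    found = set()
    for index_data in field_stats_response.get("indices", {}).values():
        found.update(index_data.get("fields", {}).keys())
    return found

# Response shape for an index pattern matching only empty indices.
empty = {"_shards": {"total": 5, "successful": 5, "failed": 0},
         "indices": {}}
print(fields_with_stats(empty))  # set()
```

If this comes back empty even though you have documents, the request itself is probably failing (as in the shard-failure response below).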
@Bargs I do have lots of indexed data. Here is the response to the query:
{
"_shards": {
"total": 5,
"successful": 0,
"failed": 5,
"failures": [
{
"shard": 0,
"index": "logstash-2016.19",
"status": "INTERNAL_SERVER_ERROR",
"reason": {
"type": "exception",
"reason": "java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5",
"caused_by": {
"type": "execution_exception",
"reason": "java.lang.ArrayIndexOutOfBoundsException: 5",
"caused_by": {
"type": "array_index_out_of_bounds_exception",
"reason": "5"
}
}
}
}
]
},
"indices": {}
}
@apetrov88 ah, this looks like something different. Your field_stats request is failing for some reason. I'm guessing there's an underlying problem with your ES cluster that needs to be fixed. I would check your ES logs to see if there's additional information about the error.
If it seems like it might be a bug, could you submit a ticket to the ES repo?
@apetrov88 might be related https://github.com/elastic/elasticsearch/issues/24275
Edit: and matching Kibana issue https://github.com/elastic/kibana/issues/11379
I just reproduced this on 5.3.3 with our makelogs data and a saved export from Kibana 4.1.11, which I'll attach.
Run makelogs like makelogs --indexPrefix makelogs- so that it matches what is in this export.
kibana-makelogs.zip
I reproduced it on 5.1.1 with 3 steps:
1. run makelogs --indexPrefix makelogs-
2. create a makelogs-* index pattern
3. import the attached saved objects
In 5.0.0 I can do those same 3 steps with no error about "field".
kibana: 5.4.0
elasticsearch: 5.4.0
Bug reproduces while following "Getting Started" tutorial:
https://www.elastic.co/guide/en/kibana/current/tutorial-load-dataset.html and then https://www.elastic.co/guide/en/kibana/current/tutorial-discovering.html
For the logstash-* index pattern with @timestamp as the time-based field.
This isn't a full solution, but at the very least, coming in 5.5 (and 6.0 Alpha-2), the rest of your dashboard will load - only the visualizations referencing the invalid fields will fail to load. Related issue: https://github.com/elastic/kibana/issues/9747
@Stacey-Gammon - FYI, I am also getting this problem on 5.4.0 when testing the logstash modules feature that we are writing now.
@LeeDr What's there to discuss with this? Seems like something we need to fix
@epixa Discuss how to fix it? I don't think anybody knows the solution yet. And/or a work-around if possible for people hitting it.
The solution for most of these issues is to switch to the field_caps API: https://github.com/elastic/kibana/pull/11114
Aside from that, there are a couple of scenarios where this error may occur and the user can fix it on their own. One of them: a text field is not aggregatable because Elasticsearch turned off fielddata by default in 5.0. Solution: enable fielddata on the field or switch the visualization to a keyword version of the field.
@LeeDr I haven't tried your latest repro yet, so I don't know which scenario it falls under, or if it's something new.
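For context on that scenario, here is a rough rule of thumb for ES 5.x aggregatability, heavily simplified (the real decision is made inside Elasticsearch, not by code like this):

```python
def is_aggregatable(field_mapping):
    """Simplified ES 5.x rule: `text` fields are aggregatable only if
    fielddata is explicitly enabled; `keyword` and other doc-values
    backed types are aggregatable by default.
    """
    es_type = field_mapping.get("type")
    if es_type == "text":
        return bool(field_mapping.get("fielddata", False))
    return True

# Default dynamic mapping for a string field creates a `text` parent
# plus a `keyword` multi-field, which is why only the `.keyword`
# variant can be used in aggregations out of the box.
print(is_aggregatable({"type": "text"}))                     # False
print(is_aggregatable({"type": "keyword"}))                  # True
print(is_aggregatable({"type": "text", "fielddata": True}))  # True
```

This also explains the extension vs. extension.keyword behavior discussed below.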
Once we merge the change to field_caps we should probably close this issue and open up new issues any time a new bug causes this error. This error just means "the field you've selected for this visualization is invalid" and can occur for any number of reasons.
@Bargs the issue I hit was caused by a field that is not aggregatable. In our makelogs data there is an extension field and an extension.keyword field. In a 4.x version of Kibana the Discover tab let you click Visualize on extension, but in 5.x you can't because it's not aggregatable (you can on extension.keyword, but you have to show hidden fields to see that one).
I guess there are at least 2 scenarios here:
1. Some saved objects were written directly to the .kibana index, and when you open them in Kibana you see the error. In this case the bad field is either in the thing you opened or in something it contains. So a dashboard could have multiple saved searches and visualizations which could have a problem. I'm not sure how you would show that info to the user.
2. Importing saved objects can cause the error. In this case maybe we could just improve the error message so that we tell the user which field it was and in which saved object.
Agreed, the error message could be better. And our handling of the scenario where a dashboard has 1 or more invalid saved objects could also be improved. We should probably create a new enhancement request for those tasks, or re-label this issue as an enhancement and create new issues for the legitimate bugs which can lead to this error.
@Bargs I now know the scenario under which I was seeing this. Logstash is getting ready to ship modules like Beats does. Beats creates a "full" index_pattern and sends it to ES as part of the module setup, and Logstash will too (but did not while I was testing the UI).
Remembering that modules are a "Getting Started Quickly" or a "You are good to go, please don't edit anything (COTS)" solution - an index_pattern refresh when there is insufficient breadth of documents indexed will effectively edit the index_pattern.
In this scenario, the user will only see the error message if:
This hints at a disconnected index_pattern or with RBAC/Workspaces no edit permissions.
This issue was happening to me because the date field I selected had no data in it (all values were null). Selecting another date field with non-null values solved the problem.
@seliver - thanks for your comment. This was my issue as well!
I am also having the exact same issue.
I have E,L,K = 5.5.2
This issue has been resolved.
I am seeing this issue in ELK 5.6, using the packetbeat import-dashboard script.
Many of the visuals just idle at "loading" indefinitely. Not sure if that is because they do not have a proper field selected or what.
I've tried various tweaks I've found in the forums but can't find a general solution other than fixing a visual's settings one at a time.
An example would be the "Connections over time" visual in "Packetbeat Flows" dash.
It ships a visState like:
{
"title": "Connections over time",
"type": "area",
"params": {
"shareYAxis": true,
"addTooltip": true,
"addLegend": true,
"legendPosition": "right",
"smoothLines": true,
"scale": "linear",
"interpolate": "linear",
"mode": "stacked",
"times": [],
"addTimeMarker": false,
"defaultYExtents": false,
"setYExtents": false,
"yAxis": {}
},
"aggs": [
{
"id": "1",
"enabled": true,
"type": "cardinality",
"schema": "metric",
"params": {
"field": "flow_id"
}
},
{
"id": "2",
"enabled": true,
"type": "date_histogram",
"schema": "segment",
"params": {
"field": "@timestamp",
"interval": "auto",
"customInterval": "2h",
"min_doc_count": 1,
"extended_bounds": {}
}
}
],
"listeners": {}
}
I went to edit the visual and selected the metric field to be flow_id.keyword, and it works. I'm assuming I would have to do this for every visual to get it all working properly?
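The repetitive edit described above can be scripted offline. The sketch below is a hypothetical helper that operates only on the visState JSON; the result would still have to be saved back to Kibana:

```python
import json

def swap_to_keyword(vis_state_json, field_names):
    """Rewrite a visState so the named fields point at their `.keyword`
    multi-field variants (e.g. `flow_id` -> `flow_id.keyword`).
    Other fields, such as `@timestamp`, are left untouched.
    """
    vis = json.loads(vis_state_json)
    for agg in vis.get("aggs", []):
        field = agg.get("params", {}).get("field")
        if field in field_names:
            agg["params"]["field"] = field + ".keyword"
    return json.dumps(vis)

vis = json.dumps({"aggs": [
    {"id": "1", "type": "cardinality", "schema": "metric",
     "params": {"field": "flow_id"}},
    {"id": "2", "type": "date_histogram", "schema": "segment",
     "params": {"field": "@timestamp"}},
]})
fixed = json.loads(swap_to_keyword(vis, {"flow_id"}))
print(fixed["aggs"][0]["params"]["field"])  # flow_id.keyword
```

Applying this across every affected visualization document would save the one-at-a-time editing in the UI, assuming the .keyword variants actually exist in the index.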
Hi Adam,
I was able to resolve the issue by editing the JSON code for the dashboard panels for which I was getting the "field" required error.
My issue was that Visualize was looking for a field that was used in building the viz, but that field wasn't being indexed, and hence Kibana was not able to find that field in the logs.
I had to do this for every panel in the viz to fix the problem.
Fatema.
Running into this very frequently running 6.0.0-rc-1. No imported dashboards or indexes, everything was created on 6.0.0-rc1. Throwing the error constantly on the visualize page, as well as when turning on and off a filter on Discover.
I have quite a few index patterns generated from logstash, packetbeat, metricbeat, etc.
Will hopefully find time to dig in and figure out what the issue is.
@spanishgum I'm trying to reproduce this. I started with a fresh 5.6 installation then ran the import dashboards script in packetbeat. After that, I started packetbeat and I'm able to load all visualizations and see the appropriate data. Are there more steps involved?
Hi.
I have just updated from Kibana 4.5.4 to 6.0.0 and Elasticsearch 2.3.4 to 6.0.0.
I get the exact same problem when importing 4.5.4 based objects to Kibana 6.0.0.
The data that the imported objects are creating dashboards for is in the logstash-* format.
Is there no way to transfer your dashboards from Kibana 4.5.4 to 6.0.0?
Hi,
What I've noticed while playing around with this is that the field name seems to be different. As an example, when importing the dashboards for Winlogbeat it creates one visualization called "Sources". I get the same error as above when opening the visualization. Looking in the Winlogbeat-overview.json file, I see:
"title": "Sources",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Sources\",\"type\":\"pie\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"source_name\",\"size\":7,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}"
In the field parameter it states source_name: \"field\":\"source_name\". When I edit the visualization and try to find source_name, it's not present, but source_name.keyword is. Choosing that will make the visualization present what it should. I don't know if this is applicable to all of you, but for me it worked. But it's pretty annoying to update all these visualizations manually.
Update: Looking at the index pattern, I do have both source_name and source_name.keyword.
This is beyond my knowledge, so maybe this is a stupid question, but why are there two fields for the same data?
Also, I'm running:
Elasticsearch 6.0.0
Logstash 6.0.0
Kibana 6.0.0
I have not run into this issue recently. Has anyone else on the latest versions of Kibana?
I'm inclined to close this, and wait to see if more bugs reports come in. If so, we can address each one individually, since the original issue was actually fixed.
If anyone feels this is in error, please feel free to comment or re-open.
I'm also getting this on the latest versions, pretty similar to what @f-eric said
happened again in ELK, version 7.6. Any help?
@apuppy Did you only notice it after upgrading? If so, from what version?
The error disappeared when I reran the following command:
filebeat setup -e
I still don't know why; maybe the fields from Beats had not been synchronized to Elasticsearch.
I get this a lot if I use just "*" as the index pattern to "catch all" the indexes in order to make some visualizations. It becomes unusable.
Still getting this quite frequently. Seems to be related to a saved search. Running 7.7.0.