To support automatically exporting dependent saved objects (#27203), having the export and import logic on the server side will facilitate object manipulation.
The proposal is to add three new REST API endpoints:
/api/management/saved_objects/_export
/api/management/saved_objects/_import
/api/management/saved_objects/_resolve_import_conflicts
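A hypothetical sketch of how the proposed endpoints might be called. The endpoint paths come from the proposal above, but the request body shape, headers, and response format shown here are assumptions, not a finalized API:

```shell
# Hypothetical sketch only: payload shape and headers are assumptions.
KIBANA_URL="http://localhost:5601"

# A possible export request body: list the objects to export by type and id.
EXPORT_BODY='{"objects":[{"type":"dashboard","id":"my-dashboard"}]}'
echo "$EXPORT_BODY"

# Export (commented out; requires a running Kibana with these endpoints):
# curl -s -X POST "$KIBANA_URL/api/management/saved_objects/_export" \
#   -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
#   -d "$EXPORT_BODY" > export.json

# Import the exported file back (also commented out):
# curl -s -X POST "$KIBANA_URL/api/management/saved_objects/_import" \
#   -H 'kbn-xsrf: true' --form file=@export.json
```

With server-side endpoints like these, dependency resolution could happen inside Kibana instead of in client scripts like the one further down this thread.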
Notes:
Original description
It would be great to move saved object import/export as a rest API. This requires that savedObjects can be used on the server, but seems like a worthwhile goal so that savedObject manipulation can be a bit more automated.
+1
+1
+1
Related to #5480
+1
This would also help a lot with bootstrapping a virtualized application containing the ELK stack.
+1
+1
+1
+1
+1, we'd like to promote saved searches alongside logging code releases in our release pipeline.
+1
+1
+1
+1!
+1
+1
I wrote this script to export saved objects:
USERNAME="USER"
PASSWORD="PASS"
BASE_URL="https://kibana_host:port/es_admin/.kibana"
VERSION_HEADER="kbn-version: 5.4.1"
# get the saved dashboard objects
DASHBOARDS=$( curl --silent "$BASE_URL/dashboard/_search?size=1000&scroll=1m&sort=_doc" --user "$USERNAME:$PASSWORD" -H "$VERSION_HEADER" --data-binary '{"query":{"match_all":{}}}' --compressed | jq '[.hits.hits[] | {_id: ._id, _type: ._type}]')
# get the saved search objects
SEARCHES=$( curl --silent "$BASE_URL/search/_search?size=1000&scroll=1m&sort=_doc" --user "$USERNAME:$PASSWORD" -H "$VERSION_HEADER" --data-binary '{"query":{"match_all":{}}}' --compressed | jq '[.hits.hits[] | {_id: ._id, _type: ._type}]')
# get the saved visualization objects
VISUALIZATIONS=$(curl --silent "$BASE_URL/visualization/_search?size=1000&scroll=1m&sort=_doc" --user "$USERNAME:$PASSWORD" -H "$VERSION_HEADER" --data-binary '{"query":{"match_all":{}}}' --compressed | jq '[.hits.hits[] | {_id: ._id, _type: ._type}]')
# combine the fetched objects into a single array
# combine the fetched objects into a single array
ALL_ITEMS=$(echo "$DASHBOARDS $SEARCHES $VISUALIZATIONS" | jq -s add)
# wrap that array in a `docs` property; the Kibana endpoint expects that
DOCS="{\"docs\": $ALL_ITEMS}"
# get the export-friendly data for the objects above
RESPONSE=$(curl --silent "$BASE_URL/_mget" --user "$USERNAME:$PASSWORD" -H "$VERSION_HEADER" --data-binary "$DOCS" --compressed)
# delete the properties that are not present in an export made with the Kibana "export" button
OUTPUT=$(jq '[.docs[] | del(.found) | del(._version) | del(._index)]' <<< "$RESPONSE")
echo "$OUTPUT" | jq .
The resulting `$OUTPUT` is identical to the export I take from Kibana using the UI (except for the property ordering).
Some notes: remove `--user "$USERNAME:$PASSWORD"` from the commands if you don't need to authenticate.
This script would be useful for automated backup purposes. I would hate to lose my dashboards and click thousands of times to rebuild the same visualizations.
+1
+9999999999
We want to deploy the same dashboards to N clusters. Without this feature we'll have to do a manual step every time we deploy :(
Is there any hack way around this? Such as POSTing the dashboards.json to the ES cluster itself?
There is no straightforward workaround for this. Some people are writing directly to the internal kibana index in ES, but this is error prone between minor versions, and the upcoming 6.0 release completely changes the entire kibana index mappings in order to support the removal of types in ES, so any work you do now will essentially be undone in that version.
An experimental API was merged in Kibana version 5.5.0 that includes the ability to import dashboards in one go: https://github.com/elastic/kibana/pull/10858 Our intention is to iron out any kinks with that in the coming weeks so that we can remove the experimental label and publicly document the API as soon as possible, likely around the 6.0.0 timeframe. If you're willing to take on the risks of learning an experimental API, then feel free to give that API a whirl and let us know how it goes! Beware, the API is still subject to change in any version until we're more confident that it's right, and it isn't documented yet either.
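The experimental API from elastic/kibana#10858 could be exercised roughly as below. Treat the paths and parameters as assumptions and verify them against your Kibana version, since (as noted above) the API was still subject to change:

```shell
# Sketch of the experimental dashboard API merged in Kibana 5.5.0.
# Paths and parameters are assumptions; verify against your version.
KIBANA_URL="http://localhost:5601"
DASHBOARD_ID="my-dashboard"

EXPORT_URL="$KIBANA_URL/api/kibana/dashboards/export?dashboard=$DASHBOARD_ID"
echo "$EXPORT_URL"

# Export a dashboard together with the objects it references:
# curl -s "$EXPORT_URL" -H 'kbn-xsrf: true' > dashboard.json

# Import it into another Kibana instance:
# curl -s -X POST "$KIBANA_URL/api/kibana/dashboards/import" \
#   -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
#   -d @dashboard.json
```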
Any updates on this feature request (given the number of requests, and the fact that it was requested in 2015 and is still open)?
@termlen0 As I mentioned in this comment, we do have an experimental API that covers a broad set of these requirements right now. Feedback has been really positive on it, though we still need to see it in the wild a bit longer before we're confident in the approach. If you have any feedback on the API for your own use case, it would be greatly appreciated.
+300
+1
+1
+100000
+1
+1
+1
+1 :)
+1
+1 which my tally count says it currently stands at 100,522 :)
+1
+1
+1
+1. It would be great to build these visualization objects on a dev machine, commit to git and as part of deploy process update then on test/prod environments.
@dmitrypol This issue is specifically for REST APIs for export/import. Follow #2310 for loading objects from the file system.
Could I just +1 👍
Hey there Mike,
I can see you are working on this issue as I write this. Any updates on the issue of creating Dashboards saving the JSON file to Kibana?
Hi @JoaoFranciscoCarvalhoNeto, a pull request is on its way to resolve this. I have to finish the relationships aspect first https://github.com/elastic/kibana/issues/27210.
Thanks @mikecote! Keep up the wonderful work
Fixed by #33513.
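For reference, a sketch of what using the saved objects export/import API that eventually shipped might look like. Verify the paths, parameters, and NDJSON format against the Kibana documentation for your version; the details here may not match every release:

```shell
# Sketch only: confirm endpoint details against your Kibana version's docs.
KIBANA_URL="http://localhost:5601"

IMPORT_URL="$KIBANA_URL/api/saved_objects/_import"
echo "$IMPORT_URL"

# Export all dashboards (and, with the dependency work from #27203 in
# place, the objects they reference) as NDJSON:
# curl -s -X POST "$KIBANA_URL/api/saved_objects/_export" \
#   -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
#   -d '{"type": "dashboard"}' > export.ndjson

# Import the NDJSON file into another instance:
# curl -s -X POST "$IMPORT_URL" -H 'kbn-xsrf: true' \
#   --form file=@export.ndjson
```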