Kibana: CSV Import gives a "properties must be a map type" error

Created on 28 Jun 2016 · 7 comments · Source: elastic/kibana

Using Kibana 5.0.0-alpha4.

I'm importing a CSV file:

[screenshot of the CSV import form, taken today at 12:23:58]

Then next:

[screenshot of the next import step, taken today at 12:24:26]

It returns an extremely long error message (as if it were looping) that takes about 30 seconds to display when I click "more":

[screenshot of the error message in Chrome, taken today at 12:26:23]

In elasticsearch logs, I'm getting:

[2016-06-28 12:24:18,695][DEBUG][action.admin.indices.template.put] [Abominatrix] failed to put template [kibana-bano-17]
MapperParsingException[Failed to parse mapping [_default_]: properties must be a map type]; nested: ElasticsearchParseException[properties must be a map type];
    at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:236)
    at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.validateAndAddTemplate(MetaDataIndexTemplateService.java:220)
    at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.access$300(MetaDataIndexTemplateService.java:59)
    at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService$2.execute(MetaDataIndexTemplateService.java:163)
    at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)
    at org.elasticsearch.cluster.service.ClusterService.runTasksForExecutor(ClusterService.java:549)
    at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:850)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:392)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:237)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: ElasticsearchParseException[properties must be a map type]
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:203)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:180)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:284)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:205)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:143)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:111)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:92)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:78)
    at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:256)
    at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:234)
    ... 12 more
Labels: Add Data, bug

All 7 comments

Could you provide the CSV you used? If not, could you capture the request payload going to /api/kibana/ingest which gets fired off when you click Save?

@Bargs sure. I'm using this file: http://bano.openstreetmap.fr/data/bano-17.csv

Ah, the dots in the field names are throwing things off. Our Kibana API assumes that a dot in a field name indicates a property of an object-type field, which obviously isn't always the case. I'm not sure why it's breaking in such an odd way, so I'll have to dig into that, but supporting dots in field names is going to be a tricky one to solve.
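The assumption described above can be sketched roughly like this (a hypothetical illustration in Python, not Kibana's actual code): a dotted column name gets expanded into a nested object mapping, so a name that is not really an object path produces a mapping Elasticsearch may refuse to merge.

```python
def expand_dotted(field, field_type="text"):
    """Expand a dotted field name into a nested Elasticsearch-style
    object mapping, e.g. "a.b" -> {"a": {"properties": {"b": {...}}}}.
    Illustrative sketch only; Kibana's real ingest code differs."""
    parts = field.split(".")
    node = {"type": field_type}
    # Wrap the leaf type in one "properties" layer per dotted segment,
    # innermost segment first.
    for part in reversed(parts[1:]):
        node = {"properties": {part: node}}
    return {parts[0]: node}

# A plain name maps directly; a dotted name becomes a nested object,
# even when the dot was never meant as an object path.
print(expand_dotted("lat"))
print(expand_dotted("lat.deg"))
```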

In the meantime, you could work around this by adding a header row to your CSV with column names that don't contain dots.
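One way to apply that workaround is to pre-process the file before uploading, replacing dots in the header row only (a minimal Python sketch; the function name is illustrative and not part of Kibana):

```python
import csv
import io

def sanitize_header(csv_text, replacement="_"):
    """Replace dots in the header row's column names so the importer
    does not mistake them for object-field paths. Data rows are
    written back unchanged."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows[0] = [name.replace(".", replacement) for name in rows[0]]
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()

print(sanitize_header("lat.deg,lon.deg\n45.1,0.2\n"))
```

Note that a file like bano-17.csv has no header row at all, so you would first need to add one; otherwise the importer treats the first data row (which contains dotted coordinate values) as the column names.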

I think we should replace dots in the first line with _ or something similar

Yeah that's not a bad idea. Kibana doesn't handle dots in field names well in general, so it would be best to avoid the issue entirely.

Handle it in whatever way gives the end user the best experience.
Perhaps even give the user a choice for what to do with the dots:
"Dots found in column field names, do you want to remove dots or replace with _ or a space?"

I'm going to close this since CSV upload was pulled. We can always refer back to this issue if/when we revisit the feature.
