There is a seed-specific failure in the DocumentSubsetReaderTests testSearch test.
This has failed on both master and 6.x.
Master:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/6761/console
6.x:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+periodic/2457/console
It's reproducible using this command:
./gradlew :x-pack:plugin:core:test \
-Dtests.seed=4CDD600B7538DB55 \
-Dtests.class=org.elasticsearch.xpack.core.security.authz.accesscontrol.DocumentSubsetReaderTests \
-Dtests.method="testSearch" \
-Dtests.security.manager=true \
-Dtests.locale=fi-FI \
-Dtests.timezone=Europe/Kiev
It doesn't reproduce with other seeds.
I will mute the test on master and 6.x.
Pinging @elastic/es-security
This seems to be a result of the upgrade to Lucene 7.5.0 (#32390 / 53ff06e621213cb007d0e03b8ec8d80431d9186a).
I'll debug further to work out if it's just a test bug or a genuine behaviour change.
FYI @jimczi ; I'll let you know what I find.
@jimczi (or another Lucene expert)
I think I understand why this test is failing and can guess at why it is triggered in Lucene 7.5, but I'm not sure of the best way to fix it.
The problem is between lines 102 and 104 here:
Depending on the merge policy/schedule configuration (which is randomised in LuceneTestCase), it's possible that the delete triggers a merge. The merge removes the "value3" document entirely, so the "value4" document becomes the doc with index 2 instead of 3, which fails the assertion at line 133.
Since Lucene 7.5 changed how deletes are weighed during merge selection (the reclaim_deletes_weight / setDeletesPctAllowed changes), and this index has so few docs, a single delete could be enough to trigger a merge.
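To make that concrete: the failing assertion references doc ids up to 3, which suggests the test indexes only about four documents, so one delete is already a large fraction of the segment. A minimal sketch of the arithmetic (the 20% threshold used for comparison here is purely illustrative, not Lucene's actual default):

```java
public class DeleteRatioSketch {

    // Percentage of a segment's documents that are deleted.
    static double deletedPct(int maxDoc, int numDeleted) {
        return 100.0 * numDeleted / maxDoc;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: a 4-doc segment with a single delete.
        double pct = deletedPct(4, 1);
        System.out.println(pct); // prints 25.0

        // Any merge policy that tries to keep the deleted percentage below
        // some threshold (as TieredMergePolicy does since the Lucene 7.5
        // changes) may elect to merge such a tiny segment immediately.
        System.out.println(pct > 20.0); // prints true for this example
    }
}
```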
If that analysis is correct, what's the cleanest way to ensure that no merge happens when the document is deleted at line 103?
Good catch @tvernum! One workaround is to force a log merge policy (this policy does not implement the new setDeletesPctAllowed heuristic), like this:
IndexWriter iw = new IndexWriter(directory, newIndexWriterConfig().setMergePolicy(newLogMergePolicy(random())));
It seems that the test only needs a single segment with a delete, so it could also work to disable merging entirely and set a big buffer size on the index writer. The log merge policy solution should work fine, though. Thanks for chasing this up Tim!
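For reference, that second option would look roughly like the following, using Lucene's NoMergePolicy and the LuceneTestCase newIndexWriterConfig helper (an untested sketch, not a verified patch; the 64 MB buffer size is an arbitrary illustrative value):

```
// Disable merging entirely, so the delete can never trigger a merge,
// and give the writer a large RAM buffer so everything stays in one
// segment until commit.
IndexWriter iw = new IndexWriter(directory,
        newIndexWriterConfig()
                .setMergePolicy(NoMergePolicy.INSTANCE)
                .setRAMBufferSizeMB(64));
```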
@tvernum and @jimczi Another workaround is to index more documents to reduce the deletion ratio.