Runtime: Optimize Enumerable.Any

Created on 31 Aug 2017 · 21 Comments · Source: dotnet/runtime

Hi,

Currently, Enumerable.Any obtains an enumerator and attempts a single MoveNext to get its result.
Wouldn't it be faster (e.g. for a collection that is expensive to enumerate) to first try to read the collection's size, if one is available?

Current code:

```c#
public static bool Any<TSource>(this IEnumerable<TSource> source) {
    if (source == null) throw Error.ArgumentNull("source");
    using (IEnumerator<TSource> e = source.GetEnumerator()) {
        if (e.MoveNext()) return true;
    }
    return false;
}
```

Proposed code:
```c#
public static bool Any<TSource>(this IEnumerable<TSource> source) {
    if (source == null) throw Error.ArgumentNull("source");
    if (source is ICollection<TSource> collectionT) return collectionT.Count > 0;
    if (source is ICollection collection) return collection.Count > 0;
    using (IEnumerator<TSource> e = source.GetEnumerator()) {
        return e.MoveNext();
    }
}
```

Here's a sloppy test case:
```c#
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using static System.Console;

static class Program
{
    static void Main()
    {
        var list = Enumerable.Range(0, short.MaxValue).ToList();
        var array = new int[short.MaxValue];

        new HttpClient(); // is first initialization more expensive?
        var hashSets =
            new HashSet<HttpClient>(Enumerable.Range(1, short.MaxValue).Select(i => new HttpClient()));
        var arrayClients =
            Enumerable.Range(1, short.MaxValue).Select(i => new HttpClient()).ToArray();

        IEnumerable<int> enumerate()
        {
            for (int i = 0; i < short.MaxValue; i++)
                yield return i;
        }
        var enumerable = enumerate();

        PrintResult(nameof(list), GetResults(list));
        PrintResult(nameof(array), GetResults(array));
        PrintResult(nameof(hashSets), GetResults(hashSets));
        PrintResult(nameof(arrayClients), GetResults(arrayClients));
        PrintResult(nameof(enumerable), GetResults(enumerable));

        ReadLine();
    }

    static void PrintResult(string type, (TimeSpan, TimeSpan) results)
    {
        WriteLine();
        WriteLine($"{type}:");
        WriteLine($"iteration: {results.Item1}");
        WriteLine($"count: {results.Item2}");
        WriteLine(new string('*', 27));
    }

    static (TimeSpan, TimeSpan) GetResults(IEnumerable collection)
    {
        var iterationStopwatch = new Stopwatch();
        var countStopwatch = new Stopwatch();

        var max = 1000000;

        bool any;
        for (int i = 0; i <= max; i++)
        {
            Write($"\riteration: {i:########}/{max:########}");

            iterationStopwatch.Start();
            any = AnyIteration(collection);
            iterationStopwatch.Stop();

            countStopwatch.Start();
            any = AnyCount(collection);
            countStopwatch.Stop();
        }

        return (iterationStopwatch.Elapsed, countStopwatch.Elapsed);
    }

    static bool AnyCount(IEnumerable source)
    {
        // In this non-generic variant the proposal's ICollection<TSource> check
        // collapses to ICollection, so the second test is redundant; it is kept
        // to mirror the two attempted casts of the proposed code.
        if (source is ICollection collectionT) return collectionT.Count > 0;
        if (source is ICollection collection) return collection.Count > 0;

        // The non-generic IEnumerator is not IDisposable, so dispose manually if supported.
        IEnumerator e = source.GetEnumerator();
        try { return e.MoveNext(); }
        finally { (e as IDisposable)?.Dispose(); }
    }

    static bool AnyIteration(IEnumerable source)
    {
        IEnumerator e = source.GetEnumerator();
        try { return e.MoveNext(); }
        finally { (e as IDisposable)?.Dispose(); }
    }
}
```

Results:

```
iteration: 1000000/1000000
list:
iteration: 00:00:00.1867072
count: 00:00:00.0511948


iteration: 1000000/1000000
array:
iteration: 00:00:00.1680355
count: 00:00:00.1238209


iteration: 1000000/1000000
hashSets:
iteration: 00:00:00.1696227
count: 00:00:00.0413590


iteration: 1000000/1000000
arrayClients:
iteration: 00:00:00.1396707
count: 00:00:00.1151786


iteration: 1000000/1000000
enumerable:
iteration: 00:00:00.1782133
count: 00:00:00.0876572


```

As you can see, iteration is always way more expensive.

Related: https://github.com/dotnet/corefx/issues/23578

All 21 comments

Performance measurements > intuition.

I could swear I've seen @JonHanna add this optimization at some point...

@jnm2

Performance measurements > intuition.

Agree. I've updated my issue; it's not as thorough as it should be, but at least I threw in some code.

How much does the performance decrease for types that aren't ICollections, like yield return enumerables?

@svick added another case per your request.

I could swear I've seen @JonHanna add this optimization at some point...

You may remember me discussing investigations of it when someone else suggested it while I was doing similar types of optimisations.

All optimisations of this class add cost to the case where the optimisation isn't available, since we spend a small amount of time testing whether we can take the optimal path, only to find we cannot. In most of these cases that's worth it because the gain, when we can take it, is large, especially if it changes an operation from O(n) to O(1). Here, though, it can only change an O(1) operation into an O(1) operation with better constant costs, so it's not as big a win for the potential cost.
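For contrast, here is a sketch of the O(n)-to-O(1) case where the same pattern pays off much more clearly (a Count-style helper written for illustration, not the actual corefx code):

```c#
using System.Collections.Generic;

static class Sketch
{
    // Without the interface check this is O(n): it walks every element.
    // With it, any ICollection<T> answers in O(1), which dwarfs the cost
    // of the failed cast on the types that don't support it.
    public static int CountSketch<TSource>(IEnumerable<TSource> source)
    {
        if (source is ICollection<TSource> collection) return collection.Count;

        int count = 0;
        using (IEnumerator<TSource> e = source.GetEnumerator())
            while (e.MoveNext())
                count++;
        return count;
    }
}
```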

It's also not always a win at all. Arrays are faster to do the GetEnumerator(), MoveNext() and Dispose() calls on than to cast to an interface and then call Count through that interface. T[] will in fact lose about as much as List<T> will gain if this optimisation is used; other types will generally gain, but by different amounts. The difference is greater still if it's an empty array, since that case already has an optimisation in its enumerator (generally not a case we really worry about when looking at the gains or losses here, but given the point of Any() it's worth considering), and greater again if the array is being dealt with covariantly (we have a string[] that was cast to IEnumerable<object>, etc.).
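For anyone unfamiliar with the covariant case, here's a minimal sketch (the concrete types are just an example):

```c#
using System.Collections.Generic;
using System.Linq;

// A string[] viewed covariantly as IEnumerable<object>.
string[] strings = { "a", "b", "c" };
IEnumerable<object> objects = strings;

// The interface test still succeeds (array variance lets string[] satisfy
// ICollection<object>), but calls through that covariant view are generally
// slower than going through the array's own enumerator.
bool viaCount = objects is ICollection<object> c && c.Count > 0;
bool viaEnumerator = objects.Any();

System.Console.WriteLine($"{viaCount} {viaEnumerator}"); // True True
```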

***************************
iteration: 1000000/1000000
enumerable:
iteration: 00:00:00.1782133
count:     00:00:00.0876572
***************************

According to this, it's slower to run some code than to do two attempted casts and then run the exact same code. I think it's reasonable to have some scepticism here.

@JonHanna

According to this, it's slower to run some code than to do two attempted casts and then run the exact same code. I think it's reasonable to have some scepticism here.

Yeah, so I was wondering, what could be the explanation for that?

It's certainly intriguing. I wonder if branch prediction is benefiting more from the same data hitting it repeatedly in the count-using case than in the other. That would be annoying, because it wouldn't be as likely to benefit real cases (I'll happily take a weird counter-intuitive performance benefit if I've reason to believe it would actually hold in real cases).

I also note that there's no false case hit in the tests, which would also be worth considering (some enumerators have internally different paths for empty collections).
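For example, a false case could be covered with something like the following (hypothetical additions to Main in the test program above, reusing its PrintResult and GetResults helpers):

```c#
// Hypothetical extra cases: collections where Any() is false, so the
// enumerators' empty-collection paths (and the Count == 0 branch) are
// exercised too.
var emptyList = new List<int>();
var emptyArray = Array.Empty<int>();
var emptyEnumerable = Enumerable.Empty<int>();

PrintResult(nameof(emptyList), GetResults(emptyList));
PrintResult(nameof(emptyArray), GetResults(emptyArray));
PrintResult(nameof(emptyEnumerable), GetResults(emptyEnumerable));
```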

@weitzhandler Microbenchmarks are prone to various issues. Instead of trying to figure out if your code suffers from one of those, I think it would be simpler to use BenchmarkDotNet. It's fairly easy to use and makes sure you're measuring what you want to measure.
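For anyone wanting to try it, a minimal BenchmarkDotNet sketch of the same comparison might look like this (the class and method names are made up for illustration, and the helpers mirror the ones from the test above):

```c#
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class AnyBenchmarks
{
    private List<int> _list;
    private IEnumerable<int> _lazy;

    [GlobalSetup]
    public void Setup()
    {
        _list = Enumerable.Range(0, short.MaxValue).ToList();
        _lazy = Enumerable.Range(0, short.MaxValue).Where(i => i >= 0); // never an ICollection
    }

    [Benchmark(Baseline = true)] public bool ListIteration() => AnyIteration(_list);
    [Benchmark] public bool ListCount() => AnyCount(_list);
    [Benchmark] public bool LazyIteration() => AnyIteration(_lazy);
    [Benchmark] public bool LazyCount() => AnyCount(_lazy);

    // Roughly the same logic as the helpers in the test program above.
    private static bool AnyCount(IEnumerable source) =>
        source is ICollection collection ? collection.Count > 0 : AnyIteration(source);

    private static bool AnyIteration(IEnumerable source)
    {
        IEnumerator e = source.GetEnumerator();
        try { return e.MoveNext(); }
        finally { (e as IDisposable)?.Dispose(); }
    }
}

public static class Entry
{
    public static void Main() => BenchmarkRunner.Run<AnyBenchmarks>();
}
```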

@svick
I was wrong; it was indeed the microbenchmark.

Changing to:

```c#
static (TimeSpan, TimeSpan) GetResults(IEnumerable collection)
{
    var iterationStopwatch = new Stopwatch();
    var countStopwatch = new Stopwatch();

    var max = 1000000;
    bool any;

    iterationStopwatch.Start();
    for (int i = 0; i <= max; i++)
    {
        Write($"\riteration: {i:########}/{max:########}{new string(' ', 20)}");
        any = AnyIteration(collection);
    }
    iterationStopwatch.Stop();

    countStopwatch.Start();
    for (int i = 0; i <= max; i++)
    {
        Write($"\rcount: {i:########}/{max:########}{new string(' ', 20)}");
        any = AnyCount(collection);
    }
    countStopwatch.Stop();

    return (iterationStopwatch.Elapsed, countStopwatch.Elapsed);
}
```

resulted in:

```
count: 1000000/1000000
list:
iteration: 00:00:08.5466694
count: 00:00:08.8490420


count: 1000000/1000000
array:
iteration: 00:00:08.3404633
count: 00:00:08.2994561


count: 1000000/1000000
hashSets:
iteration: 00:00:08.3081030
count: 00:00:08.2321981


count: 1000000/1000000
arrayClients:
iteration: 00:00:08.2333401
count: 00:00:08.2799143


count: 1000000/1000000
enumerable:
iteration: 00:00:08.2697450
count: 00:00:08.7120273


```

I'm closing this issue then, because the performance gain is inconsistent:
it's slightly faster on arrays and hash sets but noticeably slower on lists and plain enumerables. And again, this was a mock-up test, nothing really rigorous.

At least it's here for further reference.

Incidentally, where did you get that "current code" from? It looks quite a bit out of date.

Write($"\riteration: {i:########}/{max:########}{new string(' ', 20)}");
This is almost certainly distorting your results in the updated version, since you're doing it _every single iteration_, irrespective of any other problems. Try to do nothing except the operation under consideration in performance tests.
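In other words, the timing loops could be reduced to something like this (a sketch of a plain GetResults for the test program above, with progress reporting removed from the measured region):

```c#
static (TimeSpan, TimeSpan) GetResults(IEnumerable collection)
{
    var iterationStopwatch = new Stopwatch();
    var countStopwatch = new Stopwatch();
    var max = 1000000;
    bool any;

    // Measure only the calls under test; no console output inside the loops.
    iterationStopwatch.Start();
    for (int i = 0; i <= max; i++)
        any = AnyIteration(collection);
    iterationStopwatch.Stop();

    countStopwatch.Start();
    for (int i = 0; i <= max; i++)
        any = AnyCount(collection);
    countStopwatch.Stop();

    return (iterationStopwatch.Elapsed, countStopwatch.Elapsed);
}
```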

I can't say enough good things about BenchmarkDotNet. You gotta try it out!

@JonHanna I took it from here. I like the reference source because it allows easy and fast browsing and searching of types.

@Clockwork-Muse you were right.
Here are the results with a plain loop (max = 100000000):

```

list:
iteration: 00:00:02.1775856
count: 00:00:01.3231993


array:
iteration: 00:00:02.1587138
count: 00:00:02.4961830


hashSets:
iteration: 00:00:04.0008384
count: 00:00:01.3617311


arrayClients:
iteration: 00:00:02.3180954
count: 00:00:02.6931834


enumerable:
iteration: 00:00:04.2760573
count: 00:00:05.2728051


```

@weitzhandler

I like the reference source because easy and fast browsing and searching of types.

But it is often very different to corefx. Linq would be an example of where corefx is particularly different to reference source.

@JonHanna
There are two reference sources, one for the .NET Framework, and the other for .NET Core.

They are identical (browser vs. GitHub), maybe not the very latest source, but it's pretty well synced and is usually enough for what I need, especially given the responsive and quick code search.

I shrank the code in my original post for brevity.

The code in the first post seems to be from the netfx one at http://referencesource.microsoft.com/#System.Core/System/Linq/Enumerable.cs,1165. It's close enough in this case, but not in many others within Enumerable.

@JonHanna I don't get it. How is it different from https://source.dot.net/#System.Linq/System/Linq/AnyAll.cs,11, other than using a literal in the exception?
