I would like to compare two collections (in C#), but I'm not sure of the best way to implement this efficiently.
I've read the other thread about Enumerable.SequenceEqual, but that isn't quite what I'm looking for.
Create a Dictionary "dict" and then for each member in the first collection, do dict[member]++;
Then, loop over the second collection in the same way, but for each member do dict[member]--.
At the end, loop over all of the members in the dictionary:
private bool SetEqual(List<int> left, List<int> right) {
    // Collections of different sizes can never be set-equal.
    if (left.Count != right.Count)
        return false;

    Dictionary<int, int> dict = new Dictionary<int, int>();

    // Count the occurrences of each member of the first collection.
    foreach (int member in left) {
        if (!dict.ContainsKey(member))
            dict[member] = 1;
        else
            dict[member]++;
    }

    // Decrement the count for each member of the second collection;
    // a member we have never seen means the collections differ.
    foreach (int member in right) {
        if (!dict.ContainsKey(member))
            return false;
        else
            dict[member]--;
    }

    // Every count must be back at zero for the collections to match.
    foreach (KeyValuePair<int, int> kvp in dict) {
        if (kvp.Value != 0)
            return false;
    }

    return true;
}
Edit: As far as I can tell, this is on the same order as the most efficient algorithm: it runs in O(N), assuming the Dictionary does O(1) lookups.
A simple and fairly efficient solution is to sort both collections and then compare them for equality:
bool equal = collection1.OrderBy(i => i).SequenceEqual(
collection2.OrderBy(i => i));
This algorithm is O(N log N). It isn't asymptotically faster than the O(N) dictionary-based solution above, but it is a one-liner and avoids writing a helper method.
If the collections have certain properties, you may be able to implement a faster solution. For example, if both of your collections are hash sets, they cannot contain duplicates. Also, checking whether a hash set contains some element is very fast. In that case an algorithm similar to yours would likely be fastest.
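For instance, if both sides are already HashSet<int> instances, the built-in HashSet<T>.SetEquals method does that order-insensitive comparison directly (a minimal sketch; the variable names are only illustrative):
var set1 = new HashSet<int> { 1, 2, 3 };
var set2 = new HashSet<int> { 3, 2, 1 };

// True: both sets contain exactly the same distinct elements,
// regardless of order; roughly O(N) on average.
bool equal = set1.SetEquals(set2);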
There are many solutions to this problem. If you don't care about duplicates, you don't have to sort both collections. First make sure that they have the same number of items. After that, sort one of the collections, then binary-search each item from the second collection in the sorted one. If you don't find a given item, stop and return false (a sketch follows below). The complexity of this is:
- sorting the first collection: O(N log N)
- searching each item of the second in the first: O(N log N)
so you end up with roughly 2·N·log(N), assuming that the collections match and you look up everything. This is similar to the complexity of sorting both, and it also gives you the benefit of stopping earlier if there is a difference. Keep in mind, however, that if both collections are already sorted before you step into this comparison and you sort them again with something like quicksort, the sorting will be the expensive part; there are optimizations for that case.
Another alternative, which is great for small collections where you know the range of the elements, is to use a bitmask index. That gives you O(N) performance. Yet another alternative is to use a hash table and look each item up in it. For small collections it is usually a lot better to sort or to use the bitmask index; hash tables have the disadvantage of worse locality, so keep that in mind.
Again, all of that applies only if you don't care about duplicates. If you want to account for duplicates, go with sorting both collections.
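Here is a minimal sketch of the sort-one-then-binary-search variant (the method name and the use of List<int> are illustrative assumptions; it deliberately ignores duplicates):
private static bool ContainsSameItems(List<int> left, List<int> right)
{
    // Collections of different sizes cannot contain the same items.
    if (left.Count != right.Count)
        return false;

    // Sort a copy of the first collection: O(N log N).
    var sorted = new List<int>(left);
    sorted.Sort();

    // Binary-search each item of the second collection: N searches of O(log N) each.
    foreach (int item in right)
    {
        if (sorted.BinarySearch(item) < 0)   // a negative index means "not found"
            return false;                    // stop early at the first difference
    }

    return true;
}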
A duplicate post of sorts, but check out my solution for comparing collections. It's pretty simple:
This will perform an equality comparison regardless of order:
var list1 = new[] { "Bill", "Bob", "Sally" };
var list2 = new[] { "Bob", "Bill", "Sally" };
bool isequal = list1.Compare(list2).IsSame;
This will check to see if items were added / removed:
var list1 = new[] { "Billy", "Bob" };
var list2 = new[] { "Bob", "Sally" };
var diff = list1.Compare(list2);
var onlyinlist1 = diff.Removed; //Billy
var onlyinlist2 = diff.Added; //Sally
var inbothlists = diff.Equal; //Bob
This will see what items in the dictionary changed:
var original = new Dictionary<int, string>() { { 1, "a" }, { 2, "b" } };
var changed = new Dictionary<int, string>() { { 1, "aaa" }, { 2, "b" } };
var diff = original.Compare(changed,
    (x, y) => x.Value == y.Value,
    (x, y) => x.Value == y.Value);

foreach (var item in diff.Different)
    Console.Write("{0} changed to {1}", item.Key.Value, item.Value.Value);
//Will output: a changed to aaa
Original post here.
In many cases the only suitable answer is Igor Ostrovsky's; the other answers rely on the objects' hash codes. But when you generate a hash code for an object, you should do so based only on its IMMUTABLE fields, such as an object Id field (in the case of a database entity). See "Why is it important to override GetHashCode when Equals method is overridden?"
This means that if you compare two collections, the compare method might return true even though the items' other fields are not equal. To deep-compare collections, you need to use Igor's method and implement IEqualityComparer<T>.
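For illustration, here is a rough sketch of that combination (the Person type, the PersonFieldComparer, and the people1/people2 variables are hypothetical, invented only for this example):
using System.Collections.Generic;
using System.Linq;

class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Compares the actual field values instead of relying on the entity's own
// Equals/GetHashCode, which may only consider the immutable Id.
class PersonFieldComparer : IEqualityComparer<Person>
{
    public bool Equals(Person x, Person y) => x.Id == y.Id && x.Name == y.Name;
    public int GetHashCode(Person p) => p.Id;
}

// Order by a stable key, then compare element by element with the comparer.
bool deepEqual = people1.OrderBy(p => p.Id)
    .SequenceEqual(people2.OrderBy(p => p.Id), new PersonFieldComparer());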
Please read Mr. Schnider's comments and mine on his most-voted post.
James
Here is a solution which is an improvement over this one.
public static bool HasSameElementsAs<T>(
    this IEnumerable<T> first,
    IEnumerable<T> second,
    IEqualityComparer<T> comparer = null)
{
    // Build element -> occurrence-count maps for both sequences.
    var firstMap = first
        .GroupBy(x => x, comparer)
        .ToDictionary(x => x.Key, x => x.Count(), comparer);
    var secondMap = second
        .GroupBy(x => x, comparer)
        .ToDictionary(x => x.Key, x => x.Count(), comparer);

    // Different numbers of distinct elements means the sequences cannot match.
    if (firstMap.Keys.Count != secondMap.Keys.Count)
        return false;

    // Every element of the first sequence must appear in the second...
    if (firstMap.Keys.Any(k1 => !secondMap.ContainsKey(k1)))
        return false;

    // ...and with exactly the same number of occurrences.
    return firstMap.Keys.All(x => firstMap[x] == secondMap[x]);
}
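Usage might look like this (a small sketch; it assumes the method above is declared in a static class so it can be called as an extension method):
var a = new List<int> { 1, 2, 2, 3 };
var b = new List<int> { 3, 2, 1, 2 };

bool same = a.HasSameElementsAs(b);                              // true: same elements with the same counts
bool missingDuplicate = a.HasSameElementsAs(new[] { 1, 2, 3 });  // false: one of the 2s is missing

// The optional comparer parameter, e.g. for case-insensitive strings:
bool ignoreCase = new[] { "a", "B" }
    .HasSameElementsAs(new[] { "b", "A" }, StringComparer.OrdinalIgnoreCase);   // true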