I have an array like [1,1,1,2,4,6,3,3] and I would like to get the list of repeated elements, in this case [1,3]. I wrote this:
my_array.select { |obj| my_array.count(obj) > 1 }.uniq
But it is tragically inefficient (O(n²)). Do you have a better idea? Concise, if possible.
Thanks
Inspired by Ilya Haykinson's answer:
def repeated(array)
  counts = Hash.new(0)
  array.each { |val| counts[val] += 1 }
  counts.reject { |val, count| count == 1 }.keys
end
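For example, calling it on the array from the question (the method is repeated here so the snippet is self-contained):

```ruby
def repeated(array)
  counts = Hash.new(0)                  # every key defaults to 0
  array.each { |val| counts[val] += 1 } # one O(n) counting pass
  counts.reject { |val, count| count == 1 }.keys
end

repeated([1, 1, 1, 2, 4, 6, 3, 3]) # => [1, 3]
```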
Using Ruby's Set library:
require 'set'
ary = [1,1,1,2,4,6,3,3]
dups = Set.new
test_set = Set.new
ary.each {|val| dups.add(val) unless test_set.add?(val)}
dups.to_a # [1, 3]
I believe this should be O(n), since Set#add and Set#add? are constant-time (hash-based) operations, as far as I know.
How about something like this? It will run in O(n).
a = [1,1,1,2,4,6,3,3]
b = Hash.new(0)
a.each { |v| b[v] += 1 }
b.select { |k, v| v > 1 }.keys
An O(n) solution (change `<< x` to `+ [x]` and `update` to `merge` to make it purely functional):
rs = xs.inject([[], {}]) do |(out, seen), x|
  [(seen[x] == 1 ? (out << x) : out), seen.update(x => (seen[x] || 0) + 1)]
end[0]
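Applying the substitutions described above gives the purely functional version: `+ [x]` and `merge` build fresh objects on each step instead of mutating `out` and `seen` in place.

```ruby
xs = [1, 1, 1, 2, 4, 6, 3, 3]

# Purely functional variant: no in-place mutation of the accumulators.
rs = xs.inject([[], {}]) do |(out, seen), x|
  [(seen[x] == 1 ? out + [x] : out), seen.merge(x => (seen[x] || 0) + 1)]
end[0]

rs # => [1, 3]
```

Each element is appended to `out` exactly when its prior count is 1, so every duplicate is reported once, in order of first repetition.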
A much simpler yet less space-efficient approach:
rs = xs.group_by { |x| x }.select { |y, ys| ys.size > 1 }.keys
The same idea avoiding the intermediate hash using a "list-comprehension":
rs = xs.group_by { |x| x }.map { |y, ys| y if ys.size > 1 }.compact
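To see why the `group_by` versions work, the intermediate hash maps each value to the array of its occurrences; a quick sketch:

```ruby
xs = [1, 1, 1, 2, 4, 6, 3, 3]

groups = xs.group_by { |x| x }
# groups == {1=>[1, 1, 1], 2=>[2], 4=>[4], 6=>[6], 3=>[3, 3]}

# Keep only the keys whose group has more than one member.
dups = groups.select { |y, ys| ys.size > 1 }.keys
dups # => [1, 3]
```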
Using inject
[1,1,1,2,4,6,3,3].inject({}){ |ele, n| ele[n] = nil; ele }.keys
# => [1, 2, 4, 6, 3]
EXPLANATION:
The `ele` hash is initialized to `{}`; on each iteration a key with the number `n` and a `nil` value is added to the `ele` hash. At the end `ele` is returned as:
{1=>nil, 2=>nil, 4=>nil, 6=>nil, 3=>nil}
We only want the keys, so `.keys` ends the job.
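Note that the snippet above yields the unique values rather than the duplicates the question asks for. The same inject pattern can count occurrences instead, which narrows the result to the repeated elements; a sketch:

```ruby
# Counting with inject: the accumulator hash defaults to 0,
# so each element simply increments its own tally.
counts = [1, 1, 1, 2, 4, 6, 3, 3].inject(Hash.new(0)) { |h, n| h[n] += 1; h }

dups = counts.select { |k, v| v > 1 }.keys
dups # => [1, 3]
```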
Some ideas; you'd have to figure out the correct library data structures:
1. Sort the array in O(n log n), then run through it comparing adjacent elements.
2. Create a set; for each array element, search for it in the set and insert it if not found -- O(n log n) again.
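The first idea (sort, then scan adjacent elements) could be sketched like this, assuming each duplicate should be reported once:

```ruby
def repeated_via_sort(array)
  dups = []
  sorted = array.sort                      # O(n log n)
  sorted.each_cons(2) do |a, b|            # single O(n) pass over neighbours
    dups << a if a == b && dups.last != a  # record each duplicate once
  end
  dups
end

repeated_via_sort([1, 1, 1, 2, 4, 6, 3, 3]) # => [1, 3]
```

Because the input is sorted, equal elements are adjacent, and checking `dups.last` suffices to avoid reporting the same value twice.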
I was thinking of counting how many times each unique element appears in the array. It may be really inefficient, just like the original suggestion, but it was fun looking at the problem. I didn't do any benchmarks on larger arrays, so this is just an exercise.
a = [1,1,1,2,4,6,3,3]
dupes = []
a.uniq.each do |u|
  c = a.find_all { |e| e == u }.size
  dupes << [u, c] unless c == 1
end
puts dupes.inspect
# dupes = [[1, 3], [3, 2]]
# 1 appears 3 times
# 3 appears twice
# to extract just the element, a bit cleaner
dupes = a.uniq.select do |u|
  a.find_all { |e| e == u }.size != 1
end
puts dupes.inspect
# returns [1,3]
This will work if the duplicated entries are always consecutive, as in your example; otherwise you would have to sort first. `each_cons` examines a rolling window of the specified size.
require 'set'
my_array = [1,1,1,2,4,6,3,3]
dups = Set.new
my_array.each_cons(2) {|a,b| dups.add(a) if (a == b)}
p dups.to_a
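For the non-consecutive case mentioned above, sorting first makes the same `each_cons` pass work; a sketch (with a shuffled hypothetical input for illustration):

```ruby
require 'set'

my_array = [3, 1, 1, 2, 1, 4, 6, 3] # duplicates are not adjacent here

dups = Set.new
# Sorting brings equal elements next to each other;
# the Set absorbs repeated additions of the same value.
my_array.sort.each_cons(2) { |a, b| dups.add(a) if a == b }
dups.to_a # => [1, 3]
```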
Source: https://stackoverflow.com/questions/786879/how-can-i-efficiently-extract-repeated-elements-in-a-ruby-array