collect

How to improve code that quotes all array elements with `'` and returns a string containing all those quoted and comma-separated elements?

筅森魡賤 submitted on 2019-12-04 02:55:35
I am using Rails 3.2.2 and I would like to quote all array elements with `'` and return a string containing all those quoted and comma-separated elements. At this time I am using:

```ruby
['a', 'b', 'c'].collect { |x| "'#{x}'" }.join(", ")  # => "'a', 'b', 'c'"
```

but I think I can improve the above code (maybe by using a Ruby method unknown to me, if it exists). Is that possible?

Answer: I use:

```ruby
"'#{%w{a b c}.join("', '")}'"
```

Here is the expanded version:

```ruby
"'"                     # starting quote
%w{a b c}.join("', '")  # join the array with the "', '" delimiter, giving a', 'b', 'c
"'"                     # closing quote
```

N.N.: You can replace `collect` with its alias `map` and…
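For reference, a minimal sketch of how the quote-and-join idiom can be wrapped in a reusable helper (the method name `quote_join` is my own, not a Rails or Ruby API):

```ruby
# Hypothetical helper: wrap each element in quotes and comma-separate them.
def quote_join(items, quote = "'", sep = ", ")
  items.map { |x| "#{quote}#{x}#{quote}" }.join(sep)
end

quote_join(%w[a b c])  # => "'a', 'b', 'c'"
```

This keeps call sites short while leaving the quote character and separator adjustable.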

Question on Ruby collect method

你说的曾经没有我的故事 submitted on 2019-12-03 17:29:38
I have an array of hashes, e.g.:

```ruby
cars = [
  { :company => "Ford",   :type => "SUV"   },
  { :company => "Honda",  :type => "Sedan" },
  { :company => "Toyota", :type => "Sedan" }
]

# I want to fetch all the companies of the cars
cars.collect { |c| c[:company] }  # => ["Ford", "Honda", "Toyota"]

# I'm lazy and I want to do something like this
cars.collect(&:company)  # => undefined method `company'
```

I was wondering if there is a similar shortcut to perform the above.

Answer: I believe your current code, `cars.collect { |c| c[:company] }`, is the best way if you're enumerating over an arbitrary array. The method you would pass in via the…
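A hedged sketch of one workaround, not taken from the truncated answer: wrap the hashes in objects that actually respond to `company`, such as a `Struct`, so the `&:symbol` shorthand applies:

```ruby
# Assumption: we control the data, so we can use a Struct instead of raw hashes.
Car = Struct.new(:company, :type)

cars = [
  Car.new("Ford", "SUV"),
  Car.new("Honda", "Sedan"),
  Car.new("Toyota", "Sedan")
]

cars.collect(&:company)  # => ["Ford", "Honda", "Toyota"]
```

For plain hashes, `cars.collect { |c| c[:company] }` remains the idiomatic form.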

How to use collect call in Java 8?

人走茶凉 submitted on 2019-12-03 09:39:46
Let's say we have this boring piece of code that we all had to use:

```java
ArrayList<Long> ids = new ArrayList<Long>();
for (MyObj obj : myList) {
    ids.add(obj.getId());
}
```

After switching to Java 8, my IDE is telling me that I can replace this code with a `collect` call, and it auto-generates:

```java
ArrayList<Long> ids = myList.stream().map(MyObj::getId).collect(Collectors.toList());
```

However, it's giving me this error:

```
collect(java.util.stream.Collector<? super T,A,R>) in Stream cannot be applied
to: (java.util.stream.Collector<java.lang.Long,capture<?>,java.util.List<java.lang.Long>>)
```

I tried casting the parameter, but it's giving me undefined `A` and `R`, and the…
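The snippet cuts off before any answer, but a hedged guess at the usual resolution: `Collectors.toList()` is declared to return a `List`, not an `ArrayList`, so either widen the variable's type or collect into an explicit `ArrayList`. A minimal sketch (the `MyObj` stand-in is mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CollectExample {

    // Minimal stand-in for the MyObj type from the question.
    static class MyObj {
        private final long id;
        MyObj(long id) { this.id = id; }
        long getId() { return id; }
    }

    public static void main(String[] args) {
        List<MyObj> myList = Arrays.asList(new MyObj(1L), new MyObj(2L));

        // Option 1: declare the target as List, which matches Collectors.toList().
        List<Long> ids = myList.stream()
                .map(MyObj::getId)
                .collect(Collectors.toList());

        // Option 2: if an ArrayList is really required, ask for one explicitly.
        ArrayList<Long> idList = myList.stream()
                .map(MyObj::getId)
                .collect(Collectors.toCollection(ArrayList::new));

        System.out.println(ids);    // [1, 2]
        System.out.println(idList); // [1, 2]
    }
}
```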

Java 8 Stream, How to get Top N count? [closed]

穿精又带淫゛_ submitted on 2019-12-03 08:39:55
I need your advice to simplify the code below. I have a player list with the ID of each match won. I want to extract the 2 best players from this list (the 2 players with the highest number of won matches). Once they are extracted, I have to return the initial list to do other operations. I think it is possible to improve this code in terms of optimization or readability. If you can help me:

```java
public class PlayerStatistics {
    int id;
    String name;
    int idMatchWon; // key from Match
    // getters, setters
}

public static void main(String[] args) throws Exception {
    List<PlayerStatistics> _players = new ArrayList…
```
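The question's code is cut off, so here is a hedged sketch of the grouping-and-limiting approach such top-N questions usually converge on (the sample data and aggregation by name are my assumptions, not the thread's accepted answer): count won matches per player, sort the counts descending, and keep the first two, leaving the original list untouched:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopPlayers {

    static class PlayerStatistics {
        final int id;
        final String name;
        final int idMatchWon; // key from Match
        PlayerStatistics(int id, String name, int idMatchWon) {
            this.id = id; this.name = name; this.idMatchWon = idMatchWon;
        }
    }

    public static void main(String[] args) {
        List<PlayerStatistics> players = Arrays.asList(
            new PlayerStatistics(1, "Alice", 10),
            new PlayerStatistics(1, "Alice", 11),
            new PlayerStatistics(2, "Bob",   12),
            new PlayerStatistics(2, "Bob",   13),
            new PlayerStatistics(2, "Bob",   14),
            new PlayerStatistics(3, "Carol", 15)
        );

        // Count won matches per player, then keep the two largest counts.
        List<Map.Entry<String, Long>> top2 = players.stream()
            .collect(Collectors.groupingBy(p -> p.name, Collectors.counting()))
            .entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(2)
            .collect(Collectors.toList());

        System.out.println(top2);           // [Bob=3, Alice=2]
        System.out.println(players.size()); // 6 -- the source list is unchanged
    }
}
```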

Collect values from an array of hashes [closed]

此生再无相见时 submitted on 2019-12-03 07:21:06
Question: I have a data structure in the following format:

```ruby
data_hash = [
  { price: 1, count: 3 },
  { price: 2, count: 3 },
  { price: 3, count: 3 }
]
```

Is there an efficient way to get the values of `:price` as an array like `[1, 2, 3]`?

Answer 1: First, if you are using Ruby < 1.9:

```ruby
array = [
  { :price => 1, :count => 3 },
  { :price => 2, …
```
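For completeness, a small sketch of the usual one-liner on Ruby 1.9+ (the standard `map` idiom, presumably where the truncated answer was heading after its pre-1.9 form):

```ruby
data_hash = [
  { price: 1, count: 3 },
  { price: 2, count: 3 },
  { price: 3, count: 3 }
]

prices = data_hash.map { |h| h[:price] }  # => [1, 2, 3]
```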

Map an array modifying only elements matching a certain condition

我们两清 submitted on 2019-12-03 01:33:15
In Ruby, what is the most expressive way to map an array in such a way that certain elements are modified and the others left untouched?

This is a straightforward way to do it:

```ruby
old_a = ["a", "b", "c"]                           # ["a", "b", "c"]
new_a = old_a.map { |x| x == "b" ? x + "!" : x }  # ["a", "b!", "c"]
```

Omitting the "leave-alone" case is of course not enough:

```ruby
new_a = old_a.map { |x| x + "!" if x == "b" }  # [nil, "b!", nil]
```

What I would like is something like this:

```ruby
new_a = old_a.map_modifying_only_elements_where(Proc.new { |x| x == "b" }) do |y|
  y + "!"
end
# ["a", "b!", "c"]
```

Is there some nice way to do this in Ruby…
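A hedged sketch of one way to build the wished-for method yourself (the name `map_where` is hypothetical, not part of Ruby's standard library): fall back to the unmodified element whenever the predicate fails:

```ruby
module Enumerable
  # Hypothetical helper: apply the block only to elements matching the
  # predicate; pass everything else through unchanged.
  def map_where(predicate)
    map { |x| predicate.call(x) ? yield(x) : x }
  end
end

old_a = ["a", "b", "c"]
new_a = old_a.map_where(->(x) { x == "b" }) { |y| y + "!" }
# => ["a", "b!", "c"]
```

The ternary inside a plain `map`, as in the question, is equivalent; the extension only buys a more declarative call site.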

Scala Partition/Collect Usage

自闭症网瘾萝莉.ら submitted on 2019-12-03 01:21:32
Question: Is it possible to use one call to `collect` to make 2 new lists? If not, how can I do this using `partition`?

Answer 1: `collect` (defined on `TraversableLike` and available in all subclasses) works with a collection and a `PartialFunction`. It also just so happens that a bunch of case clauses defined inside braces form a partial function (see section 8.5 of the Scala Language Specification [warning: PDF]), as in exception handling:

```scala
try {
  ... do something risky ...
} catch {
  // The contents of this catch…
```
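Since the answer above is cut off, a hedged sketch of my own contrasting the two calls: `partition` splits one list into two by a predicate in a single pass, while each `collect` call applies a partial function and yields exactly one list, so producing two lists takes two calls:

```scala
object PartitionVsCollect extends App {
  val xs = List(1, 2, 3, 4, 5)

  // partition: one call, two lists, split by a Boolean predicate.
  val (evens, odds) = xs.partition(_ % 2 == 0)
  println(evens) // List(2, 4)
  println(odds)  // List(1, 3, 5)

  // collect: filters and transforms via a partial function, one list per call.
  val doubledEvens = xs.collect { case n if n % 2 == 0 => n * 2 }
  val negatedOdds  = xs.collect { case n if n % 2 != 0 => -n }
  println(doubledEvens) // List(4, 8)
  println(negatedOdds)  // List(-1, -3, -5)
}
```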

Error ExecutorLostFailure when running a task in Spark

狂风中的少年 submitted on 2019-12-03 00:12:33
Hi, I am a beginner in Spark. When I am trying to run a job on this folder, it is throwing me an ExecutorLostFailure every time. I am running the job on Spark 1.4.1 with 8 slave nodes, each with 11.7 GB of memory and 3.2 GB of disk. I am running the Spark task from one of the 8 slave nodes (so, with a storage fraction of 0.7, only approx. 4.8 GB is available on each node), and I am using Mesos as the cluster manager. I am using this configuration:

```
spark.master                 mesos://uc1f-bioinfocloud-vamp-m-1:5050
spark.eventLog.enabled       true
spark.driver.memory          6g
spark.storage.memoryFraction 0.7
spark.core.connection.ack…
```
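The thread is cut off before any answer; as a heavily hedged note, a common first mitigation for ExecutorLostFailure on memory-tight nodes is to lower the storage fraction and keep executor and driver memory well under the node's physical limit, for example in `spark-defaults.conf` (values here are illustrative assumptions, not the thread's accepted fix):

```
spark.storage.memoryFraction 0.3
spark.executor.memory        4g
spark.driver.memory          4g
```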