pipeline

Under what conditions does PowerShell unroll items in the pipeline?

冷暖自知 submitted on 2019-11-27 20:14:21
Consider the following:

    function OutputArray { $l = @(,(10,20)); $l }

    (OutputArray) -is [collections.ienumerable]   # True
    (OutputArray).Count                           # 2

$l is "unrolled" when it enters the pipeline. This answer states that PowerShell unrolls all collections. A hashtable is a collection. However, a hashtable is of course unaffected by the pipeline:

    function OutputHashtable { $h = @{nested=@{prop1=10;prop2=20}}; $h }

    (OutputHashtable) -is [collections.ienumerable]   # True
    (OutputHashtable).Count                           # 1

This comment suggests that it is all IEnumerable that are converted to…
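The rule the excerpt is circling, as a hedged sketch: PowerShell enumerates anything IEnumerable when it enters the pipeline, except for a handful of special cases, notably IDictionary (hashtables) and strings:

    # arrays are enumerated, one element at a time
    ((10,20) | Measure-Object).Count       # 2
    # hashtables implement IEnumerable but pass as a single object
    (@{a=1; b=2} | Measure-Object).Count   # 1
    # strings implement IEnumerable too, yet are not unrolled into chars
    ("hi" | Measure-Object).Count          # 1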

Efficient XSLT pipeline in Java (or redirecting Results to Sources)

纵饮孤独 submitted on 2019-11-27 18:49:59
I have a series of XSLT 2.0 stylesheets that feed into each other, i.e. the output of stylesheet A feeds B, which feeds C. What is the most efficient way of doing this? The question, rephrased: how can one efficiently route the output of one transformation into another? Here's my first attempt:

    @Override
    public void transform(Source data, Result out) throws TransformerException {
        for (Transformer autobot : autobots) {
            if (autobots.indexOf(autobot) != (autobots.size() - 1)) {
                log.debug("Transforming prelim stylesheet...");
                data = transform(autobot, data);
            } else {
                log.debug("Transforming final stylesheet...");
                …
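A common JAXP pattern for this, sketched under assumptions (not the poster's code; the file names are placeholders, and an XSLT 2.0 processor such as Saxon is assumed, since the JDK's default factory only handles 1.0): wire TransformerHandlers together so each stage feeds the next as SAX events, with no intermediate serialization:

    import javax.xml.transform.*;
    import javax.xml.transform.sax.*;
    import javax.xml.transform.stream.*;

    public class XsltChain {
        public static void main(String[] args) throws TransformerException {
            // cast is safe for SAX-capable factories such as Saxon
            SAXTransformerFactory stf =
                (SAXTransformerFactory) TransformerFactory.newInstance();

            // one handler per stylesheet (placeholder file names)
            TransformerHandler a = stf.newTransformerHandler(new StreamSource("a.xsl"));
            TransformerHandler b = stf.newTransformerHandler(new StreamSource("b.xsl"));
            TransformerHandler c = stf.newTransformerHandler(new StreamSource("c.xsl"));

            // route each stage's output into the next stage's input
            a.setResult(new SAXResult(b));
            b.setResult(new SAXResult(c));
            c.setResult(new StreamResult(System.out));

            // push the document into the head of the chain (identity transform)
            stf.newTransformer().transform(new StreamSource("input.xml"),
                                           new SAXResult(a));
        }
    }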

Scrapy pipeline to export CSV file in the right format

半腔热情 submitted on 2019-11-27 18:37:17
I made the improvement according to the suggestion from alexce below. What I need is like the picture below; however, each row/line should be one review, with date, rating, review text and link. I need to let the item processor process each review of every page. Currently TakeFirst() only takes the first review of the page, so for 10 pages I only have 10 lines/rows, as in the picture below. Spider code is below:

    import scrapy
    from amazon.items import AmazonItem

    class AmazonSpider(scrapy.Spider):
        name = "amazon"
        allowed_domains = ['amazon.co.uk']
        start_urls = ['http://www.amazon.co.uk/product-reviews…
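What usually fixes this (a sketch; the XPath selectors and field names are illustrative, not taken from the real Amazon page): yield one AmazonItem per review node inside the loop, instead of loading the whole page into a single item and letting TakeFirst() collapse it:

    # inside AmazonSpider; one item per review block = one CSV row each
    def parse(self, response):
        for review in response.xpath('//div[@class="review"]'):
            item = AmazonItem()
            item['date'] = review.xpath('.//span[@class="date"]/text()').extract_first()
            item['rating'] = review.xpath('.//span[@class="rating"]/text()').extract_first()
            item['text'] = review.xpath('.//span[@class="text"]/text()').extract_first()
            item['link'] = response.url
            yield item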

Batch fork bomb? [duplicate]

与世无争的帅哥 submitted on 2019-11-27 18:02:10
Question: This question already has an answer here: What is %0|%0 and how does it work? (4 answers)

I was looking at the fork bomb on Wikipedia, and the batch examples were:

    %0|%0

or:

    :here
    start ''your fork bomb name''.bat
    goto here

or:

    :here
    start %0
    goto here

I understand the second two: they start another instance of themselves and then repeat. But I don't understand the first. I read that the pipeline executes the file to the right with the output of the file to the left. Why can't the fork bomb just…
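The key fact, consistent with the linked duplicate: cmd.exe starts both sides of a pipeline as separate, concurrent processes, so the pipe here is the spawning mechanism, not a data flow:

    REM %0 expands to the path of the currently running script.
    REM The pipe launches TWO new copies of it at once, and each copy
    REM does the same: 1 -> 2 -> 4 -> 8 -> ... with no explicit loop.
    %0|%0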

How to tune parameters of a custom kernel function with a pipeline in scikit-learn

一个人想着一个人 submitted on 2019-11-27 17:45:30
Question: Currently I have successfully defined a custom kernel function (pre-computing the kernel matrix) using a def function, and now I am using the GridSearchCV function to get the best parameters. In the custom kernel function there are a total of 2 parameters to be tuned (namely gamma and sea_gamma in the example below), and for the SVR model the cost parameter C has to be tuned as well. But until now I can only tune the cost parameter C using GridSearchCV -> please refer to the Part…
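The usual way out, sketched under assumptions (the kernel formula below is a placeholder, gamma/sea_gamma are the names from the question, and the import path is modern scikit-learn): wrap SVR in a small estimator that exposes the kernel parameters through __init__, so GridSearchCV can set them like any other hyperparameter:

    import numpy as np
    from sklearn.base import BaseEstimator, RegressorMixin
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV

    class CustomKernelSVR(BaseEstimator, RegressorMixin):
        def __init__(self, gamma=1.0, sea_gamma=1.0, C=1.0):
            self.gamma = gamma
            self.sea_gamma = sea_gamma
            self.C = C

        def _kernel(self, X, Y):
            # placeholder formula; substitute the real custom kernel here
            d = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
            return np.exp(-self.gamma * d) * np.exp(-self.sea_gamma * d)

        def fit(self, X, y):
            self.X_fit_ = X
            self.svr_ = SVR(kernel='precomputed', C=self.C)
            self.svr_.fit(self._kernel(X, X), y)   # Gram matrix (n, n)
            return self

        def predict(self, X):
            # kernel between the test rows and the stored training rows
            return self.svr_.predict(self._kernel(X, self.X_fit_))

    param_grid = {'gamma': [0.1, 1.0], 'sea_gamma': [0.1, 1.0], 'C': [1, 10]}
    search = GridSearchCV(CustomKernelSVR(), param_grid, cv=3)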

How can you diff two pipelines in Bash?

女生的网名这么多〃 submitted on 2019-11-27 16:49:06
How can you diff two pipelines without using temporary files in Bash? Say you have two command pipelines:

    foo | bar
    baz | quux

And you want to find the diff in their outputs. One solution would obviously be to:

    foo | bar > /tmp/a
    baz | quux > /tmp/b
    diff /tmp/a /tmp/b

Is it possible to do so without the use of temporary files in Bash? You can get rid of one temporary file by piping in one of the pipelines to diff:

    foo | bar > /tmp/a
    baz | quux | diff /tmp/a -

But you can't pipe both pipelines into diff simultaneously (not in any obvious manner, at least). Is there some clever trick involving…
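The standard answer (assuming Bash proper, since this is not plain POSIX sh) is process substitution, which hands diff two readable /dev/fd paths fed by the pipelines:

    # each <( ... ) runs concurrently and appears to diff as a filename
    diff <(foo | bar) <(baz | quux)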

Is it possible to access estimator attributes in spark.ml pipelines?

陌路散爱 submitted on 2019-11-27 16:23:00
Question: I have a spark.ml pipeline in Spark 1.5.1 which consists of a series of transformers followed by a k-means estimator. I want to be able to access the KMeansModel.clusterCenters after fitting the pipeline, but can't figure out how. Is there a spark.ml equivalent of sklearn's pipeline.named_steps feature? I found this answer, which gives two options. The first works if I take the k-means model out of my pipeline and fit it separately, but that kinda defeats the purpose of a pipeline. The second…
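A minimal sketch of getting the centers without dismantling the pipeline (pipeline and trainingData are assumed names; the cast relies on the k-means stage being last):

    import org.apache.spark.ml.clustering.KMeansModel

    val model = pipeline.fit(trainingData)                // PipelineModel
    val km = model.stages.last.asInstanceOf[KMeansModel]  // fitted stage
    km.clusterCenters.foreach(println)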

Different argument order for getting N-th element of Array, List or Seq

て烟熏妆下的殇ゞ submitted on 2019-11-27 16:11:04
Question: Is there a good reason for the different argument order in the functions getting the N-th element of an Array, List or Seq?

    Array.get source index
    List.nth  source index
    Seq.nth   index  source

I would like to use the pipe operator, and it seems possible only with Seq:

    s |> Seq.nth n

Is there a way to have the same notation with Array or List?

Answer 1: I can't think of any good reason to define Array.get and List.nth this way. Given that pipelining is very common in F#, they should have been defined so that the…
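One workaround (a sketch; these wrappers are not standard library functions): define index-first helpers so all three collection types pipe the same way:

    // index comes first, so the collection can be piped in from the left
    let nthArray n (xs: 'T[])    = Array.get xs n
    let nthList  n (xs: 'T list) = List.nth xs n

    [| 10; 20; 30 |] |> nthArray 1   // 20
    [ 10; 20; 30 ]   |> nthList  2   // 30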

Pipe complete array-objects instead of array items one at a time?

こ雲淡風輕ζ submitted on 2019-11-27 13:03:01
Question: How do you send the output from one cmdlet to the next one in a pipeline as a complete array object, instead of the individual items in the array one at a time?

The problem (generic description): as can be seen in the help for about_pipelines (help pipeline), PowerShell sends objects one at a time down the pipeline¹. So

    Get-Process -Name notepad | Stop-Process

sends one process at a time down the pipe. Let's say we have a 3rd-party cmdlet (Do-SomeStuff) that can't be modified or changed in any…
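Two standard workarounds, sketched with an assumed $myArray variable (Do-SomeStuff is the placeholder cmdlet from the question):

    # unary comma wraps the array in a one-element array; the pipeline
    # unrolls the wrapper, so Do-SomeStuff receives the whole inner array
    ,$myArray | Do-SomeStuff

    # PowerShell 3.0+: suppress enumeration explicitly
    Write-Output -NoEnumerate $myArray | Do-SomeStuff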

Performance of x86 rep instructions on modern (pipelined/superscalar) processors

梦想与她 submitted on 2019-11-27 10:56:09
Question: I've been writing in x86 assembly lately (for fun) and was wondering whether rep-prefixed string instructions actually have a performance edge on modern processors, or whether they're just implemented for backwards compatibility. I can understand why Intel would have originally implemented the rep instructions back when processors only ran one instruction at a time, but is there a benefit to using them now? With a loop that compiles to more instructions, there is more to fill up the pipeline, and…
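For context, the two alternatives being weighed, as a hedged NASM-style 64-bit sketch (src, dst and LEN are assumed symbols: two buffers and an equ byte count). On CPUs with enhanced rep movsb (ERMSB, roughly Ivy Bridge onward) the single microcoded instruction can outperform the explicit loop for medium and large copies:

    ; one instruction: the microcode issues wide moves internally
    lea  rsi, [src]
    lea  rdi, [dst]
    mov  rcx, LEN
    rep  movsb

    ; explicit byte loop: more instructions to fetch and decode per byte
    copy_loop:
        mov  al, [rsi]
        mov  [rdi], al
        inc  rsi
        inc  rdi
        dec  rcx
        jnz  copy_loop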