pipeline

Get last element of pipeline in PowerShell

被刻印的时光 ゝ Submitted on 2019-11-29 13:03:19
Question: This might be weird, but stay with me. I want to get only the last element of a piped result assigned to a variable. I know how I would do this in "regular" code, of course, but this must be a one-liner. More specifically, I'm interested in getting the file extension from the result of an FTP ListDirectoryDetails request. Since this is done within a string expansion, I can't figure out the proper code. Currently I'm taking the last 3 chars, but that is really nasty.
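
One approach (a minimal sketch of mine, not code from the question; the sample array stands in for the FTP listing): Select-Object -Last 1 keeps only the last pipeline element, and it works inside a $( ... ) string expansion.

    # Sketch: take the last element of a pipeline, here on a sample array.
    $listing = "readme.txt", "data.csv", "report.pdf"
    $last = $listing | Select-Object -Last 1    # "report.pdf"

    # Usable inside a string expansion, extracting the extension properly
    # instead of grabbing the last 3 chars:
    "extension: $([System.IO.Path]::GetExtension(($listing | Select-Object -Last 1)))"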

Batch fork bomb? [duplicate]

这一生的挚爱 Submitted on 2019-11-29 03:56:44
This question already has an answer here: What is %0|%0 and how does it work? (4 answers.) I was looking at the fork bomb on Wikipedia, and the batch examples were %0|%0, or:

    :here
    start ''your fork bomb name''.bat
    goto here

or:

    :here
    start %0
    goto here

I understand the last two: they start another instance of themselves and then repeat. But I don't understand the first. I read that the pipe executes the file on the right with the output of the file on the left. Why can't the fork bomb just be %0? I would assume that this would call itself but then terminate instantly, so why wouldn't %0|%0
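
A safe way to see the relevant behaviour (a harmless sketch of my own; it does not fork): cmd.exe launches both sides of a pipe as separate child processes, which is why %0|%0 doubles every generation, whereas a bare %0 just transfers control to the script inside the same cmd process, like a goto, so nothing multiplies.

    @echo off
    REM Both sides of a pipe run as separate child processes; with %0|%0
    REM each copy therefore spawns two more, doubling each generation.
    REM A bare %0 re-runs the script in-place in the same process - no bomb.
    echo left | findstr "left"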

How to tune parameters of a custom kernel function with pipeline in scikit-learn

▼魔方 西西 Submitted on 2019-11-29 03:39:55
Currently I have successfully defined a custom kernel function (pre-computing the kernel matrix) with a def function, and now I am using GridSearchCV to find the best parameters. In the custom kernel function there are two parameters to be tuned (namely gamma and sea_gamma in the example below), and for the SVR model the cost parameter C has to be tuned as well. But until now I can only tune the cost parameter C with GridSearchCV -> please refer to the Part I example below. I have searched for similar solutions such as: Is it possible to tune parameters
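
One common pattern (a sketch under assumptions: the question's actual kernel isn't shown, so the RBF-style placeholder below is illustrative; only the parameter names gamma and sea_gamma come from the question): wrap the precomputed kernel in a small estimator class so that all three parameters appear in a single GridSearchCV grid.

    import numpy as np
    from sklearn.base import BaseEstimator, RegressorMixin
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV

    class CustomKernelSVR(BaseEstimator, RegressorMixin):
        """SVR with a precomputed custom kernel whose parameters are tunable."""
        def __init__(self, C=1.0, gamma=1.0, sea_gamma=1.0):
            self.C = C
            self.gamma = gamma
            self.sea_gamma = sea_gamma

        def _kernel(self, X, Y):
            # Placeholder kernel (sum of two RBF-like terms), standing in
            # for the question's unshown custom kernel.
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-self.gamma * d2) + np.exp(-self.sea_gamma * d2)

        def fit(self, X, y):
            self.X_fit_ = np.asarray(X)
            self.svr_ = SVR(kernel="precomputed", C=self.C)
            self.svr_.fit(self._kernel(self.X_fit_, self.X_fit_), y)
            return self

        def predict(self, X):
            return self.svr_.predict(self._kernel(np.asarray(X), self.X_fit_))

    # All three parameters are now tunable in one grid:
    search = GridSearchCV(CustomKernelSVR(),
                          {"C": [1, 10], "gamma": [0.1, 1.0], "sea_gamma": [0.1, 1.0]},
                          cv=3)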

Different argument order for getting N-th element of Array, List or Seq

拈花ヽ惹草 Submitted on 2019-11-29 01:49:17
Is there a good reason for the different argument order in the functions that get the N-th element of an Array, List or Seq?

    Array.get source index
    List.nth source index
    Seq.nth index source

I would like to use the pipe operator, and it seems possible only with Seq: s |> Seq.nth n. Is there a way to have the same notation with Array or List? I can't think of any good reason to define Array.get and List.nth this way. Given that pipelining is very common in F#, they should have been defined so that the source argument came last. In the case of List.nth it doesn't change much, because you can use Seq.nth, and time
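
A small sketch (the flip helper is my own, not a library function; note that later F# versions also added the pipe-friendly Array.item and List.item):

    let xs = [| 10; 20; 30 |]

    // Seq.nth already takes the index first, so it pipes directly:
    let a = xs |> Seq.nth 2                  // 30

    // For the source-first functions, flip the arguments with a helper:
    let flip f x y = f y x
    let b = xs |> flip Array.get 2           // 30
    let c = [10; 20; 30] |> flip List.nth 2  // 30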

Is it possible to access estimator attributes in spark.ml pipelines?

与世无争的帅哥 Submitted on 2019-11-29 01:42:32
I have a spark.ml pipeline in Spark 1.5.1 which consists of a series of transformers followed by a k-means estimator. I want to be able to access KMeansModel.clusterCenters after fitting the pipeline, but can't figure out how. Is there a spark.ml equivalent of sklearn's pipeline.named_steps feature? I found this answer which gives two options. The first works if I take the k-means model out of my pipeline and fit it separately, but that kinda defeats the purpose of a pipeline. The second option doesn't work - I get error: value getModel is not a member of org.apache.spark.ml.PipelineModel
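
The fitted stages are available on the PipelineModel itself, so a cast usually does the trick (a sketch against the Spark 1.5-era API; that k-means is the last stage is an assumption, and pipeline/df stand for your own objects):

    import org.apache.spark.ml.PipelineModel
    import org.apache.spark.ml.clustering.KMeansModel

    val model: PipelineModel = pipeline.fit(df)
    val kmeansModel = model.stages.last.asInstanceOf[KMeansModel]
    val centers = kmeansModel.clusterCenters   // Array[Vector]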

“Piping” output from one function to another using Python infix syntax

瘦欲@ Submitted on 2019-11-28 20:24:21
I'm trying to replicate, roughly, the dplyr package from R using Python/Pandas (as a learning exercise). Something I'm stuck on is the "piping" functionality. In R/dplyr, this is done using the pipe operator %>%, where x %>% f(y) is equivalent to f(x, y). If possible, I would like to replicate this using infix syntax (see here). To illustrate, consider the two functions below.

    import pandas as pd

    def select(df, *args):
        cols = [x for x in args]
        df = df[cols]
        return df

    def rename(df, **kwargs):
        for name, value in kwargs.items():
            df = df.rename(columns={'%s' % name: '%s' % value})
        return df
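
One way to get infix behaviour without macros (a sketch of my own, not the questioner's code): overload __ror__ on a small wrapper class so that df | Pipe(select, "a", "b") evaluates as select(df, "a", "b").

    class Pipe:
        """Wrap a function plus extra arguments; `df | Pipe(f, ...)` calls f(df, ...)."""
        def __init__(self, func, *args, **kwargs):
            self.func, self.args, self.kwargs = func, args, kwargs

        def __ror__(self, other):          # invoked for `other | self`
            return self.func(other, *self.args, **self.kwargs)

    # Reusing the question's functions:
    # df | Pipe(select, "a", "b") | Pipe(rename, a="x")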

Performance of x86 rep instructions on modern (pipelined/superscalar) processors

南楼画角 Submitted on 2019-11-28 17:57:10
I've been writing x86 assembly lately (for fun) and was wondering whether rep-prefixed string instructions actually have a performance edge on modern processors, or whether they're just implemented for backward compatibility. I can understand why Intel would have originally implemented the rep instructions back when processors only ran one instruction at a time, but is there a benefit to using them now? With a loop that compiles to more instructions, there is more to fill up the pipeline and/or be issued out-of-order. Are modern processors built to optimize for these rep-prefixed instructions,
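
For reference, the kind of code in question (a sketch in NASM-style 64-bit syntax; the src/dst labels are placeholders): a single rep movsb replaces an explicit copy loop, and CPUs with fast-string/ERMSB microcode can move whole cache lines per internal iteration.

    ; copy rcx bytes from [rsi] to [rdi] with one instruction
    lea  rsi, [src]      ; source pointer (placeholder label)
    lea  rdi, [dst]      ; destination pointer (placeholder label)
    mov  rcx, 4096       ; byte count
    rep  movsb           ; microcoded copy; fast-string CPUs move cache lines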

Should I create a pipeline to save files with Scrapy?

点点圈 Submitted on 2019-11-28 17:38:27
I need to save a file (.pdf), but I'm unsure how to do it. I need to save .pdfs and store them in such a way that they are organized in directories much like they are stored on the site I'm scraping them from. From what I can gather, I need to make a pipeline, but from what I understand pipelines save "Items", and "items" are just basic data like strings/numbers. Is saving files a proper use of pipelines, or should I save the file in the spider instead? Yes and no[1]. If you fetch a pdf it will be stored in memory, but as long as the pdfs are not big enough to fill up your available memory, that is OK.
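
Scrapy ships a pipeline for exactly this: FilesPipeline downloads anything listed in an item's file_urls field and stores it under FILES_STORE (a minimal sketch; the store path is a placeholder, and file_path can be overridden to mirror the site's directory layout):

    import scrapy

    class PdfItem(scrapy.Item):
        file_urls = scrapy.Field()   # URLs to download
        files = scrapy.Field()       # filled in by the pipeline

    # settings.py
    # ITEM_PIPELINES = {"scrapy.pipelines.files.FilesPipeline": 1}
    # FILES_STORE = "/path/to/store"          # placeholder path

    # In the spider, yield items such as:
    # yield PdfItem(file_urls=[response.urljoin(pdf_href)])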

How to extract best parameters from a CrossValidatorModel

做~自己de王妃 Submitted on 2019-11-28 17:31:55
I want to find the parameters of ParamGridBuilder that make the best model in CrossValidator in Spark 1.4.x. In the Pipeline Example in the Spark documentation, they add different parameters (numFeatures, regParam) by using ParamGridBuilder in the Pipeline. Then the following line of code produces the best model:

    val cvModel = crossval.fit(training.toDF)

Now I want to know which parameters (numFeatures, regParam) from ParamGridBuilder produced the best model. I have already used the following commands without success:

    cvModel.bestModel.extractParamMap().toString()
    cvModel.params
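
One workable approach (a sketch; the stage indices assume the documentation's tokenizer / hashingTF / logistic-regression pipeline): cast bestModel to PipelineModel and read the params off its fitted stages.

    import org.apache.spark.ml.PipelineModel
    import org.apache.spark.ml.feature.HashingTF
    import org.apache.spark.ml.classification.LogisticRegressionModel

    val best = cvModel.bestModel.asInstanceOf[PipelineModel]
    val hashingTF = best.stages(1).asInstanceOf[HashingTF]
    val lrModel = best.stages(2).asInstanceOf[LogisticRegressionModel]

    println(hashingTF.getNumFeatures)   // numFeatures of the best model
    println(lrModel.getRegParam)        // regParam of the best model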

Functional pipes in Python like %>% from R's magrittr

半腔热情 Submitted on 2019-11-28 15:17:47
In R (thanks to magrittr) you can now perform operations with a more functional piping syntax via %>%. This means that instead of coding this:

    > as.Date("2014-01-01")
    > as.character(sqrt(12)^2)

you could also do this:

    > "2014-01-01" %>% as.Date
    > 12 %>% sqrt %>% .^2 %>% as.character

To me this is more readable, and it extends to use cases beyond the data frame. Does the Python language have support for something similar? One possible way of doing this is by using a module called macropy. MacroPy allows you to apply transformations to the code that you have written. Thus a | b can be
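
Short of macros, a tiny helper gets most of the way there (a sketch of my own; a comparable ready-made function exists in the third-party toolz library as toolz.pipe):

    from math import sqrt

    def pipe(value, *funcs):
        """pipe(x, f, g) == g(f(x)) - left-to-right application."""
        for f in funcs:
            value = f(value)
        return value

    result = pipe(12, sqrt, lambda x: x ** 2, str)  # same as str(sqrt(12) ** 2)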