pipeline

How to pass a value from a commit to a GitLab CI pipeline as a variable?

Submitted by 前提是你 on 2019-12-11 12:50:23

Question: I need to dynamically pass a value to a GitLab CI pipeline, and then on to its jobs. The problem is that the value cannot be stored in the code, and no pipeline reconfiguration should be needed (e.g. I could pass the value in the "variables" section of .gitlab-ci.yml, but that means storing the value in the code, while changing the "Environment variables" section of the "CI / CD Settings" means manual reconfiguration). The branch name cannot be used for this purpose either. It is not a secret string but a keyword
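One approach that fits these constraints (a sketch, assuming a reasonably recent GitLab; `MY_KEYWORD` and the job name are hypothetical) is a push option, which sets a CI variable for a single push without touching the repository or the project settings:

```yaml
# .gitlab-ci.yml -- MY_KEYWORD is never stored in the code; it arrives
# with the push itself:
#   git push -o ci.variable="MY_KEYWORD=some-value"
print-keyword:
  script:
    - echo "keyword is $MY_KEYWORD"
```

Another option sometimes used is parsing the keyword out of `$CI_COMMIT_MESSAGE` in a `before_script` step.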

Luigi flexible pipeline and passing parameters all the way through

Submitted by 烂漫一生 on 2019-12-11 10:57:50

Question: I've recently implemented a Luigi pipeline to handle the processing for one of our bioinformatics pipelines. However, there's something fundamental about how to set up these tasks that I'm not grasping. Let's say I've got a chain of three tasks that I'd like to be able to run with multiple workers. For example, the dependency graph for three workers might look like:

taskC -> taskB -> taskA
taskC -> taskB -> taskA
taskC -> taskB -> taskA

and I might write class entry(luigi.Task): in_dir =

Warning on building BizTalk application: 'Validate' call on component 'Flat File Disassembler' failed

Submitted by 百般思念 on 2019-12-11 08:45:02

Question: So, new hand at BizTalk here, and I'm trying to slap together a working sample process. Almost all of the files I'm going to have to work with come in as raw TXT flat files, and I've walked one of them through the Flat File Schema Wizard to get myself a good solid schema. I also put together an XML version of the sanitized data I want and mapped the two together. As long as that's all I try to do, BizTalk seems to have no problems. However, when I add in a receive pipeline with the

Is there a way to create a Cmdlet “delegate” that supports pipeline parameter binding?

Submitted by 浪子不回头ぞ on 2019-12-11 08:29:17

Question: In .NET, if you have a subroutine whose implementation might change from one call to another, you can pass a delegate to the method that uses the subroutine. You can do this in PowerShell too. You can also use script blocks, which have been described as PowerShell's equivalent of anonymous functions. Idiomatic PowerShell, however, makes use of PowerShell's pipeline parameter bindings. But neither delegates nor script blocks seem to make use of PowerShell's pipeline parameter bindings. Is there a

Arranging one item per column in a row of a CSV file in Scrapy (Python)

Submitted by 坚强是说给别人听的谎言 on 2019-12-11 08:22:18

Question: I have items scraped from a site, which I placed into JSON files like below:

{ "author": ["TIM ROCK"], "book_name": ["Truk Lagoon, Pohnpei & Kosrae Dive Guide"], "category": "Travel" }
{ "author": ["JOY"], "book_name": ["PARSER"], "category": "Accomp" }

I want to store them in a CSV file with one dictionary per row and one item per column, as below:

| author   | book_name       | category |
| TIM ROCK | Truk Lagoon ... | Travel   |
| JOY      | PARSER          | Accomp   |

I am getting the items of
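A standard-library sketch of the flattening step (outside Scrapy itself; in a real project this logic would live in a custom item pipeline or exporter): join each list-valued field into a single cell so every item becomes one CSV row with one field per column.

```python
# Flatten list-valued scraped items into one CSV row per item.
import csv
import io

items = [
    {"author": ["TIM ROCK"],
     "book_name": ["Truk Lagoon, Pohnpei & Kosrae Dive Guide"],
     "category": "Travel"},
    {"author": ["JOY"], "book_name": ["PARSER"], "category": "Accomp"},
]

def flatten(item):
    # Join list values so each field occupies exactly one column.
    return {k: ", ".join(v) if isinstance(v, list) else v
            for k, v in item.items()}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["author", "book_name", "category"])
writer.writeheader()
writer.writerows(flatten(it) for it in items)
print(buf.getvalue())
```

`csv.DictWriter` quotes fields that contain commas automatically, so a joined list like `"Truk Lagoon, Pohnpei & Kosrae Dive Guide"` still occupies a single column.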

Passing an extra argument to GenericUnivariateSelect without scope tricks

Submitted by て烟熏妆下的殇ゞ on 2019-12-11 07:58:55

Question: EDIT: here is the complete traceback if I apply the make_scorer workaround suggested in the answers...

File "________python/anaconda-2.7.11-64/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 880, in runfile execfile(filename, namespace)
File "________python/anaconda-2.7.11-64/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile builtins.execfile(filename, *where)
File "________/main_________.py", line 43, in <module> _________index
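One common alternative to scope tricks, sketched here with scikit-learn and a hypothetical extra argument `weight` (not from the question), is binding the extra value into the score function with `functools.partial`:

```python
# Bind an extra argument into a GenericUnivariateSelect score function
# with functools.partial instead of closures or globals.
from functools import partial

from sklearn.datasets import make_classification
from sklearn.feature_selection import GenericUnivariateSelect, f_classif

def scaled_f_classif(X, y, weight=1.0):
    # `weight` is the extra argument; a score_func must return (scores, pvalues).
    F, pval = f_classif(X, y)
    return F * weight, pval

X, y = make_classification(n_samples=60, n_features=8, random_state=0)
selector = GenericUnivariateSelect(
    score_func=partial(scaled_f_classif, weight=2.0),  # extra arg bound here
    mode="k_best", param=3)
X_new = selector.fit_transform(X, y)
print(X_new.shape)
```

`partial` produces a plain callable with the `(X, y)` signature that `GenericUnivariateSelect` expects, so no enclosing scope is involved.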

Spark: add a new fitted stage to an existing PipelineModel without fitting again

Submitted by ☆樱花仙子☆ on 2019-12-11 05:33:46

Question: I have a saved PipelineModel:

pipe_model = pipe.fit(df_train)
pipe_model.write().overwrite().save("/user/pipe_text_2")

And now I want to add a new, already fitted PipelineModel to this pipe:

pipe_model = PipelineModel.load("/user/pipe_text_2")
df2 = pipe_model.transform(df1)
kmeans = KMeans(k=20)
pipe2 = Pipeline(stages=[kmeans])
pipe_model2 = pipe2.fit(df2)

Is that possible without fitting it again, so that I obtain a new PipelineModel rather than a new Pipeline? The ideal thing would be the

Full sklearn pipeline example

Submitted by 笑着哭i on 2019-12-11 05:10:01

Question: I am trying to use an sklearn pipeline. I tried various tutorials online, but they didn't help me.

import pandas as pd
import numpy as np
import json
import seaborn as sb
from sklearn.metrics import log_loss
from sklearn import linear_model
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from scipy.stats import zscore
from Transformers import TextTransformer
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import
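For reference, a minimal self-contained pipeline (synthetic numeric data standing in for the question's text features; the step names are arbitrary labels):

```python
# A full, runnable sklearn pipeline: scale -> classify -> cross-validate.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),          # step names are arbitrary labels
    ("clf", LogisticRegression(max_iter=1000)),
])

# The pipeline behaves like a single estimator: fit, predict, cross-validate.
scores = cross_val_score(pipe, X, y, cv=5)
pipe.fit(X, y)
print(scores.mean())
```

The key idea is that the whole pipeline is one estimator, so preprocessing is re-fit inside each cross-validation fold rather than leaking information across folds.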

Cross Validating With Imblearn Pipeline And GridSearchCV

Submitted by 耗尽温柔 on 2019-12-11 04:59:46

Question: I'm trying to use the Pipeline class from imblearn together with GridSearchCV to get the best parameters for classifying an imbalanced dataset. As per the answers mentioned here, I want to leave out resampling of the validation set and only resample the training set, which imblearn's Pipeline seems to be doing. However, I'm getting an error while implementing the accepted solution. Please let me know what I am doing wrong. Below is my implementation:

def imb_pipeline(clf, X, y, params):
    model =
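A sketch of the parameter-naming convention using sklearn's own Pipeline (imblearn.pipeline.Pipeline follows the same `step__parameter` scheme): every key in the search grid must be prefixed with the name of the pipeline step it configures, or GridSearchCV raises an invalid-parameter error.

```python
# Grid search over a pipeline: note the "clf__" prefix on the grid keys.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# A mildly imbalanced synthetic dataset (roughly 9:1).
X, y = make_classification(n_samples=120, weights=[0.9, 0.1], random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# "clf__C" routes the C values to the step named "clf".
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=3, scoring="f1")
grid.fit(X, y)
print(grid.best_params_)
```

With an imblearn pipeline, a resampling step such as SMOTE would be inserted as another named stage, and (per imblearn's design) it is applied only when fitting on the training folds, not when scoring the validation folds.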

How can I process the content of a CSV file as pipeline input to a PowerShell cmdlet?

Submitted by 浪尽此生 on 2019-12-11 03:43:11

Question: I want to use a CSV file to feed the parameters of a PowerShell cmdlet:

Role, email, fname, lname
Admin, a@b.com, John, Smith

I want to process a cmdlet as follows:

import-csv myFile | mycmdlet | export-csv myresults

I also want to be able to call the cmdlet like this:

mycmdlet -role x -email j@b.com -fname John -lname Smith

and see a result as an object like:

lname: "Smith"
fname: "John"
email: "j@b.com"
role: "X"
ResultData: "something else"

I didn't want to have to do this: import-csv X.txt |