pipeline

Rotate a Video in gstreamer

半城伤御伤魂 submitted on 2019-12-13 08:27:35
Question: I have this pipeline to record from two webcams simultaneously:

gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=300 \
  ! "video/x-raw,width=800,height=600,framerate=30/1" ! videorate \
  ! "video/x-raw,framerate=30/1" ! jpegenc ! queue ! mux. \
  pulsesrc device="alsa_input.pci-0000_00_1b.0.analog-stereo" \
  ! 'audio/x-raw,rate=88200,channels=1,depth=24' ! audioconvert ! \
  avenc_aac compliance=experimental ! queue ! mux. matroskamux name="mux" \
  ! filesink location=/home/sina/T1.avi v4l2src
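The title asks about rotation, which the truncated command does not show; in GStreamer the usual approach is to put a videoflip element on the raw-video branch, before the encoder. Below is a minimal, hypothetical sketch using the Python GStreamer bindings rather than gst-launch; it rotates a single v4l2 source 90 degrees clockwise, and the device path, caps, and output location are placeholders, not the asker's exact pipeline.

```python
# Minimal sketch: rotate one webcam feed with videoflip before encoding.
# Device path, caps, and output location are illustrative placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# videoflip must operate on raw video, i.e. before jpegenc/matroskamux.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 num-buffers=300 "
    "! video/x-raw,width=800,height=600,framerate=30/1 "
    "! videoflip method=clockwise "      # 90-degree clockwise rotation
    "! jpegenc ! matroskamux ! filesink location=rotated.mkv"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the pipeline finishes or reports an error.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The same videoflip element can be inserted into the two-webcam command above, once per video branch, between the caps filter and jpegenc.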

Write a loop of formatStyle in R shiny

假装没事ソ submitted on 2019-12-13 05:26:05
Question: I am developing an R Shiny app and want to highlight some values in a datatable. I have a data frame (entities) and a vector (vec1), and I want to highlight the value in each column when it equals the corresponding value in vec1. Right now I achieve this by repeating the formatStyle call 25 times, but I believe it can be done with a loop. Could anyone help me?

vec1 = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25)
datatable

perror usage in this case?

旧街凉风 submitted on 2019-12-13 05:25:54
Question: I've written a small program (with code from SO) that does printenv | sort | less, and now I want to add error handling with perror and checks of the return values. I've never done this before, but I suppose it is similar to exception handling. I need to check errors for execvp, fork, pipe, and dup2. I have this code:

#include <sys/types.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

struct command {
    const char **argv;
};

/* Helper function
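The C idiom is to test each system call's result (pipe, fork, and dup2 return -1 on failure, and execvp only returns at all when it fails) and call perror, which prints a message for the current errno, before exiting the failing child. As a rough analogue of the same printenv | sort | less plumbing, here is a Python sketch of where each check belongs; in Python the os-level calls raise OSError instead of returning -1, so the except blocks stand in for the -1 check plus perror.

```python
# Rough Python analogue of printenv | sort | less with per-call error checks.
# os.pipe/os.fork/os.dup2/os.execvp raise OSError on failure, so try/except
# plays the role that "if (x == -1) { perror(...); }" plays in C.
import os
import sys

def spawn(argv, stdin_fd=None, stdout_fd=None, close_fds=()):
    """Fork and exec argv, wiring the given fds onto stdin/stdout."""
    try:
        pid = os.fork()
    except OSError as e:
        sys.exit(f"fork: {e.strerror}")
    if pid == 0:                          # child
        try:
            if stdin_fd is not None:
                os.dup2(stdin_fd, 0)
            if stdout_fd is not None:
                os.dup2(stdout_fd, 1)
            for fd in close_fds:          # drop inherited pipe ends
                os.close(fd)
            os.execvp(argv[0], argv)      # only "returns" by raising
        except OSError as e:
            sys.stderr.write(f"{argv[0]}: {e.strerror}\n")
            os._exit(1)
    return pid                            # parent

try:
    r1, w1 = os.pipe()                    # printenv -> sort
    r2, w2 = os.pipe()                    # sort -> less
except OSError as e:
    sys.exit(f"pipe: {e.strerror}")

all_fds = (r1, w1, r2, w2)
spawn(["printenv"], stdout_fd=w1, close_fds=all_fds)
spawn(["sort"], stdin_fd=r1, stdout_fd=w2, close_fds=all_fds)
pid = spawn(["less"], stdin_fd=r2, close_fds=all_fds)

# The parent must close its copies too, so each child sees EOF.
for fd in all_fds:
    os.close(fd)
os.waitpid(pid, 0)                        # wait for the pager
```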

Python: Sqlalchemy.exc.OperationalError: <unprintable OperationalError object>

柔情痞子 submitted on 2019-12-13 04:28:54
Question: I wrote the code below to create a web crawler (built with Scrapy), and I want to put the scraped data into a database, in this case MySQL. For this I used the pipeline file and made the following configuration in pipeline.py:

class ScrapySpiderPipeline(object):
    def __init__(self):
        engine = db_connect()
        create_table(engine)
        self.Session = sessionmaker(bind=engine)

    def process_item(self, item, spider):
        session = self.Session()
        quotedb = QuoteDB()
        quotedb.Titulo = item[
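An "unprintable OperationalError" usually wraps a lower-level MySQL error raised when the engine connects or the session commits, so the real message tends to surface once the pipeline stops swallowing it. Whatever the root cause turns out to be, a Scrapy pipeline that writes through SQLAlchemy normally wraps the per-item work in try/except, rolls the session back on failure, and always closes it. A hedged sketch follows; db_connect, create_table, and QuoteDB are assumed to be the asker's own helpers and model, and the item key used is a placeholder.

```python
# Sketch of a defensive process_item: roll back on failure, always close.
# db_connect(), create_table(), and QuoteDB are assumed helpers/models from
# the question; the item key used here is a placeholder.
from sqlalchemy.orm import sessionmaker


class ScrapySpiderPipeline(object):
    def __init__(self):
        engine = db_connect()                 # assumed to return an Engine
        create_table(engine)
        self.Session = sessionmaker(bind=engine)

    def process_item(self, item, spider):
        session = self.Session()
        quotedb = QuoteDB()
        quotedb.Titulo = item.get("titulo")   # placeholder field name
        try:
            session.add(quotedb)
            session.commit()
        except Exception:
            session.rollback()                # keep the session usable
            raise                             # let Scrapy log the real error
        finally:
            session.close()
        return item
```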

Powershell and one to many relationships / objects

谁说我不能喝 submitted on 2019-12-13 04:14:05
Question: I have created a function that reads a directory listing and then each file's contents for CNC program names/numbers. Each file has multiple program numbers, which I have separated by "," and pass on as an object (see the code below). I want to export this to XML to upload to a database, with each of the program names in a separate entry. What's the best/easiest method to achieve this?

$obj = New-Object -TypeName PSObject
$obj | Add-Member -MemberType NoteProperty -Name FullFileName -Value
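The underlying shaping problem is one-to-many: one file record, several program names, and the export needs one XML element per program. Since the PowerShell code is truncated here, the sketch below shows only the general flattening, in Python with xml.etree.ElementTree rather than PowerShell; the FullFileName property mirrors the question, while the sample data, element names, and output path are invented for illustration.

```python
# General one-to-many flattening to XML, sketched in Python rather than
# PowerShell. Each (file, program) pair becomes its own <Entry> element.
import xml.etree.ElementTree as ET

records = [  # hypothetical data shaped like the question's objects
    {"FullFileName": r"C:\CNC\part1.nc", "Programs": ["O1001", "O1002"]},
    {"FullFileName": r"C:\CNC\part2.nc", "Programs": ["O2001"]},
]

root = ET.Element("CncPrograms")
for rec in records:
    for program in rec["Programs"]:           # one entry per program name
        entry = ET.SubElement(root, "Entry")
        ET.SubElement(entry, "FullFileName").text = rec["FullFileName"]
        ET.SubElement(entry, "ProgramName").text = program

ET.ElementTree(root).write("programs.xml", encoding="utf-8",
                           xml_declaration=True)
```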

How do I continue processing items if one throws an error?

China☆狼群 submitted on 2019-12-13 02:39:06
Question: I had to rewrite this post with the actual code, since there must be some kind of difference with binary modules. The full cmdlet is as follows (apologies for the length). In a nutshell, this function uses the SMO library and pulls information about database users from a SQL Server instance.

namespace Namespace
{
    using Microsoft.SqlServer.Management.Smo;
    using System;
    using System.Collections.Generic;
    using System.Management.Automation;
    using static PrivateFunctions;

    [Cmdlet(VerbsCommon
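Independent of the SMO and binary-cmdlet details, the pattern the title asks for is the same everywhere: do the per-item work inside a try block, report the failure, and move on to the next item instead of letting the exception end the loop (in a binary PowerShell cmdlet this normally means reporting a non-terminating error via WriteError rather than throwing). A tiny, language-agnostic sketch in Python, with process_user standing in for whatever per-user work the cmdlet does:

```python
# Generic "keep going on per-item failure" loop; process_user is a stand-in
# for the real per-database-user work.
import logging

def process_all(items, process_user):
    results, failures = [], []
    for item in items:
        try:
            results.append(process_user(item))
        except Exception as exc:          # report, but do not stop the loop
            logging.error("failed on %r: %s", item, exc)
            failures.append((item, exc))
    return results, failures
```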

PowerShell PipelineVariable parameter contains only first value in a collection of PSCustomObject

和自甴很熟 submitted on 2019-12-13 00:27:19
Question: I get a lines-of-code count for only one file in the array of type PSCustomObject. All the other entries in the array error out with the following message:

Get-Content : Cannot bind argument to parameter 'Path' because it is null.
At line:42 char:100
... -Content -Path $files.UncommentedFileName) | Measure-Object -Line | Select-Objec ...

How do I overcome this error and display the lines of code for all the files in the array? Constraints: I intend to re-use the function Remove-VBComments

Spark: Extracting summary for a ML logistic regression model from a pipeline model

人走茶凉 submitted on 2019-12-12 19:15:22
Question: I've estimated a logistic regression using pipelines. These are my last few lines before fitting the logistic regression:

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

lr = LogisticRegression(featuresCol="lr_features", labelCol="targetvar")
# create assembler to include encoded features
lr_assembler = VectorAssembler(
    inputCols=numericColumns + [categoricalCol + "ClassVec" for categoricalCol in categoricalColumns],
    outputCol="lr_features")
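Although the question is cut off here, the usual sticking point is that fitting a Pipeline returns a PipelineModel, and the training summary lives on the LogisticRegressionModel stage inside it, not on the PipelineModel itself. A hedged sketch, assuming the logistic regression is the last stage of the fitted pipeline and that the variable names below match the asker's setup:

```python
# Sketch: pull the LogisticRegressionModel out of the fitted PipelineModel
# and read its training summary. Assumes `pipeline_model` came from
# Pipeline(stages=[..., lr_assembler, lr]).fit(train_df), so the LR model
# is the last stage.
from pyspark.ml.classification import LogisticRegressionModel

lr_model = pipeline_model.stages[-1]
assert isinstance(lr_model, LogisticRegressionModel)

summary = lr_model.summary            # binary logistic regression training summary
print("area under ROC:", summary.areaUnderROC)
print("iterations:", summary.totalIterations)
summary.roc.show(5)                   # ROC curve as a DataFrame
```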

Streaming wrapper around program that writes to multiple output files

眉间皱痕 submitted on 2019-12-12 17:23:43
Question: There is a program (which I cannot modify) that creates two output files. I am trying to write a Python wrapper that invokes this program, reads both output streams simultaneously, combines the output, and prints it to stdout (to facilitate streaming). How can I do this without deadlocking? The proof of concept below works fine, but when I apply the approach to the actual program it deadlocks. Proof of concept: this is a dummy program, bogus.py, that creates two output files like
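Since the body is truncated before the proof of concept, here is one common shape of the solution, sketched under assumptions: hand the program two named pipes (FIFOs) as its output paths and drain each one from its own thread into a shared queue, so a full pipe on one output can never block the reader of the other. The program name and its command-line flags below are placeholders, not the asker's actual tool.

```python
# Sketch: read two output "files" (FIFOs) concurrently so neither writer
# blocks. "the_program", its flags, and the FIFO names are placeholders.
import os
import queue
import subprocess
import tempfile
import threading

workdir = tempfile.mkdtemp()
fifo_a = os.path.join(workdir, "out_a")
fifo_b = os.path.join(workdir, "out_b")
os.mkfifo(fifo_a)
os.mkfifo(fifo_b)

lines = queue.Queue()

def drain(path, tag):
    # Opening a FIFO for reading blocks until the writer opens it,
    # so each FIFO gets its own reader thread.
    with open(path) as f:
        for line in f:
            lines.put(f"{tag}: {line.rstrip()}")
    lines.put(None)                       # signal: this stream is done

proc = subprocess.Popen(["the_program", "--out1", fifo_a, "--out2", fifo_b])
threads = [threading.Thread(target=drain, args=(fifo_a, "A")),
           threading.Thread(target=drain, args=(fifo_b, "B"))]
for t in threads:
    t.start()

finished = 0
while finished < 2:                       # merge until both streams end
    item = lines.get()
    if item is None:
        finished += 1
    else:
        print(item, flush=True)

proc.wait()
```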

How to create an iterator pipeline in Python?

佐手、 submitted on 2019-12-12 11:18:42
Question: Is there a library or recommended way to create an iterator pipeline in Python? For example:

>>> all_items().get("created_by").location().surrounding_cities()

I also want to be able to access attributes of the objects in the iterators. In the example above, all_items() returns an iterator of items, and the items have different creators. The .get("created_by") call then returns the "created_by" attribute of each item (which is a person), location() returns the city of each person, and
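The truncated example reads like a fluent, lazily evaluated chain. One way to get that shape without any library is a thin wrapper class whose methods each return a new wrapper around a generator. The sketch below is hypothetical: the Item/Person/City model is invented to mirror the question's attribute names, and each step stays lazy until the chain is finally iterated.

```python
# Hypothetical sketch of a fluent, lazy iterator pipeline. The attribute
# names (created_by, location, surrounding_cities) mirror the question;
# the data model itself is invented for illustration.
class Pipe:
    def __init__(self, iterable):
        self._it = iter(iterable)

    def __iter__(self):
        return self._it

    def get(self, attr):
        # getattr on each element, lazily
        return Pipe(getattr(obj, attr) for obj in self._it)

    def location(self):
        return self.get("location")

    def surrounding_cities(self):
        # each location yields several cities; flatten them
        return Pipe(city for loc in self._it for city in loc.surrounding_cities)


# Tiny invented data model so the chain has something to run on.
class City:
    def __init__(self, name, surrounding_cities=()):
        self.name = name
        self.surrounding_cities = [City(n) for n in surrounding_cities]

class Person:
    def __init__(self, location):
        self.location = location

class Item:
    def __init__(self, created_by):
        self.created_by = created_by


def all_items():
    return Pipe([Item(Person(City("Springfield", ["Shelbyville", "Ogdenville"])))])

for city in all_items().get("created_by").location().surrounding_cities():
    print(city.name)        # Shelbyville, Ogdenville
```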