user-defined-functions

Convert a hexadecimal varbinary to its string representation?

Submitted by ≡放荡痞女 on 2020-06-16 07:42:26
Question: I have some base64-encoded strings in a SQL Server database, for example: DECLARE @x VARBINARY(64); SET @x = 0x4b78374c6a3733514f723444444d35793665362f6c513d3d. When it is CAST or CONVERTed to a VARCHAR, I get: +˽Ð:¾Îréî¿•. I'm looking for SQL Server to return the hexadecimal representation of the varbinary as a varchar, e.g.: 4b78374c6a3733514f723444444d35793665362f6c513d3d. Is there a built-in CAST/CONVERT/function that does this, or does it have to be added as a User Defined Function?
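A minimal sketch of the built-in option, assuming SQL Server 2008 or later: CONVERT with style 2 returns the hex digits without the 0x prefix (style 1 keeps it); the output is uppercase, so LOWER() is added where a lowercase string is wanted.

```sql
DECLARE @x VARBINARY(64);
SET @x = 0x4b78374c6a3733514f723444444d35793665362f6c513d3d;

SELECT
    CONVERT(VARCHAR(MAX), @x, 2)        AS hex_no_prefix,    -- 4B78374C6A37...
    CONVERT(VARCHAR(MAX), @x, 1)        AS hex_with_prefix,  -- 0x4B78374C6A37...
    LOWER(CONVERT(VARCHAR(MAX), @x, 2)) AS hex_lowercase;    -- 4b78374c6a37...
```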

Is there a way in BigQuery to execute dynamic queries, something like 'EXEC' in SQL Server?

Submitted by 为君一笑 on 2020-06-07 07:08:45
Question: I have a table with over 200 columns that were created with temporary names, custColum1 through custColum200. I have a mapping table that lists, for each of custColum1–custColum200, the name it has to be mapped to. For example: Table1(custColum1, custColum2), Mappingtable(tempColumnName, RealColumnName); the data in the mapping table looks like (custColum1, Role_number), (custColum2, Person_name). I need to change Table1 to Table1(Role_number, Person_name). Note: I cannot create Table1 with this…
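A minimal sketch of the closest BigQuery equivalent, BigQuery scripting's EXECUTE IMMEDIATE; the dataset name mydataset and the target table Table1_renamed below are assumptions, and the SELECT list is assembled from the mapping table:

```sql
-- Build "custColum1 AS Role_number, custColum2 AS Person_name, ..." from the mapping table.
DECLARE select_list STRING;

SET select_list = (
  SELECT STRING_AGG(CONCAT('`', tempColumnName, '` AS `', RealColumnName, '`'), ', ')
  FROM `mydataset.Mappingtable`
);

-- Run the dynamically built statement, similar in spirit to EXEC in SQL Server.
EXECUTE IMMEDIATE FORMAT("""
  CREATE OR REPLACE TABLE `mydataset.Table1_renamed` AS
  SELECT %s
  FROM `mydataset.Table1`
""", select_list);
```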

Office JS: Problems when an add-in is executed in multiple Excel instances

Submitted by 荒凉一梦 on 2020-05-16 06:31:09
Question: I have problems executing an Office add-in in multiple Excel instances: one stops running when both are executed at the same time. I made two quick Script Lab samples with which you can reproduce some of the issues (I pasted them). One contains a UDF function; just register it in Script Lab. The other one is a sample that reproduces one of my problems. First register the UDF, then, before using the second part, create two workbooks, each having 100 worksheets that contain the following function (depending on…
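For context, a Script Lab custom function is an ordinary function tagged with @customfunction in its JSDoc; the snippet below is only a stand-in sketch (the name doubleIt is hypothetical), not the sample from the question:

```typescript
/**
 * Stand-in custom function; once registered in Script Lab it can be called
 * from a cell under the SCRIPTLAB namespace (the exact formula name depends
 * on the snippet name).
 * @customfunction
 * @param value A number read from the worksheet.
 * @returns The value multiplied by two.
 */
function doubleIt(value: number): number {
  return value * 2;
}
```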

Calling another custom Python function from a PySpark UDF

Submitted by 点点圈 on 2020-05-15 02:51:04
Question: Suppose you have a file, let's call it udfs.py, containing: def nested_f(x): return x + 1 and def main_f(x): return nested_f(x) + 1. You then want to make a UDF out of the main_f function and run it on a DataFrame: import pyspark.sql.functions as fn import pandas as pd pdf = pd.DataFrame([[1], [2], [3]], columns=['x']) df = spark.createDataFrame(pdf) _udf = fn.udf(main_f, 'int') df.withColumn('x1', _udf(df['x'])).show() This works OK if we do this from within the same file where the two functions are defined…
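One common workaround, sketched here under the assumption that udfs.py sits next to the driver script, is to ship the module to the executors with addPyFile and reference main_f through the module, so that nested_f can be resolved when the UDF is deserialised on the workers:

```python
import pandas as pd
import pyspark.sql.functions as fn
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Make the whole module importable on the executors, not just the pickled main_f.
spark.sparkContext.addPyFile("udfs.py")
import udfs  # import after addPyFile so the workers can resolve it too

pdf = pd.DataFrame([[1], [2], [3]], columns=["x"])
df = spark.createDataFrame(pdf)

# Reference the function through the module rather than a bare name.
_udf = fn.udf(udfs.main_f, "int")
df.withColumn("x1", _udf(df["x"])).show()
```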

Scala Spark dataframe: modify column with UDF return value

Submitted by 邮差的信 on 2020-05-14 14:17:10
Question: I have a Spark DataFrame with a timestamp field and I want to convert it to the long datatype. I used a UDF and the standalone code works fine, but when I plug it into a generic piece of logic where any timestamp will need to be converted, I am not able to get it working. The issue is how to assign the return value from the UDF back to the DataFrame column. Below is the code snippet: val spark: SparkSession = SparkSession.builder().master("local[*]").appName("Test3").getOrCreate(); import org.apache.spark.sql…
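A minimal self-contained sketch of wiring the UDF's return value back into the column with withColumn; the column name event_ts and the sample timestamp are assumptions:

```scala
import java.sql.Timestamp
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

val spark: SparkSession = SparkSession.builder().master("local[*]").appName("Test3").getOrCreate()
import spark.implicits._

// Hypothetical single-column DataFrame standing in for the real data.
val df = Seq(Timestamp.valueOf("2020-05-14 10:00:00")).toDF("event_ts")

// UDF that turns a timestamp into epoch milliseconds (a Long).
val tsToLong = udf((ts: Timestamp) => ts.getTime)

// Reusing the same column name replaces the column with the UDF's return value.
val converted = df.withColumn("event_ts", tsToLong(col("event_ts")))
converted.printSchema()
converted.show(false)
```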

Absolute reference with a UDF & filtering data in a Google Sheets script

Submitted by 北城余情 on 2020-05-09 07:16:07
Question: Hi, I am a beginner with scripts; here are the code and the Google Sheet for reference, plus screenshots of what I am getting and what I want to achieve. /** *@param quest1 Question of the note *@param ans1 Answer of the note *@customfunction*/ function DMNOTE(quest1,ans1,quest2,ans2,quest3,ans3,) { var ss = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet(); var result = quest1+'-'+ans1+','+quest2+'-'+ans2+','+quest3+'-'+ans3; return result; } I want to achieve an absolute reference for the "quest" parameters, and I want it to…
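For reference, a cleaned-up sketch of the function from the question (duplicated @param tag fixed, unused variable dropped); keeping the question cells fixed can be done in the calling formula with absolute references, e.g. =DMNOTE($A$1, B2, $A$2, B3, $A$3, B4), where the cell layout is an assumption:

```javascript
/**
 * Joins question/answer pairs into a single note string.
 *
 * @param {string} quest1 Question of the note
 * @param {string} ans1   Answer of the note
 * @customfunction
 */
function DMNOTE(quest1, ans1, quest2, ans2, quest3, ans3) {
  // The active-sheet lookup in the original is never used, so it is omitted here.
  return quest1 + '-' + ans1 + ',' + quest2 + '-' + ans2 + ',' + quest3 + '-' + ans3;
}
```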

Check if a row value is null in a Spark dataframe

Submitted by 泄露秘密 on 2020-05-08 05:36:17
Question: I am using a custom function in PySpark to check a condition for each row in a Spark DataFrame and add columns if the condition is true. The code is as below: from pyspark.sql.types import * from pyspark.sql.functions import * from pyspark.sql import Row def customFunction(row): if (row.prod.isNull()): prod_1 = "new prod" return (row + Row(prod_1)) else: prod_1 = row.prod return (row + Row(prod_1)) sdf = sdf_temp.map(customFunction) sdf.show() I get the error mentioned below: AttributeError: …
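isNull() is a Column expression method rather than something available on the plain Python value inside a Row, which is the likely source of the AttributeError; a minimal DataFrame-native sketch (assuming sdf_temp has a prod column) uses when()/otherwise() instead:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, when

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Small stand-in for sdf_temp, which is assumed to have a "prod" column.
sdf_temp = spark.createDataFrame([("widget",), (None,)], ["prod"])

# Add prod_1: "new prod" where prod is null, otherwise copy prod.
sdf = sdf_temp.withColumn(
    "prod_1",
    when(col("prod").isNull(), "new prod").otherwise(col("prod")),
)
sdf.show()
```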