Question
I have a CSV file in Azure Blob Storage that contains all the column names, each with the data type of the respective table's column.
I want to create a table in an Azure SQL Database from this blob file, with the same column names and their corresponding data types, without doing the mapping by hand.
I have created a table through a data flow, but I had to set the data type of each column manually, which I want to avoid. When I create a table, the sink should accept the same data types that are given in the CSV file at the source.
When I import the schema, it takes the full cell ID (int) as the column name and sets the data type to String, but I want the schema import to take the column name as ID and the data type as INT when it creates the table, and to do the same for all column names across multiple tables.
Please let me know if you have a solution to this problem.
Answer 1:
In Data Factory, when we copy data from the CSV file and set the first row as the header, the first row's values become the column names: id(int) and Name(varchar). As you know, the default column data type is String (in Data Factory) / varchar(128) (in the SQL database), and we cannot change it.
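To make this concrete: each header cell arrives as one opaque string, so a name like "id(int)" is never split into a column name and a type by Data Factory itself. A minimal Python sketch (purely illustrative, using the header values from above) shows what that splitting step would have to look like:

```python
import re

# Header cells exactly as Data Factory sees them: single strings.
header = ["id(int)", "Name(varchar)"]

# Splitting "name(type)" into its parts is something Data Factory
# does not do -- this is only a sketch of the missing step.
columns = []
for cell in header:
    match = re.fullmatch(r"(\w+)\((\w+)\)", cell)
    columns.append((match.group(1), match.group(2)))

print(columns)  # [('id', 'int'), ('Name', 'varchar')]
```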
We cannot create the table with the schema embedded in the column names; there is no built-in solution to this problem.
However, Data Factory will automatically create a suitable column data type mapping for us.
For example, if your CSV file holds the value "1" in a column that maps to an int column, auto schema mapping will convert "1" (string) to 1 (int) in the Azure SQL database.
Hope this helps.
Source: https://stackoverflow.com/questions/61136151/how-to-create-a-table-in-sql-database-from-a-csv-file-in-blob-which-contain-all