I have a DataSet populated from an Excel sheet. I wanted to use SqlBulkCopy to insert records into the Lead_Hdr table, where LeadId is the primary key.
What I have found is that the columns in the input must match columns in the destination table. The table can have extra columns and the input will still load, but if the input contains a column the table doesn't have, you'll receive the error.
One reason for the error is that SqlBulkCopy's column mappings are case sensitive. Follow these steps, for example:
// Get the columns from the source table
string sourceTableQuery = "SELECT TOP 1 * FROM sourceTable";
// I use a SqlHelper class to execute the query; you can use your own data-access code
DataTable dtSource = SQLHelper.SqlHelper.ExecuteDataset(transaction, CommandType.Text, sourceTableQuery).Tables[0];

for (int i = 0; i < destinationTable.Columns.Count; i++)
{
    // Check whether the destination column exists in the source table
    // (the Contains method is not case sensitive)
    if (dtSource.Columns.Contains(destinationTable.Columns[i].ToString()))
    {
        // Once the column is matched, get its index in the source table
        int sourceColumnIndex = dtSource.Columns.IndexOf(destinationTable.Columns[i].ToString());
        // Use the source table's spelling of the column name on both sides of the mapping
        // so the mapping never fails on a case difference
        bulkCopy.ColumnMappings.Add(dtSource.Columns[sourceColumnIndex].ToString(),
                                    dtSource.Columns[sourceColumnIndex].ToString());
    }
}
bulkCopy.WriteToServer(destinationTable);
bulkCopy.Close();
I've encountered the same problem while copying data from Access to SQL Server 2005, and I found that the column mappings are case sensitive on both data sources, regardless of the databases' collation settings.
Thought a long time about answering... Even if the column names match in case exactly, you will get the same error if the data types differ. So check both the column names and their data types (a rough check is sketched below).
P.S.: staging tables are definitely the way to import.
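A minimal sketch of such a check, assuming the source data is in a DataTable and the destination schema is fetched with a SELECT TOP 0 query (the method name CheckColumns and the unparameterized table name are illustrative only; pass only trusted table names):

using System;
using System.Data;
using System.Data.SqlClient;

private static void CheckColumns(DataTable source, string destTableName, string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT TOP 0 * FROM " + destTableName, connection))
    using (var adapter = new SqlDataAdapter(command))
    {
        connection.Open();
        var destination = new DataTable();
        adapter.FillSchema(destination, SchemaType.Source);   // column names + .NET types, no rows

        foreach (DataColumn column in source.Columns)
        {
            if (!destination.Columns.Contains(column.ColumnName))
            {
                Console.WriteLine("Missing in destination: " + column.ColumnName);
            }
            else if (destination.Columns[column.ColumnName].DataType != column.DataType)
            {
                Console.WriteLine(string.Format("Type mismatch on {0}: {1} vs {2}",
                    column.ColumnName, column.DataType, destination.Columns[column.ColumnName].DataType));
            }
        }
    }
}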
Well, is the mapping right? Do the column names exist on both sides?
To be honest, I've never bothered with mappings. I like to keep things simple: I tend to have a staging table on the server that looks exactly like the input, SqlBulkCopy into the staging table, and finally run a stored procedure to move the data from the staging table into the actual table. The main advantage is that the staging table mirrors the input, so no column mappings are needed, and any renaming, casting, or validation happens in the stored procedure (a rough sketch follows below).
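A rough sketch of that pattern, assuming the question's DataSet is in a variable called dataSet, a staging table dbo.Lead_Hdr_Staging whose columns match the Excel sheet one-to-one, and a stored procedure dbo.usp_MergeLeadHdr that does the INSERT ... SELECT into Lead_Hdr (both names are made up for the example):

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // 1. Bulk copy the raw input into the staging table; since the staging table
    //    mirrors the input exactly, no column mappings are required.
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.Lead_Hdr_Staging";
        bulkCopy.WriteToServer(dataSet.Tables[0]);
    }

    // 2. Let the database move the rows from staging into the real table,
    //    handling any casts, renames, or de-duplication in T-SQL.
    using (var command = new SqlCommand("dbo.usp_MergeLeadHdr", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.ExecuteNonQuery();
    }
}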
As a final thought: if you are dealing with bulk data, you can get better throughput using an IDataReader (a streaming API, whereas DataTable is a buffered API). For example, I tend to hook CSV imports up using CsvReader as the source for a SqlBulkCopy. Alternatively, I have written shims around XmlReader to present each first-level element as a row in an IDataReader, which is very fast.
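For the CSV case, a minimal sketch assuming the LumenWorks CsvReader (which implements IDataReader; any CSV reader exposing IDataReader works the same way) and an illustrative leads.csv whose headers line up with the destination columns:

using System.Data.SqlClient;
using System.IO;
using LumenWorks.Framework.IO.Csv;   // assumption: the LumenWorks CSV package

using (var csv = new CsvReader(new StreamReader("leads.csv"), true))   // true = file has a header row
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.Lead_Hdr_Staging";
        // WriteToServer(IDataReader) streams rows straight from the file,
        // so the CSV never has to be buffered into a DataTable first.
        bulkCopy.WriteToServer(csv);
    }
}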
I would go with the staging idea; however, here is my approach to handling the case-sensitive nature of the mappings. I'm happy to be critiqued on my LINQ.
using (SqlConnection connection = new SqlConnection(conn_str))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = string.Format("[{0}].[{1}].[{2}]", targetDatabase, targetSchema, targetTable);
        var targetColumnsAvailable = GetSchema(conn_str, targetTable).ToArray();
        foreach (var column in dt.Columns)
        {
            // Map only the columns that exist in the destination, matching case-insensitively
            if (targetColumnsAvailable.Select(x => x.ToUpper()).Contains(column.ToString().ToUpper()))
            {
                // Use the destination's own spelling of the column name in the mapping
                var tc = targetColumnsAvailable.Single(x => String.Equals(x, column.ToString(), StringComparison.CurrentCultureIgnoreCase));
                bulkCopy.ColumnMappings.Add(column.ToString(), tc);
            }
        }

        // Write from the source to the destination.
        bulkCopy.WriteToServer(dt);
        bulkCopy.Close();
    }
}
and the helper method
private static IEnumerable<string> GetSchema(string connectionString, string tableName)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = connection.CreateCommand())
    {
        // sp_columns returns one row per column of the given table
        command.CommandText = "sp_columns";
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@table_name", SqlDbType.NVarChar, 384).Value = tableName;

        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return (string)reader["column_name"];
            }
        }
    }
}