sqlbulkcopy

Bulk inserts taking longer than expected using Dapper

点点圈 submitted on 2019-12-27 16:49:45
Question: After reading this article I decided to take a closer look at the way I was using Dapper. I ran this code on an empty database: var members = new List<Member>(); for (int i = 0; i < 50000; i++) { members.Add(new Member() { Username = i.ToString(), IsActive = true }); } using (var scope = new TransactionScope()) { connection.Execute(@" insert Member(Username, IsActive) values(@Username, @IsActive)", members); scope.Complete(); } It took about 20 seconds. That's 2500 inserts/second. Not bad, but …
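
The usual fix for this pattern is to skip per-row INSERTs entirely and hand SqlBulkCopy an in-memory table. A minimal sketch, assuming a dbo.Member table with Username and IsActive columns (the connection string is a placeholder):

```csharp
using System.Data;
using System.Data.SqlClient;

class BulkInsertSketch
{
    static void Main()
    {
        // Build an in-memory table shaped like dbo.Member (assumed schema).
        var table = new DataTable();
        table.Columns.Add("Username", typeof(string));
        table.Columns.Add("IsActive", typeof(bool));

        for (int i = 0; i < 50000; i++)
            table.Rows.Add(i.ToString(), true);

        using (var connection = new SqlConnection("connection string here"))
        {
            connection.Open();
            using (var bulk = new SqlBulkCopy(connection))
            {
                bulk.DestinationTableName = "dbo.Member";
                bulk.ColumnMappings.Add("Username", "Username");
                bulk.ColumnMappings.Add("IsActive", "IsActive");
                bulk.BatchSize = 5000;        // send rows in batches
                bulk.WriteToServer(table);    // one bulk load instead of 50,000 INSERT statements
            }
        }
    }
}
```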

Performance issue with SqlBulkCopy and DataTable

两盒软妹~` submitted on 2019-12-25 16:51:04
Question: I need to efficiently import a large amount of data from file to database. I have a few RRF files that contain this data; a single file can be > 400 MB, and it can amount to > 2 million records going into the database from one file. What I did: I read the needed records into a DataTable. using (StreamReader streamReader = new StreamReader(filePath)) { IEnumerable<string> values = new List<string>(); while (!streamReader.EndOfStream) { string[] line = streamReader.ReadLine().Split('|'); int index = 0; …
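
One way to keep memory bounded while loading a large pipe-delimited file is to fill a DataTable in fixed-size chunks and flush each chunk with SqlBulkCopy. A sketch under assumed column and table names (Col0–Col2, dbo.Records):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;

class RrfLoaderSketch
{
    // Assumed: a pipe-delimited file and a destination table dbo.Records(Col0, Col1, Col2).
    static void Load(string filePath, string connectionString)
    {
        var buffer = new DataTable();
        buffer.Columns.Add("Col0", typeof(string));
        buffer.Columns.Add("Col1", typeof(string));
        buffer.Columns.Add("Col2", typeof(string));

        using (var reader = new StreamReader(filePath))
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.Records" })
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    string[] parts = line.Split('|');
                    buffer.Rows.Add(parts[0], parts[1], parts[2]);

                    if (buffer.Rows.Count == 50000)   // flush in chunks to cap memory use
                    {
                        bulk.WriteToServer(buffer);
                        buffer.Clear();
                    }
                }
                if (buffer.Rows.Count > 0)
                    bulk.WriteToServer(buffer);       // flush the remaining rows
            }
        }
    }
}
```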

MySqlBulkLoader with Column Mapping?

假装没事ソ submitted on 2019-12-25 01:47:09
Question: I use SqlBulkCopy to do bulk inserts into a SQL Server database. I am now adding MySQL support to my program, and the nearest thing to SqlBulkCopy is MySqlBulkLoader. But with MySqlBulkLoader I first have to convert my DataTable to a file, because MySqlBulkLoader only works with files, not DataTables. And then I have to disable foreign key checks before the insert. I have done both, but now I am left with one more problem: my destination table has an identity column (auto-increment …
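
MySqlBulkLoader lets you name the columns contained in the file, so one way to handle the auto-increment column is simply to leave it out of that list and let MySQL generate it. A sketch with assumed table and column names (my_table, name, is_active), and assuming the DataTable has already been written out as CSV:

```csharp
using MySql.Data.MySqlClient;

class MySqlBulkLoadSketch
{
    // Assumed: data already dumped to a temp CSV, and a table
    // my_table(id AUTO_INCREMENT, name, is_active).
    static void Load(MySqlConnection connection, string csvPath)
    {
        var loader = new MySqlBulkLoader(connection)
        {
            TableName = "my_table",
            FileName = csvPath,
            FieldTerminator = ",",
            LineTerminator = "\n",
            NumberOfLinesToSkip = 0
        };

        // List only the columns present in the file, in file order; the
        // identity column is omitted so MySQL assigns its value.
        loader.Columns.Add("name");
        loader.Columns.Add("is_active");

        int inserted = loader.Load();   // returns the number of rows loaded
    }
}
```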

MySqlDataAdapter or MySqlDataReader for bulk transfer?

依然范特西╮ submitted on 2019-12-24 13:45:21
Question: I'm using the MySql connector for .NET to copy data from MySql servers to SQL Server 2008. Has anyone seen better performance with one of the following versus the other: (1) a DataAdapter calling Fill into a DataTable in chunks of 500, or (2) DataReader.Read into a DataTable in a loop of 500? I then use SqlBulkCopy to load the 500 DataTable rows and continue looping until the MySql record set is completely transferred. I am primarily concerned with using a reasonable amount of memory and …
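
SqlBulkCopy.WriteToServer also accepts an IDataReader, which sidesteps the 500-row chunking question entirely: the MySqlDataReader can be streamed straight into the bulk copy without buffering intermediate DataTables. A sketch with placeholder table and column names:

```csharp
using System.Data.SqlClient;
using MySql.Data.MySqlClient;

class MySqlToSqlServerSketch
{
    // Assumed: source and destination tables have compatible column order and types.
    static void Copy(string mySqlConnString, string sqlServerConnString)
    {
        using (var source = new MySqlConnection(mySqlConnString))
        using (var destination = new SqlConnection(sqlServerConnString))
        {
            source.Open();
            destination.Open();

            using (var cmd = new MySqlCommand("SELECT id, name, created_at FROM source_table", source))
            using (MySqlDataReader reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(destination))
            {
                bulk.DestinationTableName = "dbo.DestinationTable";
                bulk.BatchSize = 5000;        // rows per round trip to SQL Server
                bulk.BulkCopyTimeout = 0;     // no timeout for long transfers
                bulk.WriteToServer(reader);   // streams rows; no intermediate DataTable needed
            }
        }
    }
}
```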

SqlBulkCopy ColumnMapping multiple columns

前提是你 submitted on 2019-12-24 13:42:29
Question: I'm using SqlBulkCopy to import CSV data to a database, but I'd like to be able to combine columns. For instance, let's say I have a firstname and a lastname column in my CSV. Through my UI, I'd like the user to be able to choose firstname + lastname to fill a Displayname field. This would then instruct SqlBulkCopy to use a combination of fields to populate that one column. In pseudocode, I want to do something like this: foreach(Pick p in picks){ if(p.csv_col_indexes.Count() == 1) bulkCopy.ColumnMappings …
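
SqlBulkCopy itself maps one source column to one destination column, so a common workaround is to add a computed column to the staging DataTable (DataColumn.Expression supports string concatenation) and map that. A sketch with assumed destination names:

```csharp
using System.Data;
using System.Data.SqlClient;

class CombinedColumnSketch
{
    static void BulkCopy(DataTable csvTable, SqlConnection connection)
    {
        // Assumed: csvTable already contains "firstname" and "lastname" columns parsed from the CSV.
        // The third argument is an expression; '+' concatenates strings in DataColumn expressions.
        csvTable.Columns.Add("Displayname", typeof(string), "firstname + ' ' + lastname");

        using (var bulk = new SqlBulkCopy(connection))
        {
            bulk.DestinationTableName = "dbo.People";            // assumed destination table
            bulk.ColumnMappings.Add("firstname", "FirstName");
            bulk.ColumnMappings.Add("lastname", "LastName");
            bulk.ColumnMappings.Add("Displayname", "DisplayName");
            bulk.WriteToServer(csvTable);
        }
    }
}
```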

bulkcopy with primary key not working

随声附和 submitted on 2019-12-24 02:12:32
Question: I have a database table with columns and a primary key, and I want to bulk copy into it from a DataTable in my C# code. When the table has a primary key, I get an exception because the table has 6 columns while my DataTable has only 5. What should I do? Should I add the primary key to my DataTable in my C# code? (If you need any code, tell me please.) This is the DataTable: private DataTable getBasicDataTable() { DataTable dataTable = new DataTable(); dataTable.Clear(); dataTable.Columns.Add( …
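
The usual answer is not to add the key to the DataTable but to map the five existing columns explicitly by name, leaving the identity primary key unmapped so SQL Server generates it. A sketch with an assumed destination table name:

```csharp
using System.Data;
using System.Data.SqlClient;

class IdentityColumnSketch
{
    static void BulkCopy(DataTable fiveColumnTable, SqlConnection connection)
    {
        using (var bulk = new SqlBulkCopy(connection))
        {
            bulk.DestinationTableName = "dbo.TargetTable";   // assumed table name

            // Map only the five columns present in the DataTable, by name.
            // The identity primary key is left unmapped, so SQL Server fills it in.
            foreach (DataColumn column in fiveColumnTable.Columns)
                bulk.ColumnMappings.Add(column.ColumnName, column.ColumnName);

            bulk.WriteToServer(fiveColumnTable);
        }
    }
}
```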

How to Update SQL Server Table With Data From Other Source (DataTable)

不打扰是莪最后的温柔 submitted on 2019-12-23 02:07:07
Question: I have a DataTable that is generated from an .xls table, and I would like to store it in an existing table in a SQL Server database. I use SqlBulkCopy to store rows that have a unique PK. The problem is that I also have rows whose PK already exists in the SQL Server table but whose cells hold different values than the SQL Server table. In short, let's say my DataTable has rows like this: id(PK) | name | number; 005 | abc | 123; 006 | lge | 122. For my SQL Server table I have something like this: id(PK …
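
A common pattern for this kind of insert-or-update is to bulk copy everything into a staging table and then MERGE it into the target. A sketch assuming a dbo.Target(id, name, number) table and an already-open connection:

```csharp
using System.Data;
using System.Data.SqlClient;

class UpsertSketch
{
    // Assumed target: dbo.Target(id PK, name, number); the staging table copies its shape.
    static void Upsert(DataTable excelRows, SqlConnection connection)
    {
        // Create an empty session-scoped staging table with the same columns.
        using (var create = new SqlCommand(
            "SELECT TOP 0 id, name, number INTO #Staging FROM dbo.Target;", connection))
        {
            create.ExecuteNonQuery();
        }

        // Load everything, duplicates included, into the staging table.
        using (var bulk = new SqlBulkCopy(connection) { DestinationTableName = "#Staging" })
        {
            bulk.WriteToServer(excelRows);
        }

        // Update existing rows, insert new ones.
        const string mergeSql = @"
            MERGE dbo.Target AS t
            USING #Staging AS s ON t.id = s.id
            WHEN MATCHED THEN
                UPDATE SET t.name = s.name, t.number = s.number
            WHEN NOT MATCHED THEN
                INSERT (id, name, number) VALUES (s.id, s.name, s.number);";

        using (var merge = new SqlCommand(mergeSql, connection))
        {
            merge.ExecuteNonQuery();
        }
    }
}
```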

Using SQLBulkCopy - Significantly larger tables in SQL Server 2016 than in SQL Server 2014

*爱你&永不变心* submitted on 2019-12-22 18:24:57
Question: I have an application that uses SqlBulkCopy to move data into a set of tables. It has recently transpired that users on SQL Server 2016 are reporting problems with their hard drives being filled by very large databases (that should not be that large). This problem does not occur on SQL Server 2014. Upon inspection, running TableDataSizes.sql (script attached) showed large amounts of space in UnusedSpaceKB. I would like to know whether a) there is some bug in SQL Server 2016 or if our …
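
For comparison while investigating, a small diagnostic sketch that reports reserved versus used space per table from sys.dm_db_partition_stats — roughly the numbers TableDataSizes.sql surfaces, not an explanation of the cause:

```csharp
using System;
using System.Data.SqlClient;

class UnusedSpaceSketch
{
    // Reports reserved vs. used KB per table (pages are 8 KB each).
    static void Report(SqlConnection connection)
    {
        const string sql = @"
            SELECT  OBJECT_NAME(object_id)                                 AS TableName,
                    SUM(reserved_page_count) * 8                           AS ReservedKB,
                    SUM(used_page_count) * 8                               AS UsedKB,
                    (SUM(reserved_page_count) - SUM(used_page_count)) * 8  AS UnusedKB
            FROM    sys.dm_db_partition_stats
            GROUP BY object_id
            ORDER BY UnusedKB DESC;";

        using (var cmd = new SqlCommand(sql, connection))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine("{0}: unused {1} KB (reserved {2} KB)",
                    reader["TableName"], reader["UnusedKB"], reader["ReservedKB"]);
            }
        }
    }
}
```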

postgresql: how to get primary keys of rows inserted with a bulk copy_from?

痞子三分冷 submitted on 2019-12-22 10:27:54
Question: The goal is this: I have a set of values to go into table A, and a set of values to go into table B. The values going into B reference values in A (via a foreign key), so after inserting the A values I need to know how to reference them when inserting the B values. I need this to be as fast as possible. I made the B-value insert use a bulk copy_from: def bulk_insert_copyfrom(cursor, table_name, field_names, values): if not values: return print "bulk copy from prepare..." str_vals = "\n" …
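
COPY cannot return generated keys, so the usual workaround is a set-based INSERT ... RETURNING for the parent rows. The asker's code is psycopg2, but to stay consistent with the other examples here is a sketch in C# with Npgsql, with placeholder table and column names and an integer serial key assumed:

```csharp
using System.Collections.Generic;
using Npgsql;

class ReturningKeysSketch
{
    // Insert the table-A values in one statement and capture each generated
    // primary key, so the table-B rows can reference them afterwards.
    static Dictionary<string, int> InsertAndGetIds(NpgsqlConnection connection, string[] names)
    {
        var keyByName = new Dictionary<string, int>();
        using (var cmd = new NpgsqlCommand(
            "INSERT INTO table_a (name) SELECT unnest(@names) RETURNING name, id", connection))
        {
            cmd.Parameters.AddWithValue("names", names);   // string[] maps to a text[] parameter
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    keyByName[reader.GetString(0)] = reader.GetInt32(1);   // value -> generated PK
            }
        }
        return keyByName;
    }
}
```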

Sqlbulkcopy doesn't seem to work for me

核能气质少年 submitted on 2019-12-22 07:09:07
Question: I have created a DataTable and am trying to insert it through SqlBulkCopy, but somehow it doesn't seem to work for me. I get the error: The given value of type DateTime from the data source cannot be converted to type decimal of the specified target column. My data source is: DataTable dt = new DataTable(); dt.Columns.Add(new DataColumn("EmpId", typeof(Int64))); dt.Columns.Add(new DataColumn("FromDate", typeof(DateTime))); dt.Columns.Add(new DataColumn("ToDate", typeof(DateTime))); …
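
This error usually means SqlBulkCopy is mapping source to destination by ordinal (the default when no ColumnMappings are given), so a DateTime column ends up lined against a decimal column. A sketch of explicit name-based mappings, with an assumed destination table name:

```csharp
using System.Data;
using System.Data.SqlClient;

class ExplicitMappingSketch
{
    // Destination column names below are assumptions based on the DataTable's columns.
    static void BulkCopy(DataTable dt, SqlConnection connection)
    {
        using (var bulk = new SqlBulkCopy(connection))
        {
            bulk.DestinationTableName = "dbo.EmployeeLeave";   // assumed table name

            // Explicit name-based mappings remove the dependency on column order.
            bulk.ColumnMappings.Add("EmpId", "EmpId");
            bulk.ColumnMappings.Add("FromDate", "FromDate");
            bulk.ColumnMappings.Add("ToDate", "ToDate");
            // ...map the remaining columns by name in the same way.

            bulk.WriteToServer(dt);
        }
    }
}
```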