sqlbulkcopy

Inserting GUIDs with SqlBulkCopy

Posted by 跟風遠走 on 2020-01-23 05:27:36
Question: I'm trying to do a bulk insert with the SqlBulkCopy class from flat files created by the SQL Server Management Studio Import/Export Wizard. The files are comma separated. One row in the file can look like this:
{DCAD82A9-32EC-4351-BEDC-2F8291B40AB3},,{ca91e768-072d-4e30-aaf1-bfe32c24008f},900001:1792756,900001:1792757,basladdning,2011-04-29 02:54:15.380000000,basladdning,2011-04-29 02:54:15.380000000,{20A3C50E-8029-41DE-86F1-DDCDB9A78BA5}
The error I get is: System.InvalidOperationException was
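A minimal sketch of the kind of fix this usually needs: declare the GUID columns as System.Guid in the DataTable and parse the braced strings with Guid.Parse before handing the rows to SqlBulkCopy, rather than passing the raw strings through. The table name, column names and connection string below are placeholders, not taken from the question.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    string connectionString = "...";                 // placeholder
    string line = "{DCAD82A9-32EC-4351-BEDC-2F8291B40AB3},,{ca91e768-072d-4e30-aaf1-bfe32c24008f},900001:1792756";

    var table = new DataTable();
    table.Columns.Add("Id", typeof(Guid));           // uniqueidentifier column: use Guid, not string
    table.Columns.Add("ParentId", typeof(Guid));
    table.Columns.Add("Reference", typeof(string));

    string[] fields = line.Split(',');
    table.Rows.Add(
        Guid.Parse(fields[0]),                       // Guid.Parse accepts the braced {...} format
        Guid.Parse(fields[2]),
        fields[3]);

    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.ImportedRows";  // hypothetical destination table
        bulk.WriteToServer(table);
    }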

What's the drawback of SqlBulkCopy

Posted by 只谈情不闲聊 on 2020-01-23 05:14:17
Question: I did some research on "the best way to insert huge amounts of data into a DB with C#", and a lot of people suggested using SqlBulkCopy. I tried it out, and it really amazed me; SqlBulkCopy is undoubtedly very fast. It seems that SqlBulkCopy is a perfect way to insert data (especially huge data sets). But why don't we use it all the time? Is there any drawback to using SqlBulkCopy? Answer 1: Two reasons I can think of: As far as I know, it's only available for Microsoft SQL Server. In a lot
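For reference, the basic shape of a SqlBulkCopy call is small; a minimal sketch (connection string, table and column names are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    string connectionString = "Server=.;Database=MyDb;Integrated Security=true;";  // placeholder

    var rows = new DataTable();
    rows.Columns.Add("Id", typeof(int));
    rows.Columns.Add("Name", typeof(string));
    rows.Rows.Add(1, "example");

    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.MyTable";   // hypothetical destination table
        bulk.BatchSize = 5000;                       // rows per batch sent to the server
        bulk.WriteToServer(rows);                    // performs the bulk insert
    }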

How do I capture the data passed in SqlBulkCopy using the Sql Profiler?

Posted by ε祈祈猫儿з on 2020-01-22 11:59:38
Question: I use SQL Profiler all the time to capture SQL statements and rerun problematic ones. Very useful. However, some code uses the SqlBulkCopy API, and I have no idea how to capture those statements. I see the creation of temp tables, but nothing that populates them. It seems like SqlBulkCopy bypasses SQL Profiler, or I am not capturing the right events. Answer 1: Capturing event info for bulk insert operations (BCP.EXE, SqlBulkCopy, and I assume BULK INSERT and OPENROWSET(BULK...)) is possible, but you won't

Python Postgres Best way to insert data from table on one DB to another table on another DB

Posted by 折月煮酒 on 2020-01-17 05:50:10
Question: I have the following Python code that copies the contents of a table on Postgres DB1 and inserts them into a similar table on Postgres DB2. I want to speed it up by using bulk inserts. How do I achieve this?

import psycopg2
import sys
import os

all_data = []
try:
    connec = psycopg2.connect("host = server1 dbname = DB1 ")
    connecc = psycopg2.connect("host = server2 dbname = DB2 ")
    connec.autocommit = True
    connecc.autocommit = True
except:
    print("I am unable to connect to the database.")
cur = connec.cursor

Redshift copy creates different compression encodings from analyze

Posted by 不羁的心 on 2020-01-15 06:29:28
Question: I've noticed that AWS Redshift recommends different column compression encodings from the ones it automatically creates when loading data (via COPY) into an empty table. For example, I have created a table and loaded data from S3 as follows:

CREATE TABLE Client (
    Id varchar(511),
    ClientId integer,
    CreatedOn timestamp,
    UpdatedOn timestamp,
    DeletedOn timestamp,
    LockVersion integer,
    RegionId varchar(511),
    OfficeId varchar(511),
    CountryId varchar(511),
    FirstContactDate timestamp,

How to add CsvHelper records to DataTable to use for SqlBulkCopy to the database

Posted by 谁说胖子不能爱 on 2020-01-12 07:06:31
Question: I am trying to read a CSV file with CsvHelper, load each record into a DataTable, and then use SqlBulkCopy to insert the data into a database table. With my current code, I get an exception when adding a row to the DataTable. The exception is: "Unable to cast object of type 'MvcStockAnalysis.Models.StockPrice' to type 'System.IConvertible'. Couldn't store in Date Column. Expected type is DateTime." The example CSV file is from Yahoo Finance, for example: http://ichart.yahoo.com/table.csv?s
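One common fix for this particular exception is to add each record's property values to the DataTable rather than the record object itself, and to give the DataTable columns the same types as those properties. A sketch under that assumption; the StockPrice shape, file path and table name are illustrative, and the CsvReader constructor shown matches recent CsvHelper versions:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Globalization;
    using System.IO;
    using CsvHelper;

    public class StockPrice                      // hypothetical record type for the Yahoo Finance CSV
    {
        public DateTime Date { get; set; }
        public decimal Open { get; set; }
        public decimal Close { get; set; }
    }

    public static class CsvToSqlLoader
    {
        public static void Load(string csvPath, string connectionString)
        {
            var table = new DataTable();
            table.Columns.Add("Date", typeof(DateTime));   // column types mirror the record's property types
            table.Columns.Add("Open", typeof(decimal));
            table.Columns.Add("Close", typeof(decimal));

            using (var reader = new StreamReader(csvPath))
            using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
            {
                foreach (var record in csv.GetRecords<StockPrice>())
                {
                    // Add the individual property values, not the record object itself;
                    // adding the object is what triggers the IConvertible cast error.
                    table.Rows.Add(record.Date, record.Open, record.Close);
                }
            }

            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "dbo.StockPrice";  // hypothetical destination table
                bulk.WriteToServer(table);
            }
        }
    }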

SqlBulkCopy performance

Posted by 血红的双手。 on 2020-01-12 05:34:06
Question: I am working to increase the performance of bulk loads: hundreds of millions of records, loaded daily. I moved this over to use the IDataReader interface in lieu of DataTables and got a noticeable performance boost (500,000 more records a minute). The current setup is: a custom cached reader to parse the delimited files; the stream reader wrapped in a buffered stream; a custom object reader class that enumerates over the objects and implements the IDataReader interface; then SqlBulkCopy
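For context, the IDataReader route works because SqlBulkCopy.WriteToServer accepts any forward-only reader and streams rows through without materializing them in a DataTable. A sketch of that hookup using a SqlDataReader as the source (the question's custom file-backed IDataReader plugs in the same way; connection strings, table names and option values are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    string sourceConnectionString = "...";                   // placeholder
    string targetConnectionString = "...";                   // placeholder

    using (var source = new SqlConnection(sourceConnectionString))
    using (var target = new SqlConnection(targetConnectionString))
    {
        source.Open();
        target.Open();

        using (var cmd = new SqlCommand("SELECT Col1, Col2 FROM dbo.SourceTable", source))
        using (IDataReader reader = cmd.ExecuteReader())     // any IDataReader implementation works here
        using (var bulk = new SqlBulkCopy(target, SqlBulkCopyOptions.TableLock, null))
        {
            bulk.DestinationTableName = "dbo.TargetTable";    // hypothetical target table
            bulk.BatchSize = 10000;                           // tune per workload
            bulk.BulkCopyTimeout = 0;                         // no timeout for long-running loads
            bulk.EnableStreaming = true;                      // stream rows instead of buffering them
            bulk.WriteToServer(reader);                       // rows flow straight from the reader
        }
    }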

How do I read a large file from disk to database without running out of memory

Posted by 白昼怎懂夜的黑 on 2020-01-11 06:18:09
Question: I feel embarrassed to ask this question, as I feel like I should already know. However, given that I don't... I want to know how to read large files from disk into a database without getting an OutOfMemory exception. Specifically, I need to load CSV (or really tab-delimited) files. I am experimenting with CSVReader, and specifically this code sample, but I'm sure I'm doing it wrong. Some of their other code samples show how you can read streaming files of any size, which is pretty much what I want
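One way to keep memory bounded is to stream the file a line at a time and flush rows to SQL Server in fixed-size batches, so only one batch is ever held in memory. A rough sketch under that assumption (column names, delimiter, file path and connection string are placeholders):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    const int flushEvery = 10000;                      // rows kept in memory at any one time
    string connectionString = "...";                   // placeholder

    var buffer = new DataTable();
    buffer.Columns.Add("Col1", typeof(string));        // hypothetical columns
    buffer.Columns.Add("Col2", typeof(string));

    using (var bulk = new SqlBulkCopy(connectionString))
    using (var reader = new StreamReader("bigfile.tsv"))
    {
        bulk.DestinationTableName = "dbo.TargetTable"; // hypothetical table
        string line;
        while ((line = reader.ReadLine()) != null)     // read one line at a time, never the whole file
        {
            string[] fields = line.Split('\t');
            buffer.Rows.Add(fields[0], fields[1]);
            if (buffer.Rows.Count >= flushEvery)
            {
                bulk.WriteToServer(buffer);            // push the current batch to the server
                buffer.Clear();                        // drop the rows that were just sent
            }
        }
        if (buffer.Rows.Count > 0)
            bulk.WriteToServer(buffer);                // flush the final partial batch
    }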

How does SqlBulkCopy circumnavigate foreign key constraints?

Posted by 陌路散爱 on 2020-01-10 11:49:28
Question: I used SqlBulkCopy to insert a collection of rows into a table. I forgot to set an integer value on the rows. The missing column references another table, and this is enforced with a foreign key constraint. For every row inserted, the final integer value was zero, and zero didn't identify a row in the related table. When I changed the value to a valid one and then tried to switch it back to zero, it wouldn't accept it. So my question is: how does SqlBulkCopy manage to leave the
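The behaviour described here follows from SqlBulkCopy not checking foreign key and check constraints by default; to have the load validated like an ordinary INSERT, the CheckConstraints option has to be passed explicitly. A short sketch (connection string and table name are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    var someDataTable = new DataTable();               // assume this was populated elsewhere

    using (var bulk = new SqlBulkCopy("...connection string...",
                                      SqlBulkCopyOptions.CheckConstraints))  // enforce FK/check constraints during the load
    {
        bulk.DestinationTableName = "dbo.ChildTable";  // hypothetical table carrying the FK column
        bulk.WriteToServer(someDataTable);             // rows violating the FK now fail instead of slipping in
    }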