bulkinsert

Import CSV file into SQL Server

一笑奈何 submitted on 2019-12-16 20:05:46
Question: I am looking for help importing a .csv file into SQL Server using BULK INSERT, and I have a few basic questions. Issues: The CSV data may contain commas within a field (e.g. a description), so how can the import handle that data? If the client creates the CSV from Excel, then values that contain a comma are enclosed in "" (double quotes) [as in the example below], so how can the import handle this? How do we track the rows with bad data that the import skips? (does the import skip rows that…
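As a hedged sketch (the table name and file path are placeholders, not from the question), SQL Server 2017 and later let BULK INSERT parse quoted CSV fields directly via FORMAT = 'CSV' and FIELDQUOTE, while MAXERRORS and ERRORFILE control how many bad rows are skipped and where they are logged:

    -- Assumes SQL Server 2017+ for FORMAT = 'CSV' and FIELDQUOTE;
    -- dbo.ImportTarget and the file paths are placeholders.
    BULK INSERT dbo.ImportTarget
    FROM 'C:\data\input.csv'
    WITH (
        FORMAT = 'CSV',         -- quoted fields may contain commas
        FIELDQUOTE = '"',
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        FIRSTROW = 2,           -- skip the header row, if there is one
        MAXERRORS = 10,         -- tolerate up to 10 bad rows instead of aborting
        ERRORFILE = 'C:\data\input_errors.log'  -- skipped rows are written here
    );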

Bulk insert strategy from C# to SQL Server

此生再无相见时 submitted on 2019-12-14 03:41:51
Question: In our current project, customers send collections of complex/nested messages to our system. The frequency of these messages is approximately 1000-2000 messages per second. These complex objects contain the transaction data (to be added) as well as master data (which is added if not found). But instead of passing the ids of the master data, the customer passes the 'name' column. The system checks whether master data exists for these names. If found, it uses the ids from the database; otherwise it creates this master…
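One common shape for this kind of load, sketched here with invented table and column names, is to bulk-load each incoming batch into a staging table, insert any master names that are not known yet, and then resolve names to ids with a join while writing the transaction rows:

    -- dbo.staging is assumed to have been filled by SqlBulkCopy or BULK INSERT;
    -- dbo.master_data(id, name) and dbo.txn(master_id, amount) are hypothetical.

    -- 1. Add any master names in this batch that do not exist yet.
    INSERT INTO dbo.master_data (name)
    SELECT DISTINCT s.master_name
    FROM dbo.staging s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.master_data m WHERE m.name = s.master_name);

    -- 2. Insert the transaction rows, resolving each name to its id.
    INSERT INTO dbo.txn (master_id, amount)
    SELECT m.id, s.amount
    FROM dbo.staging s
    JOIN dbo.master_data m ON m.name = s.master_name;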

How to insert multiple rows into a SQLite 3 table?

你说的曾经没有我的故事 submitted on 2019-12-13 11:46:45
Question: In MySQL I'd use INSERT INTO `mytable` (`col1`, `col2`) VALUES (1, 'aaa'), (2, 'bbb'); but this causes an error in SQLite. What is the correct syntax for SQLite? Answer 1: This has already been answered here: Is it possible to insert multiple rows at a time in an SQLite database? To answer your comment on OMG Ponies' answer: as of version 3.7.11, SQLite does support multi-row inserts. Richard Hipp comments: "The new multi-valued insert is merely syntactic suger (sic) for the compound insert.…
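For reference, the two forms the answer describes look like this (using the table and values from the question):

    -- SQLite 3.7.11 and later accept the multi-row VALUES form directly:
    INSERT INTO mytable (col1, col2) VALUES (1, 'aaa'), (2, 'bbb');

    -- Older versions need the equivalent compound-select ("UNION ALL") insert:
    INSERT INTO mytable (col1, col2)
    SELECT 1, 'aaa'
    UNION ALL SELECT 2, 'bbb';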

How to bulk insert in SQL using Dapper in C# [duplicate]

允我心安 submitted on 2019-12-13 11:02:59
Question: This question already has answers here: Does Dapper support inserting multiple rows in a single query? (2 answers) Closed 6 months ago. I am using C# Dapper with MySQL. I have a list of classes that I want to insert into a MySQL table. I used to do this with TVPs in MS SQL; how do we do it in MySQL? Answer 1: Disclaimer: I'm the owner of Dapper Plus. This project is not free, but it supports MySQL and offers all bulk operations: BulkInsert, BulkUpdate, BulkDelete, BulkMerge, and some more options such as…
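Since MySQL has no table-valued parameters, the usual substitute is a multi-row INSERT built in batches from the object list (whether by a library such as Dapper Plus or by hand). A sketch of the kind of statement that ends up being sent, with a hypothetical table and columns:

    -- Hypothetical table; in practice the VALUES list is generated in batches
    -- (for example 500-1000 rows per statement) from the C# objects.
    INSERT INTO orders (customer_id, amount, created_at)
    VALUES
        (1, 10.50, '2019-12-01 10:00:00'),
        (2, 22.00, '2019-12-01 10:00:05'),
        (3,  7.25, '2019-12-01 10:00:09');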

How can I bulk insert with MongoDB (using PyMongo), even when one record of the bulk fails?

被刻印的时光 ゝ submitted on 2019-12-13 07:12:15
Question: I have some Python code that uses PyMongo to insert many lists (of 1000 objects each) into a collection with a unique index (the field name is data_id). However, some of my lists of objects have duplicate data across the different sets of lists to be inserted (e.g., perhaps the second list of 1000 objects has one or two records that are identical to some of the objects previously inserted in the first set of the bulk insert). Here's the problem: when the code goes to bulk insert a set of 1000…
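A minimal sketch of the usual approach (the database and collection names are placeholders): pass ordered=False to insert_many so the remaining documents are still written when a duplicate key is hit, then catch BulkWriteError and ignore the duplicate-key entries:

    from pymongo import MongoClient
    from pymongo.errors import BulkWriteError

    client = MongoClient()            # assumes a local mongod
    coll = client.mydb.mycollection   # placeholder database/collection names

    def insert_batch(docs):
        """Insert a list of documents, skipping any that violate the unique index."""
        try:
            coll.insert_many(docs, ordered=False)  # unordered: keep going past failures
        except BulkWriteError as exc:
            # 11000 is the duplicate-key error code; anything else is re-raised.
            fatal = [e for e in exc.details["writeErrors"] if e["code"] != 11000]
            if fatal:
                raise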

SQL BULK INSERT FROM errors

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-13 06:55:16
Question: I'm attempting to insert a CSV file into a Microsoft SQL Server database (through SQL Server Management Studio) like this: BULK INSERT [dbo].[STUDY] FROM 'C:\Documents and Settings\Adam\My Documents\SQL Server Management Studio\Projects\StudyTable.csv' WITH ( MAXERRORS = 0, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) But I am getting errors: Msg 4863, Level 16, State 1, Line 2 Bulk load data conversion error (truncation) for row 1, column 9 (STATUS). Msg 7399, Level 16, State 1, Line 2 The OLE DB provider…
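Truncation errors like Msg 4863 usually mean either that a header row is being loaded as data or that a value in the file is longer than its destination column. A hedged sketch of the usual adjustments (the new column width and the presence of a header row are assumptions, not facts from the question):

    -- Widen the destination column if real values exceed its current length.
    ALTER TABLE [dbo].[STUDY] ALTER COLUMN [STATUS] varchar(100) NULL;

    BULK INSERT [dbo].[STUDY]
    FROM 'C:\Documents and Settings\Adam\My Documents\SQL Server Management Studio\Projects\StudyTable.csv'
    WITH (
        FIRSTROW = 2,          -- skip a header row, if the file has one
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        MAXERRORS = 10,        -- don't abort on the first bad row while debugging
        ERRORFILE = 'C:\Temp\StudyTable_errors.log'
    );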

SQL Server Bulk Insert with FOREIGN KEY parameter (not existent in txt file, ERDs included)

戏子无情 submitted on 2019-12-13 04:11:59
Question: Okay, so I have a table ERD designed like so... for regular bulk inserts (source: iforce.co.nz), and a tab-delimited (\t) text file with information about each customer (consisting of about 100,000+ records): # columnA columnB columnC data_pointA data_pointB data_pointC And a stored procedure that currently does its intended job fine: CREATE PROCEDURE import_customer_from_txt_para @filelocation varchar(100) AS BEGIN TRUNCATE TABLE dbo.[customer_stg] DECLARE @sql nvarchar(4000) = ' BULK INSERT…
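The pattern this usually ends with, once the staging table is loaded, is an INSERT ... SELECT that supplies the foreign key value from a procedure parameter or a lookup rather than from the file. A sketch with a hypothetical target table and parameter name, reusing the column names shown in the file header:

    -- @group_id is a hypothetical procedure parameter; it is the FK value that
    -- does not exist anywhere in the tab-delimited file.
    INSERT INTO dbo.customer (columnA, columnB, columnC, group_id)
    SELECT s.columnA, s.columnB, s.columnC, @group_id
    FROM dbo.[customer_stg] s;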

SELECT bottleneck and INSERT INTO SELECT doesn't work on CockroachDB

浪尽此生 submitted on 2019-12-13 03:14:29
Question: I have to union two tables with the query below, and 'table2' has 15 GB of data, but it shows errors. I set max-sql-memory=.80 and I don't know how to solve this. When I execute this query with a LIMIT 50000 option, it works! Even 'select * from table2' shows the same error. I think there is a select bottleneck somehow.... Also, with this query, unusually, the latency of only 1 of the 3 nodes goes up (AWS EC2 i3.xlarge instances). ▶ Query: insert into table1 ( InvoiceID, PayerAccountId, LinkedAccountId, RecordType,…
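Since the single INSERT INTO ... SELECT has to materialise far more than one node can buffer, the usual workaround is to copy the rows in bounded batches keyed on an indexed column. A rough sketch, assuming InvoiceID (or some other indexed, monotonically increasing column) can serve as the batch key:

    -- Run repeatedly, replacing <last_id> with the highest InvoiceID copied so far,
    -- until a batch returns no rows. Each statement stays within memory limits.
    INSERT INTO table1 (InvoiceID, PayerAccountId, LinkedAccountId, RecordType)
    SELECT InvoiceID, PayerAccountId, LinkedAccountId, RecordType
    FROM table2
    WHERE InvoiceID > <last_id>
    ORDER BY InvoiceID
    LIMIT 10000;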

Failing to bulk insert with XML model

為{幸葍}努か submitted on 2019-12-13 02:24:19
Question: I am trying to use BULK INSERT to insert a large number of binary files with metadata into a SQL Server table. I want to specify only a subset of the columns to use (most importantly, the PK is a uniqueidentifier and I want SQL Server to generate this field), and so I must use a model (format) file. Not using a model file (supplying data for all columns) works fine with this file: 00000000-F436-49D0-B5A9-02DAB2E03F45, B, , , , JVBERio4MjEzNQ0KJSVFT0Y=, 109754, 2017-12-14 14:53:23, 2017-12-14 14:53:23,…
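If the XML format file keeps failing, a commonly used alternative (sketched here with hypothetical table, column, and file names) is to bulk insert the file into a staging table whose columns exactly match the file, then copy into the real table and let the uniqueidentifier key be generated by its default:

    -- Staging table mirrors the file's columns, so no format file is needed.
    BULK INSERT dbo.document_stg
    FROM 'C:\data\documents.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- The real table's PK has DEFAULT NEWID() (or NEWSEQUENTIALID()),
    -- so it is simply omitted from the column list.
    INSERT INTO dbo.document (doc_type, payload_base64, payload_size, created_at, modified_at)
    SELECT doc_type, payload_base64, payload_size, created_at, modified_at
    FROM dbo.document_stg;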

Insert .csv file into SQL Server with BULK INSERT and reorder columns

青春壹個敷衍的年華 submitted on 2019-12-13 01:42:17
Question: I'm building an ASP.NET application and I want to insert data into my SQL Server from a CSV file. I did it with this SQL command: BULK INSERT Shops FROM 'C:\..\file.csv' WITH ( FIELDTERMINATOR = ';', ROWTERMINATOR = '\n' ); It pretty much works, but I have an Id column with the AUTO INCREMENT (IDENTITY) option. I want to rearrange the inserted columns so that SQL Server automatically increments the Id column. How can I do that with the BULK method? (Of course, I don't want to edit the .csv file manually :P) Answer 1: Like I said in my comment:…
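One way to do this without touching the file (the non-identity column names below are assumptions, since the question doesn't list them) is to bulk insert through a view that exposes only the non-identity columns in the same order as the CSV; SQL Server then generates Id for every row:

    -- The view lists the CSV's columns in file order and omits the IDENTITY Id column.
    CREATE VIEW dbo.Shops_Import AS
    SELECT Name, Address, City   -- hypothetical columns matching the CSV layout
    FROM dbo.Shops;
    GO

    BULK INSERT dbo.Shops_Import
    FROM 'C:\..\file.csv'
    WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n');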