data-import

Import txt files using Excel interop in C# (QueryTables.Add)

与世无争的帅哥 | Submitted on 2019-12-08 07:04:32
Question: I am trying to insert text files into Excel cells using QueryTables.Add; there is no error, but the worksheet comes out empty, except for a single cell manipulated through the Value2 property. I have already used a macro to record the objects involved. Can you help me with this? (I am using VS 2008, C#, and Excel 2003 and 2007; both show empty cells.) Below is my code; thanks for your help.

Application application = new ApplicationClass();
try
{
    object misValue = Missing.Value;
    wbDoc = application.Workbooks.Open(flnmDoc, misValue,
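The question is about Excel interop, but the step QueryTables performs underneath is just splitting a delimited text file into rows of cell values. A minimal Python sketch of that parsing, for comparison only (the sample data and the tab delimiter are made up for illustration):

```python
import csv
import io

# Hypothetical sample text data; in practice this would be read from the
# .txt file (flnmDoc in the question) that the query table imports.
raw = "name\tqty\nwidget\t3\ngadget\t5\n"

# Split tab-delimited text into rows of cell values, the way a
# text-file query table import would before writing them to the sheet.
rows = list(csv.reader(io.StringIO(raw), delimiter="\t"))
```

If the worksheet stays empty in the interop version, comparing the parsed rows against what lands in the sheet can at least confirm the source file itself is readable.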

How can I index .html files in Solr?

|▌冷眼眸甩不掉的悲伤 | Submitted on 2019-12-07 07:37:04
Question: The files I want to index are stored on the server (I don't need to crawl): /path/to/files/. A sample HTML file is:

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="product_id" content="11"/>
<meta name="assetid" content="10001"/>
<meta name="title" content="title of the article"/>
<meta name="type" content="0xyzb"/>
<meta name="category" content="article category"/>
<meta name="first" content="details of the article"/>
<h4>title of the article</h4>
<p class
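Before handing documents like this to Solr, the meta tags have to be flattened into field/value pairs. A rough Python sketch of that extraction using only the standard library (the sample markup repeats a few of the question's tags; the field names come from them):

```python
from html.parser import HTMLParser

# A few of the question's meta tags, inlined as a stand-in for reading
# a file from /path/to/files/.
SAMPLE = """<meta name="product_id" content="11"/>
<meta name="assetid" content="10001"/>
<meta name="title" content="title of the article"/>"""

class MetaExtractor(HTMLParser):
    """Collect <meta name=... content=...> pairs into a flat, Solr-style document."""
    def __init__(self):
        super().__init__()
        self.doc = {}

    def handle_starttag(self, tag, attrs):
        # handle_startendtag (for self-closing tags) delegates here by default.
        if tag == "meta":
            a = dict(attrs)
            if "name" in a and "content" in a:
                self.doc[a["name"]] = a["content"]

parser = MetaExtractor()
parser.feed(SAMPLE)
```

Each resulting dict could then be posted to Solr's update endpoint, with the meta names mapped to fields declared in the schema.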

Bulk Insert Failed “Bulk load data conversion error (truncation)”

£可爱£侵袭症+ | Submitted on 2019-12-06 08:54:30
I've done data imports with SQL Server's BULK INSERT task hundreds of times, but this time I'm receiving an error that's unfamiliar and that I've tried troubleshooting, to no avail, with Google. Below is the code I use with a comma-delimited file where new rows are indicated by newline characters:

BULK INSERT MyTable
FROM 'C:\myflatfile.txt'
WITH (
    FIELDTERMINATOR = ','
    ,ROWTERMINATOR = '/n')
GO

It consistently works, yet now, on a simple file with a date and rate, it is failing with the error "Msg 4863, Level 16, State 1, Line 1 Bulk load data conversion error (truncation) for row
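Two usual causes of Msg 4863 are a value longer than the target column and a wrong row terminator; note that '/n' in the snippet above is a literal slash-n rather than the newline escape '\n', which can make the loader read the whole file as one oversized field. A small Python sketch of a pre-load check that flags values exceeding assumed column widths (the widths and sample data are hypothetical):

```python
import csv
import io

# Hypothetical declared widths of the target table's columns:
# a varchar(10) date and a varchar(8) rate.
widths = [10, 8]

# Inline stand-in for C:\myflatfile.txt; the second rate is too long.
data = "2019-12-06,0.0425\n2019-12-07,0.04250000019\n"

# Collect (row, column) positions whose value would be truncated on load.
too_long = [
    (r, c)
    for r, row in enumerate(csv.reader(io.StringIO(data)), start=1)
    for c, (value, w) in enumerate(zip(row, widths), start=1)
    if len(value) > w
]
```

Running such a check before BULK INSERT points directly at the offending row and column instead of leaving you with the generic truncation message.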

Import CSV file (contains some non-UTF8 characters) in MongoDb

不羁的心 | Submitted on 2019-12-06 07:07:55
Question: How can I import a CSV file that contains some non-UTF-8 characters into MongoDB? I tried a recommended import command:

mongoimport --db dbname --collection colname --type csv --headerline --file D:/fastfood.xls

Error message:

exception: Invalid UTF8 character detected

I would remove those invalid characters manually, but the size of the data is considerably big. I tried Google with no success. PS: mongo -v = 2.4.6. Thanks. Edit: BTW, I'm on Win7.

Answer 1: In Linux you could use the iconv command as
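One workaround, in the spirit of the iconv answer, is to strip or re-encode the offending bytes before running mongoimport, which also works on Windows. A Python sketch (the sample bytes are made up; errors="ignore" silently drops anything that is not valid UTF-8, so re-encoding from the file's real source encoding is preferable when you know it):

```python
# Inline stand-in for reading the raw bytes of the CSV file.
# \xe9 is 'e acute' in Latin-1, which is not valid UTF-8 in this position.
raw = b"burger,2.50\ncaf\xe9,3.10\n"

# Decode while discarding invalid byte sequences, then re-encode as clean UTF-8.
cleaned = raw.decode("utf-8", errors="ignore").encode("utf-8")
```

Writing `cleaned` back to a new file gives mongoimport pure UTF-8 input; with `raw.decode("latin-1").encode("utf-8")` instead, the accented characters would be preserved rather than dropped.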

How to import/read data from an XML file?

梦想与她 | Submitted on 2019-12-06 06:08:34
How do I access an XML file in C#? How do I count the number of nodes in that XML file? How am I supposed to access each and every node in that file? I have two XML files. One of them is dev.xml, which contains:

<Devanagri_to_itrans>
  <mapping>
    <character>अ</character>
    <itrans>a</itrans>
  </mapping>
  ...
</Devanagri_to_itrans>

The second file is guj.xml (with a very similar structure):

<Gujrathi_to_itrans>
  <mapping>
    <character>અ</character>
    <itrans>a</itrans>
  </mapping>
  ...
</Gujrathi_to_itrans>

I need to turn this into a two-dimensional array of the character mappings. Since you've added more
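In C# this is typically done with XmlDocument or LINQ to XML; the same traversal can be sketched compactly in Python with the standard library. The inline document stands in for dev.xml, and the second mapping row is an added example (U+0906 is आ, "A" in the ITRANS scheme):

```python
import xml.etree.ElementTree as ET

# Inline stand-in for dev.xml; in practice: root = ET.parse("dev.xml").getroot()
DEV_XML = """<Devanagri_to_itrans>
  <mapping><character>\u0905</character><itrans>a</itrans></mapping>
  <mapping><character>\u0906</character><itrans>A</itrans></mapping>
</Devanagri_to_itrans>"""

root = ET.fromstring(DEV_XML)

# One [character, itrans] pair per <mapping> node: the two-dimensional
# array of character mappings the question asks for.
pairs = [
    [m.findtext("character"), m.findtext("itrans")]
    for m in root.findall("mapping")
]
```

Counting nodes is then just `len(root.findall("mapping"))`, and the same loop works unchanged for guj.xml since the structure is identical apart from the root element name.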

Input data into R from .dat and .sps files

让人想犯罪 __ | Submitted on 2019-12-05 03:35:34
I've been trying to import data into R using a .dat file and a .sps file. The .dat file has no headers, and the varying column widths are of course defined in the .sps file. It also contains missing values. I've tried using spss.fixed.file from the memisc package, specifying it as:

x = spss.fixed.file(file = data.dat, columns.file = input.sps, varlab.file = input.sps)

and I get the following error:

Error in spss.parse.variable.labels(varlab.file) : too many 'variable label' statments

Has anyone used this command/package before? I can't understand whether it's OK for both the columns.file and the
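The .sps file essentially declares fixed-width column positions for the .dat file. As a language-neutral illustration of what spss.fixed.file does with those declarations, here is a Python sketch that slices records by a hypothetical column layout (the names, positions, and sample lines are all made up):

```python
# Hypothetical layout that an .sps file might declare:
# id in columns 1-3, score in columns 4-6, group in column 7.
colspecs = [("id", 0, 3), ("score", 3, 6), ("group", 6, 7)]

# Two fixed-width records; the blanks in the second line stand in for
# the missing values the question mentions.
lines = ["001042A", "002  7B"]

# Slice each line by the declared positions and strip the padding.
records = [
    {name: line[a:b].strip() for name, a, b in colspecs}
    for line in lines
]
```

In R itself, `read.fwf` with a `widths` vector taken from the .sps file is a common fallback when the memisc parser chokes on the syntax file.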

Excel to Oracle DB using VS 2005 C#

眉间皱痕 | Submitted on 2019-12-05 01:45:35
Question: I want to build a utility that can import data from an Excel sheet (the columns are fixed, but there can be any number of sheets) into an Oracle DB. Can you suggest how I should:

- Read the Excel sheets (n of them)? (What is the best way?)
- Validate the data?
- Bulk insert into the DB?

My concern here is performance: each sheet can have 200,000+ rows. PS: please remember I am a complete newbie to Oracle.

Answer 1: You can use Microsoft Integration Services and bulk-load the files with it. Another way is to convert the Excel sheets into CSV and load them
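Whatever reads the sheets, the insert side usually comes down to batched executemany calls rather than row-by-row inserts. A Python sketch of the batching idea, using sqlite3 as a stand-in for the Oracle connection (the table, column names, and batch size are arbitrary; with an Oracle driver the executemany call has the same shape):

```python
import sqlite3

# sqlite3 stands in for the Oracle connection in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE imports (sheet TEXT, row_id INTEGER, value REAL)")

# Hypothetical rows as they might come out of one parsed Excel sheet.
rows = [("Sheet1", i, i * 1.5) for i in range(1000)]

BATCH = 250  # insert in fixed-size batches instead of one row at a time
for start in range(0, len(rows), BATCH):
    conn.executemany(
        "INSERT INTO imports VALUES (?, ?, ?)",
        rows[start : start + BATCH],
    )
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM imports").fetchone()[0]
```

Validation fits naturally between parsing and the executemany call: reject or log bad rows per batch, so one malformed cell does not abort a 200,000-row load.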

Import selected columns from a CSV file to a SQL Server table

て烟熏妆下的殇ゞ | Submitted on 2019-12-04 05:21:37
Question: I am trying to import data from a CSV file into a SQL Server 2008 table. The data upload is working, but I want to import only selected columns, not all of them, and add them to a new table with the same number of columns. Using the wizard, it's not happening: the wizard selects all the columns. So, is it possible, using the wizard, to import only selected columns?

Answer 1: If you are using the Import/Export wizard, when you get to Select Source Tables and Views, click the button "Edit Mappings" on the
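Outside the wizard, column selection is straightforward to script: read the CSV with named columns and keep only the ones you want before loading. A Python sketch (the file contents and column names are invented):

```python
import csv
import io

# Inline stand-in for the source CSV file.
SOURCE = "id,name,email,notes\n1,Ann,a@x.com,hi\n2,Bob,b@x.com,yo\n"

KEEP = ["id", "email"]  # the only columns we actually want to load

# DictReader keys each row by the header line, so column selection
# is just a lookup, independent of the columns' physical order.
selected = [
    [row[k] for k in KEEP]
    for row in csv.DictReader(io.StringIO(SOURCE))
]
```

The trimmed rows could then be written to a new CSV for BULK INSERT, or inserted directly through a database driver.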

Import dump with SQLFILE parameter not returning the data inside the table

夙愿已清 | Submitted on 2019-12-02 17:50:50
Question: I am trying to convert a dump file to a .sql file using the SQLFILE parameter. I used the command:

impdp username/password DIRECTORY=dir DUMPFILE=sample.dmp SQLFILE=sample.sql LOGFILE=sample.log

I expected this to produce a SQL file containing the table data, but it created a SQL file with only DDL statements. For the export I used:

expdp username/password DIRECTORY=dir DUMPFILE=sample.dmp LOGFILE=sample.log FULL=y

The dump file size is 130 GB, so I believe the dump was exported correctly. Am I

Separate a column of a dataframe in undefined number of columns with R/tidyverse [duplicate]

对着背影说爱祢 | Submitted on 2019-12-02 13:38:25
Question: This question already has an answer here: R: Split Variable Column into multiple (unbalanced) columns by comma (1 answer). Closed 6 months ago.

I have to import a table that looks like the following dataframe:

> df = data.frame(x = c("a", "a.b", "a.b.c", "a.b.d", "a.d"))
> df
      x
1  <NA>
2     a
3   a.b
4 a.b.c
5 a.b.d
6   a.d

I'd like to separate the first column into one or more columns, based on how many separators I find. The output should look like this:

> df_separated
  col1 col2 col3
1    a <NA> <NA>
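The same unbalanced split (roughly what tidyr's separate does with fill = "right") can be sketched language-neutrally in Python: split each value on the separator, find the widest result, and right-pad the shorter rows:

```python
# Values from the question's dataframe column.
values = ["a", "a.b", "a.b.c", "a.b.d", "a.d"]

# Split on '.'; rows produce differing numbers of pieces.
parts = [v.split(".") for v in values]

# Pad every row with None up to the width of the widest split,
# mirroring the <NA> cells in the desired R output.
width = max(len(p) for p in parts)
table = [p + [None] * (width - len(p)) for p in parts]
```

The padding step is the key idea: the number of output columns is not known in advance, so it is derived from the data itself.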