read.csv

R read.csv how to ignore carriage return?

断了今生、忘了曾经 submitted on 2020-02-21 14:51:53
Question: I need to read a text file (tab-separated) that has some carriage returns inside some fields. If I use read.table, it gives me an error: line 6257 did not have 20 elements. If I use read.csv, it doesn't give an error, but it starts a new row at that point, putting the remaining fields into the first fields of the new row. How can I avoid this? I can't alter the file itself (the script is to be run elsewhere). Also, the broken strings don't have quotation marks (no strings in the file do). One option
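A minimal sketch of one way to work around this, assuming the file really has 20 tab-separated fields per record (as the error message suggests) and "myfile.txt" as a stand-in filename: read the raw lines, glue pieces together until a full record (19 tabs) is assembled, then parse the repaired text.

    raw <- readLines("myfile.txt")                    # assumed filename
    count_tabs <- function(x) lengths(regmatches(x, gregexpr("\t", x, fixed = TRUE)))
    fixed <- character(0)
    cur <- ""
    for (line in raw) {
      # a stray carriage return splits one record across two lines, so accumulate
      cur <- if (nzchar(cur)) paste(cur, line) else line
      if (count_tabs(cur) >= 19) {                    # 20 fields = 19 tabs per complete record
        fixed <- c(fixed, cur)
        cur <- ""
      }
    }
    dat <- read.table(text = fixed, sep = "\t", header = TRUE)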

Merge multiple .csv files into one [duplicate]

血红的双手。 submitted on 2020-02-20 11:21:48
Question: This question already has answers here: How to import multiple .csv files at once? (12 answers). Closed 2 years ago. I am aware this question has been asked multiple times, but despite trying to apply the aforementioned solutions I was not able to solve my little problem: I have saved all the .csv files that I am aiming to merge into one folder: > file_list <- list.files() > file_list[] [1] "SR-einfam.csv" "SR-garage.csv" "SR-hotel.csv" [4] "SR-IndustrieGewerbe.csv" "SR-mehrfam.csv" "SR-OffG.csv
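A minimal sketch of the usual approach (not the poster's exact code), assuming all files share the same columns and sit in the working directory; "merged.csv" is an assumed output name.

    file_list <- list.files(pattern = "\\.csv$")                 # all .csv files in the folder
    merged <- do.call(rbind, lapply(file_list, read.csv))        # read each file, then stack them
    write.csv(merged, "merged.csv", row.names = FALSE)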

read.csv() with UTF-8 encoding [duplicate]

我们两清 submitted on 2020-02-20 08:01:05
Question: This question already has answers here: Cannot read unicode .csv into R (3 answers). Closed 2 years ago. I am trying to read in data from a csv file and specify the encoding of the characters to be UTF-8. From reading through the ?read.csv() documentation, it seems that setting fileEncoding to "UTF-8" should accomplish this; however, I am not seeing that when checking. Is there a better way to specify the encoding of character strings to be UTF-8 when importing the data? Sample Data: Download
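A hedged sketch of the two read.csv arguments involved, with "data.csv" as an assumed filename: fileEncoding re-encodes the file as it is read, while encoding only declares that the strings are already UTF-8.

    dat  <- read.csv("data.csv", fileEncoding = "UTF-8", stringsAsFactors = FALSE)
    # alternative: mark the strings as already UTF-8 without re-encoding the connection
    dat2 <- read.csv("data.csv", encoding = "UTF-8", stringsAsFactors = FALSE)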

read.csv row.names

ⅰ亾dé卋堺 submitted on 2020-01-20 04:32:05
Question: I'm trying to read a column-oriented csv file into R as a data frame. The first line of the file is like so: sDATE, sTIME,iGPS_ALT, ... and then each additional line is a measurement: 4/10/2011,2:15,78, ... When I try to read this into R via d = read.csv('filename'), I get a duplicate 'row.names' error, since R thinks that the first column of the data is the row names, and since all of the measurements were taken on the same day, the values in the first column do not change. If I put in row
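A hedged sketch of the likely fix: when the header row contains one field fewer than the data rows, read.csv silently uses the first data column as row names, and duplicated dates then trigger this error. Forcing row.names = NULL keeps every column as data and numbers the rows instead.

    d <- read.csv('filename', row.names = NULL)
    # if the columns then look shifted by one, the real cause is usually a trailing
    # comma on each data line; compare names(d) with the file's header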

read.csv and skip last column in R [duplicate]

ぃ、小莉子 submitted on 2020-01-15 10:11:40
Question: This question already has answers here: Only read selected columns (3 answers). Closed 2 years ago. I have read several other posts about how to import csv files with read.csv while skipping specific columns. However, all the examples I have found had very few columns, so it was easy to do something like: columnHeaders <- c("column1", "column2", "column_to_skip") columnClasses <- c("numeric", "numeric", "NULL") data <- read.csv(fileCSV, header = FALSE, sep = ",", col.names = columnHeaders,
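A hedged sketch for a wide file: read only the first row to count the columns, then build a colClasses vector that marks just the last column as "NULL" so read.csv skips it. fileCSV is the poster's variable; everything else is illustrative.

    hdr <- read.csv(fileCSV, nrows = 1)
    classes <- rep(NA, ncol(hdr))       # NA lets read.csv guess each column's type
    classes[ncol(hdr)] <- "NULL"        # "NULL" tells read.csv to drop that column
    data <- read.csv(fileCSV, colClasses = classes)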

Loop read.csv files containing pattern in file name

帅比萌擦擦* submitted on 2020-01-15 09:13:48
Question: I created a vector with 30 words, called "club": club <- pixid$ack1 Next I want to import 30 csv files. Each filename contains one of the words in "club". for (i in club){ DCM.[i] <- read.csv(list.files(pattern = "[i]")) } However, I receive the following error: Error in file(file, "rt") : invalid 'description' argument. How can I read in all of these files containing names from the vector? I'm hoping this is just a syntax error. Answer 1: It is possible that there are multiple files for a single
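A hedged sketch of the likely fix: inside the loop, "[i]" is a literal regular-expression character class (it matches the letter i), not the value of i, so list.files() can return zero or several filenames and read.csv() then fails with the invalid 'description' error. Pass i itself as the pattern and collect the results in a list.

    DCM <- list()
    for (i in club) {
      f <- list.files(pattern = i, full.names = TRUE)
      if (length(f) > 0) DCM[[i]] <- read.csv(f[1])   # f[1] guards against a word matching several files
    }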

Speed up R script looping through files/folders to check thresholds, calculate averages, and plot

自古美人都是妖i submitted on 2020-01-07 03:10:29
Question: I'm trying to speed up some code in R. I think my looping methods can be replaced (maybe with some form of lapply or using sqldf), but I can't seem to figure out how. The basic premise is that I have a parent directory with ~50 subdirectories, and each of those subdirectories contains ~200 CSV files (a total of 10,000 CSVs). Each of those CSV files contains ~86,400 lines (one day of data at one-second resolution). The goal of the script is to calculate the mean and stdev for two intervals of time from each
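A hedged sketch of the usual speed-ups for this kind of job, assuming each CSV has a time column in seconds and a value column (the directory, column names, and the two intervals below are illustrative only): read with data.table::fread instead of read.csv, and let lapply walk the file list instead of nested loops.

    library(data.table)
    files <- list.files("parent_dir", pattern = "\\.csv$", recursive = TRUE, full.names = TRUE)
    summarise_one <- function(f) {
      dt <- fread(f)                                     # much faster than read.csv for large files
      int1 <- dt[second >= 0     & second < 3600]        # first interval (assumed column "second")
      int2 <- dt[second >= 43200 & second < 46800]       # second interval
      data.table(file = f,
                 mean1 = mean(int1$value), sd1 = sd(int1$value),   # assumed column "value"
                 mean2 = mean(int2$value), sd2 = sd(int2$value))
    }
    results <- rbindlist(lapply(files, summarise_one))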

Insert NA values into dataframe blank cells when importing read.csv/read.xlsx

旧街凉风 submitted on 2020-01-01 03:23:13
Question: The attached screenshot shows part of a data frame which I have just imported into R from an Excel file. In the cells which are blank, I need to insert 'NA'. How can I insert NA into any cell which is blank (whilst leaving the already populated cells alone)? Answer 1: The better question is how to read it into R so the missing cells are already NAs. Maybe you used something like this: read.csv(file, header=FALSE, strip.white = TRUE, sep=",") Specify the NA strings like this when you read it
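A minimal sketch completing that idea: tell read.csv which strings mean "missing" so blank cells arrive as NA ("file.csv" is an assumed filename).

    dat <- read.csv("file.csv", header = TRUE, strip.white = TRUE,
                    na.strings = c("", "NA"))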

rbind all given columns within a list

▼魔方 西西 submitted on 2019-12-24 11:51:10
Question: I am reading a variable number of .csv files, all contained in the present working directory, into a list, and would like to rbind the 2nd column of each of these .csv files. The files in the working directory look like this: 150601_0001.csv 150601_0002.csv 150601_0003.csv etc. I have the following code to read them all into a list for any given number of files in the directory (code comes from here): myfiles <- dir(pattern = "\\.(csv|CSV)$", full.names = TRUE) # get filenames and paths
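A hedged sketch of one way to continue from there (the post's own code stops at collecting the filenames): pull the 2nd column out of every file and combine the pieces; rbind gives one row per file, while cbind would put the columns side by side if that is the intent.

    myfiles <- dir(pattern = "\\.(csv|CSV)$", full.names = TRUE)
    second_cols <- lapply(myfiles, function(f) read.csv(f)[, 2])   # 2nd column of each file
    stacked <- do.call(rbind, second_cols)                         # assumes equal row counts per file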

Read.csv makes everything negative [duplicate]

会有一股神秘感。 submitted on 2019-12-24 01:52:33
Question: This question already has an answer here. Closed 6 years ago. Possible duplicate: R seems to multiply my data by -1. I have a simple csv file that looks like this: x y 1 2 1 3 2 1 2 3 I created it in MS Excel, saved it as csv, etc. I read it using this command: ttest<--read.csv("ttest.csv", header = TRUE) The resulting data looks like this: x y -1 -2 -1 -3 -2 -1 -2 -3 I've opened the original csv file in a text editor and it looks like it should. Answer 1: The reason is that your command: ttest<--read.csv(
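A short completion of the idea the truncated answer is pointing at: "<--" parses as the assignment "<-" followed by a unary minus, so every numeric column is negated on the way in. Dropping the extra minus sign reads the data unchanged.

    ttest <- read.csv("ttest.csv", header = TRUE)   # note: "<-", not "<--"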