R write dataframe column to csv having leading zeroes

Backend · Unresolved · 7 answers · 1057 views

故里飘歌 2020-12-21 04:21

I have a table that stores prefixes of different lengths. A snippet of the table (ClusterTable):

ClusterTable[ClusterTable$FeatureIndex == "Prefix2", 'Feat

7 Answers
  • 2020-12-21 04:35

    If you're trying to open the .csv with Excel, I recommend writing to Excel instead. First you'll have to pad the data, though.

        library(openxlsx)
        library(stringr)  # str_pad() comes from stringr, not dplyr
        library(dplyr)
    
        ClusterTable <- ClusterTable %>% 
          mutate(FeatureValue = as.character(FeatureValue),
                 FeatureValue = str_pad(FeatureValue, 2, 'left', '0'))
    
        write.xlsx(ClusterTable, "Filename.xlsx")
    
  • 2020-12-21 04:43

    If you just need it for the visuals, add one line before you write the csv file, like so:

    ClusterTable <- read.table(text="   FeatureIndex FeatureValue
               80      Prefix2           80
               81      Prefix2           81
               30      Prefix2           30
               70      Prefix2           70
               51      Prefix2           51
               84      Prefix2           84
               01      Prefix2           01
               63      Prefix2           63
               28      Prefix2           28
               26      Prefix2           26
               65      Prefix2           65
               75      Prefix2           75",
                               colClasses=c("character","character"))
    
    ClusterTable$FeatureValue <- paste0(ClusterTable$FeatureValue,"\t")
    
    write.csv(ClusterTable,file="My_Clusters.csv")
    

    It adds a character to the end of the value, but it's hidden in Excel.

  • 2020-12-21 04:43

    You have to modify your column using format:

    format(your_data$your_column, trim = FALSE)
    

    That way, when you export to .csv, the leading zeros are kept.
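
    If the column starts out numeric, a related base-R option is formatC with flag = "0", which zero-pads to a fixed width (a sketch with made-up values, not from the question's data):

    ```r
    # Hypothetical values standing in for a numeric FeatureValue column
    vals <- c(1L, 80L, 7L)

    # flag = "0" pads to the requested width with leading zeros and
    # returns a character vector, which write.csv() emits verbatim
    padded <- formatC(vals, width = 2, flag = "0")
    padded
    # [1] "01" "80" "07"
    ```

    Because the result is already character, the zeros survive write.csv(); only Excel's display layer would strip them.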

  • 2020-12-21 04:46

    This is pretty much the route you can take when exporting from R. It depends on the type of data and the number of records (size of data) you are exporting:

    • if you have many rows (thousands), txt is the best route; you can export to csv if you know the data has no leading or trailing zeros, otherwise use txt or xlsx. Exporting to csv will most likely remove the zeros.

    • if you don't deal with many rows, then the xlsx libraries are the better choice

    • some xlsx libraries depend on Java, so make sure you use one that does not require it

    • xlsx libraries are either problematic or slow when dealing with many rows, so txt or csv can still be the better route

    For your specific problem, it seems you don't deal with a large number of rows, so you can use:

    library(openxlsx)
    
    # read data from an Excel file or Workbook object into a data.frame
    df <- read.xlsx('name-of-your-excel-file.xlsx')
    
    # for writing a data.frame or list of data.frames to an xlsx file
    write.xlsx(df, 'name-of-your-excel-file.xlsx')
    
  • 2020-12-21 04:55

    I know this is an old question, but I happened upon a solution for keeping the leading zeros when opening .csv output in Excel. Before writing your .csv in R, add an apostrophe at the front of each value, like so:

    vector <- sapply(vector, function(x) paste0("'", x))
    

    When you open the output in Excel, the apostrophe tells Excel to keep all the characters and not drop leading zeros. At this point you can format the column as "text" and then do a find-and-replace to remove the apostrophes (maybe make a macro for this).

  • 2020-12-21 04:59

    When dealing with leading zeros you need to be cautious if exporting to Excel. Excel has a tendency to outsmart itself and automatically trim leading zeros. Your code is fine otherwise, and opening the file in any other text editor should show the zeros.
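
    A quick way to confirm this (a minimal sketch with made-up data; the filename is arbitrary) is to write a character column and inspect the raw file:

    ```r
    # Made-up data; FeatureValue is character, so the zero is part of the value
    df <- data.frame(FeatureValue = c("01", "80"), stringsAsFactors = FALSE)
    write.csv(df, "check.csv", row.names = FALSE)

    # Reading the raw lines back shows "01" with its zero intact --
    # it is Excel's display, not write.csv(), that drops it
    readLines("check.csv")
    ```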
