Reading in multiple CSVs with different numbers of lines to skip at start of file
Question: I have to read in about 300 individual CSVs. I have managed to automate the process using a loop and structured CSV names. However, each CSV has 14-17 lines of rubbish at the start, and the number varies randomly, so hard-coding a 'skip' parameter in the read.table command won't work. The column names and the number of columns are the same for each CSV.

Here is an example of what I am up against:

QUICK STATISTICS:
Directory: Data,,,,
File: Final_Comp_Zn_1
Selection: SEL{Ox*1000+Doma=1201}
Weight: None,,,
,
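One way to handle the varying amount of rubbish is to peek at each file with readLines, locate the line that contains the real column headers, and pass that position as the skip value. The sketch below assumes this approach; "SAMPLE" is a placeholder for one of your actual column names, and the file-name pattern "Final_Comp_Zn_%d.csv" is only a guess at your structured naming scheme.

read_one_csv <- function(path, marker = "SAMPLE") {
  # Read only the first few lines; the junk is 14-17 lines, so 30 is plenty
  first_lines <- readLines(path, n = 30)
  # Find the first line containing a known column name (assumed marker)
  header_row <- grep(marker, first_lines, fixed = TRUE)[1]
  # Skip everything above the header line and read the data proper
  read.csv(path, skip = header_row - 1, header = TRUE,
           stringsAsFactors = FALSE)
}

# Assumed structured file names; adjust the pattern to match your loop
files <- sprintf("Final_Comp_Zn_%d.csv", 1:300)
all_data <- do.call(rbind, lapply(files, read_one_csv))

Because the column names are identical across files, rbind-ing the individual data frames at the end should give one combined table; if the marker string also appears in the rubbish lines, pick a more distinctive column name or match the full header line instead.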