Question
I have a single-column CSV file containing duplicate records. There are more than ten thousand records, so I am using LOAD DATA LOCAL INFILE.
Example data:
ID
1
2
3
2
2
1
2
The MySQL table is called 'huts'.
My first question: is it possible to count how many duplicate rows there are for each value while inserting the data into the 'huts' table? I would like the populated huts table to look like this (one possible way to get this shape is sketched after the table):
ID count
1 2
2 4
3 1
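One possible approach (not necessarily the only or best one) is to load the raw values into a staging table first and aggregate from there. In the sketch below, huts_raw, the extra `count` column on huts, and the IGNORE 1 LINES clause for the 'ID' header row are all assumptions, not part of the original setup:
-- Hypothetical staging table that keeps every raw value, duplicates included
CREATE TABLE huts_raw (
  id INT NOT NULL
) ENGINE=MyISAM;

LOAD DATA LOCAL INFILE 'test.csv'
INTO TABLE huts_raw
IGNORE 1 LINES
(id);

-- Aggregate the staged rows into huts, assuming huts has an extra `count` column
INSERT INTO huts (id, `count`)
SELECT id, COUNT(*)
FROM huts_raw
GROUP BY id;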
My second question, in case the above is not possible: my current working code returns the following:
ID
1
2
3
$insert_query = "LOAD DATA LOCAL INFILE 'test.csv'
INTO TABLE huts
(id)
";
if (!mysql_query($insert_query)) {
    echo "Can't insert records into huts table: " . mysql_error($connection);
} else {
    echo "You have successfully inserted records into the huts table";
}
The table structure is:
CREATE TABLE `huts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=116 DEFAULT CHARSET=latin1
The database storage engine is MyISAM.
There is no unique key declared, so why is the query ignoring the duplicate rows? I expected the query to insert all the rows regardless of whether there are duplicates.
Answer 1:
Remove the AUTO_INCREMENT and PRIMARY KEY options from your table's id column if you don't care about duplicate data.
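A minimal sketch of that suggestion, keeping the file and table names from the question: with the PRIMARY KEY gone, LOAD DATA LOCAL no longer silently skips duplicate-key rows (with LOCAL it behaves as if IGNORE were specified), so every CSV row is kept, and per-id totals can then be computed with a GROUP BY. The IGNORE 1 LINES clause assumes the CSV really starts with the 'ID' header shown above; the final SELECT is just one way to get the counts afterwards.
CREATE TABLE `huts` (
  `id` int(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

-- Skip the header line, then load every value, duplicates included
LOAD DATA LOCAL INFILE 'test.csv'
INTO TABLE huts
IGNORE 1 LINES
(id);

-- Per-id totals computed on demand
SELECT id, COUNT(*) AS `count`
FROM huts
GROUP BY id;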
Source: https://stackoverflow.com/questions/24213516/load-data-infile-with-counting-duplicate-rows-in-mysql