How do I identify the character encoding of a website?
**What I'm trying to do:** I fetch a list of URIs from a database, download each page, remove the stopwords, and count how often each word appears in the page; then I try to save the result in MongoDB.

**The problem:** When I try to save the result, I get the error `bson.errors.InvalidDocument: documents must be valid UTF-8`. It appears to be related to byte sequences like `'\xc3someotherstrangewords'` and `'\xe2something'` that are left over when I process the webpages and try to remove the punctuation.
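Those `\xc3` / `\xe2` sequences suggest the page bytes were never decoded (or were decoded with the wrong codec) before being handed to pymongo, which requires proper Unicode strings. A minimal sketch of one common fix, decoding the raw bytes before any word counting (the function name and the `declared` parameter are my own; in practice `declared` would come from the HTTP `Content-Type` header or an HTML `<meta charset>` tag):

```python
def to_unicode(raw, declared=None):
    """Decode raw page bytes into a Unicode string.

    Tries the encoding the server declared (if any), then UTF-8,
    and finally falls back to Latin-1, which accepts any byte value,
    so the result is always a valid str that BSON can store.
    """
    candidates = []
    if declared:
        candidates.append(declared)
    candidates += ["utf-8", "latin-1"]
    for enc in candidates:
        try:
            return raw.decode(enc)
        except (UnicodeDecodeError, LookupError):
            # Wrong or unknown codec: try the next candidate.
            continue
    # Unreachable in practice: latin-1 never raises UnicodeDecodeError.
    return raw.decode("latin-1", errors="replace")


# UTF-8 bytes decode cleanly; Latin-1 bytes fall through to the fallback.
print(to_unicode(b"caf\xc3\xa9"))   # UTF-8-encoded 'café'
print(to_unicode(b"caf\xe9"))       # Latin-1-encoded 'café'
```

If you need a real guess rather than a fallback chain, a library such as `chardet` can estimate the encoding from the raw bytes; but the key point is to decode once, early, and work with `str` from then on, so nothing byte-like ever reaches the MongoDB insert.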