I have a MongoDB that stores product data for 204,639,403 items; the data has already been split up, by the item's country, into four logical
Would you consider using a database with better throughput that supports documents? I've heard success stories with TokuMX. And FoundationDB (where I'm an engineer) has very good performance with highly concurrent write loads and large documents. Happy to answer further questions about FoundationDB.
Most likely you are running into issues due to document growth; see http://docs.mongodb.org/manual/core/write-performance/#document-growth.
Mongo prefers records of fixed (or at least bounded) size. Growing a record beyond its pre-allocated storage forces the document to be moved to another location on disk, multiplying your I/O on every such write. If your document sizes are relatively homogeneous, consider pre-allocating "enough" space for your average document on insert (see the sketch below). Otherwise, consider splitting rapidly growing nested arrays into a separate collection, thereby replacing updates with inserts. Also check your fragmentation and consider compacting your databases from time to time, so that you keep a higher density of documents per block, which cuts down on hard page faults.
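A minimal sketch of the pre-allocation idea, assuming the MMAPv1-era behaviour described above and a hypothetical `products` collection; the padding size, field names, and sample document are illustrative, not taken from the question:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
products = client["catalog"]["products"]  # hypothetical database/collection

PAD_BYTES = 4096  # rough guess at the document's eventual size

def insert_with_padding(doc):
    # Insert with a throwaway filler field so the record is allocated
    # at (roughly) its final size on disk...
    doc["_padding"] = "x" * PAD_BYTES
    result = products.insert_one(doc)
    # ...then strip the filler; the on-disk allocation is kept, so later
    # growth (e.g. appending to nested arrays) can stay in place instead
    # of triggering a document move.
    products.update_one({"_id": result.inserted_id},
                        {"$unset": {"_padding": ""}})
    return result.inserted_id

insert_with_padding({"sku": "ABC-123", "country": "DE", "offers": []})
```

The same helper can be reused for the "split rapidly growing arrays into their own collection" approach: instead of pushing new entries onto an embedded array, insert each entry as its own (padded or fixed-size) document keyed by the parent's `_id`.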