Let's say I have several different containers and each one of them uses its own database. What is the best practice in this case regarding performance? Run one database container, say MySQL, that all the other containers share, or give each service its own database container?
You'd want to use a single database server, preferably one you can attach a shell to for administration, sharing a Unix socket, a port, or both with the linked containers. This way you'll have an easier time managing the database container as a service: tweaking performance, monitoring usage, backing up volumes, etc.
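A minimal sketch of that setup with the docker CLI, assuming the official mysql image and placeholder service images and credentials (a user-defined network stands in for legacy container links):

```sh
# One shared MySQL server for all services
docker network create backend

docker run -d --name shared-mysql \
  --network backend \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql \
  mysql:8

# Each application container talks to the same server over the network
docker run -d --name service-a \
  --network backend \
  -e DB_HOST=shared-mysql -e DB_NAME=service_a \
  my-service-a:latest

docker run -d --name service-b \
  --network backend \
  -e DB_HOST=shared-mysql -e DB_NAME=service_b \
  my-service-b:latest
```

Backups, monitoring and tuning then all target the one `shared-mysql` container and its `mysql-data` volume.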
Granted, there are non-standard situations where you might want independent servers, for instance to isolate host resources, users, or databases per server, though I'm certain this shouldn't apply to development environments.
Since Docker's container overhead is negligible here, the question is really one of architecture in a microservices paradigm.
Performance is indeed a complex question and there is no general advice, but maybe the following will help you:
Personally, I doubt that at the beginning of a project one should try to solve every possible performance problem in advance (#MVP, #agile). However, correct me if I'm wrong, but it looks like you don't have many resources (a single host?) and want to be thrifty with them from the start.
OK, what is your biggest concern right now?
If it's those limited host resources, then having two concurrent MySQL instances on the same host is maybe not that good (though it's not a problem for other setups).
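If memory or CPU on that single host is what worries you, you can also cap what each database container is allowed to consume; a sketch with made-up limits:

```sh
# Cap a database container's resources so two instances can't starve the host
docker run -d --name mysql-a \
  -e MYSQL_ROOT_PASSWORD=secret \
  --memory=512m --cpus=0.5 \
  mysql:8
```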
For one host I would propose starting with one database container but creating different schemas in it. It might involve additional work with the standard container (https://forums.docker.com/t/multiple-databases-in-official-mysql-container/8324).
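A sketch of that approach with the official mysql image, using its /docker-entrypoint-initdb.d hook, which runs any mounted .sql scripts on first start (schema and user names here are placeholders):

```sh
mkdir -p init
cat > init/create-schemas.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS service_a;
CREATE DATABASE IF NOT EXISTS service_b;
CREATE USER 'service_a'@'%' IDENTIFIED BY 'changeme';
CREATE USER 'service_b'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON service_a.* TO 'service_a'@'%';
GRANT ALL PRIVILEGES ON service_b.* TO 'service_b'@'%';
EOF

# The script is executed once, when the data volume is still empty
docker run -d --name shared-mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/init":/docker-entrypoint-initdb.d \
  -v mysql-data:/var/lib/mysql \
  mysql:8
```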
I would not worry too much now and would start with separate databases from the beginning. Being able to keep your services separated horizontally, all the way down to their databases, is hugely valuable! I would not want to weaken this design decision because of very theoretical future performance issues.
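For illustration, the per-service variant is roughly this (names, credentials and volumes are again placeholders), each service getting its own MySQL instance and data volume:

```sh
# One MySQL instance per service, each with its own volume
docker run -d --name mysql-service-a \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=service_a \
  -v mysql-a-data:/var/lib/mysql \
  mysql:8

docker run -d --name mysql-service-b \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=service_b \
  -v mysql-b-data:/var/lib/mysql \
  mysql:8
```

Moving one of these to another host later is then just a matter of re-running the command there and migrating its volume.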