I am working with MongoDB and Node.js. My MongoDB database is hosted on Atlas.
My backend had been working perfectly, but now it sometimes gets stuck, and when I look at the analytics on MongoDB Atlas it shows the maximum number of active connections has been reached at 100.
Can someone please explain why this is happening? Can I reset the connections and bring the count back to 0?
@Stennie I have used Mongoose to connect to the database.
Here is my configuration file:
const Mongoose = require('mongoose')
const Hoek = require('hoek')

// Plugin defaults; the connection URL is supplied through the plugin options
let defaults = {}

const mongooseOptions = {
  useNewUrlParser: true,
  autoReconnect: true,
  poolSize: 25,
  connectTimeoutMS: 30000,
  socketTimeoutMS: 30000
}

exports.register = (server, options, next) => {
  defaults = Hoek.applyToDefaults(defaults, options)

  // Reuse the existing connection if one is already open
  if (Mongoose.connection.readyState) {
    return next()
  }

  server.log(`${process.env.NODE_ENV} server connecting to ${defaults.url}`)
  return Mongoose.connect(defaults.url, mongooseOptions).then(() => {
    return next() // call the next item in hapi bootstrap
  })
}
Assuming your backend is deployed on Lambda, given the serverless tag.
Each invocation will either leave a container idle to prevent a cold start or reuse an existing one if available. You are leaving the connection open to reuse it between invocations, as advertised in the best practices.
With a poolSize of 25 (?) and 100 max connections, you should limit your function concurrency to 4.
Reserve concurrency to prevent your function from using all the available concurrency in the region, or from overloading downstream resources.
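If it helps, here is a minimal sketch of capping reserved concurrency with the AWS SDK for Node.js so that concurrency (4) times poolSize (25) stays within the 100-connection limit; the function name is a placeholder, and the same limit can be set from the Lambda console or CLI instead:

// Sketch only: reserve concurrency for the function so at most 4 containers
// run at once. "my-backend" is a placeholder function name.
const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

lambda.putFunctionConcurrency({
  FunctionName: 'my-backend',        // placeholder
  ReservedConcurrentExecutions: 4    // 4 containers * poolSize 25 = 100 connections
}).promise()
  .then(() => console.log('Reserved concurrency set'))
  .catch(console.error)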
More reading: https://www.mongodb.com/blog/post/optimizing-aws-lambda-performance-with-mongodb-atlas-and-nodejs
You could try a couple of things:

In a serverless environment, as already suggested by @Gabriel Bleu, why have such a high connectionLimit? A serverless environment keeps spawning new containers and stopping them as requests come and go. If multiple instances spawn concurrently, it would exhaust the MongoDB server limit very quickly. The concept of a connection pool is that x number of connections are established from every node (instance), but that does not mean all the connections are automatically released after querying. After completing ALL the DB operations, you should release each connection after use: mongoose.connection.close();
Note: Mongoose connection close will close all the connections of the connection pool. So ideally, this should be run just before returning the response (see the sketch below).
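A minimal sketch of that pattern in a Lambda handler; the MONGODB_URI environment variable, the Item model, and the handler shape are assumptions for illustration, not from the original code:

// Sketch only: connect, do ALL the DB work, then close just before returning.
// MONGODB_URI and the Item model are placeholders.
const mongoose = require('mongoose')

const ItemSchema = new mongoose.Schema({ name: String })
const Item = mongoose.models.Item || mongoose.model('Item', ItemSchema)

exports.handler = async (event) => {
  await mongoose.connect(process.env.MONGODB_URI, { useNewUrlParser: true })
  try {
    const items = await Item.find().lean()
    return { statusCode: 200, body: JSON.stringify(items) }
  } finally {
    // Closes every connection in the pool, as the note above explains,
    // so run it only after all DB work for this request is done.
    await mongoose.connection.close()
  }
}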
Why are you explicitly setting autoReconnect to true? The MongoDB driver internally reconnects whenever the connection is lost, and this is certainly not recommended for short-lifespan instances such as serverless containers.
If you are running in cluster mode, to optimize for performance, change the serverUri to the replica set URL format: MONGODB_URI=mongodb://<username>:<password>@<hostOne>,<hostTwo>,<hostThree>...&ssl=true&authSource=admin
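For what it's worth, a minimal sketch of how the options from the question might look after these two suggestions; the poolSize of 5 and the environment variable name are illustrative assumptions, not values from the answer:

// Sketch only: autoReconnect removed (the driver reconnects on its own) and a
// smaller pool so concurrent containers don't exhaust the Atlas limit.
// The poolSize value of 5 is an assumption for illustration.
const mongooseOptions = {
  useNewUrlParser: true,
  poolSize: 5,
  connectTimeoutMS: 30000,
  socketTimeoutMS: 30000
}

// Replica set style URI as above; credentials and host names are placeholders.
const uri = process.env.MONGODB_URI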
There are many factors affecting the max connection limit. You have MongoDB hosted on Atlas and, as you mentioned, the backend is Lambda, which means you have a serverless environment.
- Serverless environments spawn a new container for a new request and destroy it when it's no longer being used. The peak in connections shows that many new instances are being initialized, or that there are many concurrent requests from users. The best practice is to terminate a database connection once it's no longer needed. Since you have used Mongoose, you can terminate the connection with mongoose.connection.close(); it will release the connection back to the connection pool. Rather than exhausting the concurrent connection limit, you should release a connection once it's idle.
- Your configuration forces the database driver to reconnect after the connection is dropped by the database. You are explicitly setting autoReconnect to true, so the driver will quickly issue a new connection request once the connection is dropped. That may affect the concurrent connection limit. You should avoid setting it explicitly.
- Cluster mode can optimize the requests according to the load; you can change the server URI to the replica set format of your database. It may help to distribute the load.
- There is a small initial startup cost of approximately 5 to 10 seconds when the Lambda function is invoked for the first time and the MongoDB client in your AWS Lambda function connects to MongoDB. Connections to a mongos for a sharded cluster are faster than connecting to a replica set. Subsequent connections will be significantly faster for the duration of the lifecycle of the Lambda function, so each invocation will either leave a container idle to prevent a cold start (cold boot) or use an existing one if available.
- Atlas sets the limit for concurrent incoming connections to a cluster based on the cluster tier. If you try to connect when you are at this limit, MongoDB displays an error stating "connection refused because too many open connections". You can close any open connections to your cluster that are not currently in use, or scale up to a higher tier to support more concurrent connections. As mentioned in the best practices, you may restart the application. To prevent this issue in the future, consider utilizing the maxPoolSize connection string option to limit the number of connections in the connection pool (see the sketch after this list).
- The final solution to this issue, if your user base is too large for your current cluster tier, is upgrading to a larger Atlas cluster tier, which allows a greater number of connections.
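A minimal sketch tying together the maxPoolSize connection string option and the connection-reuse pattern from the Lambda best practices linked above; the URI format, database name, and the value 5 are assumptions for illustration:

// Sketch only: cache the Mongoose connection between invocations that reuse the
// same container, and cap the pool via the maxPoolSize connection string option.
// URI, db name, and the value 5 are placeholders.
const mongoose = require('mongoose')

// e.g. mongodb+srv://<user>:<password>@<cluster>/<dbname>?retryWrites=true&maxPoolSize=5
const uri = process.env.MONGODB_URI

let conn = null // survives across invocations that land on the same container

exports.handler = async (event, context) => {
  // Tell Lambda not to wait for the open connection pool before freezing the container
  context.callbackWaitsForEmptyEventLoop = false

  if (!conn) {
    conn = await mongoose.connect(uri, { useNewUrlParser: true })
  }

  // ... run queries with the cached connection here ...
  return { statusCode: 200, body: 'ok' }
}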
Source: https://stackoverflow.com/questions/56373536/number-of-active-connections-on-the-server-reached-to-max