I am writing a node.js app on Heroku and using the pg module. I can't figure out the "right" way to get a client object for each request that I need to query the database.
I am the author of pg-promise, which simplifies the use of node-postgres via promises.
It addresses the issue of connecting to and disconnecting from the database the right way, using the connection pool implemented by node-postgres, among other things, such as automated transactions.
An individual request in pg-promise boils down to just what's relevant to your business logic:
db.any('SELECT * FROM users WHERE status = $1', ['active'])
    .then(data => {
        console.log('DATA:', data);
    })
    .catch(error => {
        console.log('ERROR:', error);
    });
i.e. you do not need to deal with connection logic when executing queries, because you set up the connection only once, globally, like this:
const pgp = require('pg-promise')(/*options*/);
const cn = {
    host: 'localhost', // server name or IP address;
    port: 5432,
    database: 'myDatabase',
    user: 'myUser',
    password: 'myPassword'
};
// alternative:
// const cn = 'postgres://username:password@host:port/database';
const db = pgp(cn); // database instance;
You can find many more examples in the Learn by Example tutorial, or on the project's home page.
I'm the author of node-postgres. First, I apologize that the documentation has failed to make the right option clear: that's my fault. I'll try to improve it. I wrote a Gist just now to explain this, because the conversation grew too long for Twitter.
Using pg.connect is the way to go in a web environment.

A PostgreSQL server can only handle one query at a time per connection. That means if you have one global new pg.Client() connected to your backend, your entire app is bottlenecked by how fast Postgres can respond to queries. It literally lines everything up, queuing each query. Yeah, it's async, so that's all right... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100; not sure of the right number yet).
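The throughput argument can be sketched without a real database. Below is a toy model, not the pg API: the FakeClient and Pool names are made up for illustration. Each "client" runs one query at a time, so queries serialize behind a single client but run concurrently across several.

```javascript
// Toy model of connection pooling: one in-flight query per client.
class FakeClient {
  constructor() { this.busy = false; }
}

class Pool {
  constructor(size) {
    this.clients = Array.from({ length: size }, () => new FakeClient());
    this.waiters = []; // resolvers waiting for a free client
  }
  acquire() {
    const free = this.clients.find(c => !c.busy);
    if (free) { free.busy = true; return Promise.resolve(free); }
    return new Promise(resolve => this.waiters.push(resolve));
  }
  release(client) {
    const next = this.waiters.shift();
    if (next) next(client);   // hand the still-busy client to the next waiter
    else client.busy = false;
  }
}

// Simulate a query that holds a client for `ms` milliseconds.
async function query(pool, ms) {
  const client = await pool.acquire();
  await new Promise(resolve => setTimeout(resolve, ms));
  pool.release(client);
}

// Run 10 simulated 50 ms queries and report the wall-clock time taken.
async function timed(size) {
  const pool = new Pool(size);
  const start = Date.now();
  await Promise.all(Array.from({ length: 10 }, () => query(pool, 50)));
  return Date.now() - start;
}
```

With a pool size of 1 the ten simulated queries take roughly 500 ms; with a size of 10, roughly 50 ms — the kind of multiplier the answer is describing.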
new pg.Client is for when you know what you're doing: when you need a single long-lived client for some reason, or need to very carefully control its life-cycle. A good example of this is when using LISTEN/NOTIFY. The listening client needs to stay around, connected, and not shared, so it can properly handle NOTIFY messages. Another example would be opening a one-off client to kill some hung stuff, or in command-line scripts.
One very helpful thing is to centralize all access to your database into one file in your app. Don't litter pg.connect calls or new clients throughout. Have a file like db.js that looks something like this:
var pg = require('pg');

module.exports = {
    query: function(text, values, cb) {
        pg.connect(function(err, client, done) {
            if (err) return cb(err);
            client.query(text, values, function(err, result) {
                done();
                cb(err, result);
            });
        });
    }
};
This way you can change out your implementation from pg.connect to a custom pool of clients or whatever, and only have to change things in one place. Have a look at the node-pg-query module, which does just this.
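To illustrate why centralizing helps: callers depend only on the query(text, values, cb) signature, so the backing implementation can change in one place. Here is a hedged sketch using an in-memory stub — purely illustrative, not a real pg replacement — that honors the same contract:

```javascript
// "db.js" with a stub backend: same query(text, values, cb) contract,
// so callers never know (or care) what sits behind it.
const db = {
  query: function (text, values, cb) {
    // Stub: echo the request back as a fake result set.
    process.nextTick(function () {
      cb(null, { rows: [{ text: text, values: values }] });
    });
  }
};

// A caller (e.g. a route handler) stays identical whichever backend is used:
db.query('SELECT * FROM users WHERE id = $1', [42], function (err, result) {
  if (err) throw err;
  console.log(result.rows[0].text);
});
```

Swapping the stub for the real pg.connect-based implementation changes only db.js, never the callers.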
As you can see from the documentation, both options are valid, so choose whichever you prefer. Like you, I would go with the second option.
It is better to create a pg pool globally and, each time you need to do a db operation, acquire a client and then release it back to the pool. Once all db operations are done, end the pool using pool.end().

Sample code:
const pg = require('pg');

// dbConfig and callback come from the surrounding application context
let pool = new pg.Pool(dbConfig);
pool.connect(function(err, client, done) {
    if (err) {
        console.error('Error connecting to pg server: ' + err.stack);
        return callback(err);
    }
    console.log('Connection established with pg db server');
    client.query('SELECT * FROM employee', (err, res) => {
        done(); // release the client back to the pool
        if (err) {
            console.error('Error executing query on pg db: ' + err.stack);
            return callback(err);
        }
        console.log('Got query results: ' + res.rows.length);
        res.rows.forEach(function(empRecord) {
            console.log(empRecord.name);
        });
    });
});
For more details, you can refer to my blog post.
Pool is the way to go now. Something like this:
const { Pool } = require('pg');
const pool = new Pool({
    connectionString: DATABASE_URL,
    ssl: false,
    max: 20,
    idleTimeoutMillis: 30000,
    connectionTimeoutMillis: 2000,
});

module.exports = {
    query: (text, params) => pool.query(text, params)
};
It can then be used as db.query(text, params) for any single statement. For transactions, note that BEGIN, COMMIT, and ROLLBACK must all run on the same client, so check one out with pool.connect() instead of issuing each statement through db.query.
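A hedged sketch of running a transaction on a single checked-out client (required because each pool.query may use a different pooled client). The helper name withTransaction is ours, not part of the pg API; it assumes a pg-style pool exposing connect() and clients exposing query() and release().

```javascript
// Run `fn` inside a transaction on one checked-out client, committing on
// success and rolling back on error. The client is always released.
async function withTransaction(pool, fn) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```

Usage would look like withTransaction(pool, client => client.query('UPDATE accounts SET ...')), with all statements inside fn sharing one connection.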
Here's how I do it; sort of an "all of the above" approach:
Promise = require 'bluebird'
pg = module.exports = require 'pg'
Promise.promisifyAll pg.Client.prototype
Promise.promisifyAll pg.Client
Promise.promisifyAll pg.Connection.prototype
Promise.promisifyAll pg.Connection
Promise.promisifyAll pg.Query.prototype
Promise.promisifyAll pg.Query
Promise.promisifyAll pg
connectionString = process.env.DATABASE_URL
module.exports.queryAsync = (sql, values) ->
  pg.connectAsync connectionString
  .spread (connection, release) ->
    connection.queryAsync sql, values
    .then (result) ->
      console.log result.rows[0]
    .finally ->
      release()