Looking at the Postgres server log, I see that the exact same query on the same Postgres server takes much longer (about 10x longer) when invoked from a Linux client or from […]
You may want to check whether the slow client uses SSL encryption. SSL is used by default when it's enabled on the server and the client has been compiled with SSL support.
For queries that retrieve large amounts of data, the time difference is significant. Also, some Linux distributions such as Debian/Ubuntu ship with SSL enabled by default, even for TCP connections through localhost.
As an example, here's the time difference for a query retrieving 1.5M rows weighing a total of about 64 MB, with a warm cache.
Without encryption:
$ psql "host=localhost dbname=mlists sslmode=disable" Password: psql (9.1.7, server 9.1.9) Type "help" for help. mlists=> \timing Timing is on. mlists=> \o /dev/null mlists=> select subject from mail; Time: 1672.258 ms
With encryption:
$ psql "host=localhost dbname=mlists" Password: psql (9.1.7, server 9.1.9) SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256) Type "help" for help. mlists=> \o /dev/null mlists=> \timing Timing is on. mlists=> select subject from mail; Time: 7017.935 ms
To turn it off globally, set ssl = off in postgresql.conf.
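A minimal sketch of that setting (on pre-10 servers such as the 9.1 instance above, changing ssl requires a server restart, not just a reload):

    # postgresql.conf: disable SSL for all TCP connections
    ssl = off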
To turn it off for specific ranges of client addresses, add entries to pg_hba.conf with hostnossl in the first field, placed before the more generic host entries.
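For example (the addresses, databases, and auth method below are placeholders, not taken from the setup above), the hostnossl line must come before the broader host line because pg_hba.conf is matched top to bottom:

    # TYPE       DATABASE  USER  ADDRESS         METHOD
    hostnossl    all       all   192.168.1.0/24  md5
    host         all       all   0.0.0.0/0       md5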
To turn it off client-side, it depends on whether the driver exposes the sslmode connection parameter. If it doesn't, the PGSSLMODE environment variable may be used, provided the driver is implemented on top of libpq.
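For instance, with psql or any libpq-based client (the connection parameters here are just illustrative):

    $ psql "host=localhost dbname=mlists sslmode=disable"

    # or via the environment, for libpq-based drivers that don't expose sslmode:
    $ PGSSLMODE=disable psql "host=localhost dbname=mlists"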
As for connections through Unix domain sockets (local entries in pg_hba.conf), SSL is never used with them.
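So, as a sketch, another way to avoid the SSL overhead entirely when the client runs on the same machine is to connect through the socket instead of TCP (the socket directory varies by distribution; /var/run/postgresql is the Debian/Ubuntu default):

    $ psql "dbname=mlists"                           # no host given => Unix socket by default
    $ psql "host=/var/run/postgresql dbname=mlists"  # or name the socket directory explicitly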