For some reason my production DB decided to spew out this message. All application calls to the DB fail with the error:
PreparedStatementCallback; SQL [ /*lo
The filename looks like a temporary table created by a query in MySQL. These files are often very short-lived: they're created during one specific query and cleaned up immediately afterwards.
Yet they can get very large, depending on the amount of data the query needs to process in a temp table. Or you may have multiple concurrent queries creating temp tables, and if enough of these queries run at the same time, they can exhaust disk space.
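If you want to confirm that your own server is spilling temp tables to disk, the status counters are a quick first check (a minimal sketch, assuming you can reach the server with the mysql client):
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%';"
# A fast-growing Created_tmp_disk_tables counter means queries are materializing
# temp tables on disk (under tmpdir) instead of in memory.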
I do MySQL consulting, and I helped a customer who had intermittent disk full errors on his root partition, even though every time he looked, he had about 6GB free. After we examined his query logs, we discovered that he sometimes had four or more queries running concurrently, each creating a 1.5GB temp table in /tmp, which was on his root partition. Boom!
Solutions I gave him:
Increase the MySQL config variables tmp_table_size and max_heap_table_size so MySQL can create really large temp tables in memory (see the example after this list). But it's not a good idea to allow MySQL to create 1.5GB temp tables in memory, because there's no way to limit how many of these are created concurrently. You can exhaust your memory pretty quickly this way.
Set the MySQL config variable tmpdir to a directory on another disk partition with more space.
Figure out which of your queries is creating such big temp tables, and optimize the query. For example, use indexes to help that query reduce its scan to a smaller slice of the table. Or else archive some of the data in the table so the query doesn't have so many rows to scan.
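A rough sketch of the first two suggestions (the variable names are real MySQL settings, but the sizes and the /data/mysql-tmp path are placeholder values to adapt to your system):
mysql -u root -p -e "SET GLOBAL tmp_table_size = 256*1024*1024; SET GLOBAL max_heap_table_size = 256*1024*1024;"
# An internal temp table is written to disk once it exceeds the smaller of the
# two limits, so they are usually raised together.
# SET GLOBAL does not survive a restart, and tmpdir cannot be changed at runtime
# at all; to make both permanent, put them in my.cnf under [mysqld], e.g.:
#   tmp_table_size      = 256M
#   max_heap_table_size = 256M
#   tmpdir              = /data/mysql-tmp
# then restart mysqld.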
For me this issue came after a long period of not using MySQL or the webserver, so I was sure that my settings were correct. Simply restarting the service fixes this issue. The weird part is that one can still connect to the database, and even query/add tables, using the mysql tool. For example:
mysql -u root -p
I restarted using:
systemctl start mysqld.service
or service mysqld restart or /etc/init.d/mysqld restart
Note: depending on the machine/environment, one of these commands should restart the service.
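After restarting, a quick sanity check (assuming systemd and a root account you can log in with; adjust to your environment):
systemctl status mysqld.service    # confirm the service came back up
mysql -u root -p -e "SELECT 1;"    # confirm clients can connect and run queries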
Often this means your /tmp partition has run out of space and the file can't be created, or for whatever reason the mysqld process cannot write to that directory because of permission problems. Sometimes this is the case when selinux rains on your parade.
Any operation that requires a "temp file" will go into the /tmp directory by default. The name you're seeing is just some internal random name.
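A few quick checks for those causes (standard commands; getenforce is only present on SELinux-enabled systems):
df -h /tmp     # free space on the filesystem holding /tmp
ls -ld /tmp    # should show drwxrwxrwt: world-writable with the sticky bit
getenforce     # "Enforcing" means SELinux policy may be blocking mysqld's writes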
Check for permission issues and review the MySQL config.
Also check that you haven't hit disk space or quota limits.
Note: some systems limit the number of files (not just space); deleting some old session files fixed the issue in my case.
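To check the file-count limit rather than raw space, look at inode usage (assuming MySQL's temp files go to /tmp):
df -i /tmp     # IUse% at 100% means no new files can be created even with free space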
It's very easy: just give the /tmp directory the standard world-writable permission with the sticky bit (1777). Just type:
chmod 1777 /tmp
For those using VPS / virtual hosting.
I was using a VPS, getting errors with MySQL not being able to write to /tmp, and everything looked correct. I had enough free space, enough free inodes, correct permissions. Turned out the problem was outside my VPS: it was the machine hosting the VPS that was full. I only had "virtual space" in my file system, but the machine in the background which hosted the VPS had no "physical space" left. I had to contact the VPS company and they fixed it.
If you think this might be your problem, you could test writing a larger file to /tmp (1GB):
dd if=/dev/zero of=/tmp/file.txt count=1024 bs=1048576
I got a No space left on device error message, which was a giveaway that it was a disk/volume in the background that was full.
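If the write succeeds, remember to remove the test file afterwards so it doesn't eat the space you just verified:
rm /tmp/file.txt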