I came across a comment on the following blog post that recommends against using MEDIUMINT:
Don’t use [the 24-bit INT], even in MySQL. It’s dumb, and it’s slow, and the code that implements it is a crawling horror.
InnoDB stores MEDIUMINT as a three-byte value. But when MySQL has to do any computation, the three-byte MEDIUMINT is converted into an eight-byte unsigned long int (I assume nobody runs MySQL on 32-bit systems nowadays).
There are pros and cons, but you understand that the "It’s dumb, and it’s slow, and the code that implements it is a crawling horror" reasoning is not technical, right?
I would say MEDIUMINT makes sense when data size on disk is critical, i.e. when a table has so many records that even a one-byte difference per value (4-byte INT vs. 3-byte MEDIUMINT) adds up to a lot. It's a rather rare case, but possible.
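To put a rough number on it, here is a back-of-envelope sketch; the row count and secondary index count are hypothetical, purely for illustration:

#include <stdio.h>

/* Back-of-envelope: disk space saved by a 1-byte-narrower key column.
   The row count and secondary index count are made-up numbers. */
int main(void)
{
    const double rows = 500e6;          /* hypothetical: 500 million rows */
    const double secondary_indexes = 2; /* hypothetical: each copies the PK */

    /* In InnoDB, every secondary index entry carries a copy of the primary
       key, so one byte shaved off the PK is saved once in the clustered
       index and once per secondary index. */
    double bytes_saved = rows * (1 + secondary_indexes);

    printf("~%.2f GB saved by INT -> MEDIUMINT\n", bytes_saved / 1e9);
    return 0;
}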
mach_read_from_3 and mach_read_from_4, the primitives that InnoDB uses to read numbers from InnoDB records, are quite similar. They both return a ulint. I bet you won't notice a difference on any workload.
Just take a look at the code:
ulint
mach_read_from_3(
/*=============*/
    const byte* b)  /*!< in: pointer to 3 bytes */
{
    ut_ad(b);
    return( ((ulint)(b[0]) << 16)
          | ((ulint)(b[1]) << 8)
          | (ulint)(b[2])
          );
}
Do you think it's much slower than this?
ulint
mach_read_from_4(
/*=============*/
    const byte* b)  /*!< in: pointer to four bytes */
{
    ut_ad(b);
    return( ((ulint)(b[0]) << 24)
          | ((ulint)(b[1]) << 16)
          | ((ulint)(b[2]) << 8)
          | (ulint)(b[3])
          );
}
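If you want to convince yourself, here is a small stand-alone sketch (not InnoDB code, just the same shift-and-OR pattern) that you can compile and inspect; on a 64-bit build both decoders boil down to a few byte loads, shifts, and ORs:

#include <stdint.h>
#include <stdio.h>

/* Stand-alone equivalents of the two primitives above: decode a big-endian
   3-byte or 4-byte value into a wide unsigned integer. */
static uint64_t read_be_3(const unsigned char *b)
{
    return ((uint64_t)b[0] << 16) | ((uint64_t)b[1] << 8) | (uint64_t)b[2];
}

static uint64_t read_be_4(const unsigned char *b)
{
    return ((uint64_t)b[0] << 24) | ((uint64_t)b[1] << 16)
         | ((uint64_t)b[2] << 8)  | (uint64_t)b[3];
}

int main(void)
{
    const unsigned char rec[] = {0x12, 0x34, 0x56, 0x78};
    printf("3-byte value: %llu\n", (unsigned long long)read_be_3(rec)); /* 1193046 */
    printf("4-byte value: %llu\n", (unsigned long long)read_be_4(rec)); /* 305419896 */
    return 0;
}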
In the grand scheme of things, fetching a row is the big cost; the cost of simple functions and expressions, and even more so of data formats, is insignificant in how long a query takes.
On the other hand, if your dataset is too large to stay cached, the I/O overhead of fetching rows becomes even more significant. A crude rule of thumb says that a non-cached row takes 10 times as long to fetch as a cached one. Hence, shrinking the dataset (for example by using a smaller *INT) may give you a huge performance benefit.
This argument applies to ...INT, FLOAT vs DOUBLE, DECIMAL(m,n), DATETIME(n), etc. (A different discussion is needed for [VAR]CHAR/BINARY(...) and TEXT/BLOB.)
For those with a background in Assembly language: columns are packed within a row with no alignment padding, so a value can start at any byte offset, and a 3-byte field matches no register width anyway. Hence, the only sane way to write the code is to work at the byte level, to ignore register size, and to assume all values are mis-aligned.
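As a sketch of what working at the byte level looks like (the record layout here is made up for illustration, not InnoDB's actual format): a packed record puts fields at arbitrary offsets, so each value is assembled from individual bytes instead of being read through an integer pointer, which would assume alignment.

#include <stdint.h>
#include <stdio.h>

/* A packed record: 1-byte flag, 3-byte id, 2-byte count.  The 3-byte id
   starts at offset 1, so it cannot be read with an aligned 32-bit load;
   it has to be assembled byte by byte.  (Illustrative layout only.) */
int main(void)
{
    const unsigned char rec[] = {0x01, 0x00, 0xAB, 0xCD, 0x00, 0x07};

    uint32_t id    = ((uint32_t)rec[1] << 16)
                   | ((uint32_t)rec[2] << 8)
                   |  (uint32_t)rec[3];
    uint16_t count = (uint16_t)(((uint16_t)rec[4] << 8) | rec[5]);

    printf("id=%u count=%u\n", id, (unsigned)count); /* id=43981 count=7 */
    return 0;
}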
For optimization, a Rule of Thumb: if a tentative optimization does not (via a back-of-envelope calculation) yield at least a 10% improvement, don't waste your time on it; look for a bigger improvement instead. For example, indexes and Summary tables often provide 10x (not just 10%).
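As a worked example of that back-of-envelope step (the row widths are hypothetical): shrinking one column from INT to MEDIUMINT saves 1 byte per row, and whether that clears the 10% bar depends entirely on how wide the row is.

#include <stdio.h>

/* Back-of-envelope for the 10% rule: 1 byte saved per row, compared
   against two hypothetical row widths. */
int main(void)
{
    const double wide_row   = 200.0; /* bytes: a row with many columns (hypothetical) */
    const double narrow_row =   8.0; /* bytes: a slim link/junction row (hypothetical) */

    printf("wide row:   %.1f%% smaller -- below 10%%, skip it\n", 100.0 / wide_row);
    printf("narrow row: %.1f%% smaller -- clears the bar, worth a look\n", 100.0 / narrow_row);
    return 0;
}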