There is a big database, 1,000,000,000 rows, called threads (these threads actually exist; I'm not making things harder just because I enjoy it). Threads has only a few columns.
You should read the following and learn a little bit about the advantages of a well-designed InnoDB table and how best to use clustered indexes (only available with InnoDB!):
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
http://www.xaprb.com/blog/2006/07/04/how-to-exploit-mysql-index-optimizations/
Then design your system along the lines of the following simplified example:
The important features are that the tables use the InnoDB engine and that the primary key of the threads table is no longer a single auto-incrementing key but a composite clustered key based on a combination of forum_id and thread_id, e.g.
threads - primary key (forum_id, thread_id)

forum_id  thread_id
========  =========
       1          1
       1          2
       1          3
       1        ...
       1    2058300
       2          1
       2          2
       2          3
       2        ...
       2    2352141
     ...
Each forum row includes a counter called next_thread_id (unsigned int) which is maintained by a trigger and incremented every time a thread is added to a given forum. This also means we can store 4 billion threads per forum rather than 4 billion threads in total, as we would with a single auto_increment primary key for thread_id.
forum_id  title      next_thread_id
========  =====      ==============
       1  forum 1           2058300
       2  forum 2           2352141
       3  forum 3           2482805
       4  forum 4           3740957
     ...
      64  forum 64          3243097
      65  forum 65         15000000 -- ooh a big one
      66  forum 66          5038900
      67  forum 67          4449764
     ...
     247  forum 247               0 -- still loading data for half the forums!
     248  forum 248               0
     249  forum 249               0
     250  forum 250               0
The disadvantage of using a composite key is that you can no longer just select a thread by a single key value as follows:
select * from threads where thread_id = y;
Instead, you have to do:
select * from threads where forum_id = x and thread_id = y;
However, your application code should be aware of which forum a user is browsing, so it's not exactly difficult to implement - store the currently viewed forum_id in a session variable or hidden form field, etc.
Here's the simplified schema:
drop table if exists forums;
create table forums
(
  forum_id smallint unsigned not null auto_increment primary key,
  title varchar(255) unique not null,
  next_thread_id int unsigned not null default 0 -- last thread_id issued, which doubles as the thread count per forum
)engine=innodb;

drop table if exists threads;
create table threads
(
  forum_id smallint unsigned not null,
  thread_id int unsigned not null default 0,
  reply_count int unsigned not null default 0,
  hash char(32) not null,
  created_date datetime not null,
  primary key (forum_id, thread_id, reply_count) -- composite clustered index
)engine=innodb;
delimiter #

create trigger threads_before_ins_trig before insert on threads
for each row
begin
  declare v_id int unsigned default 0;

  -- grab the next per-forum thread id
  select next_thread_id + 1 into v_id from forums where forum_id = new.forum_id;

  -- assign it to the new thread and persist the counter
  set new.thread_id = v_id;
  update forums set next_thread_id = v_id where forum_id = new.forum_id;
end#

delimiter ;
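For example, a minimal insert looks like this (the hash and date values are just made up for illustration); the trigger fills in thread_id:

insert into threads (forum_id, reply_count, hash, created_date)
values (1, 0, md5(uuid()), now());

-- the trigger has set thread_id = next_thread_id + 1 for forum 1
select * from threads where forum_id = 1 order by thread_id desc limit 1;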
You may have noticed I've included reply_count as part of the primary key, which is a bit strange as the (forum_id, thread_id) composite is unique in itself. This is just an index optimisation which saves some I/O when queries that use reply_count are executed. Please refer to the 2 links above for further info on this.
I'm still loading data into my example tables and so far I have loaded approx. 500 million rows (half as many as your system). When the load process is complete I should expect to have approx:

250 forums * 5 million threads = 1,250,000,000 (1.25 billion rows)
I've deliberately made some of the forums contain more than 5 million threads; for example, forum 65 has 15 million threads:
forum_id  title      next_thread_id
========  =====      ==============
      65  forum 65         15000000 -- ooh a big one
select sum(next_thread_id) from forums;
sum(next_thread_id)
===================
539,155,433 (500 million threads so far and still growing...)
Under InnoDB, summing the next_thread_id values to give a total thread count is much faster than the usual:
select count(*) from threads;
How many threads does forum 65 have?
select next_thread_id from forums where forum_id = 65;
next_thread_id
==============
15,000,000 (15 million)
Again, this is faster than the usual:
select count(*) from threads where forum_id = 65;
OK, now we know we have about 500 million threads so far and that forum 65 has 15 million of them - let's see how the schema performs :)
select forum_id, thread_id from threads where forum_id = 65 and reply_count > 64 order by thread_id desc limit 32;
runtime = 0.022 secs
select forum_id, thread_id from threads where forum_id = 65 and reply_count > 1 order by thread_id desc limit 10000, 100;
runtime = 0.027 secs
Looks pretty performant to me - so that's a single table with 500+ million rows (and growing), with a query that covers 15 million rows in 0.02 seconds (while under load!)
If you ever need to scale beyond this, other things to look into would include:
partitioning by range (see the sketch after this list)
sharding
throwing money and hardware at it
etc...
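As a rough sketch, range partitioning the threads table by forum_id might look something like this (requires MySQL 5.1+; table name and partition boundaries are invented for the example):

create table threads_by_range
(
  forum_id smallint unsigned not null,
  thread_id int unsigned not null default 0,
  reply_count int unsigned not null default 0,
  hash char(32) not null,
  created_date datetime not null,
  primary key (forum_id, thread_id, reply_count)
)engine=innodb
partition by range (forum_id)
(
  partition p0 values less than (64),
  partition p1 values less than (128),
  partition p2 values less than (192),
  partition p3 values less than maxvalue
);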
Hope you find this answer helpful :)
EDIT: Your one-column indices are not enough. You would need to, at least, cover the three involved columns.
More advanced solution: replace replycount > 1 with hasreplies = 1 by creating a new hasreplies field that equals 1 when replycount > 1. Once this is done, create an index on the three columns, in that order: INDEX(forumid, hasreplies, dateline). Make sure it's a BTREE index to support ordering.
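A sketch of that change in SQL, assuming the question's thread table with forumid, replycount and dateline columns (keeping hasreplies in sync on new replies is left to a trigger or application code):

alter table thread add column hasreplies tinyint not null default 0;

-- backfill the flag for existing rows
update thread set hasreplies = 1 where replycount > 1;

-- BTREE composite index in the order the query filters and sorts
create index forumid_hasreplies_dateline on thread (forumid, hasreplies, dateline);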
You're selecting based on:
forumid
hasreplies
dateline
Once you do this, your query execution will involve:

forumid = X. This is a logarithmic operation (duration: log(number of forums)).
hasreplies = 1 (while still matching forumid = X). This is a constant-time operation, because hasreplies is only 0 or 1.

My earlier suggestion to index on replycount was incorrect, because it would have been a range query and thus prevented the use of dateline to sort the results (so you would have selected the threads with replies very fast, but the resulting million-line list would have had to be sorted completely before looking for the 100 elements you needed).
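For illustration, the query that benefits looks something like this (forumid = 65 is just an example value):

select * from thread
where forumid = 65 and hasreplies = 1
order by dateline desc
limit 10000, 100;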
IMPORTANT: while this improves performance in all cases, your huge OFFSET value (10000!) is going to decrease performance, because MySQL does not seem to be able to skip ahead despite reading straight through a BTREE. So, the larger your OFFSET is, the slower the request will become.
I'm afraid the OFFSET problem is not automagically solved by spreading the computation over several computers (how do you skip an offset in parallel, anyway?) or by moving to NoSQL. All solutions (including NoSQL ones) will boil down to simulating OFFSET based on dateline (basically saying dateline > Y LIMIT 100 instead of LIMIT Z, 100, where Y is the date of the item at offset Z). This works, and eliminates any performance issues related to the offset, but prevents going directly to page 100 out of 200.
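A sketch of that keyset approach, where @Y holds the dateline of the item at offset Z (forumid = 65 is again just an example value):

select * from thread
where forumid = 65 and hasreplies = 1 and dateline > @Y
order by dateline
limit 100;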
Indices are a must - but remember to choose the right type of index: BTREE is more suitable when using queries with "<" or ">" in your WHERE clauses, while HASH is more suitable when you have many distinct values in one column and you are using "=" or "<=>" in your WHERE clause.
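For example (index and table names invented; note that stock MySQL only honours HASH indexes on MEMORY tables, while InnoDB and MyISAM indexes are BTREE):

-- BTREE: suits range predicates such as < and >
create index idx_thread_dateline on thread (dateline) using btree;

-- HASH: only honoured by the MEMORY engine; suits = lookups on high-cardinality columns
create table thread_lookup
(
  hash char(32) not null,
  thread_id int unsigned not null,
  index idx_hash (hash) using hash
)engine=memory;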
Further reading: http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
You should not be trying to fit a database architecture to hardware you're planning to buy, but instead plan to buy hardware to fit your database architecture.
Once you have enough RAM to keep the working set of indexes in memory, all your queries that can make use of indexes will be fast. Make sure your key buffer is set large enough to hold the indexes (key_buffer_size caches MyISAM indexes; for InnoDB the equivalent is the buffer pool).
So if 12GB is not enough, don't use 10 servers with 12GB of RAM, use fewer with 32GB or 64GB of RAM.
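For example, the relevant settings live in my.cnf (the sizes below are purely illustrative):

[mysqld]
# MyISAM index cache - size it to hold the working set of indexes
key_buffer_size = 4G
# InnoDB data and index cache - usually most of the machine's RAM
innodb_buffer_pool_size = 48G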
Part of the question relates to the NoSQL vs. MySQL choice, and there is actually one fundamental thing hidden here: SQL is easy for a human to write but a bit difficult for a computer to read. In high-volume databases I would recommend avoiding a SQL backend, as it requires an extra step: command parsing. I have done extensive benchmarking, and there are cases where the SQL parser is the slowest point. There is nothing you can do about it, although you can possibly use pre-parsed (prepared) statements and access them.
BTW, it is not widely known, but MySQL grew out of a NoSQL database. The company where David and Monty, the authors of MySQL, worked was a data warehousing company, and they often had to write custom solutions for uncommon tasks. This led to a big stack of homebrew C libraries used to hand-write database functions when Oracle and others performed poorly. SQL was added to this nearly 20-year-old zoo in 1996, almost for fun. What came after, you know.
Actually, you can avoid the SQL overhead with MySQL too. But usually SQL parsing is not the slowest part; it is just good to know about. To test parser overhead, you can simply benchmark a trivial statement such as "SELECT 1", for example ;).
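A quick sketch of that comparison from the mysql client (a prepared statement is parsed once, then re-executed without re-parsing):

-- parsed on every execution
select 1;

-- parsed once, executed many times
prepare stmt from 'select 1';
execute stmt;
execute stmt;
deallocate prepare stmt;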