You need to think about what the actual benefits of threads are. Remember that on a single-core machine, threads don't actually allow concurrent execution, just the impression of it. Behind the scenes, the CPU is context-switching between the different threads, doing a little work on each in turn. Therefore, if I have several tasks that involve no waiting, running them concurrently (on a single core) will be no quicker than running them linearly. In fact, it will be slower, due to the added overhead of the frequent context switches.
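If it helps to see that concretely, here is a rough sketch in C++ (my choice of language; all the names are just illustrative). It does the same purely CPU-bound work once linearly and once split across four threads. On a genuinely single-core machine, or with the process pinned to one core (e.g. with taskset -c 0 on Linux), the threaded run should come out no faster, and usually slightly slower, because of the context switching:

    // CPU-bound comparison: the same total work done linearly, then split
    // across four threads. With only one core available, the threaded version
    // has nothing to gain and pays for the extra context switches.
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <thread>
    #include <vector>

    // A purely CPU-bound task: no waiting, just arithmetic.
    std::uint64_t busy_work(std::uint64_t iterations) {
        std::uint64_t acc = 0;
        for (std::uint64_t i = 0; i < iterations; ++i) acc += i * i;
        return acc;
    }

    int main() {
        const std::uint64_t total = 200'000'000;
        using ms = std::chrono::milliseconds;

        auto t0 = std::chrono::steady_clock::now();
        volatile std::uint64_t linear = busy_work(total);     // one linear pass
        auto t1 = std::chrono::steady_clock::now();

        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i)                            // same work, four threads
            workers.emplace_back([&] { volatile auto r = busy_work(total / 4); (void)r; });
        for (auto& w : workers) w.join();
        auto t2 = std::chrono::steady_clock::now();

        std::cout << "linear:   " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n"
                  << "threaded: " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
        (void)linear;
        return 0;
    }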
If that is the case, then why ever use threads on a single-core machine? Well, firstly, because tasks can sometimes involve long periods of waiting on some external resource, such as a disk or other hardware device, to become available. Whilst a task is in such a waiting state, threading allows other tasks to continue, thus using the CPU's time more efficiently.
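A small sketch of that idea (the 500 ms sleep just stands in for a blocking read from a disk or device): while one thread is parked waiting, the other keeps the CPU busy, so the total wall-clock time is roughly the longer of the two rather than their sum:

    // While one thread is blocked waiting (the sleep stands in for a slow
    // disk or device read), the main thread carries on with useful CPU work.
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        using namespace std::chrono_literals;

        std::thread io_task([] {
            std::this_thread::sleep_for(500ms);   // pretend blocking I/O
            std::cout << "I/O finished\n";
        });

        long long acc = 0;                        // meanwhile, keep the CPU busy
        for (long long i = 0; i < 100'000'000; ++i) acc += i;
        std::cout << "compute finished, acc = " << acc << "\n";

        io_task.join();
        return 0;
    }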
Secondly, tasks may have a deadline of some sort in which to complete, particularly if they are responding to an event. The classic example is the user interface of an application. The computer should respond to user action events as quickly as possible, even if it is busy performing some other long-running task, otherwise the user will become agitated and may believe the application has crashed. Threading allows this to happen.
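Here is a rough sketch of that pattern, with a plain loop standing in for a real UI toolkit's event loop (the names job_done and worker are mine, not any particular API): the long job runs on a worker thread, and the main thread stays free to keep responding:

    // The long job runs on a worker thread; the main loop (standing in for a
    // UI toolkit's event loop) keeps polling and stays responsive throughout.
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        using namespace std::chrono_literals;
        std::atomic<bool> job_done{false};

        std::thread worker([&] {
            std::this_thread::sleep_for(2s);      // pretend long-running computation
            job_done = true;
        });

        while (!job_done) {
            std::cout << "handling events, still responsive...\n";
            std::this_thread::sleep_for(200ms);   // pretend event-polling interval
        }

        worker.join();
        std::cout << "long job finished\n";
        return 0;
    }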
As for games, I am not a games programmer, but my understanding of the situation is this: 3D games create a programmatic model of the game world; players, enemies, items, terrain, etc. This game world is updated in discrete steps, based on the amount of time that has elapsed since the previous update. So, if 1 ms has passed since the last time round the game loop, the position of an object is updated by using its velocity and the elapsed time to determine the delta (obviously the physics is a bit more complicated than that, but you get the idea). Other factors such as AI and player input may also contribute to the update. When everything is finished, the updated game world is rendered as a new frame and the process begins again. This process usually occurs many times per second.
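A minimal sketch of that update step, with made-up names (Object, position, velocity) rather than anything from a real engine: each pass measures the time since the previous pass and moves the object by velocity multiplied by that delta:

    // Each pass through the loop measures the elapsed time since the previous
    // pass and advances the object's position by velocity * dt.
    #include <chrono>
    #include <iostream>
    #include <thread>

    struct Object {
        double position = 0.0;   // 1D for simplicity
        double velocity = 5.0;   // units per second
    };

    int main() {
        using clock = std::chrono::steady_clock;
        Object player;
        auto previous = clock::now();

        for (int frame = 0; frame < 5; ++frame) {     // a real loop runs until quit
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - previous).count();
            previous = now;

            player.position += player.velocity * dt;  // the delta described above
            std::cout << "frame " << frame << ": position = " << player.position << "\n";

            std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60 updates/s
        }
        return 0;
    }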
When we think about the game loop in this way, we can see that the engine is in fact achieving a very similar goal to that of threading. It has a number of long-running tasks (updating the world's physics, handling user input, etc.), and it gives the impression that they are happening concurrently by breaking them down into small pieces of work and interleaving those pieces, but instead of relying on the CPU or operating system to manage the time spent on each, it is doing so itself. This means it can keep all the different tasks properly synchronised, and avoid the complexities that come with real threading: locks, pre-emption, re-entrant code, etc. There is no performance penalty to this approach either, because, as we said, a single-core machine can only really execute code serially anyway.
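Sketched as code, the idea looks something like this (all the subsystem functions are placeholders I've invented, not real engine APIs): one thread, one loop, with each task doing a small slice of work per frame, and the loop itself deciding when each gets to run:

    // One thread, one loop: each subsystem does a small slice of work per
    // frame, and the loop itself decides when each runs, so nothing needs
    // locks, pre-emption handling or re-entrant code.
    #include <chrono>

    void handleInput() {}                  // placeholders: one frame's worth of work each
    void updateAI(double /*dt*/) {}
    void updatePhysics(double /*dt*/) {}
    void renderFrame() {}

    int main() {
        using clock = std::chrono::steady_clock;
        auto previous = clock::now();

        for (int frame = 0; frame < 3; ++frame) {   // a real engine loops until quit
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - previous).count();
            previous = now;

            handleInput();         // the "long running tasks", interleaved by
            updateAI(dt);          // the loop itself rather than by the OS
            updatePhysics(dt);     // scheduler
            renderFrame();
        }
        return 0;
    }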
Things change when we have a multi-core system. Now tasks can run genuinely in parallel, and there may indeed be a benefit to using threading to handle different parts of the game world updates, so long as we can manage to synchronise the results to render consistent frames. We would therefore expect that, with the advent of multi-core systems, game engine developers would be working on this. And it turns out they are. Valve, the makers of Half-Life, have recently introduced multi-processor support into their Source Engine, and I imagine many other engine developers are following suit.
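To illustrate the general shape of that (this is my own sketch, not Valve's actual design, and the subsystem names are again placeholders): independent parts of the update run on their own threads, and the main thread joins them all before rendering, so each frame still sees a consistent, fully updated world:

    // Independent parts of the update run genuinely in parallel on a
    // multi-core machine; joining them before rendering keeps the frame
    // consistent with a fully updated world.
    #include <thread>

    void updatePhysics(double /*dt*/) {}   // placeholder subsystem updates
    void updateAI(double /*dt*/) {}
    void updateParticles(double /*dt*/) {}
    void renderFrame() {}

    int main() {
        double dt = 0.016;                 // pretend ~16 ms since the last frame

        std::thread physics([dt] { updatePhysics(dt); });
        std::thread ai([dt] { updateAI(dt); });
        std::thread particles([dt] { updateParticles(dt); });

        physics.join();                    // synchronise before rendering...
        ai.join();
        particles.join();
        renderFrame();                     // ...so the frame sees a consistent world

        return 0;
    }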
Well, that turned out a little longer than I expected. I'm not a threading or games expert, but I hope I haven't made any especially glaring errors. If I have I'm sure people will correct me :)