What is the difference between kernel threads and user threads?

长发绾君心 2021-02-01 10:38

What is the difference between kernel threads and user threads? Is it that kernel threads are scheduled and executed in kernel mode? What are the techniques used for creating kernel threads?

4 Answers
  •  暖寄归人
    2021-02-01 11:02

    A kernel thread is a thread that the kernel is responsible for scheduling. This means, among other things, that the kernel can schedule different threads on different CPUs/cores at the same time.

    How to use them varies a lot with programming languages and threading APIs, but as a simple illustration,

    void task_a();
    void task_b();
    int main() {
        new_thread(task_a);
        new_thread(task_b);
        // possibly do something else in the main thread
        // wait for the threads to complete their work
    }
    

    In every implementation I am familiar with, the kernel may pause them at any time. ("pre-emptive")
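    As a concrete sketch of the illustration above, in C++ the same shape maps onto std::thread, which is backed by a kernel thread on mainstream platforms (task_a and task_b here are placeholder workloads):

    #include <iostream>
    #include <thread>

    void task_a() { std::cout << "task_a running\n"; }
    void task_b() { std::cout << "task_b running\n"; }

    int main() {
        // Each std::thread is typically backed by a kernel thread, so the
        // kernel may run them on different cores and pre-empt them freely.
        std::thread a(task_a);
        std::thread b(task_b);
        // possibly do something else in the main thread
        a.join();  // wait for the threads to complete their work
        b.join();
    }

    Note that because the kernel schedules both threads, the order of the two output lines is not guaranteed.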

    User threads, or "User scheduled threads", make the program itself responsible for switching between them. There are many ways of doing this and correspondingly there is a variety of names for them.

    On one end you have "green threads", which basically try to do the same thing kernel threads do, just scheduled in user space. As a result you keep all the complications of programming with real threads.

    On the opposite end, you have "Fibers", which are required to yield before any other fiber gets run. This means

    • The fibers run sequentially. There are no parallel performance gains to be had.
    • The interactions between fibers are very well defined. Other code runs only at the exact points where you yield, so other code won't be changing variables while you're working on them.
    • Most of the low-level complexities programmers struggle with in multithreading, such as cache coherency (looking at MT questions on this site, most people don't get that), are not a factor.

    As the simplest example of fibers I can think of:

    while(tasks_not_done) {
        do_part_of_a();
        do_part_of_b();
    }
    

    where each call does some work, then returns when that part is done. Note that these run sequentially on the same hardware thread, meaning you do not get a performance increase from parallelism. On the other hand, interactions between them are very well defined, so you don't have race conditions. How each function actually works can vary; they could also be "user thread objects" stepped from some vector/array.
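    A minimal runnable sketch of that last variant, assuming a hypothetical Task type whose step() does one slice of work and returns true while work remains; the while loop plays the role of a trivial round-robin scheduler:

    #include <iostream>
    #include <vector>

    // Hypothetical "user thread object": each step() call does one unit of
    // the task's work, then yields control simply by returning.
    struct Task {
        const char* name;
        int remaining;  // units of work left for this task
        bool step() {
            std::cout << name << " did one unit\n";
            return --remaining > 0;
        }
    };

    int main() {
        std::vector<Task> tasks{{"a", 2}, {"b", 3}};
        bool tasks_not_done = true;
        while (tasks_not_done) {  // round-robin over the still-unfinished tasks
            tasks_not_done = false;
            for (auto& t : tasks)
                if (t.remaining > 0 && t.step())
                    tasks_not_done = true;
        }
    }

    Everything runs on one hardware thread, and a task can only be interrupted at the points where step() returns, which is exactly the well-defined interaction the bullet points above describe.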
