Why not use GPUs as CPUs?

情歌与酒 2021-02-01 20:28

I know this question is only partially programming-related, because the answer I would like to get originally comes from these two questions:

Why is the number of CPU cores so low?

4 Answers
  •  予麋鹿 (OP) · 2021-02-01 21:27

    Current GPUs lack many of the facilities of a modern CPU that are generally considered important (crucial, really) to things like an OS.

    Just as an example, an OS normally uses virtual memory and paging to manage processes. Paging allows the OS to give each process its own address space, (almost) completely isolated from every other process. At least based on publicly available information, most GPUs don't support paging at all (or at least not in the way an OS needs).
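
    To make the isolation point concrete, here is a minimal host-side sketch of what paging buys an OS (plain POSIX host code, no GPU involved; the values and layout are illustrative only, assuming a Linux/POSIX system): after fork(), parent and child write through the same virtual address, yet each sees only its own copy of the page.

    ```
    // Host-only illustration: paging lets the OS give the forked child its own
    // copy of the parent's address space, so writes through the same virtual
    // address land on different physical pages.
    #include <cstdio>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        int value = 1;                 // lives at some virtual address in this process
        pid_t pid = fork();            // OS duplicates the address space (copy-on-write)
        if (pid == 0) {                // child process
            value = 42;                // modifies only the child's copy of the page
            std::printf("child:  value=%d at %p\n", value, (void *)&value);
            return 0;
        }
        waitpid(pid, nullptr, 0);      // let the child finish first
        std::printf("parent: value=%d at %p\n", value, (void *)&value);  // still 1, same address
        return 0;
    }
    ```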

    GPUs also operate at much lower clock speeds than CPUs, so they only provide high performance for embarrassingly parallel problems; CPUs generally provide much higher performance for single-threaded code. Most of the code in an OS isn't highly parallel -- in fact, a lot of it is quite difficult to make parallel at all (e.g., for years, Linux used a giant lock to ensure only one thread executed most kernel code at any given time). For this kind of task, a GPU would be unlikely to provide any benefit.
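
    As a rough illustration of that single-thread point, here is a hedged CUDA C++ sketch (the constants and iteration count are arbitrary and the timing is only indicative): the same dependency-carried loop runs once on one CPU thread and once in a single GPU thread, which is roughly the position inherently serial kernel-style code would be in. On typical hardware the lone GPU thread is several times slower, and launching more threads cannot help because each iteration needs the previous result.

    ```
    #include <chrono>
    #include <cstdio>
    #include <cuda_runtime.h>

    // A serial recurrence: iteration i depends on iteration i-1, so there is
    // nothing to parallelize. Runs on both host and device.
    __host__ __device__ unsigned long long chain(unsigned long long n) {
        unsigned long long x = 1;
        for (unsigned long long i = 0; i < n; ++i)
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;  // LCG step
        return x;
    }

    __global__ void chain_kernel(unsigned long long n, unsigned long long *out) {
        *out = chain(n);   // a single GPU thread grinding through the same serial loop
    }

    int main() {
        const unsigned long long n = 100000000ULL;   // big enough to measure

        unsigned long long *d_out = nullptr, gpu = 0;
        cudaMalloc(&d_out, sizeof gpu);
        chain_kernel<<<1, 1>>>(1, d_out);            // warm-up: pay context/launch setup up front
        cudaDeviceSynchronize();

        auto t0 = std::chrono::steady_clock::now();
        unsigned long long cpu = chain(n);           // one CPU thread
        auto t1 = std::chrono::steady_clock::now();

        chain_kernel<<<1, 1>>>(n, d_out);            // one block, one thread: no parallelism to exploit
        cudaMemcpy(&gpu, d_out, sizeof gpu, cudaMemcpyDeviceToHost);  // copy back; synchronizes
        auto t2 = std::chrono::steady_clock::now();
        cudaFree(d_out);

        using ms = std::chrono::duration<double, std::milli>;
        std::printf("CPU thread: %llu  (%.1f ms)\n", cpu, ms(t1 - t0).count());
        std::printf("GPU thread: %llu  (%.1f ms)\n", gpu, ms(t2 - t1).count());
        return 0;
    }
    ```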

    From a programming viewpoint, a GPU is a mixed blessing (at best). People have spent years working on programming models to make programming a GPU even halfway sane, and even so it's much more difficult (in general) than CPU programming. Given the difficulty of getting even relatively trivial things to work well on a GPU, I can't imagine attempting to write anything even close to as large and complex as an operating system to run on one.
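
    For a sense of what "relatively trivial" means in practice, here is a hedged CUDA C++ sketch of adding two arrays (error handling omitted; the sizes and launch geometry are arbitrary). The CPU version is a three-line loop; the GPU version needs a separate device memory space, explicit copies in both directions, a launch configuration, and a kernel.

    ```
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per GPU thread
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        float *da, *db, *dc;                              // device-side copies of the arrays
        cudaMalloc(&da, n * sizeof(float));
        cudaMalloc(&db, n * sizeof(float));
        cudaMalloc(&dc, n * sizeof(float));
        cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        int threads = 256, blocks = (n + threads - 1) / threads;
        vec_add<<<blocks, threads>>>(da, db, dc, n);      // explicit launch geometry
        cudaMemcpy(c.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);

        std::printf("c[0] = %.1f, c[n-1] = %.1f\n", c[0], c[n - 1]);  // expect 3.0, 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }
    ```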
