How are you taking advantage of Multicore?

囚心锁ツ 2020-12-12 10:24

As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of multicore.

22 Answers
  • 2020-12-12 11:02

    I'm taking advantage of multicore using C, PThreads, and a home-brew implementation of Communicating Sequential Processes on an OpenVPX platform with Linux, using the PREEMPT_RT patch set's scheduler. It all adds up to nearly 100% CPU utilisation across multiple OS instances, with no CPU time used for data exchange between processor cards in the OpenVPX chassis, and very low latency too. We also use sFPDP to join multiple OpenVPX chassis together into a single machine. We are not using the Xeons' internal DMA, so as to relieve memory pressure inside the CPUs (DMA still uses memory bandwidth at the expense of the CPU cores). Instead we're leaving data in place and passing ownership of it around in a CSP way (so not unlike the philosophy of .NET's task-parallel data-flow library).
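
    To make the CSP style concrete, here is a minimal sketch (an illustration, not our home-brew library) of the "pass ownership, don't copy" idea in C with PThreads: a one-slot channel hands a buffer pointer from a producer thread to a consumer thread, so the data itself stays put and only ownership moves. The names chan_t, chan_send and chan_recv are invented for this example.

    /* Minimal one-slot CSP-style channel: ownership of a heap buffer is
     * handed from producer to consumer; the payload itself is never copied. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  not_empty, not_full;
        void           *slot;                 /* NULL means the channel is empty */
    } chan_t;

    static void chan_init(chan_t *c) {
        pthread_mutex_init(&c->lock, NULL);
        pthread_cond_init(&c->not_empty, NULL);
        pthread_cond_init(&c->not_full, NULL);
        c->slot = NULL;
    }

    /* Hand ownership of 'msg' to the receiver; blocks while the slot is full. */
    static void chan_send(chan_t *c, void *msg) {
        pthread_mutex_lock(&c->lock);
        while (c->slot != NULL)
            pthread_cond_wait(&c->not_full, &c->lock);
        c->slot = msg;
        pthread_cond_signal(&c->not_empty);
        pthread_mutex_unlock(&c->lock);
    }

    /* Take ownership of the next message; blocks until one is available. */
    static void *chan_recv(chan_t *c) {
        pthread_mutex_lock(&c->lock);
        while (c->slot == NULL)
            pthread_cond_wait(&c->not_empty, &c->lock);
        void *msg = c->slot;
        c->slot = NULL;
        pthread_cond_signal(&c->not_full);
        pthread_mutex_unlock(&c->lock);
        return msg;
    }

    static chan_t ch;

    static void *producer(void *arg) {
        (void)arg;
        for (int i = 0; i < 5; i++) {
            int *buf = malloc(sizeof *buf);   /* data stays where it was allocated */
            *buf = i * i;
            chan_send(&ch, buf);              /* producer gives up ownership here */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 5; i++) {
            int *buf = chan_recv(&ch);        /* consumer now owns the buffer */
            printf("got %d\n", *buf);
            free(buf);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, q;
        chan_init(&ch);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&q, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(q, NULL);
        return 0;
    }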

    1) Software Roadmap - we have pressure to maximise the use of real estate and available power. Making the very most of the latest hardware is essential.

    2) Software domain - effectively Scientific Computing

    3) What are we doing with existing code? Constantly breaking it apart and redistributing parts of it across threads so that each core is maxed out doing the most it possibly can without breaking our real-time requirement. New hardware means quite a lot of re-thinking (faster cores can do more in the given time, and we don't want them to be under-utilised). It's not as bad as it sounds - the core routines are very modular, so they are easily assembled into thread-sized lumps. Although we planned on taking control of thread affinity away from Linux, we've not yet managed to extract significant extra performance by doing so; Linux is pretty good at getting data and code in more or less the same place (there's a small affinity sketch at the end of this answer).

    4) In effect already there - total machine already adds up to thousands of cores

    5) Parallel computing is essential - it's a MISD system.

    If that sounds like a lot of work, it is. Some jobs require going whole hog on making the absolute most of available hardware and eschewing almost everything that is high level. We're finding that total machine performance is a function of CPU memory bandwidth, not of CPU core speed or L1/L2/L3 cache size.
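
    For anyone who wants to experiment with taking thread placement away from Linux, here is a small, hypothetical sketch using glibc's pthread_attr_setaffinity_np to pin one worker thread per core (again an illustration, not our code; the worker body is a placeholder):

    /* Pin one worker thread to each online core instead of letting the
     * Linux scheduler choose placement (glibc-specific, hence _GNU_SOURCE). */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        long core = (long)arg;
        /* ...a thread-sized lump of the core routines would run here... */
        printf("worker pinned to core %ld\n", core);
        return NULL;
    }

    int main(void) {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t tid[ncores];

        for (long c = 0; c < ncores; c++) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET((int)c, &set);            /* allow only CPU c for this thread */

            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setaffinity_np(&attr, sizeof set, &set);

            pthread_create(&tid[c], &attr, worker, (void *)c);
            pthread_attr_destroy(&attr);
        }
        for (long c = 0; c < ncores; c++)
            pthread_join(tid[c], NULL);
        return 0;
    }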

  • 2020-12-12 11:04

    I can now separate my main operating system from my development / "install whatever I like" OS using virtualisation setups with Virtual PC or VMware.

    Dual core means that one CPU runs my host OS, the other runs my development OS with a decent level of performance.

  • 2020-12-12 11:05

    So far, nothing more than more efficient compilation with make:

    gmake -j
    

    The -j option allows tasks that don't depend on one another to run in parallel; with no argument it starts as many jobs as it can, while -j N limits it to N parallel jobs.

  • 2020-12-12 11:06

    My graduate work is in developing concepts for doing bare-metal multicore work & teaching same in embedded systems.

    I'm also working a bit with F# to bring my facility with high-level, multiprocess-capable languages up to speed.

  • 2020-12-12 11:07

    I said some of this in answer to a different question (hope this is OK!): there is a concept/methodology called Flow-Based Programming (FBP) that has been around for over 30 years, and is being used to handle most of the batch processing at a major Canadian bank. It has thread-based implementations in Java and C#, although earlier implementations were fiber-based (C++ and mainframe Assembler).

    Most approaches to the problem of taking advantage of multicore involve trying to take a conventional single-threaded program and figure out which parts can run in parallel. FBP takes a different approach: the application is designed from the start in terms of multiple "black-box" components running asynchronously (think of a manufacturing assembly line). Since the interface between components is data streams, FBP is essentially language-independent, and therefore supports mixed-language applications and domain-specific languages.

    Applications written this way have been found to be much more maintainable than conventional, single-threaded applications, and often take less elapsed time, even on single-core machines.
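
    To give a flavour of the approach, here is a toy sketch in C using POSIX pipes and pthreads (my illustration only, not the Java/C# FBP implementations mentioned above): a three-component network - source, transform, sink - where each component is a black box that touches nothing but its input and output streams.

    /* Toy flow-based network: source -> upcase -> printer, each component a
     * thread, connected only by data streams (POSIX pipes). */
    #include <ctype.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Component: generates a stream of records. */
    static void *source(void *arg) {
        int out = *(int *)arg;
        const char *rows[] = { "alpha\n", "beta\n", "gamma\n" };
        for (int i = 0; i < 3; i++)
            write(out, rows[i], strlen(rows[i]));
        close(out);                           /* signals end of stream */
        return NULL;
    }

    /* Component: upper-cases whatever flows through; knows nothing of its peers. */
    static void *upcase(void *arg) {
        int *fds = arg;                       /* fds[0] = in, fds[1] = out */
        char c;
        while (read(fds[0], &c, 1) == 1) {
            c = (char)toupper((unsigned char)c);
            write(fds[1], &c, 1);
        }
        close(fds[0]);
        close(fds[1]);
        return NULL;
    }

    /* Component: sink that prints whatever arrives. */
    static void *printer(void *arg) {
        int in = *(int *)arg;
        char c;
        while (read(in, &c, 1) == 1)
            fputc(c, stdout);
        close(in);
        return NULL;
    }

    int main(void) {
        int a[2], b[2];                       /* the two streams of the network */
        pipe(a);
        pipe(b);

        int up_fds[2] = { a[0], b[1] };
        pthread_t t1, t2, t3;

        /* Wire the network: source -> upcase -> printer. */
        pthread_create(&t1, NULL, source, &a[1]);
        pthread_create(&t2, NULL, upcase, up_fds);
        pthread_create(&t3, NULL, printer, &b[0]);

        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_join(t3, NULL);
        return 0;
    }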

  • 2020-12-12 11:08

    For web applications it's very, very easy: ignore it. Unless you've got some code that really begs to be done in parallel you can simply write old-style single-threaded code and be happy.

    You usually have a lot more requests to handle at any given moment than you have cores. Since each one is handled in its own thread (or even process, depending on your technology), this is already working in parallel.

    The only place you need to be careful is when accessing some kind of global state that requires synchronization. Keep that to a minimum to avoid introducing artificial bottlenecks to an otherwise (almost) perfectly scalable world.
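
    As a trivial sketch (hypothetical names, not tied to any particular web stack): if each request runs in its own thread and the only shared state is, say, a hit counter, the synchronized section stays tiny and everything else scales with the cores.

    /* Each simulated request handler works on its own data; the one piece of
     * global state (a hit counter) is touched only inside a small mutex-guarded
     * critical section. */
    #include <pthread.h>
    #include <stdio.h>

    static long hits = 0;                              /* the only shared state */
    static pthread_mutex_t hits_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *handle_request(void *arg) {
        (void)arg;
        /* ...per-request work uses only local data, so it scales freely... */
        pthread_mutex_lock(&hits_lock);                /* keep this section tiny */
        hits++;
        pthread_mutex_unlock(&hits_lock);
        return NULL;
    }

    int main(void) {
        pthread_t workers[8];
        for (int i = 0; i < 8; i++)
            pthread_create(&workers[i], NULL, handle_request, NULL);
        for (int i = 0; i < 8; i++)
            pthread_join(workers[i], NULL);
        printf("served %ld requests\n", hits);
        return 0;
    }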

    So for me multi-core basically boils down to these items:

    • My servers have fewer "CPUs" while each one sports more cores (not much of a difference to me)
    • The same number of CPUs can sustain a larger number of concurrent users
    • When there seems to be a performance bottleneck that is not the result of the CPU being 100% loaded, that's an indication that I'm doing some bad synchronization somewhere