How do I get Windows to go as fast as Linux for compiling C++?

2020-12-22 15:14

I know this is not so much a programming question but it is relevant.

I work on a fairly large cross-platform project. On Windows I use VC++ 2008. On Linux I use gcc.

13 Answers
  • 2020-12-22 15:49

    I recently found another way to speed up compilation by about 10% on Windows using GNU make: replacing the MinGW bash.exe with the version from win-bash.

    (win-bash is not very comfortable for interactive editing, though.)
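
    If you would rather not overwrite MinGW's bash.exe, GNU make can also be pointed at an alternative shell explicitly. A minimal sketch, assuming win-bash is installed at a hypothetical C:/win-bash path:

    # tell GNU make to spawn win-bash for recipe lines instead of the MinGW bash (path is hypothetical)
    make SHELL=C:/win-bash/bash.exe -j4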

  • 2020-12-22 15:53

    Incremental linking

    If the VC++ 2008 solution is set up as multiple projects with .lib outputs, you need to set "Use Library Dependency Inputs"; this makes the linker link directly against the .obj files rather than the .lib files (and actually makes it link incrementally).

    Directory traversal performance

    It's a bit unfair to compare directory crawling on the original machine with crawling a newly created directory containing the same files on another machine. If you want an equivalent test, make another copy of the directory on the source machine. (It may still be slow, but that could be due to any number of things: disk fragmentation, short file names, background services, etc.) That said, I suspect the performance issue with dir /s has more to do with writing the output than with the traversal itself; even dir /s /b > nul is slow on my machine with a huge directory.
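
    For a like-for-like comparison you can also clone the tree onto the same volume and time a bare traversal of both copies; a rough PowerShell sketch (the paths are hypothetical):

    # lay down a fresh copy of the tree on the same volume, suppressing per-file output
    robocopy C:\src\chromium C:\src\chromium-copy /E /NFL /NDL /NJH /NJS
    # time a traversal of each copy, discarding the listing itself
    (Measure-Command { cmd /c "dir /s /b C:\src\chromium > nul" }).TotalSeconds
    (Measure-Command { cmd /c "dir /s /b C:\src\chromium-copy > nul" }).TotalSeconds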

  • 2020-12-22 15:54

    IMHO this is all about disk I/O performance. The order of magnitude suggests that a lot of the operations go to disk under Windows whereas they're handled in memory under Linux, i.e. Linux is caching better. Your best option under Windows is to move your files onto a fast disk, server or filesystem. Consider buying a solid-state drive or moving your files to a ramdisk or a fast NFS server.

    I ran the directory traversal tests and the results are very close to the compilation times reported, suggesting this has nothing to do with CPU processing times or compiler/linker algorithms at all.

    Measured times, as suggested above, for traversing the Chromium directory tree:

    • Windows 7 Home Premium (8 GB RAM) on NTFS: 32 seconds
    • Ubuntu 11.04 Linux (2 GB RAM) on NTFS: 10 seconds
    • Ubuntu 11.04 Linux (2 GB RAM) on ext4: 0.6 seconds

    For the tests I pulled the Chromium sources (on both Windows and Linux):

    git clone http://github.com/chromium/chromium.git 
    cd chromium
    git checkout remotes/origin/trunk 
    

    To measure the time I ran:

    ls -lR > ../list.txt ; time ls -lR > ../list.txt    # bash: the first run warms the cache, the second is timed
    dir -Recurse > ../list.txt ; (Measure-Command { dir -Recurse > ../list.txt }).TotalSeconds    # PowerShell: same pattern
    

    I did turn off access timestamps and my virus scanner, and increased the cache manager settings under Windows (>2 GB RAM) - all without any noticeable improvement. The fact of the matter is that, out of the box, Linux performed 50x better than Windows with a quarter of the RAM.
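
    For reference, disabling last-access timestamps is a one-liner on both systems; a sketch (the Linux mount point is hypothetical):

    # Windows, from an elevated prompt: stop NTFS from updating last-access times
    fsutil behavior set disablelastaccess 1
    # Linux: remount the filesystem holding the tree with noatime
    sudo mount -o remount,noatime /mnt/src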

    For anybody who wants to contend that the numbers are wrong - for whatever reason - please give it a try and post your findings.

  • 2020-12-22 15:55

    I'm pretty sure it's related to the filesystem. I work on a cross-platform project for Linux and Windows where all the code is common except where platform-dependent code is absolutely necessary. We use Mercurial, not git, so the "Linuxness" of git doesn't apply. Pulling in changes from the central repository takes forever on Windows compared to Linux, though I have to say that our Windows 7 machines do a lot better than the Windows XP ones. Compiling the code after that is even worse with VS 2008. It's not just hg; CMake runs a lot slower on Windows as well, and both of these tools use the file system more than anything else.

    The problem is so bad that most of our developers that work in a Windows environment don't even bother doing incremental builds anymore - they find that doing a unity build instead is faster.

    Incidentally, if you want to dramatically cut compile times on Windows, I'd suggest the aforementioned unity build. It's a pain to implement correctly in the build system (I did it for our team in CMake), but once done it automagically speeds things up for our continuous integration servers. Depending on how many binaries your build system spits out, you can get 1 to 2 orders of magnitude of improvement. Your mileage may vary. In our case I think it sped up the Linux builds threefold and the Windows one by about a factor of 10, but we have a lot of shared libraries and executables (which decreases the advantage of a unity build).
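
    The core of a unity build is trivial; the hard part is the build-system plumbing that generates and maintains it. A minimal sketch of the idea with hypothetical file names (a CMake integration like the one mentioned above essentially automates this):

    # generate a single translation unit that #includes the individual .cpp files
    for f in widget.cpp parser.cpp network.cpp; do
        echo "#include \"$f\""
    done > unity.cpp
    # compile the one unity TU instead of three separate ones
    g++ -c unity.cpp -o unity.o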

  • 2020-12-22 15:59

    The issue with Visual C++ is, as far as I can tell, that it is not a priority for the compiler team to optimize this scenario. Their solution is for you to use their precompiled header feature. This is what Windows-specific projects have done. It is not portable, but it works.
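
    For reference, this is roughly what the precompiled-header setup looks like on the command line, run from a shell with the VC++ environment set up (vcvarsall.bat or equivalent); the stdafx file names follow the usual VC++ convention and are not taken from the question:

    # compile stdafx.cpp (which just #includes "stdafx.h") once to produce the PCH
    cl /nologo /c /Yc"stdafx.h" /Fp"project.pch" stdafx.cpp
    # every other translation unit #includes "stdafx.h" first and reuses the same .pch
    cl /nologo /c /Yu"stdafx.h" /Fp"project.pch" widget.cpp parser.cpp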

    Furthermore, on Windows you typically have virus scanners, as well as system restore and search tools, that can ruin your build times completely if they monitor your build folder for you. The Windows 7 Resource Monitor can help you spot this. I have a reply here with some further tips for optimizing VC++ build times, if you're really interested.
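
    If the scanner in question happens to be Windows Defender on a recent Windows, the build tree and build tools can be excluded from real-time scanning from an elevated PowerShell prompt; a sketch with a hypothetical path:

    # exclude the build directory and the compiler/linker processes from real-time scanning
    Add-MpPreference -ExclusionPath "C:\dev\myproject"
    Add-MpPreference -ExclusionProcess "cl.exe", "link.exe"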

  • 2020-12-22 16:02

    I personally found that running a Windows virtual machine on Linux removed a great deal of the I/O slowness of Windows, likely because the Linux host was doing lots of caching that Windows itself was not.

    Doing that, I was able to speed up the compile times of a large (250 kLOC) C++ project I was working on from something like 15 minutes to about 6 minutes.
