Practical limitations on amount of constexpr computation


Question


As an experiment, I just put together some code to generate a std::array<uint32_t, 256> at compile time. The table contents themselves are a fairly typical CRC lookup table - about the only new thing is the use of constexpr functions to calculate the entries as opposed to putting an autogenerated magic table directly in the source code.
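For reference, here is a minimal sketch of that kind of generator, assuming a C++17 compiler (std::array's non-const operator[] is only constexpr from C++17 on) and the common reflected CRC-32 polynomial 0xEDB88320; the helper names are illustrative, not taken from the original code:

    #include <array>
    #include <cstdint>

    // Hypothetical per-entry helper: eight rounds of the reflected
    // CRC-32 polynomial 0xEDB88320 applied to the byte value `index`.
    constexpr std::uint32_t crc32_entry(std::uint32_t index) {
        std::uint32_t value = index;
        for (int bit = 0; bit < 8; ++bit)
            value = (value & 1u) ? (0xEDB88320u ^ (value >> 1)) : (value >> 1);
        return value;
    }

    // Builds the whole 256-entry table in a constexpr context (C++17,
    // where std::array::operator[] is usable here).
    constexpr std::array<std::uint32_t, 256> make_crc32_table() {
        std::array<std::uint32_t, 256> table{};
        for (std::uint32_t i = 0; i < 256; ++i)
            table[i] = crc32_entry(i);
        return table;
    }

    constexpr auto crc32_table = make_crc32_table();

    // Forces compile-time evaluation and checks one well-known entry.
    static_assert(crc32_table[1] == 0x77073096u, "unexpected CRC-32 table entry");

The static_assert forces the whole table to be evaluated at compile time and sanity-checks a single known entry.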

Anyway, this exercise got me curious: are there any practical limitations on the amount of computation a compiler is willing to do when evaluating a constexpr function or variable definition at compile time? For example, something analogous to GCC's -ftemplate-depth parameter, which places a practical limit on the depth of template metaprogramming evaluation. (I also wonder whether there are practical limitations on the length of a parameter pack, which would limit the size of a compile-time std::array built through a std::integer_sequence intermediate object; a sketch of that approach follows below.)
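For completeness, a sketch of the std::integer_sequence route mentioned in the parenthetical, assuming C++14; crc32_entry is the same hypothetical helper as above, and the pack Is... expands to 256 elements, which is exactly the pack length the question worries about:

    #include <array>
    #include <cstdint>
    #include <utility>

    // Hypothetical per-entry helper, same as in the earlier sketch.
    constexpr std::uint32_t crc32_entry(std::uint32_t index) {
        std::uint32_t value = index;
        for (int bit = 0; bit < 8; ++bit)
            value = (value & 1u) ? (0xEDB88320u ^ (value >> 1)) : (value >> 1);
        return value;
    }

    // Expanding the pack produces a braced-init-list with 256 entries,
    // so the array is built in a single return statement.
    template <std::uint32_t... Is>
    constexpr std::array<std::uint32_t, sizeof...(Is)>
    make_crc32_table(std::integer_sequence<std::uint32_t, Is...>) {
        return {{ crc32_entry(Is)... }};
    }

    constexpr auto crc32_table =
        make_crc32_table(std::make_integer_sequence<std::uint32_t, 256>{});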


Answer 1:


Recommendations for such can be found in [implimits] ¶2:

(2.35)   —   Recursive constexpr function invocations [512]

(2.36)   —   Full-expressions evaluated within a core constant expression [1 048 576]

GCC and Clang allow adjustment via -fconstexpr-depth (which is the flag you were looking for).
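As a rough illustration of where that limit bites, a linearly recursive constexpr function needing more than 512 nested calls is rejected under the default setting on GCC and Clang, but compiles once the flag is raised (the function name and depth below are made up for the example):

    #include <cstdint>

    // Each step performs exactly one nested constexpr call, so evaluating
    // count_down(1000) needs a recursion depth of about 1000.
    constexpr std::uint64_t count_down(std::uint64_t n) {
        return n == 0 ? 0 : 1 + count_down(n - 1);
    }

    // Rejected with the default -fconstexpr-depth=512; accepted with e.g.
    //   g++ -std=c++14 -fconstexpr-depth=2048 depth.cpp
    constexpr std::uint64_t deep = count_down(1000);
    static_assert(deep == 1000, "depth limit was high enough");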

Constant expression evaluation effectively runs in a sandbox, because undefined behavior must be caught by the implementation. With that in mind, I don't see why an implementation couldn't use the full resources of the host machine. Then again, I wouldn't recommend writing programs whose compilation requires gigabytes of memory or other unreasonable resources...



Source: https://stackoverflow.com/questions/38060007/practical-limitations-on-amount-of-constexpr-computation
