I am setting ulimit -c unlimited, and in the C++ program we do:

struct rlimit corelimit;                          // from <sys/resource.h>
if (getrlimit(RLIMIT_CORE, &corelimit) == 0) {
    corelimit.rlim_cur = corelimit.rlim_max;      // raise the soft core limit to the hard limit
    setrlimit(RLIMIT_CORE, &corelimit);
}
If you are using coredumpctl, a possible solution is to edit /etc/systemd/coredump.conf and increase ProcessSizeMax and ExternalSizeMax:
[Coredump]
#Storage=external
#Compress=yes
ProcessSizeMax=20G
ExternalSizeMax=20G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
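
To confirm the new limits are honored, re-run the crashing program and pull the dump back out with coredumpctl (the PID 1234 and the output filename below are placeholders):

coredumpctl list                      # show crashes recorded by systemd-coredump
coredumpctl info 1234                 # details for the crash of PID 1234
coredumpctl dump 1234 -o core.myapp   # write the full core out to a file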
Hard limits and soft limits have some specifics to them that can be a little hairy, and changes made with ulimit last only for the current shell session.
To make the limit sizes persist across logins, there is a file you can edit (typically /etc/security/limits.conf on Linux), although there is probably a corresponding sysctl command that will do the same.
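
As a sketch, assuming a PAM-based Linux system that reads /etc/security/limits.conf at login, the persistent equivalent of ulimit -c unlimited would be:

# /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
*            soft     core     unlimited
*            hard     core     unlimited

New limits take effect at the next login, not in already-running shells.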
A similar issue happened when I killed the program manually with kill -3 (SIGQUIT). It happened simply because I did not wait long enough for the core file to finish being written.
Make sure that the file has stopped growing in size, and only then open it.
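
A simple way to wait is to poll the file size until it stops changing; core.1234 below is a placeholder for the actual core file name:

prev=-1
size=$(stat -c %s core.1234)           # current size in bytes (GNU stat)
while [ "$size" != "$prev" ]; do
    prev=$size
    sleep 1
    size=$(stat -c %s core.1234)
done
# the core file has stopped growing; it should be safe to open now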
I remember there is a hard limit, which can be set by the administrator, and a soft limit, which is set by the user. The soft limit cannot be raised above the hard limit, so the hard limit is the effective cap. I'm not sure this is valid for every shell though; I only know it from bash.
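
In bash, the two values can be inspected and changed separately with the -H and -S flags:

ulimit -Hc            # print the hard limit on core file size
ulimit -Sc            # print the soft limit (the one enforced on a crash)
ulimit -Sc unlimited  # raise the soft limit; fails if it would exceed the hard limit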
I had the same problem with core files getting truncated.
Further investigation showed that ulimit -f (a.k.a. the file size limit, RLIMIT_FSIZE) also affects core files, so check that this limit is also unlimited or suitably high. [I saw this on Linux kernel 3.2.0 / Debian wheezy.]
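
So it is worth checking both limits in the shell that launches the program:

ulimit -c             # core file size limit (RLIMIT_CORE)
ulimit -f             # file size limit (RLIMIT_FSIZE); it also capped core files here
ulimit -c unlimited
ulimit -f unlimited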
This solution works when the automated bug reporting tool (abrt) is used.
After I tried everything that was already suggested (nothing helped), I found one more setting, which affects dump size, in /etc/abrt/abrt.conf:

MaxCrashReportsSize = 5000

I increased its value (it is specified in megabytes), restarted the abrt daemon with sudo service abrtd restart, re-ran the crashing application, and got a full core dump file.
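
If it helps with locating the result: on the systems I have seen, abrt writes its crash directories under /var/spool/abrt, but that path is an assumption and varies between abrt versions:

# /var/spool/abrt is an assumed default; older versions used other locations
ls -lh /var/spool/abrt/*/coredump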