
On Mon, 13 Oct 2014, Erik Christiansen <dvalin@internode.on.net> wrote:
"By default, systemd saves core dumps to the journal, instead of the file system. Core dumps must be explicitly queried using coredumpctl4"
I doubt that would be a mandatory feature. While it sounds like an extreme thing to do, there are many systems out there which have a problem managing core dumps. It's a particular problem when running proprietary software.

Some years ago, when 9G SCSI hard drives were common in servers (and 46G was the biggest hard drive I owned), I managed a number of Solaris systems that had CA Unicenter installed. Unicenter was total rubbish, like all CA software. I wrote a script to gather the core dumps from the ~10 servers that ran Unicenter, and it collected about 500M of core dumps per day. The volume of core files was great enough that systems were at risk of running out of disk space because of them.

If I were to run low quality proprietary software on a number of Linux servers then it would be useful to send core dumps to the systemd journal and then limit the journal size to something that won't cause problems.
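As a rough sketch of how that could be set up (assuming the standard systemd-coredump and journald configuration options; the size limits below are illustrative, not recommendations):

  # /etc/systemd/coredump.conf
  [Coredump]
  Storage=journal        # keep core dumps in the journal, not the file system
  JournalSizeMax=100M    # cap the size of any single core saved in the journal
  ProcessSizeMax=1G      # don't process cores from processes larger than this

  # /etc/systemd/journald.conf
  [Journal]
  SystemMaxUse=500M      # cap the total disk space the journal may use

The saved dumps can then be inspected with coredumpctl, e.g. "coredumpctl list" to see what crashed, and "coredumpctl dump <PID> -o core" (with a PID from the list output) to extract a single core to a file for debugging.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/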