Re: [Off Topic] how to fix Mandrake's coredump prob.

From: Patrick Dughi (
Date: 03/12/00

> Didn't seem to work that way on my server.
> Mine was set to 0 for one reason or another, you just had to be careful
> not to exceed the max core size or it'd set it to 0 (I did 14Mb, which was
> a bit below the max). Sort of a decimal search process to find the optimum
> value. *shrugs*
> Could certainly be doing something weird on yours, I suppose, but unless
> the server admin WANTS people to be unable to get cores from the programs
> there's no reason for it to be set up that way by default.
        For some reason, the Red Hat distribution handled this in a funky
way.  It took me a while, and I eventually had to go kernel diving to
prove to myself that I wasn't going insane.  In the end, this is what had
to happen:

        1. In /etc/pam.d, edit the login file and add this line to it:
session    required     /lib/security/
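The module name in the line above is truncated in the archive.  On stock
Red Hat systems the limits module ships as pam_limits.so, so the full
line presumably looked like this (the exact path is an assumption):

```
session    required     /lib/security/pam_limits.so
```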

        2. In /etc/security/limits.conf, add:
username         hard    core            1000000
           ... where 1000000 is the max coredump size.
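A note on units and types, in case it helps: pam_limits reads the core
value in kilobytes, and a `soft` entry can be added alongside the `hard`
one so users get a working default without touching their shell rc files
(a sketch using the standard limits.conf column layout):

```
#<domain>    <type>   <item>   <value>
username      soft     core     1000000
username      hard     core     1000000
```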

        3. In either the systemwide profile, or the individual
.login/.cshrc/.kshrc/etc., add the setting for the coredump size (the
default is 0, but a user _can_ modify it upwards).  This will be
something like:

ulimit -Sc 1000000      (bash/ksh)
limit core 1000000      (csh/tcsh)

        .... this seems to only work _well_ from bash, and once the soft
limit is raised from 0 it is subject to the normal restrictions: it can
only be raised as far as the hard limit set in limits.conf.
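To check what actually took effect, the soft and hard limits can be
queried separately; this is a quick sanity check from bash (lowering the
soft limit is always allowed, so the last line works for any user):

```shell
# Show the current soft and hard core-dump limits (bash syntax;
# bash reports these in 1024-byte blocks).
ulimit -Sc   # soft limit -- what the kernel enforces right now
ulimit -Hc   # hard limit -- the ceiling the soft limit may be raised to

# Lowering the soft limit (e.g. to disable core dumps entirely) is
# always permitted, regardless of the hard limit.
ulimit -Sc 0
```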

        Or you could just stick some code into the kernel's getrlimit
functions to always return a usable core limit for a particular UID.  It
does get around this whole mess ;)


     | Ensure that you have read the CircleMUD Mailing List FAQ:  |
     |  |

This archive was generated by hypermail 2b30 : 04/10/01 PDT