[Israel.pm] catching segmentation faults and other crashes

Mikhael Goikhman migo at homemail.com
Mon Nov 12 03:58:26 PST 2007


On 12 Nov 2007 12:05:40 +0200, Tal Kelrich wrote:
> 
> On Mon, 12 Nov 2007 09:53:24 +0000
> Mikhael Goikhman <migo at homemail.com> wrote:
> 
> > On 12 Nov 2007 09:51:04 +0200, Yona Shlomo wrote:
> > > 
> > > Can you recommend a way to catch the crash of the tool,
> > > despite the fact that it still emits some (possibly good)
> > > output?
> > 
> > Run these commands, one with and one without internal segfault:
> > 
> >   perl -e 'print qx(perl -e "\$| = 1; print qq(output\n); 1 && dump()"); print "Core dumped\n" if $? & 128'
> > 
> >   perl -e 'print qx(perl -e "\$| = 1; print qq(output\n); 0 && dump()"); print "Core dumped\n" if $? & 128'
> > 
> > You should remember to remove core files if any (named like core or
> > perl.core or core.12345, depending on the OS).
> 
> You should really be using the POSIX::W* macros; there's no
> guarantee that $? bit twiddling will get you the right results.

This is generally true, except that "man POSIX" says: "Core dumping is
not a portable concept, so there's no portable way to test for that."
So this would not help Shlomo.

I see that at least on NetBSD 3.x the check "$? & 128" does not work,
and "ulimit -c" or "limit coredumpsize" does not affect the $? value at
all (it is the same regardless of whether a core file was created). On
these systems I don't know of a way to check for a core dump, except to
check for the existence of an "application.core" file.

Of course, if Shlomo just wanted to detect abnormal exit rather than a
core dump, then checking for a non-zero $? should be portable enough.

Regards,
Mikhael.

-- 
perl -e 'print+chr(64+hex)for+split//,d9b815c07f9b8d1e'


