Mailing List Archive

NETaa14561 patched
Here's a patch against perl5.001m for

NETaa14561 N 1 k $@ vs DESTROY

This prevents mortal OBJECTs (with user-defined destructors) created in the
last statement in an eval block from clobbering the eval's $@.
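
For illustration, a minimal sketch of the failure mode being described (the
class name Widget is made up here; on a perl where this is fixed, $@ keeps
the eval's own error):

    package Widget;
    sub new     { bless {}, shift }
    sub DESTROY { }                  # any user-defined destructor takes this path

    package main;
    eval {
        my $w = Widget->new;         # object destroyed while the eval unwinds
        die "boom\n";                # the error $@ ought to report
    };
    print $@ ? "\$\@ = $@" : "\$\@ was clobbered by DESTROY\n";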

This will also trap a fatal error inside DESTROY's (but only display the
error message as a mandatory warning).

[I have attempted a feeble optimization (using hv_fetch() instead of a
gv_fetchpv()), but I think the gv_fetchmethod() and perl_call_sv()
would be far more expensive...]

- Sarathy.
gsar@engin.umich.edu
------------------------------------8<------------------------------
*** sv.c.dist Mon Sep 4 23:02:45 1995
--- sv.c Mon Sep 4 23:02:58 1995
***************
*** 2112,2117 ****
--- 2112,2126 ----
SAVEFREESV(SvSTASH(sv));
if (destructor && GvCV(destructor)) {
SV ref;
+ SV *errsv = Nullsv;
+ SV *saverr = Nullsv;
+ GV **gvp = (GV**)hv_fetch(defstash, "@", 1, TRUE);
+ if (gvp && *gvp && (SvTYPE(*gvp) == SVt_PVGV)) {
+ SvMULTI_on(*gvp);
+ errsv = GvSV(*gvp);
+ }
+ if (SvTRUE(errsv)) /* save previous $@ */
+ saverr = newSVsv(errsv);

Zero(&ref, 1, SV);
sv_upgrade(&ref, SVt_RV);
***************
*** 2124,2129 ****
--- 2133,2144 ----
PUSHs(&ref);
PUTBACK;
perl_call_sv((SV*)destructor, G_DISCARD|G_EVAL);
+ if (SvTRUE(errsv))
+ warn("Trapped error in DESTROY: %s", SvPVx(errsv, na));
+ if (saverr != Nullsv) {
+ sv_setsv(errsv, saverr);
+ sv_free(saverr);
+ }
del_XRV(SvANY(&ref));
}
LEAVE;
Re: NETaa14561 patched
> From: Gurusamy Sarathy <gsar@engin.umich.edu>
>
> Here's a patch against perl5.001m for
>
> NETaa14561 N 1 k $@ vs DESTROY
>
> This prevents mortal OBJECTs (with user-defined destructors) created in the
> last statement in an eval block from clobbering the eval's $@.
>
> This will also trap a fatal error inside DESTROY's (but only display the
> error message as a mandatory warning).
>
> [I have attempted a feeble optimization (using hv_fetch() instead of a
> gv_fetchpv()), but I think the gv_fetchmethod() and perl_call_sv()
> would be far more expensive...]
>
> - Sarathy.
> gsar@engin.umich.edu
> ------------------------------------8<------------------------------
> *** sv.c.dist Mon Sep 4 23:02:45 1995
> --- sv.c Mon Sep 4 23:02:58 1995
> ***************
> *** 2112,2117 ****
> --- 2112,2126 ----
> SAVEFREESV(SvSTASH(sv));
> if (destructor && GvCV(destructor)) {
> SV ref;
> + SV *errsv = Nullsv;
> + SV *saverr = Nullsv;
> + GV **gvp = (GV**)hv_fetch(defstash, "@", 1, TRUE);
> + if (gvp && *gvp && (SvTYPE(*gvp) == SVt_PVGV)) {
> + SvMULTI_on(*gvp);
> + errsv = GvSV(*gvp);
> + }
> + if (SvTRUE(errsv)) /* save previous $@ */
> + saverr = newSVsv(errsv);
>
> Zero(&ref, 1, SV);
> sv_upgrade(&ref, SVt_RV);
> ***************
> *** 2124,2129 ****
> --- 2133,2144 ----
> PUSHs(&ref);
> PUTBACK;
> perl_call_sv((SV*)destructor, G_DISCARD|G_EVAL);
> + if (SvTRUE(errsv))
> + warn("Trapped error in DESTROY: %s", SvPVx(errsv, na));
> + if (saverr != Nullsv) {
> + sv_setsv(errsv, saverr);
> + sv_free(saverr);
> + }
> del_XRV(SvANY(&ref));
> }
> LEAVE;
>
Umm, as written errsv may be null so SvTRUE(errsv) would dump core.

I'm rather worried about the cost of this, especially during global
destruction. (I think gv_fetchpv("@",TRUE, SVt_PV) should be stored
in a 'global' like envgv, siggv, incgv etc.)

It would be better to place a fix at the point where an error is
created/stored and so only pay a price if used. It would also be
better to add error text arising from DESTROY's to $_ rather than
have it 'leak out the side'. Note the way that pp_die adds
"\t...propagated" and an earlier patch of mine in die_where inserts
the text into $_ rather than overwriting it.
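
For reference, the "\t...propagated" behaviour mentioned above can be seen
from plain Perl: a bare die with $@ already holding an error re-raises that
error and appends to it instead of replacing it.

    eval { die "first failure\n" };
    die;        # re-raises $@, appending "\t...propagated at <file> line <n>.\n"

    # On STDERR this prints something like:
    #   first failure
    #           ...propagated at foo.pl line 2.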

In fact I think die_where is probably what needs to be worked on.
It's even possible that just deleting the line:

sv_setpv(errsv, ""); /* clear $@ before destroying */

will fix the problem!

Tim.
Re: NETaa14561 patched
On Tue, 05 Sep 1995 14:09:59 BST, Tim Bunce wrote:
>
>> From: Gurusamy Sarathy <gsar@engin.umich.edu>
>>
>> Here's a patch against perl5.001m for
>>
>> NETaa14561 N 1 k $@ vs DESTROY
>>
>
>Umm, as written errsv may be null so SvTRUE(errsv) would dump core.
>

Huh? I don't see how. SvTRUE is defined to return 0 if !sv both in
the macro and function versions.

#define SvTRUE(sv) ( \
    !sv \
    ? 0 \
    : .... etc

>I'm rather worried about the cost of this, especially during global
>destruction. (I think gv_fetchpv("@",TRUE, SVt_PV) should be stored
>in a 'global' like envgv, siggv, incgv etc.)
>

Making a global "errsv" and eliminating the gv_fetchpv("@",...) cost
is definitely a good idea, but I went for the minimal modification
philosophy. Besides, the gv_fetchmethod() and perl_call_sv() are a couple
of orders of magnitude more expensive than the equivalent of gv_fetchpv()
in this particular case.

>It would be better to place a fix at the point where an error is
>created/stored and so only pay a price if used. It would also be
>better to add error text arising from DESTROY's to $_ rather than
>have it 'leak out the side'. Note the way that pp_die adds
>"\t...propagated" and an earlier patch of mine in die_where inserts
>the text into $_ rather than overwriting it.
>

I think I agree with this. The handling of DESTROYs can be much improved:

* work out a scheme for caching the DESTROY sub. This can
possibly benefit from caching of methods in general.

* stop abusing G_EVAL for perl internals and make up another
flag that does a clean call of the cv (without trampling on $@ etc),
but one that also takes care of the setjmp/longjmp issues that we
seem to depend on G_EVAL for now.

Nested evals also seem to need some work.

>In fact I think die_where is probably what needs to be worked on.
>It's even possible that just deleting the line:
>
> sv_setpv(errsv, ""); /* clear $@ before destroying */
>
>will fix the problem!

That was one of the first things I tried. What really happens is the errsv
is reset on _entry_ into perl_call_sv() (like it must), so there really is
no easy way around it, I think (without making up a flag that does what
G_EVAL does, but stops clobbering errsv).

As it stands, my patch is only a remedial palliative that makes it work
until there is a better fix (yeah, right, 5.002 :-)

>
>Tim.
>

Thanks very much for the feedback.

- Sarathy.
gsar@engin.umich.edu
Re: NETaa14561 patched
> From: Gurusamy Sarathy <gsar@engin.umich.edu>
>
> On Tue, 05 Sep 1995 14:09:59 BST, Tim Bunce wrote:
> >
> >> From: Gurusamy Sarathy <gsar@engin.umich.edu>
> >>
> >> Here's a patch against perl5.001m for
> >>
> >> NETaa14561 N 1 k $@ vs DESTROY
> >
> >Umm, as written errsv may be null so SvTRUE(errsv) would dump core.
>
> Huh? I don't see how. SvTRUE is defined to return 0 if !sv both in
> the macro and function versions.
>
Ooops, my mistake. I forget which check for nulls and which don't.

> >I'm rather worried about the cost of this, especially during global
> >destruction. (I think gv_fetchpv("@",TRUE, SVt_PV) should be stored
> >in a 'global' like envgv, siggv, incgv etc.)
>
> Making a global "errsv" and eliminating the gv_fetchpv("@",...) cost
> is definitely a good idea, but I went for the minimal modification
> philosophy.

Sure, I appreciate that.

> Besides, the gv_fetchmethod() and perl_call_sv() are a couple of orders
> of magnitude more expensive than the equivalent of gv_fetchpv() in this
> particular case.
>
> >It would be better to place a fix at the point where an error is
> >created/stored and so only pay a price if used. It would also be
> >better to add error text arising from DESTROY's to $_ rather than
> >have it 'leak out the side'. Note the way that pp_die adds
> >"\t...propagated" and an earlier patch of mine in die_where inserts
> >the text into $_ rather than overwriting it.
>
(Another ooops, I meant $@ there.)

> I think I agree with this. The handling of DESTROYs can be much improved:
>
> * work out a scheme for caching the DESTROY sub. This can
> possibly benefit from caching of methods in general.
>
> * stop abusing G_EVAL for perl internals and make up another
> flag that does a clean call of the cv (without trampling on $@ etc),
> but one that also takes care of the setjmp/longjmp issues that we
> seem to depend on G_EVAL for now.
>
> Nested evals also seem to need some work.
>
> >In fact I think die_where is probably what needs to be worked on.
> >It's even possible that just deleting the line:
> >
> > sv_setpv(errsv, ""); /* clear $@ before destroying */
> >
> >will fix the problem!
>
> That was one of the first things I tried. What really happens is the errsv
> is reset on _entry_ into perl_call_sv() (like it must), so there really is
> no easy way around it, I think (without making up a flag that does what
> G_EVAL does, but stops clobbering errsv).
>
I don't see a problem with something like a G_KEEPERR flag:

perl_call_sv((SV*)destructor, G_DISCARD|G_EVAL|G_KEEPERR);

In fact I think it's very simple and effective, especially when coupled
with the accumulation of $@ text. It would be used whenever calling a
function during cleanup operations. I like the idea!

> As it stands, my patch is only a remedial palliative that makes it work
> until there is a better fix (yeah, right, 5.002 :-)
>
5.002? What's that? :-)

> >Tim.
>
> Thanks very much for the feedback.
>
No problem.

Have a go with rolling sv_clear back to the 5.001m version and just
adding a G_KEEPERR flag.

> - Sarathy.
> gsar@engin.umich.edu
>
Tim.
Re: NETaa14561 patched
In <9509051407.aa22968@post.demon.co.uk>
On Tue, 5 Sep 1995 14:09:59 +0100
Tim Bunce <Tim.Bunce@ig.co.uk> writes:
>> From: Gurusamy Sarathy <gsar@engin.umich.edu>
>>
>
>I'm rather worried about the cost of this, especially during global
>destruction. (I think gv_fetchpv("@",TRUE, SVt_PV) should be stored
>in a 'global' like envgv, siggv, incgv etc.)

That would be very handy for Tk, or other things which use G_EVAL.
Re: NETaa14561 patched (once more)
On Tue, 05 Sep 1995 18:05:57 BST, Tim Bunce wrote:
>
>> From: Gurusamy Sarathy <gsar@engin.umich.edu>
>
>> I think I agree with this. The handling of DESTROYs can be much improved:
>>
[...]
>> * stop abusing G_EVAL for perl internals and make up another
>> flag that does a clean call of the cv (without trampling on $@ etc),
>> but one that also takes care of the setjmp/longjmp issues that we
>> seem to depend on G_EVAL for now.
>>
[...]
>I don't see a problem with something like a G_KEEPERR flag:
>
> perl_call_sv((SV*)destructor, G_DISCARD|G_EVAL|G_KEEPERR);
>
>In fact I think it's very simple and effective, especially when coupled
>with the accumulation of $@ text. It would be used whenever calling a
>function during cleanup operations. I like the idea!
>
[...]
>
>Have a go with rolling sv_clear back to the 5.001m version and just
>adding a G_KEEPERR flag.
>
>Tim.
>

Here's a trial version of an alternate patch for NETaa14561. As discussed
above, this introduces a new G_KEEPERR flag that can be used for calling
cleanup code with perl_call_sv(), in conjunction with G_EVAL. The effect of
the flag is to prepend any new errors (along with a "\t(caught) " prefix) to
$@ rather than to overwrite it. If there are no errors, the value of $@ is
preserved as is. (Contrast this with the current behavior, where
perl_call_sv() with the G_EVAL flag unconditionally wipes out $@). Another
reason for using this flag is if you want the G_EVAL behavior but also want
to be able to study the current value of $@ in DESTROY or in other code
called with the flag ($@ is currently always false inside a DESTROY or a
G_EVAL-ed piece of code).
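
That last point is easy to see from the Perl level (this is just the
pp_entertry() clearing discussed above):

    eval { die "pending error\n" };
    eval { print "in a fresh eval \$\@ is '$@'\n" };   # prints '': cleared on entry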

In terms of efficiency, I believe this makes eval's a tad faster in general
owing to the inlined pp_entertry() in perl.c.

It might be a good idea to impose an upper limit on the size of $@ (since a
die() in a DESTROY will cause $@ to fill up fast if there are many OBJECTs
that are to be destroyed).
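
A sketch of the scenario behind that concern, assuming the patch above (the
class name Noisy is made up): every one of these destructors dies while the
eval unwinds, and each failure would add another "(caught)" line to $@.

    package Noisy;
    sub new     { bless {}, shift }
    sub DESTROY { die "cleanup failed\n" }        # dies during cleanup

    package main;
    eval {
        my @objs = map { Noisy->new } 1 .. 100;   # all destroyed as the eval unwinds
        die "primary error\n";
    };
    print length($@), " bytes in \$\@\n";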

Here's a test case (courtesy Andreas Koenig) with the result after the patch:

package Demo;
sub new { bless {}}
sub DESTROY { die "fuz" } # try with empty body
sub foo {
    my($self) = @_;
    $self = $self->new() unless (ref $self);
    die "foo";
}
package main;
eval { @a = Demo->foo() }; # try with Demo->new->foo()
print $@ if $@;
__END__
(caught) fuz at - line 3.
foo at - line 7.

The patch is generated against perl5.001m+my_patchset, but should apply
and work on plain 5.001m as well. Suggestions for enhancements very
welcome.

- Sarathy.
gsar@engin.umich.edu
-----------------------------------8<------------------------------------
*** perl.c.dist Thu Jun 22 18:38:28 1995
--- perl.c Tue Sep 5 21:46:14 1995
***************
*** 671,677 ****

cLOGOP->op_other = op;
markstack_ptr--;
! pp_entertry();
markstack_ptr++;

restart:
--- 671,693 ----

cLOGOP->op_other = op;
markstack_ptr--;
! /* we're trying to emulate pp_entertry() here */
! {
! register CONTEXT *cx;
! I32 gimme = GIMME;
!
! ENTER;
! SAVETMPS;
!
! push_return(op->op_next);
! PUSHBLOCK(cx, CXt_EVAL, stack_sp);
! PUSHEVAL(cx, 0, 0);
! eval_root = op; /* Only needed so that goto works right. */
!
! in_eval = 1;
! if (!(flags & G_KEEPERR))
! sv_setpv(GvSV(gv_fetchpv("@",TRUE, SVt_PV)),"");
! }
markstack_ptr++;

restart:
***************
*** 716,722 ****
if (op)
run();
retval = stack_sp - (stack_base + oldmark);
! if (flags & G_EVAL)
sv_setpv(GvSV(gv_fetchpv("@",TRUE, SVt_PV)),"");

cleanup:
--- 732,738 ----
if (op)
run();
retval = stack_sp - (stack_base + oldmark);
! if ((flags & G_EVAL) && !(flags & G_KEEPERR))
sv_setpv(GvSV(gv_fetchpv("@",TRUE, SVt_PV)),"");

cleanup:
*** sv.c.dist Tue Sep 5 21:54:45 1995
--- sv.c Tue Sep 5 18:55:15 1995
***************
*** 2123,2129 ****
PUSHMARK(SP);
PUSHs(&ref);
PUTBACK;
! perl_call_sv((SV*)destructor, G_DISCARD|G_EVAL);
del_XRV(SvANY(&ref));
}
LEAVE;
--- 2129,2135 ----
PUSHMARK(SP);
PUSHs(&ref);
PUTBACK;
! perl_call_sv((SV*)destructor, G_DISCARD|G_EVAL|G_KEEPERR);
del_XRV(SvANY(&ref));
}
LEAVE;
*** pp_ctl.c.dist Tue Sep 5 21:52:09 1995
--- pp_ctl.c Tue Sep 5 21:42:44 1995
***************
*** 946,952 ****

errsv = GvSV(gv_fetchpv("@",TRUE, SVt_PV));
/* As destructors may produce errors we set $@ at the last moment */
- sv_setpv(errsv, ""); /* clear $@ before destroying */

cxix = dopoptoeval(cxstack_ix);
if (cxix >= 0) {
--- 946,951 ----
***************
*** 968,973 ****
--- 967,977 ----

LEAVE;

+ if (SvTRUE(errsv)) {
+ char tmpbuf[1024];
+ strcpy(tmpbuf, message);
+ sprintf(message, "\t(caught) %s", tmpbuf);
+ }
sv_insert(errsv, 0, 0, message, strlen(message));
if (optype == OP_REQUIRE)
DIE("%s", SvPVx(GvSV(gv_fetchpv("@",TRUE, SVt_PV)), na));
*** cop.h.dist Sun Mar 12 22:25:47 1995
--- cop.h Tue Sep 5 18:14:17 1995
***************
*** 231,233 ****
--- 231,234 ----
#define G_DISCARD 2 /* Call FREETMPS. */
#define G_EVAL 4 /* Assume eval {} around subroutine call. */
#define G_NOARGS 8 /* Don't construct a @_ array. */
+ #define G_KEEPERR 16 /* Append errors to $@ rather than overwriting it */
Re: NETaa14561 patched (once more)
> From: Gurusamy Sarathy <gsar@engin.umich.edu>
>
> On Tue, 05 Sep 1995 18:05:57 BST, Tim Bunce wrote:
> >
> >Have a go with rolling sv_clear back to the 5.001m version and just
> >adding a G_KEEPERR flag.
>
> Here's a trial version of an alternate patch for NETaa14561. As discussed
> above, this introduces a new G_KEEPERR flag that can be used for calling
> cleanup code with perl_call_sv(), in conjunction with G_EVAL. The effect of
> the flag is to prepend any new errors (along with a "\t(caught) " prefix) to
> $@ rather than to overwrite it.

Umm, on reflection wouldn't it be better to append subsequent errors?
Perhaps with a "\talso " prefix:

Can't foo with bar at line x in baz.pm
also Can't do this with that at line y in other.pm
also Can't do that with this at line z in another.pm

Reads well to me.

> If there are no errors, the value of $@ is
> preserved as is. (Contrast this with the current behavior, where
> perl_call_sv() with the G_EVAL flag unconditionally wipes out $@). Another
> reason for using this flag is if you want the G_EVAL behavior but also want
> to be able to study the current value of $@ in DESTROY or in other code
> called with the flag ($@ is currently always false inside a DESTROY or a
> G_EVAL-ed piece of code).
>
> In terms of efficiency, I believe this makes eval's a tad faster in general
> owing to the inlined pp_entertry() in perl.c.
>
> It might be a good idea to impose an upper limit on the size of $@ (since a
> die() in a DESTROY will cause $@ to fill up fast if there are many OBJECTs
> that are to be destroyed).
>
A limit! In perl!

> Here's a test case (courtesy Andreas Koenig) with the result after the patch:
>
> package Demo;
> sub new { bless {}}
> sub DESTROY { die "fuz" } # try with empty body
> sub foo {
>     my($self) = @_;
>     $self = $self->new() unless (ref $self);
>     die "foo";
> }
> package main;
> eval { @a = Demo->foo() }; # try with Demo->new->foo()
> print $@ if $@;
> __END__
> (caught) fuz at - line 3.
> foo at - line 7.
>
With appending that'd be:

foo at - line 7.
also fuz at - line 3.

which seems better.

> The patch is generated against perl5.001m+my_patchset, but should apply
> and work on plain 5.001m as well. Suggestions for enhancements very
> welcome.
>
Just a minor one ...

> + if (SvTRUE(errsv)) {
> + char tmpbuf[1024];
> + strcpy(tmpbuf, message);
> + sprintf(message, "\t(caught) %s", tmpbuf);
> + }

Another limit! :-) This should be redone as SV's with sv_cat*() etc.

Tim.
Re: NETaa14561 patched (once more)
Excerpts from the mail message of Tim Bunce:
) Umm, on reflection wouldn't it be better to append subsequent errors?
) Perhaps with a "\talso " prefix:
)
) Can't foo with bar at line x in baz.pm
) also Can't do this with that at line y in other.pm
) also Can't do that with this at line z in another.pm
)
) Reads well to me.
[...]
) With appending that'd be:
)
) foo at - line 7.
) also fuz at - line 3.
)
) which seems better.

I worry about 1) The important error scrolling off the screen and
2) The user missing that the first error is the important one.
But otherwise I like appending better than prepending (the
errors appear in chronological order, for one). Perhaps when
death actually occurs, move or repeat the primary error at
the end? I haven't come up with any nice wording.

Can't foo with bar at line x in baz.pm
also Can't do this with that at line y in other.pm
also Can't do that with this at line z in another.pm
Can't foo with bar at line x in baz.pm

I say this with the experience of several years helping users of VMS
where the important error was almost always followed by something and
the "something" is usually what the user latched on to. We got a lot
of questions about `How do I fix a "symbolic stack dump"' (the standard
response is, of course, "Don't worry about it, it is only symbolic.").
--
Tye McQueen tye@metronet.com || tye@doober.usu.edu
Nothing is obvious unless you are overlooking something
http://www.metronet.com/~tye/ (scripts, links, nothing fancy)
Re: NETaa14561 patched (once more)
> From: Tye McQueen <tye@metronet.com>
>
> Excerpts from the mail message of Tim Bunce:
> ) Umm, on reflection wouldn't it be better to append subsequent errors?
> ) Perhaps with a "\talso " prefix:
> )
> ) Can't foo with bar at line x in baz.pm
> ) also Can't do this with that at line y in other.pm
> ) also Can't do that with this at line z in another.pm
> )
> ) Reads well to me.
> [...]
> ) With appending that'd be:
> )
> ) foo at - line 7.
> ) also fuz at - line 3.
> )
> ) which seems better.
>
> I worry about 1) The important error scrolling off the screen and
> 2) The user missing that the first error is the important one.
> But otherwise I like appending better than prepending (the
> errors appear in chronological order, for one).
> Perhaps when death actually occurs, move or repeat the primary error
> at the end?

Or try to reduce the number of lines by using %@ as a cache...

If perl is about to add an "\talso "... string to $@ you could
check %@ to see if the same text has already been added.

The only non-local cost of this is to clear %@ at the same time as $@.
Since we plan for '@'s GV to be 'global' checking %@ to see if it needs
clearing will be very cheap.

I believe this mechanism would be useful since the most likely cause of
lots of errors is due to many objects of a given type being destroyed
and hitting a common problem (such as using an undef as a ref). In such
a situation the error text would often be the same for many errors.

I've used a similar technique with great success in an application.
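
Roughly, in Perl terms (a sketch only; the real check would live in C inside
die_where, %seen stands in for the proposed %@, and the function name is
made up):

    my %seen;                            # would be emptied whenever $@ is cleared
    sub add_background_error {
        my ($text) = @_;
        return if $seen{$text}++;        # this text has already been reported
        $@ .= "\talso $text";            # otherwise append it, as suggested above
    }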


> I haven't come up with any nice wording.
>
> Can't foo with bar at line x in baz.pm
> also Can't do this with that at line y in other.pm
> also Can't do that with this at line z in another.pm
> Can't foo with bar at line x in baz.pm
>
Umm, if you go this route I'd suggest:

Can't foo with bar at line x in baz.pm
also Can't do this with that at line y in other.pm
also Can't do that with this at line z in another.pm
after initial Can't foo with bar at line x in baz.pm


> I say this with the experience of several years helping users of VMS
> where the important error was almost always followed by something and
> the "something" is usually what the user latched on to. We got a lot
> of questions about `How do I fix a "symbolic stack dump"' (the standard
> response is, of course, "Don't worry about it, it is only symbolic.").

Users, don't ya just luv'em

:-)

> --
> Tye McQueen tye@metronet.com || tye@doober.usu.edu
>
Tim.
Re: NETaa14561 patched (once more)
On Thu, 07 Sep 1995 19:35:03 BST, Tim Bunce wrote:
>
>> From: Tye McQueen <tye@metronet.com>
>>
>> Excerpts from the mail message of Tim Bunce:
>> ) Umm, on reflection wouldn't it be better to append subsequent errors?
>> ) Perhaps with a "\talso " prefix:
>> [...]
>> ) With appending that'd be:
>> ) foo at - line 7.
>> ) also fuz at - line 3.

The errors we are talking about here can be characterized as background
errors (those that happen not as a direct result of user code, but more as a
function of when/where Perl decides to do "background" code--DESTROYs, DIE
hooks and such). So I tended towards putting these errors "in the
background" by inserting them before an existing error. See below for
another argument.

>>
>> I worry about 1) The important error scrolling off the screen and
>> 2) The user missing that the first error is the important one.
>> But otherwise I like appending better than prepending (the
>> errors appear in chronological order, for one).
>> Perhaps when death actually occurs, move or repeat the primary error
>> at the end?
>
>Or try to reduce the number of lines by using %@ as a cache...
>
>If perl is about to add an "\talso "... string to $@ you could
>check %@ to see if the same text has already been added.

This sounds like a good idea, but how about preserving the _order_
of the errors? We'll suffer more overhead for that (since resetting
that is also a non-local cost).

>
>The only non-local cost of this is to clear %@ at the same time as $@.
>Since we plan for '@'s GV to be 'global' checking %@ to see if it needs
>clearing will be very cheap.
>
>I believe this mechanism would be useful since the most likely cause of
>lots of errors is due to many objects of a given type being destroyed
>and hitting a common problem (such as using an undef as a ref). In such
>a situation the error text would often be the same for many errors.
>

Also any error causing code sitting in a loop..

>> Can't foo with bar at line x in baz.pm
>> also Can't do this with that at line y in other.pm
>> also Can't do that with this at line z in another.pm
>> Can't foo with bar at line x in baz.pm
>>

This seems a little redundant to me. When there is less than a screenful of
errors (which is usually the case, I would assume), it looks silly. When
there is more than a screenful of errors, you can't see the top line
anyways, so it wouldn't make sense to append the background errors in the
first place. (Note the cunning clinching argument here :-} ) This is why I
chose to prepend (I will staunchly claim :-) However, I can live with either
appending or prepending, but not both.

Before I post another patch, I think this needs some more thought. The
order in which the errors occurred in cleanup code may or may not make much
sense. Since use of %@ will mean not preserving the order, I suppose this
will be workable only if we can get away with an unordered list of background
errors. Using the array slot in the GV to maintain the order seems not
worth it (since we have to check for and possibly clear $, % and @ in the GV
every time we enter an eval).

Tim's arguments against arbitrary limits etc., in the patch are well taken.
That's why I called it a "trial" version of the patch :-)


- Sarathy.
gsar@engin.umich.edu
Re: NETaa14561 patched (once more)
> From: Gurusamy Sarathy <gsar@engin.umich.edu>
>
> On Thu, 07 Sep 1995 19:35:03 BST, Tim Bunce wrote:
> >
> >> From: Tye McQueen <tye@metronet.com>
> >>
> >> Excerpts from the mail message of Tim Bunce:
> >> ) Umm, on reflection wouldn't it be better to append subsequent errors?
> >> ) Perhaps with a "\talso " prefix:
> >> [...]
> >> ) With appending that'd be:
> >> ) foo at - line 7.
> >> ) also fuz at - line 3.
>
> The errors we are talking about here can be characterized as background
> errors (those that happen not as a direct result of user code, but more as a
> function of when/where Perl decides to do "background" code--DESTROYs, DIE
> hooks and such). So I tended towards putting these errors "in the
> background" by inserting them before an existing error. See below for
> another argument.
>
> >> I worry about 1) The important error scrolling off the screen and
> >> 2) The user missing that the first error is the important one.
> >> But otherwise I like appending better than prepending (the
> >> errors appear in chronological order, for one).
> >> Perhaps when death actually occurs, move or repeat the primary error
> >> at the end?
> >
> >Or try to reduce the number of lines by using %@ as a cache...
> >
> >If perl is about to add an "\talso "... string to $@ you could
> >check %@ to see if the same text has already been added.
>
> This sounds like a good idea, but how about preserving the _order_
> of the errors?

The %@ is only used for checking. The error is built up in $@ as usual.

> We'll suffer more overhead for that (since resetting that is also a
> non-local cost).

Only needs to be reset if _more_ than one error occurs (that's rare).
And as stated below, the cost of checking is very low.

> >The only non-local cost of this is to clear %@ at the same time as $@.
> >Since we plan for '@'s GV to be 'global' checking %@ to see if it needs
> >clearing will be very cheap.
> >
> >I believe this mechanism would be useful since the most likely cause of
> >lots of errors is due to many objects of a given type being destroyed
> >and hitting a common problem (such as using an undef as a ref). In such
> >a situation the error text would often be the same for many errors.
>
> Also any error causing code sitting in a loop..
>
True.

> >> Can't foo with bar at line x in baz.pm
> >> also Can't do this with that at line y in other.pm
> >> also Can't do that with this at line z in another.pm
> >> Can't foo with bar at line x in baz.pm
>
> This seems a little redundant to me. When there is less than a screenful of
> errors (which is usually the case, I would assume), it looks silly. When
> there is more than a screenful of errors, you can't see the top line
> anyways, so it wouldn't make sense to append the background errors in the
> first place. (Note the cunning clinching argument here :-} ) This is why I
> chose to prepend (I will staunchly claim :-) However, I can live with either
> appending or prepending, but not both.
>
Yea, not both :-)

> Before I post another patch, I think this needs some more thought.
> The order in which the errors occurred in cleanup code may or may not make
> much sense. Since use of %@ will mean not preserving the order, I suppose this
> will be workable only if we can get away with an unordered list of background
> errors. Using the array slot in the GV to maintain the order seems not
> worth it (since we have to check for and possibly clear $, % and @ in the GV
> every time we enter an eval).
>
True, don't bother with @@, just build the string into $@ as now.

> Tim's arguments against arbitrary limits etc., in the patch are well taken.
> That's why I called it a "trial" version of the patch :-)
>
:-)

I look forward to the next...

Thanks for the work Sarathy.

> - Sarathy.
> gsar@engin.umich.edu
>
Tim.