Mailing List Archive

RE: 'stackless' python? [ In reply to ]
[Christian Tismer]
> ...
> Yup. With a little counting, it was easy to survive:
>
> def main():
>     global a
>     a=2
>     thing (5)
>     a=a-1
>     if a:
>         saved.throw (0)

Did "a" really need to be global here? I hope you see the same behavior
without the "global a"; e.g., this Scheme:

(define -cont- #f)

(define thing
  (lambda (n)
    (if (= n 2) (call/cc (lambda (k) (set! -cont- k))))
    (display "n == ") (display n) (newline)
    (if (= n 0)
        (begin (display "Done!") (newline))
        (thing (- n 1)))))

(define main
  (lambda ()
    (let ((a 2))
      (thing 5)
      (display "a is ") (display a) (newline)
      (set! a (- a 1))
      (if (> a 0)
          (-cont- #f)))))

(main)

prints:

n == 5
n == 4
n == 3
n == 2
n == 1
n == 0
Done!
a is 2
n == 2
n == 1
n == 0
Done!
a is 1

Or does brute-force frame-copying cause the continuation to set "a" back to
2 each time?

> Weird enough

Par for the continuation course! They're nasty when eaten raw.

> and needs a much better interface.

Ya, like screw 'em and use threads <wink>.

> But finally I'm quite happy that it worked so smoothly
> after just a couple of hours (well, about six :)

Yup! Playing with Python internals is a treat.

to-be-continued-ly y'rs - tim
RE: 'stackless' python? [ In reply to ]
[Sam]
>>> Continuations are more powerful than coroutines, though I admit
>>> they're a bit esoteric.

[Tim]
>> "More powerful" is a tedious argument you should always avoid <wink>.

[Sam]
> More powerful in the sense that you can use continuations to build
> lots of different control structures (coroutines, backtracking,
> exceptions), but not vice versa.

"More powerful" is a tedious argument you should always avoid <frown -- I'm
not touching this, but you can fight it out now with Aaron et alia <wink>>.

>> Then the continuation would (eventually) "return to" the
>> "print repr(saved)" and we'd get an infinite output tail [...]
>> and never reach line 4. Right?

> Yes... the continuation object so far isn't very usable.

But it's proper behavior for a continuation all the same! So this aspect
shouldn't be "fixed".

> ...
> let/cc stores the continuation in a variable binding, while
> introducing a new scope. It requires a change to the underlying
> language:

Isn't this often implemented via a macro, though, so that

(let/cc name code)

"acts like"

(call/cc (lambda (name) code))

? I haven't used a Scheme with native let/cc, but poking around it appears
that the real intent is to support exception-style function exits with a
mechanism cheaper than 1st-class continuations: twice saw the let/cc object
(the thingie bound to "name") defined as being invalid the instant after
"code" returns, so it's an "up the call stack" gimmick. That doesn't sound
powerful enough for what you're after.

> [nice let/cc call/cc tutorialette]
> ...
> In order to do useful work like passing values back and forth between
> coroutines, we have to have some way of returning a value from the
> continuation when it is reinvoked.

Somehow, I suspect that's the least of our problems <0.5 wink>. If
continuations are in Python's future, though, I agree with the need as
stated.
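
For what it's worth, present-day Python generators ended up with exactly
this shape: send() resumes a paused function and delivers a value into it.
A minimal sketch, offered only to illustrate the interface being asked for,
not as the mechanism under discussion:

def consumer():
    while True:
        value = yield          # pause here; a resumption delivers a value
        print("got", value)

c = consumer()
next(c)                        # run up to the first yield
c.send(42)                     # resumes the paused frame; prints: got 42
c.send(43)                     # and again; prints: got 43

The same paused computation is resumed repeatedly, each time handed a
fresh value -- which is the missing piece Sam describes.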

> I should emphasize that most folks will never see call/cc 'in the
> raw', it will usually have some nice wrapper around to implement
> whatever construct is needed.

Python already has well-developed exception and thread facilities, so it's
hard to make a case for continuations as a catch-all implementation
mechanism. That may be the rub here: while any number of things *can* be
implemented via continuations, I think very few *need* to be implemented
that way, and full-blown continuations aren't easy to implement efficiently
& portably.

The Icon language was particularly concerned with backtracking searches, and
came up with generators as another clearer/cheaper implementation technique.
When it went on to full-blown coroutines, it's hard to say whether
continuations would have been a better approach. But the coroutine
implementation it has is sluggish and buggy and hard to port, so I doubt
they could have done noticeably worse.

Would full-blown coroutines be powerful enough for your needs?

assuming-the-practical-defn-of-"powerful-enough"-ly y'rs - tim
RE: 'stackless' python? [ In reply to ]
Tim Peters writes:
> Isn't this often implemented via a macro, though, so that
>
> (let/cc name code)
>
> "acts like"
>
> (call/cc (lambda (name) code))

Yup, they're equivalent, in the sense that given one you can make a
macro to do the other. call/cc is preferred because it doesn't
require a new binding construct.

> ? I haven't used a Scheme with native let/cc, but poking around it
> appears that the real intent is to support exception-style function
> exits with a mechanism cheaper than 1st-class continuations: twice
> saw the let/cc object (the thingie bound to "name") defined as
> being invalid the instant after "code" returns, so it's an "up the
> call stack" gimmick. That doesn't sound powerful enough for what
> you're after.

Except that since the escape procedure is 'first-class' it can be
stored away and invoked (and reinvoked) later. [.that's all that
'first-class' means: a thing that can be stored in a variable,
returned from a function, used as an argument, etc..]

I've never seen a let/cc that wasn't full-blown, but it wouldn't
surprise me.

> The Icon language was particularly concerned with backtracking
> searches, and came up with generators as another clearer/cheaper
> implementation technique. When it went on to full-blown
> coroutines, it's hard to say whether continuations would have been
> a better approach. But the coroutine implementation it has is
> sluggish and buggy and hard to port, so I doubt they could have
> done noticeably worse.

Many Scheme implementors either skip it, or only support non-escaping
call/cc (i.e., exceptions in Python).

> Would full-blown coroutines be powerful enough for your needs?

Yes, I think they would be. But I think with Python it's going to
be just about as hard, either way.

-Sam
Re: 'stackless' python? [ In reply to ]
Guido van Rossum writes:
> Perhaps it would help to explain what a continuation actually does
> with the run-time environment, instead of giving examples of how to
> use them and what the result is?

This helped me a lot, and is the angle used in "Essentials of
Programming Languages":

Usually when folks refer to a 'stack', they're refering to an
*implementation* of the stack data type: really an optimization that
assumes an upper bound on stack size, and that things will only be
pushed and popped in order.

If you were to implement a language's variable and execution stacks
with actual data structures (linked lists), then it's easy to see
what's needed: the head of the list represents the current state. As
functions exit, they pop things off the list.

The reason I brought this up (during a lull!) was that Python is
already paying all of the cost of heap-allocated frames, and it didn't
seem to me too much of a leap from there.
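
A toy sketch of that picture (illustrative only, these are not CPython's
real structures): the "stack" is just a chain of heap-allocated frame
records, a call allocates a new head, and a return merely steps back to
the tail.

class Frame:
    def __init__(self, name, back):
        self.name = name           # what is executing in this frame
        self.back = back           # the caller's frame: the rest of the "stack"

def call(current, name):
    return Frame(name, current)    # push: a new head for the list

def ret(current):
    return current.back            # pop: the caller becomes current again

top = None
top = call(top, "main")
top = call(top, "thing")
top = ret(top)                     # back in "main"; nothing had to be copied or moved

Saving "the current state" is then nothing more than holding on to a
reference to the head of such a chain.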

> 1. All program state is somehow contained in a single execution stack.
Yup.

> 2. A continuation does something equivalent to making a copy of the
> entire execution stack.
Yup.
> I.e. are there mutable *objects* in Scheme?
> (I know there are mutable and immutable *name bindings* -- I think.)

Yes, Scheme is pro-functional... but it has arrays, i/o, and set-cdr!,
all the things that make it 'impure'.

I think shallow copies are what's expected. In the examples I have,
the continuation is kept in a 'register', and call/cc merely packages
it up with a little function wrapper. You are allowed to stomp all
over lexical variables with "set!".

> 3. Calling a continuation probably makes the saved copy of the
> execution stack the current execution state; I presume there's also a
> way to pass an extra argument.
Yup.
> 4. Coroutines (which I *do* understand) are probably done by swapping
> between two (or more) continuations.
Yup. Here's an example in Scheme:

http://www.nightmare.com/stuff/samefringe.scm

Somewhere I have an example of coroutines being used for parsing, very
elegant. Something like one coroutine does lexing, and passes tokens
one-by-one to the next level, which passes parsed expressions to a
compiler, or whatever. Kinda like pipes.
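
A sketch of that pipeline with present-day Python generators, which are
only semi-coroutines; this is just to show the flavor, not the mechanism
being discussed here:

def lexer(text):
    for token in text.split():
        yield token                      # hand tokens up one at a time

def parser(tokens):
    for tok in tokens:
        yield ("expr", tok)              # pretend each token parses to an expression

def compile_all(exprs):
    for expr in exprs:
        print("compiling", expr)

compile_all(parser(lexer("a b c")))      # the stages run interleaved, like pipes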

> 5. Other control constructs can be done by various manipulations of
> continuations. I presume that in many situations the saved
> continuation becomes the main control locus permanently, and the
> (previously) current stack is simply garbage-collected. Of course
> the lazy copy makes this efficient.

Yes... I think backtracking would be an example of this. You're doing
a search on a large space (say a chess game). After a certain point
you want to try a previous fork, to see if it's promising, but you
don't want to throw away your current work. Save it, then unwind back
to the previous fork, try that option out... if it turns out to be
better then toss the original.

> If this all is close enough to the truth, I think that
> continuations involving C stack frames are definitely out -- as Tim
> Peters mentioned, you don't know what the stuff on the C stack of
> extensions refers to. (My guess would be that Scheme
> implementations assume that any pointers on the C stack point to
> Scheme objects, so that C stack frames can be copied and
> conservative GC can be used -- this will never happen in Python.)

I think you're probably right here - usually there are heavy
restrictions on what kind of data can pass through the C interface.
But I know of at least one Scheme (mzscheme/PLT) that uses
conservative gc and has c/c++ interfaces. [... dig dig ...]

From this:

http://www.cs.rice.edu/CS/PLT/packages/doc/insidemz/node21.htm#exceptions

and looking at the code it looks like they enforce the restriction
exactly as you described in an earlier mail: call/cc is safe for
c->scheme calls only if they invoke a new top-level machine.

> Continuations involving only Python stack frames might be
> supported, if we can agree on the sharing / copying semantics.
> This is where I don't know enough (see questions at #2 above).

Woo Hoo! Where do I send the Shrubbery?

-Sam
Re: 'stackless' python? [ In reply to ]
Guido van Rossum wrote:
>
> Sam (& others),
>
> I thought I understood what continuations were, but the examples of
> what you can do with them so far don't clarify the matter at all.
>
> Perhaps it would help to explain what a continuation actually does
> with the run-time environment, instead of giving examples of how to
> use them and what the result is?
>
> Here's a start of my own understanding (brief because I'm on a 28.8k
> connection which makes my ordinary typing habits in Emacs very
> painful).
>
> 1. All program state is somehow contained in a single execution stack.
> This includes globals (which are simply name bindings in the bottom
> stack frame). It also includes a code pointer for each stack frame
> indicating where the function corresponding to that stack frame is
> executing (this is the return address if there is a newer stack frame,
> or the current instruction for the newest frame).

Right. For now, this information is on the C stack for each called
function, although almost completely available in the frame chain.

> 2. A continuation does something equivalent to making a copy of the
> entire execution stack. This can probably be done lazily. There are
> probably lots of details. I also expect that Scheme's semantic model
> is different than Python here -- e.g. does it matter whether deep or
> shallow copies are made? I.e. are there mutable *objects* in Scheme?
> (I know there are mutable and immutable *name bindings* -- I think.)

To make it lazy, a gatekeeper must be put on top of the two
split frames, which catches the event that one of them
returns. It appears to me that this is the same callcc.new()
object which catches this, splitting frames when hit by a return.

> 3. Calling a continuation probably makes the saved copy of the
> execution stack the current execution state; I presume there's also a
> way to pass an extra argument.
>
> 4. Coroutines (which I *do* understand) are probably done by swapping
> between two (or more) continuations.

Right, which is just two or three assignments.

> 5. Other control constructs can be done by various manipulations of
> continuations. I presume that in many situations the saved
> continuation becomes the main control locus permanently, and the
> (previously) current stack is simply garbage-collected. Of course the
> lazy copy makes this efficient.

Yes, great. It looks like switching continuations
is no more expensive than a single Python function call.

> Continuations involving only Python stack frames might be supported,
> if we can agree on the sharing / copying semantics. This is where
> I don't know enough (see questions at #2 above).

This would mean avoiding the creation of incompatible continuations.
A continuation may not switch to a frame chain which was created
by a different VM incarnation, since this would later on
corrupt the machine stack. One way to assure that would be
a thread-safe function in sys, similar to sys.exc_info(),
which gives an id for the current interpreter. Continuations
living somewhere in globals would be marked by the interpreter
which created them, and refuse to be thrown if they don't match.

The necessary interpreter support appears to be small:

Extend the PyFrame structure by two fields:
- interpreter ID (addr of some local variable would do)
- stack pointer at current instruction.

Change the CALL_FUNCTION opcode to avoid calling eval recursively
in the case of a Python function/method: instead, save the current frame,
build the new one, and start over.
RETURN will pop a frame and reload its local variables instead
of returning, as long as there is a frame to pop.
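
A rough sketch of that dispatcher idea in pseudo-Python (every name here
is invented for illustration; the real change lives in ceval.c): calls
and returns become pointer moves inside one loop instead of C-level
recursion.

class Frame:
    def __init__(self, run, back=None):
        self.run = run          # stand-in for "code object plus resume point"
        self.back = back        # the caller's frame (the back chain)

def dispatch(frame):
    result = None
    while frame is not None:
        outcome = frame.run(frame, result)   # execute until a call or a return
        if isinstance(outcome, Frame):       # a call: the callee becomes current
            frame = outcome
            result = None
        else:                                # a return: pop back to the caller
            result = outcome
            frame = frame.back
    return result

# tiny usage: a frame that "calls" a sub-frame, then adds one to its result
def adder(frame, result):
    if result is None:
        return Frame(lambda f, r: 41, frame)     # first entry: push a leaf frame
    return result + 1                            # re-entered after the callee returned

print(dispatch(Frame(adder)))                    # 42

A real frame would resume at its saved bytecode offset rather than
re-entering its code from the top; the toy only shows how the recursion
disappears.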

I'm unclear how exceptions should be handled. Are they currently
propagated up across different C calls other than ceval2
recursions?

ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
Re: 'stackless' python? [ In reply to ]
>>>>> "SR" == rushing <rushing@nightmare.com> writes:

SR> Somewhere I have an example of coroutines being used for
SR> parsing, very elegant. Something like one coroutine does
SR> lexing, and passes tokens one-by-one to the next level, which
SR> passes parsed expressions to a compiler, or whatever. Kinda
SR> like pipes.

This is the first example that's used in Structured Programming (Dahl,
Dijkstra, and Hoare). I'd be happy to loan a copy to any of the
Python-dev people who sit nearby.

Jeremy
Re: 'stackless' python? [ In reply to ]
Tim Peters wrote:
>
> [Christian Tismer]
> > ...
> > Yup. With a little counting, it was easy to survive:
> >
> > def main():
> >     global a
> >     a=2
> >     thing (5)
> >     a=a-1
> >     if a:
> >         saved.throw (0)
>
> Did "a" really need to be global here? I hope you see the same behavior
> without the "global a"; e.g., this Scheme:

(Ahem.) Actually, I inserted the "global" later. It worked as well
with a local variable, but I didn't understand it. Still don't :-)

> Or does brute-force frame-copying cause the continuation to set "a" back to
> 2 each time?

No, it doesn't. Behavior is exactly the same with or without
global. I'm not sure whether this is a bug or a feature.
I *think* 'a' as a local has a slot in the frame, so it's
actually a different 'a' living in both copies. But this
would not have worked.
Can it be that before a function call, the interpreter
turns its locals into a dict, using fast_to_locals?
That would explain it.
This is not what I think it should be! Locals need to be
copied.

> > and needs a much better interface.
>
> Ya, like screw 'em and use threads <wink>.

Never liked threads. These fibers are so neat since
they don't need threads, no locking, and they are
available on systems without threads.

> > But finally I'm quite happy that it worked so smoothly
> > after just a couple of hours (well, about six :)
>
> Yup! Playing with Python internals is a treat.
>
> to-be-continued-ly y'rs - tim

throw(42) - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
Re: 'stackless' python? [ In reply to ]
Tim Peters wrote:
>
> [Christian Tismer]
> > ...
> > Yup. With a little counting, it was easy to survive:
> >
> > def main():
> >     global a
> >     a=2
> >     thing (5)
> >     a=a-1
> >     if a:
> >         saved.throw (0)
>
> Did "a" really need to be global here? I hope you see the same behavior
> without the "global a"; e.g., this Scheme:

Actually, the frame-copying was not enough to make this
all behave correctly. Since I didn't change the interpreter,
the ceval.c incarnations still held references to the old frames.
The only effect which I achieved with frame copying was
that the refcounts were increased correctly.

I have to remove the hardware stack copying now.
Will try to create a non-recursive version of the interpreter.

ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
RE: 'stackless' python? [ In reply to ]
[GvR]
> ...
> Perhaps it would help to explain what a continuation actually does
> with the run-time environment, instead of giving examples of how to
> use them and what the result is?

Paul Wilson (the GC guy) has a very nice-- but incomplete --intro to Scheme
and its implementation:

ftp://ftp.cs.utexas.edu/pub/garbage/cs345/schintro-v14/schintro_toc.html

You can pick up a lot from that fast. Is Steven (Majewski) on this list?
He doped most of this out years ago.

> Here's a start of my own understanding (brief because I'm on a 28.8k
> connection which makes my ordinary typing habits in Emacs very
> painful).
>
> 1. All program state is somehow contained in a single execution stack.
> This includes globals (which are simply name bindings in the bottom
> stack frame).

Better to think of name resolution following lexical links. Lexical
closures with indefinite extent are common in Scheme, so much so that name
resolution is (at least conceptually) best viewed as distinct from execution
stacks.

Here's a key: continuations are entirely about capturing control flow
state, and nothing about capturing binding or data state. Indeed, mutating
bindings and/or non-local data are the ways distinct invocations of a
continuation communicate with each other, and for this reason true
functional languages generally don't support continuations of the call/cc
flavor.

> It also includes a code pointer for each stack frame indicating where
> the function corresponding to that stack frame is executing (this is
> the return address if there is a newer stack frame, or the current
> instruction for the newest frame).

Yes, although the return address is one piece of information in the current
frame's continuation object -- continuations are used internally for
"regular calls" too. When a function returns, it passes control thru its
continuation object. That process restores-- from the continuation
object --what the caller needs to know (in concept: a pointer to *its*
continuation object, its PC, its name-resolution chain pointer, and its
local eval stack).

Another key point: a continuation object is immutable.

> 2. A continuation does something equivalent to making a copy of the
> entire execution stack. This can probably be done lazily. There are
> probably lots of details.

The point of the above is to get across that for Scheme-calling-Scheme,
creating a continuation object copies just a small, fixed number of pointers
(the current continuation pointer, the current name-resolution chain
pointer, the PC), plus the local eval stack. This is for a "stackless"
interpreter that heap-allocates name-mapping and execution-frame and
continuation objects. Half the literature is devoted to optimizing one or
more of those away in special cases (e.g., for continuations provably
"up-level", using a stack + setjmp/longjmp instead).

> I also expect that Scheme's semantic model is different than Python
> here -- e.g. does it matter whether deep or shallow copies are made?
> I.e. are there mutable *objects* in Scheme? (I know there are mutable
> and immutable *name bindings* -- I think.)

Same as Python here; Scheme isn't a functional language; has mutable
bindings and mutable objects; any copies needed should be shallow, since
it's "a feature" that invoking a continuation doesn't restore bindings or
object values (see above re communication).

> 3. Calling a continuation probably makes the saved copy of the
> execution stack the current execution state; I presume there's also a
> way to pass an extra argument.

Right, except "stack" is the wrong mental model in the presence of
continuations; it's a general rooted graph (A calls B, B saves a
continuation pointing back to A, B goes on to call A, A saves a continuation
pointing back to B, etc). If the explicitly saved continuations are never
*invoked*, control will eventually pop back to the root of the graph, so in
that sense there's *a* stack implicit at any given moment.

> 4. Coroutines (which I *do* understand) are probably done by swapping
> between two (or more) continuations.
>
> 5. Other control constructs can be done by various manipulations of
> continuations. I presume that in many situations the saved
> continuation becomes the main control locus permanently, and the
> (previously) current stack is simply garbage-collected. Of course the
> lazy copy makes this efficient.

There's much less copying going on in Scheme-to-Scheme than you might think;
other than that, right on.

> If this all is close enough to the truth, I think that continuations
> involving C stack frames are definitely out -- as Tim Peters
> mentioned, you don't know what the stuff on the C stack of extensions
> refers to. (My guess would be that Scheme implementations assume that
> any pointers on the C stack point to Scheme objects, so that C stack
> frames can be copied and conservative GC can be used -- this will
> never happen in Python.)

"Scheme" has become a generic term covering dozens of implementations with
varying semantics, and a quick tour of the web suggests that cross-language
Schemes generally put severe restrictions on continuations across language
boundaries. Most popular seems to be to outlaw them by decree.

> Continuations involving only Python stack frames might be supported,
> if we can agree on the sharing / copying semantics. This is where
> I don't know enough (see questions at #2 above).

I'd like to go back to examples of what they'd be used for <wink> -- but
fully fleshed out. In the absence of Scheme's ubiquitous lexical closures
and "lambdaness" and syntax-extension facilities, I'm unsure they're going
to work out reasonably in Python practice; it's not enough that they can be
very useful in Scheme, and Sam is highly motivated to go to extremes here.

give-me-a-womb-and-i-still-won't-give-birth-ly y'rs - tim
Re: 'stackless' python? [ In reply to ]
Tim Peters wrote:
...

> > Continuations involving only Python stack frames might be supported,
> > if we can agree on the sharing / copying semantics. This is where
> > I don't know enough (see questions at #2 above).
>
> I'd like to go back to examples of what they'd be used for <wink> -- but
> fully fleshed out. In the absence of Scheme's ubiquitous lexical closures
> and "lambdaness" and syntax-extension facilities, I'm unsure they're going
> to work out reasonably in Python practice; it's not enough that they can be
> very useful in Scheme, and Sam is highly motivated to go to extremes here.
>
> give-me-a-womb-and-i-still-won't-give-birth-ly y'rs - tim

I've put quite many hours into a non-recursive ceval.c
already. Should I continue? At least this would be a little
improvement, even if the continuation thing is never born?

- chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
RE: 'stackless' python? [ In reply to ]
[Sam]
> ...
> Except that since the escape procedure is 'first-class' it can be
> stored away and invoked (and reinvoked) later. [.that's all that
> 'first-class' means: a thing that can be stored in a variable,
> returned from a function, used as an argument, etc..]
>
> I've never seen a let/cc that wasn't full-blown, but it wouldn't
> surprise me.

The let/cc's in question were specifically defined to create continuations
valid only during let/cc's dynamic extent, so that, sure, you could store
them away, but trying to invoke one later could be an error. It's in that
sense I meant they weren't "first class".

Other flavors of Scheme appear to call this concept "weak continuation", and
use a different verb to invoke it (like call-with-escaping-continuation, or
call/ec). Suspect the let/cc oddballs I found were simply confused
implementations (there are a lot of amateur Scheme implementations out
there!).

>> Would full-blown coroutines be powerful enough for your needs?

> Yes, I think they would be. But I think with Python it's going to
> be just about as hard, either way.

Most people on this list are comfortable with coroutines already because
they already understand them -- Jeremy can even reach across the hall and
hand Guido a helpful book <wink>. So pondering coroutines increases the
number of brain cells willing to think about the implementation.

continuation-examples-leave-people-still-going-"huh?"-after-an-
hour-of-explanation-ly y'rs - tim
RE: 'stackless' python? [ In reply to ]
[Christian Tismer]
>>> ...
>>> Yup. With a little counting, it was easy to survive:
>>>
>>> def main():
>>>     global a
>>>     a=2
>>>     thing (5)
>>>     a=a-1
>>>     if a:
>>>         saved.throw (0)

[Tim]
>> Did "a" really need to be global here? I hope you see the same behavior
>> without the "global a";
[which he does, but for mysterious reasons]

[Christian]
> Actually, the frame-copying was not enough to make this
> all behave correctly. Since I didn't change the interpreter,
> the ceval.c incarnations still held references to the old frames.
> The only effect which I achieved with frame copying was
> that the refcounts were increased correctly.

All right! Now you're closer to the real solution <wink>; i.e., copying
wasn't really needed here, but keeping stuff alive was. In Scheme terms,
when we entered main originally a set of bindings was created for its
locals, and it is that very same set of bindings to which the continuation
returns. So the continuation *should* reuse them -- making a copy of the
locals is semantically hosed.

This is clearer in Scheme because its "stack" holds *only* control-flow info
(bindings follow a chain of static links, independent of the current "call
stack"), so there's no temptation to run off copying bindings too.

elegant-and-baffling-for-the-price-of-one<wink>-ly y'rs - tim
RE: 'stackless' python? [ In reply to ]
[Christian Tismer]
> I've put quite many hours into a non-recursive ceval.c
> already.

Does that mean 6 or 600 <wink>?

> Should I continue? At least this would be a little improvement, even
> if the continuation thing is never born?

Guido wanted to move in the "flat interpreter" direction for Python2 anyway,
so my belief is it's worth pursuing.

but-then-i-flipped-a-coin-with-two-heads-ly y'rs - tim
Re: 'stackless' python? [ In reply to ]
Tim Peters wrote:
...
> [Christian]
> > Actually, the frame-copying was not enough to make this
> > all behave correctly. Since I didn't change the interpreter,
> > the ceval.c incarnations still held references to the old frames.
> > The only effect which I achieved with frame copying was
> > that the refcounts were increased correctly.
>
> All right! Now you're closer to the real solution <wink>; i.e., copying
> wasn't really needed here, but keeping stuff alive was. In Scheme terms,
> when we entered main originally a set of bindings was created for its
> locals, and it is that very same set of bindings to which the continuation
> returns. So the continuation *should* reuse them -- making a copy of the
> locals is semantically hosed.

I tried the most simple thing, and this seemed to be duplicating
the current state of the machine. The frame holds the stack,
and references to all objects.
By chance, the locals are not in a dict, but unpacked into
the frame. (Sometimes I agree with Guido, that optimization
is considered harmful :-)

> This is clearer in Scheme because its "stack" holds *only* control-flow info
> (bindings follow a chain of static links, independent of the current "call
> stack"), so there's no temptation to run off copying bindings too.

The Python stack, besides its intermingledness with the machine
stack, is basically its chain of frames. The value stack pointer
still hides in the machine stack, but that's easy to change.
So the real Scheme-like part is this chain, methinks, with
the current bytecode offset and value stack info.

Making a copy of this in a restartable way means to increase
the refcount of all objects in a frame. Would it be correct
to undo the effect of fast locals before splitting, and redoing
it on activation?

Or do I need to rethink the whole structure? What should
be natural for Python, if at all?

ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
Re: 'stackless' python? [ In reply to ]
>>>>> "CT" == Christian Tismer <tismer@appliedbiometrics.com> writes:

[Tim Peters]
>> This is clearer in Scheme because its "stack" holds *only*
>> control-flow info (bindings follow a chain of static links,
>> independent of the current "call stack"), so there's no
>> temptation to run off copying bindings too.

CT> The Python stack, besides its intermingledness with the machine
CT> stack, is basically its chain of frames. The value stack pointer
CT> still hides in the machine stack, but that's easy to change. So
CT> the real Scheme-like part is this chain, methinks, with the
CT> current bytecode offset and value stack info.

CT> Making a copy of this in a restartable way means to increase the
CT> refcount of all objects in a frame. Would it be correct to undo
CT> the effect of fast locals before splitting, and redoing it on
CT> activation?

Wouldn't it be easier to increase the refcount on the frame object?
Then you wouldn't need to worry about the refcounts on all the objects
in the frame, because they would only be decrefed when the frame is
deallocated.

It seems like the two other things you would need are some way to get
a copy of the current frame and a means to invoke eval_code2 with an
already existing stack frame instead of a new one.

(This sounds too simple, so it's obviously wrong. I'm just not sure
where. Is the problem that you really need a separate stack/graph to
hold the frames? If we leave them on the Python stack, it could be
hard to dis-entangle value objects from control objects.)

Jeremy
Re: 'stackless' python? [ In reply to ]
Jeremy Hylton wrote:

[TP+CT about frame copies et al]

> Wouldn't it be easier to increase the refcount on the frame object?
> Then you wouldn't need to worry about the refcounts on all the objects
> in the frame, because they would only be decrefed when the frame is
> deallocated.

Well, the frame is supposed to be run twice, since there are
two incarnations of interpreters working on it: The original one,
and later, when it is thrown, another one (or the same, but, in
principle).
The frame could have been in any state, with a couple
of objects on the stack. My splitting function can be invoked
in some nested context, so I have a current opcode position,
and a current stack position.
Running this once leaves the stack empty, since all the objects are
decrefed. Running this a second time gives a GPF, since the stack is
empty.
Therefore, I made a copy which means to create a duplicate frame
with an extra refcount for all the objects. This makes sure
that both can be restarted at any time.
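
A toy illustration of why (this has nothing to do with the real frame
layout): if two incarnations share one value stack, the first run
consumes it and the second finds nothing left.

import copy

class ToyFrame:
    def __init__(self, valuestack):
        self.valuestack = valuestack

def run(frame):
    while frame.valuestack:
        frame.valuestack.pop()                  # evaluation consumes the stack

original = ToyFrame([1, 2, 3])
split = ToyFrame(list(original.valuestack))     # the "split": its own stack copy
alias = copy.copy(original)                     # a new object, but the *same* stack list

run(original)
print(alias.valuestack)     # []        -- an un-split second incarnation is stuck
print(split.valuestack)     # [1, 2, 3] -- the duplicate can still be restarted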

> It seems like the two other things you would need are some way to get
> a copy of the current frame and a means to invoke eval_code2 with an
> already existing stack frame instead of a new one.

Well, that's exactly where I'm working on.

> (This sounds too simple, so it's obviously wrong. I'm just not sure
> where. Is the problem that you really need a separate stack/graph to
> hold the frames? If we leave them on the Python stack, it could be
> hard to dis-entangle value objects from control objects.)

Oh, perhaps I should explain it a bit more clearly?
What did you mean by the Python stack? The hardware machine stack?

What do we have at the moment:
The stack is the linked list of frames. Every frame has a
local Python evaluation stack. Calls of Python functions produce
a new frame, and the old one is put beneath. This is the control
stack. The additional info on the hardware stack happens to be
a parallel friend of this chain, and currently holds extra info,
but this is an artifact. Adding the current Python stack level
to the frame makes the hardware stack totally unnecessary.

There is a possible speed loss, anyway.
Today, the recursive call of ceval2 is optimized and quite
fast. The non-recursive version will have to copy variables
in and out from the frames, instead, so there is of course
a little speed penalty to pay.

ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
Re: 'stackless' python? [ In reply to ]
Tim Peters wrote:
>
> [Christian Tismer]
> > I've put quite many hours into a non-recursive ceval.c
> > already.
>
> Does that mean 6 or 600 <wink>?

6, or 10, or 20, if I count the time from the first
start with Sam's code, maybe.

>
> > Should I continue? At least this would be a little improvement, even
> > if the continuation thing is never born?
>
> Guido wanted to move in the "flat interpreter" direction for Python2 anyway,
> so my belief is it's worth pursuing.
>
> but-then-i-flipped-a-coin-with-two-heads-ly y'rs - tim

Right. Whose faces? :-)

On the stackless thing, what should I do.
I started to insert minimum patches, but it turns out
that I have to change frames a little (extending).

I can make quite small changes to the interpreter to replace
the recursive calls, but this involves extra flags in some cases,
where the interpreter is called the first time and so on.

Which has a better chance of being included in a future Python:
Tweaking the current thing only minimally, to make it as similar
as possible to the former?
Or do as much redesign as I think is needed to do it in
a clean way. This would mean to split eval_code2 into two functions,
where one is the interpreter kernel, and one is the frame manager.

There are also other places which do quite deep function calls
and finally call eval_code2. I think these should return a frame
object now. I could convince them to call or return a frame,
depending on a flag, but it would be clean to rename the functions,
let them always deal with frames, and put the original function
on top of it.

Short, I can do larger changes which clean this all a bit up,
or I can make small changes which are more tricky to grasp,
but give just small diffs.

How to touch untouchable code the best? :-)

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
Re: 'stackless' python? [ In reply to ]
I think it makes sense to avoid being obscure or unclear in order to
minimize the size of the patch or the diff. Realistically, it's
unlikely that anything like your original patch is going to make it
into the CVS tree. Its primary value is as proof of concept and as
code that the rest of us can try out. If you make large changes, but
they are clearer, you'll help us out a lot.

We can worry about minimizing the impact of the changes on the
codebase after, after everyone has figured out what's going on and
agree that it's worth doing.

feeling-much-more-confident-because-I-didn't-say-continuation-ly yr's,
Jeremy
Re: 'stackless' python? [ In reply to ]
Jeremy Hylton wrote:
>
> I think it makes sense to avoid being obscure or unclear in order to
> minimize the size of the patch or the diff. Realistically, it's
> unlikely that anything like your original patch is going to make it
> into the CVS tree. Its primary value is as proof of concept and as
> code that the rest of us can try out. If you make large changes, but
> they are clearer, you'll help us out a lot.

Many many thanks. This is good advice.
I will make absolutely clear what's going on, keep
parts as untouched as possible, cut out parts which must
change, and I will not look into speed too much.

Better have a function call more and a bit less optimization,
but a clear and rock-solid introduction of a concept.

> We can worry about minimizing the impact of the changes on the
> codebase after, after everyone has figured out what's going on and
> agree that it's worth doing.
>
> feeling-much-more-confident-because-I-didn't-say-continuation-ly yr's,
> Jeremy

Hihi - the new little slot with local variables of the
interpreter happens to have the name "continuation".
Maybe I'd better rename it to "activation record"?

Now, there is no longer a recursive call. Instead, a frame
object is returned, which is waiting to be activated
by a dispatcher.

Some more ideas are popping up. Right now, only the recursive
calls can vanish. Callbacks from C code which is called by
the interpreter which is called by... is still a problem.

But it might perhaps vanish completely. We have to see
how much the cost is. But if I can manage to let the interpreter
duck and cover also on every call to a builtin? The interpreter
again returns to the dispatcher which then calls the builtin.
Well, if that builtin happens to call into the interpreter again,
it will be a dispatcher again. The machine stack grows a little,
but since everything is saved in the frames, these stacks are
no longer related. This means, the principle works with existing
extension modules, since interpreter-world and C-stack world
are decoupled.
To avoid stack growth, of course a number of builtins would
be better changed, but it is no must in the first place.
execfile for instance is a candidate which needn't call the
interpreter. It could equally parse the file, generate the
code object, build a frame and just return it. This is what
the dispatcher likes: returned frames are put on the chain
and fired.

waah, my bus - running - ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
RE: 'stackless' python? [ In reply to ]
[Christian Tismer]
> I tried the most simple thing, and this seemed to be duplicating
> the current state of the machine. The frame holds the stack,
> and references to all objects.
> By chance, the locals are not in a dict, but unpacked into
> the frame. (Sometimes I agree with Guido, that optimization
> is considered harmful :-)

I don't see that the locals are a problem here -- provided you simply leave
them alone <wink>.

> The Python stack, besides its intermingledness with the machine
> stack, is basically its chain of frames.

Right.

> The value stack pointer still hides in the machine stack, but
> that's easy to change.

I'm not sure what "value stack" means here, or "machine stack". The latter
means the C stack? Then I don't know which values you have in mind that are
hiding in it (the locals are, as you say, unpacked in the frame, and the
evaluation stack too). By "evaluation stack" I mean specifically
f->f_valuestack; the current *top* of stack pointer (specifically
stack_pointer) lives in the C stack -- is that what we're talking about?
Whichever, when we're talking about the code, let's use the names the code
uses <wink>.

> So the real Scheme-like part is this chain, methinks, with
> the current bytecode offset and value stack info.

Curiously, f->f_lasti is already materialized every time we make a call, in
order to support tracing. So if capturing a continuation is done via a
function call (hard to see any other way it could be done <wink>), a
bytecode offset is already getting saved in the frame object.

> Making a copy of this in a restartable way means to increase
> the refcount of all objects in a frame.

You later had a vision of splitting the frame into two objects -- I think.
Whichever part the locals live in should not be copied at all, but merely
have its (single) refcount increased. The other part hinges on details of
your approach I don't know. The nastiest part seems to be f->f_valuestack,
which conceptually needs to be (shallow) copied in the current frame and in
all other frames reachable from the current frame's continuation (the chain
rooted at f->f_back today); that's the sum total (along with the same
frames' bytecode offsets) of capturing the control flow state.
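
As a loose Python sketch (the attribute names here are just stand-ins for
f_lasti, f_back and f_valuestack):

def capture_control_state(frame):
    captured = []
    while frame is not None:
        # per frame: the resume offset plus a shallow copy of its eval stack
        captured.append((frame, frame.lasti, list(frame.valuestack)))
        frame = frame.back
    return captured

Nothing about locals, globals, or object values appears in that record.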

> Would it be correct to undo the effect of fast locals before
> splitting, and redoing it on activation?

Unsure what splitting means, but in any case I can't conceive of a reason
for doing anything to the locals. Their values aren't *supposed* to get
restored upon continuation invocation, so there's no reason to do anything
with their values upon continuation creation either. Right? Or are we
talking about different things?

almost-as-good-as-pantomime<wink>-ly y'rs - tim
Re: 'stackless' python? [ In reply to ]
Cleaning up, clarifying, trying to understand...

Tim Peters wrote:
>
> [Christian Tismer]
> > I tried the most simple thing, and this seemed to be duplicating
> > the current state of the machine. The frame holds the stack,
> > and references to all objects.
> > By chance, the locals are not in a dict, but unpacked into
> > the frame. (Sometimes I agree with Guido, that optimization
> > is considered harmful :-)
>
> I don't see that the locals are a problem here -- provided you simply leave
> them alone <wink>.

This depends on whether I have to duplicate frames
or not. Below...

> > The Python stack, besides its intermingledness with the machine
> > stack, is basically its chain of frames.
>
> Right.
>
> > The value stack pointer still hides in the machine stack, but
> > that's easy to change.
>
> I'm not sure what "value stack" means here, or "machine stack". The latter
> means the C stack? Then I don't know which values you have in mind that are
> hiding in it (the locals are, as you say, unpacked in the frame, and the
> evaluation stack too). By "evaluation stack" I mean specifically
> f->f_valuestack; the current *top* of stack pointer (specifically
> stack_pointer) lives in the C stack -- is that what we're talking about?

Exactly!

> Whichever, when we're talking about the code, let's use the names the code
> uses <wink>.

The evaluation stack pointer is a local variable in the
C stack and must be written to the frame to become independent
from the C stack. Sounds better now?

>
> > So the real Scheme-like part is this chain, methinks, with
> > the current bytecode offset and value stack info.
>
> Curiously, f->f_lasti is already materialized every time we make a call, in
> order to support tracing. So if capturing a continuation is done via a
> function call (hard to see any other way it could be done <wink>), a
> bytecode offset is already getting saved in the frame object.

You got me. I'm just completing what is partially there.

> > Making a copy of this in a restartable way means to increase
> > the refcount of all objects in a frame.
>
> You later had a vision of splitting the frame into two objects -- I think.

My wrong wording. Not splitting, but duplicating. If a frame is the
current state, I make it two frames to have two current states.
One will be saved, the other will be run. This is what I call
"splitting".
Actually, splitting must occur whenever a frame can be reached twice,
in order to keep elements alive.

> Whichever part the locals live in should not be copied at all, but merely
> have its (single) refcount increased. The other part hinges on details of
> your approach I don't know. The nastiest part seems to be f->f_valuestack,
> which conceptually needs to be (shallow) copied in the current frame and in
> all other frames reachable from the current frame's continuation (the chain
> rooted at f->f_back today); that's the sum total (along with the same
> frames' bytecode offsets) of capturing the control flow state.

Well, I see. You want one locals and one globals, shared by two
incarnations. Gets me into trouble.

> > Would it be correct to undo the effect of fast locals before
> > splitting, and redoing it on activation?
>
> Unsure what splitting means, but in any case I can't conceive of a reason
> for doing anything to the locals. Their values aren't *supposed* to get
> restored upon continuation invocation, so there's no reason to do anything
> with their values upon continuation creation either. Right? Or are we
> talking about different things?

Let me explain. What Python does right now is:
When a function is invoked, all local variables are copied
into fast_locals, well of course just references are copied
and counts increased. These fast locals give a lot of speed
today, we must have them.
You are saying I have to share locals between frames. Besides
the fact that this will be a noticeable slowdown, since an extra structure
must be built and accessed indirectly (right now, it's all fast,
living in the one frame buffer), I cannot say that I'm convinced
that this is what we need.

Suppose you have a function

def f(x):
    # do something
    ...
    # in some context, wanna have a snapshot
    global snapshot # initialized to None
    if not snapshot:
        snapshot = callcc.new()
    # continue computation
    x = x+1
    ...

What I want to achieve is that I can run this again, from my
snapshot. But with shared locals, my parameter x of the
snapshot would have changed to x+1, which I don't find useful.
I want to fix a state of the current frame and still think
it should "own" its locals. Globals are borrowed, anyway.
Class instances will anyway do what you want, since
the local "self" is a mutable object.

How do you want to keep computations independent
when locals are shared? For me it's just easier to
implement and also to think with the shallow copy.
Otherwise, where is my private place?
Open for becoming convinced, of course :-)

ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
Re: 'stackless' python? [ In reply to ]
>>>>> "CT" == Christian Tismer <tismer@appliedbiometrics.com> writes:

CT> What I want to achieve is that I can run this again, from my
CT> snapshot. But with shared locals, my parameter x of the snapshot
CT> would have changed to x+1, which I don't find useful. I want to
CT> fix a state of the current frame and still think it should "own"
CT> its locals. Globals are borrowed, anyway. Class instances will
CT> anyway do what you want, since the local "self" is a mutable
CT> object.

CT> How do you want to keep computations independent when locals are
CT> shared? For me it's just easier to implement and also to think
CT> with the shallow copy. Otherwise, where is my private place?
CT> Open for becoming convinced, of course :-)

I think you're making things a lot more complicated by trying to
instantiate new variable bindings for locals every time you create a
continuation. Can you give an example of why that would be helpful?
(Ok. I'm not sure I can offer a good example of why it would be
helpful to share them, but it makes intuitive sense to me.)

The call_cc mechanism is going to let you capture the current
continuation, save it somewhere, and call on it again as often as you
like. Would you get a fresh locals each time you used it? or just
the first time? If only the first time, it doesn't seem that you've
gained a whole lot.

Also, all the locals that are references to mutable objects are
already effectively shared. So it's only a few oddballs like ints
that are an issue.

Jeremy
RE: 'stackless' python? [ In reply to ]
[Christian]
[... clarified stuff ... thanks! ... much clearer ...]
> ...
> If a frame is the current state, I make it two frames to have two
> current states. One will be saved, the other will be run. This is
> what I call "splitting". Actually, splitting must occour whenever
> a frame can be reached twice, in order to keep elements alive.

That part doesn't compute: if a frame can be reached by more than one path,
its refcount must be at least equal to the number of its immediate
predecessors, and its refcount won't fall to 0 before it becomes
unreachable. So while you may need to split stuff for *some* reasons, I
can't see how keeping elements alive could be one of those reasons (unless
you're zapping frame contents *before* the frame itself is garbage?).

> ...
> Well, I see. You want one locals and one globals, shared by two
> incarnations. Gets me into trouble.

Just clarifying what Scheme does. Since they've been doing this forever, I
don't want to toss their semantics on a whim <wink>. It's at least a
conceptual thing: why *should* locals follow different rules than globals?
If Python2 grows lexical closures, the only thing special about today's
"locals" is that they happen to be the first guys found on the search path.
Conceptually, that's really all they are today too.

Here's the clearest Scheme example I can dream up:

(define k #f)

(define (printi i)
  (display "i is ") (display i) (newline))

(define (test n)
  (let ((i n))
    (printi i)
    (set! i (- i 1))
    (printi i)
    (display "saving continuation") (newline)
    (call/cc (lambda (here) (set! k here)))
    (set! i (- i 1))
    (printi i)
    (set! i (- i 1))
    (printi i)))

No loops, no recursive calls, just a straight chain of fiddle-a-local ops.
Here's some output:

> (test 5)
i is 5
i is 4
saving continuation
i is 3
i is 2
> (k #f)
i is 1
i is 0
> (k #f)
i is -1
i is -2
> (k #f)
i is -3
i is -4
>

So there's no question about what Scheme thinks is proper behavior here.

> ...
> Let me explain. What Python does right now is:
> When a function is invoked, all local variables are copied
> into fast_locals, well of course just references are copied
> and counts increased. These fast locals give a lot of speed
> today, we must have them.

Scheme (most of 'em, anyway) also resolves locals via straight base + offset
indexing.

> You are saying I have to share locals between frames. Besides
> the fact that this will be a noticeable slowdown, since an extra structure
> must be built and accessed indirectly (right now, it's all fast,
> living in the one frame buffer),

GETLOCAL and SETLOCAL simply index off of the fastlocals pointer; it doesn't
care where that points *to* <wink -- but, really, it could point into some
other frame and ceval2 wouldn't know the difference). Maybe a frame entered
due to continuation needs extra setup work? Scheme saves itself by putting
name-resolution and continuation info into different structures; to mimic
the semantics, Python would need to get the same end effect.

> I cannot say that I'm convinced that this is what we need.
>
> Suppose you have a function
>
> def f(x):
>     # do something
>     ...
>     # in some context, wanna have a snapshot
>     global snapshot # initialized to None
>     if not snapshot:
>         snapshot = callcc.new()
>     # continue computation
>     x = x+1
>     ...
>
> What I want to achieve is that I can run this again, from my
> snapshot. But with shared locals, my parameter x of the
> snapshot would have changed to x+1, which I don't find useful.

You need a completely fleshed-out example to score points here: the use of
call/cc is subtle, hinging on details, and fragments ignore too much. If
you do want the same x,

commonx = x
if not snapshot:
    # get the continuation
# continue computation
x = commonx
x = x+1
...

That is, it's easy to get it. But if you *do* want to see changes to the
locals (which is one way for those distinct continuation invocations to
*cooperate* in solving a task -- see below), but the implementation doesn't
allow for it, I don't know what you can do to worm around it short of making
x global too. But then different *top* level invocations of f will stomp on
that shared global, so that's not a solution either. Maybe forget functions
entirely and make everything a class method.

> I want to fix a state of the current frame and still think
> it should "own" its locals. Globals are borrowed, anyway.
> Class instances will anyway do what you want, since
> the local "self" is a mutable object.
>
> How do you want to keep computations independent
> when locals are shared? For me it's just easier to
> implement and also to think with the shallow copy.
> Otherwise, where is my private place?
> Open for becoming convinced, of course :-)

I imagine it comes up less often in Scheme because it has no loops:
communication among "iterations" is via function arguments or up-level
lexical vrbls.

So recall your uses of Icon generators instead: like Python, Icon does have
loops, and two-level scoping, and I routinely build loopy Icon generators
that keep state in locals. Here's a dirt-simple example I emailed to Sam
earlier this week:

procedure main()
    every result := fib(0, 1) \ 10 do
        write(result)
end

procedure fib(i, j)
    local temp
    repeat {
        suspend i
        temp := i + j
        i := j
        j := temp
    }
end

which prints

0
1
1
2
3
5
8
13
21
34

If Icon restored the locals (i, j, temp) upon each fib resumption, it would
generate a zero followed by an infinite sequence of ones(!).
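
For comparison, a Python sketch with the same shape -- an object whose
locals persist across resumptions; generators are used purely as an
illustration here, not as the mechanism under discussion:

def fib(i, j):
    while True:
        yield i            # suspend; i, j, temp survive until the next resumption
        temp = i + j
        i = j
        j = temp

g = fib(0, 1)
print([next(g) for _ in range(10)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]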

Think of a continuation as a *paused* computation (which it is) rather than
an *independent* one (which it isn't <wink>), and I think it gets darned
hard to argue.

theory-and-practice-agree-here-in-my-experience-ly y'rs - tim
Re: 'stackless' python? [ In reply to ]
Jeremy Hylton wrote:
>
> >>>>> "CT" == Christian Tismer <tismer@appliedbiometrics.com> writes:
>
> CT> What I want to achieve is that I can run this again, from my
> CT> snapshot. But with shared locals, my parameter x of the snapshot
> CT> would have changed to x+1, which I don't find useful. I want to
> CT> fix a state of the current frame and still think it should "own"
> CT> its locals. Globals are borrowed, anyway. Class instances will
> CT> anyway do what you want, since the local "self" is a mutable
> CT> object.
>
> CT> How do you want to keep computations independent when locals are
> CT> shared? For me it's just easier to implement and also to think
> CT> with the shallow copy. Otherwise, where is my private place?
> CT> Open for becoming convinced, of course :-)
>
> I think you're making things a lot more complicated by trying to
> instantiate new variable bindings for locals every time you create a
> continuation. Can you give an example of why that would be helpful?

I'm not sure whether you all understand me, and vice versa.
There is no copying at all, but for the frame.
I copy the frame, which means I also incref all the
objects which it holds. Done. This is the bare minimum
which I must do.

> (Ok. I'm not sure I can offer a good example of why it would be
> helpful to share them, but it makes intuitive sense to me.)
>
> The call_cc mechanism is going to let you capture the current
> continuation, save it somewhere, and call on it again as often as you
> like. Would you get a fresh locals each time you used it? or just
> the first time? If only the first time, it doesn't seem that you've
> gained a whole lot.

call_cc makes a copy of the state, which is the frame. This is
stored away until it is revived. Nothing else happens.
As Guido pointed out, virtually the whole frame chain is
duplicated, but only on demand.

> Also, all the locals that are references to mutable objects are
> already effectively shared. So it's only a few oddballs like ints
> that are an issue.

Simply look at what a frame is. What do you need to do to
run it again with a given state? You have to preserve the stack
variables. And you have to preserve the current locals, since
some of them might even have a copy on the stack, and we want
to stay consistent.

I believe it would become obvious if you tried to implement it.
Maybe I should close my ears and get something ready to show?

ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
Re: 'stackless' python? [ In reply to ]
Tim Peters wrote:
>
> [Christian]
> [... clarified stuff ... thanks! ... much clearer ...]

But still not clear enough, I fear.

> > ...
> > If a frame is the current state, I make it two frames to have two
> > current states. One will be saved, the other will be run. This is
> > what I call "splitting". Actually, splitting must occur whenever
> > a frame can be reached twice, in order to keep elements alive.
>
> That part doesn't compute: if a frame can be reached by more than one path,
> its refcount must be at least equal to the number of its immediate
> predecessors, and its refcount won't fall to 0 before it becomes
> unreachable. So while you may need to split stuff for *some* reasons, I
> can't see how keeping elements alive could be one of those reasons (unless
> you're zapping frame contents *before* the frame itself is garbage?).

I was saying that under the side condition that I don't want to
change frames as they are now. Maybe that's misconceived, but
this is what I did:

If a frame as we have it today is to be resumed twice, then
it has to be copied, since the stack is in it and has some
state which will change after resuming.

That was the whole problem with my first prototype, which
was done in the hope that I wouldn't need to change the interpreter
at all. Wrong, bad, however.

What I actually did was more than seems to be needed:
I made a copy of the whole current frame chain. Later on,
Guido said this can be done on demand. He's right.

[Scheme sample - understood]

> GETLOCAL and SETLOCAL simply index off of the fastlocals pointer; it doesn't
> care where that points *to* <wink -- but, really, it could point into some
> other frame and ceval2 wouldn't know the difference). Maybe a frame entered
> due to continuation needs extra setup work? Scheme saves itself by putting
> name-resolution and continuation info into different structures; to mimic
> the semantics, Python would need to get the same end effect.

Point taken. The pointer doesn't save time of access, it just
saves allocating another structure.
So we can use something else without speed loss.

[have to cut a little]

> So recall your uses of Icon generators instead: like Python, Icon does have
> loops, and two-level scoping, and I routinely build loopy Icon generators
> that keep state in locals. Here's a dirt-simple example I emailed to Sam
> earlier this week:
>
> procedure main()
>     every result := fib(0, 1) \ 10 do
>         write(result)
> end
>
> procedure fib(i, j)
>     local temp
>     repeat {
>         suspend i
>         temp := i + j
>         i := j
>         j := temp
>     }
> end

[prints fib series]

> If Icon restored the locals (i, j, temp) upon each fib resumption, it would
> generate a zero followed by an infinite sequence of ones(!).

Now I'm completely missing the point. Why should I want
to restore anything? At a suspend, which with continuations
is done by temporarily having two identical states, one
is saved and the other is continued. The continued one in your example
just returns the current value and immediately forgets about
the locals. The other one is continued later, and of course with
the same locals which were active when it went to sleep.

> Think of a continuation as a *paused* computation (which it is) rather than
> an *independent* one (which it isn't <wink>), and I think it gets darned
> hard to argue.

No, you get me wrong. I understand what you mean. It is just
the decision whether a frame, which will be reactivated later
as a continuation, should use a reference to locals like
the reference which it has for the globals. This causes me
a major frame redesign.

Current design:
A frame is: back chain, state, code, unpacked locals, globals, stack.

Code and globals are shared.
State, unpacked locals and stack are private.

Possible new design:
A frame is: back chain, state, code, variables, globals, stack.

variables is: unpacked locals.

This makes the variables into an extra structure which is shared.
Probably a list would be the thing, or abusing a tuple as
a mutable object.
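
As a loose sketch of the difference (field names invented purely for
illustration):

class FrameToday:
    def __init__(self, back, code, globals_):
        self.back = back
        self.code = code
        self.globals = globals_     # shared
        self.locals = []            # unpacked fast locals, private to this frame
        self.stack = []             # value stack, private to this frame

class FrameProposed:
    def __init__(self, back, code, globals_, variables):
        self.back = back
        self.code = code
        self.globals = globals_     # shared, as before
        self.variables = variables  # shared, mutable container of unpacked locals
        self.stack = []             # value stack stays private per incarnation

Two incarnations made from the same call would then hold the same
variables object, so an assignment to a local in one is seen by the
other -- which is the Scheme behaviour Tim demonstrated above.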

Hmm. I think I should get something ready, and we should
keep this thread short, or we will lose the rest of
Guido's goodwill (if not already).

ciao - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
