Mailing List Archive

ANN: Stackless Python 0.2 [ In reply to ]
> ...
> Py_BEGIN_ALLOW_THREADS
> ThreadedSpam();
> Py_END_ALLOW_THREADS
>
> it's-only-confusing-if-you-think-about-it-too-much<wink>-ly y'rs - tim

[Robin Becker]
> so do I then have to poll ThreadedSpam myself to see if it's finished, or
> is there a Python API version of mutexes/locks etc.?

You're in C now -- you do anything you need to do, depending on the
specifics of ThreadedSpam (which was presumed to be a pre-existing
thread-safe C routine). The snippet above clearly assumes that ThreadedSpam
"is finished" when it returns from the call to it. If your flavor of
ThreadedSpam doesn't enjoy this property, that's fine too, but then you do
whatever *it* requires you to do. This is like asking me whether the right
answer is bigger than 5, or less than 4: how the heck should I know <wink>?

You can certainly use Python's lock abstraction in your own C code, but it's
unlikely a pre-existing C function is using that too.

python-doesn't-restrict-what-c-code-can-do-except-to-insist-that-it-
acquire-the-lock-before-returning-to-python-ly y'rs - tim
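[As an aside: Robin's question about "a python api version of mutexes/locks" can be answered at the Python level too. A minimal sketch, using the standard threading module -- `threaded_spam` here is a hypothetical Python stand-in for a long-running routine, not the C function discussed above -- shows signalling completion through an Event instead of polling:]

```python
import threading

# Hypothetical Python-level stand-in for ThreadedSpam: instead of
# polling it for completion, the worker signals through an Event.
def threaded_spam(done):
    done.result = sum(range(1000))  # some long-running work
    done.set()                      # signal "finished" -- no polling

done = threading.Event()
worker = threading.Thread(target=threaded_spam, args=(done,))
worker.start()
done.wait()         # block until the worker signals completion
worker.join()
print(done.result)  # -> 499500
```

The same pattern works with threading.Lock or Condition; Event is just the simplest "is it done yet?" primitive.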
ANN: Stackless Python 0.2 [ In reply to ]
Toby J Sargeant wrote:
>
> On Mon, Jun 28, 1999 at 12:34:48AM -0400, Tim Peters wrote:
> > Nope! It's a beauty of the implementation already that each code object
> > knows exactly how much "Python stack space" it needs, and (just) that much
> > is allocated directly into the code object's runtime frame object. IOW,
> > there isn't "a Python stack" as such, so there's nothing to change here --
> > the stack is implicit in the way frames link up to each other.
>
> This might be completely irrelevant, but during the course of my masters, I
> considered doing this kind of thing to java in order to allow asynchronous
> threads to share stack frames (don't ask...). My supervisor complained bitterly
> on the grounds that function invocations were orders of magnitude more
> common than object creation, and hence associating memory
> allocation/deallocation with every call was considered horrendously
> inefficient.

This appears to be the conventional wisdom. The optimization class I
took used an ML-like language, and the first thing we did was to move
heap frames, which was the default, to the stack (i.e. closure
optimization.)
It might be less of an issue in an interpreted language if it already has
heap allocation (or other sorts of) overhead to deal with when making calls.
There may be other allocation tricks which can be used, as well.

> It seems that this should affect Stackless Python equally as much. Does anyone
> have anything to add on the subject? I would imagine that frames could be
> allocated and managed in chunks to alleviate a lot of the memory management
> load...

I would think that as well (at least at first). Maybe pools of different
frame sizes could limit the space overhead.
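[The pooling idea above can be sketched in a few lines. This is a toy illustration, not anything CPython actually does -- all names are hypothetical, and a bytearray stands in for a frame's variable-size stack space:]

```python
class FramePool:
    # Toy sketch of size-bucketed free lists: releasing a "frame"
    # lets a later request of the same size reuse its memory
    # instead of hitting the allocator again.
    def __init__(self):
        self.buckets = {}          # size -> list of free buffers

    def acquire(self, size):
        free = self.buckets.setdefault(size, [])
        return free.pop() if free else bytearray(size)

    def release(self, buf):
        self.buckets.setdefault(len(buf), []).append(buf)

pool = FramePool()
f1 = pool.acquire(64)
pool.release(f1)
f2 = pool.acquire(64)   # reuses f1's buffer instead of allocating
print(f2 is f1)         # -> True
```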

> Toby.

Gary Duzan
GTE Laboratories
ANN: Stackless Python 0.2 [ In reply to ]
Gary Duzan wrote:
>
> Toby J Sargeant wrote:
> >
...
> This appears to be the conventional wisdom. The optimization class I
> took used an ML-like language, and the first thing we did was to move
> heap frames, which was the default, to the stack (i.e. closure
> optimization.)

Losing much flexibility by doing that, btw.

> It might be less of an issue in an interpreted language if it already has
> heap allocation (or other sorts of) overhead to deal with when making calls.
> There may be other allocation tricks which can be used, as well.
>
> > It seems that this should affect Stackless Python equally as much. Does anyone
> > have anything to add on the subject? I would imagine that frames could be
> > allocated and managed in chunks to alleviate a lot of the memory management
> > load...
>
> I would think that as well (at least at first.) Maybe pools of different
> sizes of frame to limit the space overhead.

Maybe you should really look into the sources.
You are discussing things which are not Python
related, since this simply does not apply.

The Python "stacks" are so tiny that you can forget about them.
Allocations almost never occur, unless your program runs
astray in recursion.

The cost of a call comes from the slightly complicated
function setup which is done all the time. Frames already
belong to the fastest available Python objects,
like, say, dictionaries.
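[The point that there is no separate "Python stack" -- only frame objects linked to their callers -- can be seen directly from Python. A small sketch, using sys._getframe (a CPython implementation detail, shown here with a modern interpreter):]

```python
import sys

def inner():
    f = sys._getframe()            # the frame running inner()
    # Frames are ordinary heap objects; the "stack" is just the
    # f_back chain linking each frame to its caller.
    names = []
    while f is not None:
        names.append(f.f_code.co_name)
        f = f.f_back
    return names

def outer():
    return inner()

print(outer()[:2])   # -> ['inner', 'outer']
```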

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
ANN: Stackless Python 0.2 [ In reply to ]
In Message <37791B4D.43ADC401@appliedbiometrics.com> ,
Christian Tismer <tismer@appliedbiometrics.com> wrote:

=>Gary Duzan wrote:
=>>
=>> Toby J Sargeant wrote:
=>> >
=>...
=>> This appears to be the conventional wisdom. The optimization class I
=>> took used an ML-like language, and the first thing we did was to move
=>> heap frames, which was the default, to the stack (i.e. closure
=>> optimization.)
=>
=>Loosing much of flexibility by doing that, btw.

True. For example (if I remember correctly; it has been a while),
tail call optimization is defeated. In this particular case, however,
we didn't break the language semantics (which didn't guarantee constant
memory tail recursion) because we only did stack conversion after
checking that it wouldn't break anything. Otherwise, the closure is
left on the heap. I see that I wasn't clear about that in my original
post; sorry 'bout that.
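[The constant-memory tail recursion mentioned above is also absent from standard (pre-Stackless) CPython, for the related reason that every call allocates a fresh frame. An illustrative sketch, using a modern Python 3 interpreter where exceeding the limit raises RecursionError:]

```python
import sys

def tail_sum(n, acc=0):
    # Tail-recursive in form, but CPython performs no tail-call
    # optimization: every call gets its own frame, so deep
    # recursion hits the recursion limit instead of running in
    # constant space.
    if n == 0:
        return acc
    return tail_sum(n - 1, acc + n)

print(tail_sum(100))   # -> 5050
try:
    tail_sum(sys.getrecursionlimit() + 100)
except RecursionError:
    print("recursion limit hit: no TCO")
```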

=>> It might be less of an issue in an interpreted language if it already has
=>> heap allocation (or other sorts of) overhead to deal with when making calls.
=>> There may be other allocation tricks which can be used, as well.


=>> > It seems that this should affect Stackless Python equally as much. Does anyone
=>> > have anything to add on the subject? I would imagine that frames could be
=>> > allocated and managed in chunks to alleviate a lot of the memory management
=>> > load...
=>>
=>> I would think that as well (at least at first.) Maybe pools of different
=>> sizes of frame to limit the space overhead.
=>
=>Maybe you should really look into the sources.
=>You are discussing things which are not Python
=>related, since this simply does not apply.

Freely conceded.

=>The Python "stacks" are so tiny that you can forget about them.
=>Allocations almost never occur, unless your program runs
=>astray in recursion.
=>
=>The cost of a call comes from the slightly complicated
=>function setup which is done all the time. Frames already
=>belong to the fastest available Python objects,
=>like, say, dictionaries.

I thought that might be the case, which is the point I was making in
my earlier paragraph. I was just briefly pondering possibilities for
reducing frame memory management time, if it were necessary. Stackless
Python certainly seems to me to be a Good Thing.

Gary Duzan
GTE Laboratories
ANN: Stackless Python 0.2 [ In reply to ]
"Gary D. Duzan" wrote:
...

> =>The cost of a call comes from the slightly complicated
> =>function setup which is done all the time. Frames already
> =>belong to the fastest available Python objects,
> =>like, say, dictionaries.
>
> I thought that might be the case, which is the point I was making in
> my earlier paragraph. I was just briefly pondering possibilities for
> reducing frame memory management time, if it were necessary.

I hoped it would be possible to save most of the
frame initialization in cases where the same function
call appears in the same place again and again.
The barrier is that I would have to keep references
longer than necessary, so I'm not sure whether that
can be done.

BTW, the continuations which I am just testing
are doing a great job of creating hard-to-release
references to frames. They are correct, but quickly
create some cycles. This project is already becoming
very time-expensive :-)
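[The hard-to-release cycles described above were a genuine leak at the time, since Python before 2.0 had no cycle collector and relied on reference counting alone. A toy sketch of the problem, shown with modern CPython's gc module (the Frame class here is a hypothetical stand-in for an interpreter frame held by a continuation):]

```python
import gc
import weakref

class Frame:
    # Toy stand-in for a frame held by a continuation; real frames
    # link to their callers via f_back in much the same way.
    def __init__(self):
        self.back = None

gc.disable()                 # keep automatic collection out of the demo
a, b = Frame(), Frame()
a.back, b.back = b, a        # a reference cycle, as continuations create
alive = weakref.ref(a)
del a, b
print(alive() is not None)   # -> True: refcounting can't free the cycle
gc.collect()                 # the cycle collector releases both objects
print(alive() is None)       # -> True
gc.enable()
```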

cheers - chris

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home