Mailing List Archive

fork()
Hi All,

I've switched from 'threading' to 'fork' (remember the discussion some
days ago) since that gives me real parallel processing. But (of course)
there are other problems when forking a process ;-( :

1. The parent process reads in a dictionary with about 250,000 keys;
this is a sort of static database which never changes.

2. A bidirectional pipe is created for each child process

3. The parent forks

4. The children calculate something for which they have to access the
large dictionary.

5. All the children are able to send pickled objects to the parent (via
the pipe)

All that works fine now (after I spent a long time fiddling with the pipe
and select stuff to synchronize I/O). However, the forking is a waste of
time and memory because all the children get their own large dictionary!
Is there a way to use shared memory in Python? I mean, how can different
processes access (read-only) one object without passing the keys of
the dictionary object through a pipe (the children need _quick_ access
to that dictionary)?
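
For reference, the setup in steps 1-5 can be sketched roughly like this
(a minimal, hypothetical reconstruction: the dictionary contents, the
compute() function, and the one-way child-to-parent pipes are
placeholders, not the original code):

```python
import os
import pickle
import struct

# Stand-in for the 250,000-key static database read in by the parent.
big_dict = {i: i * i for i in range(1000)}

def compute(key):
    # Read-only access to the dictionary inherited across fork().
    return big_dict[key]

def run_child(write_fd, keys):
    with os.fdopen(write_fd, "wb") as w:
        for k in keys:
            payload = pickle.dumps(compute(k))
            # Length-prefix each pickle so the parent can frame messages.
            w.write(struct.pack("!I", len(payload)) + payload)
    os._exit(0)

children = []
for keys in ([1, 2], [3, 4]):          # two children, a few keys each
    r, w = os.pipe()                   # child -> parent channel
    pid = os.fork()
    if pid == 0:                       # in the child
        os.close(r)
        run_child(w, keys)             # never returns
    os.close(w)                        # parent keeps only the read end
    children.append((pid, r))

results = []
for pid, r in children:
    with os.fdopen(r, "rb") as f:
        while True:
            header = f.read(4)
            if not header:             # EOF: child closed its write end
                break
            (n,) = struct.unpack("!I", header)
            results.append(pickle.loads(f.read(n)))
    os.waitpid(pid, 0)

print(sorted(results))  # → [1, 4, 9, 16]
```

Each message is length-prefixed so the parent can frame the pickles
coming off the pipe; the real code additionally used select() to
multiplex the children.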

Thanks a lot (again) for your suggestions,

Arne

--
Arne Mueller
Biomolecular Modelling Laboratory
Imperial Cancer Research Fund
44 Lincoln's Inn Fields
London WC2A 3PX, U.K.
phone : +44-(0)171 2693405 | Fax : +44-(0)171 269 3258
email : a.mueller@icrf.icnet.uk | http://www.icnet.uk/bmm/
fork() [ In reply to ]
Arne Mueller <a.mueller@icrf.icnet.uk> writes:

> All that works fine now (after I spent a long time fiddling with the
> pipe and select stuff to synchronize I/O). However, the forking is a
> waste of time and memory because all the children get their own
> large dictionary!

If you don't change the dictionary, the memory will not be copied.
Most modern Unixes support COW (copy-on-write), which means that the
actual fork()ing does surprisingly little. Only when the memory is
written to is a new copy made.

If the children are modifying the dictionary, that's, well, something
different...
fork() [ In reply to ]
Hrvoje Niksic wrote:
>
> Arne Mueller <a.mueller@icrf.icnet.uk> writes:
>
> > All that works fine now (after I spent a long time fiddling with the
> > pipe and select stuff to synchronize I/O). However, the forking is a
> > waste of time and memory because all the children get their own
> > large dictionary!
>
> If you don't change the dictionary, the memory will not be copied.
> Most modern Unixes support COW (copy-on-write), which means that the
> actual fork()ing does surprisingly little. Only when the memory is
> written to is a new copy made.
>
> If the children are modifying the dictionary, that's, well, something
> different...

Huh, that's really good news! But I don't understand how that works:
when the child changes the dictionary it quickly gets its own copy, so
the operating system is watching the child's activity and copies memory
when it's accessed for writing? However, my children don't change
anything in the dictionary ;-)


Thanks,
Arne
fork() [ In reply to ]
Arne Mueller <a.mueller@icrf.icnet.uk> writes:

> Hrvoje Niksic wrote:
> >
> > Arne Mueller <a.mueller@icrf.icnet.uk> writes:
> >
> > > All that works fine now (after I spent a long time fiddling with the
> > > pipe and select stuff to synchronize I/O). However, the forking is a
> > > waste of time and memory because all the children get their own
> > > large dictionary!
> >
> > If you don't change the dictionary, the memory will not be copied.
> > Most modern Unixes support COW (copy-on-write), which means that the
> > actual fork()ing does surprisingly little. Only when the memory is
> > written to is a new copy made.
> >
> > If the children are modifying the dictionary, that's, well, something
> > different...
>
> Huh, that's really good news! But I don't understand how that works:
> when the child changes the dictionary it quickly gets its own copy, so
> the operating system is watching the child's activity and copies memory
> when it's accessed for writing? However, my children don't change
> anything in the dictionary ;-)

I fear that in reality the memory is copied after all. If they use the
dictionary in any way they are modifying refcounts of at least some of
the objects in the dictionary. From the OS's point of view this counts
as modifying the memory.

The same applies to all python objects that are referenced in the child
or the parent after the fork(). If I'm not mistaken this includes some
object categories that take a fair amount of your program's memory like
identifiers (represented by string objects) and function/code objects.
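
The refcount effect is easy to observe from Python itself. This small
sketch (names invented) shows that merely fetching a value from a
dictionary and holding on to it changes the count stored in the object's
header in memory, which is exactly the write that dirties a
copy-on-write page:

```python
import sys

d = {"k": [1, 2, 3]}
# getrefcount's own argument temporarily adds one reference itself,
# but that offset cancels out in the before/after comparison.
before = sys.getrefcount(d["k"])
v = d["k"]                      # "read-only" lookup, holding the result...
after = sys.getrefcount(d["k"])
print(after - before)           # → 1: the object's header was written to
```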

--
Bernhard Herzog | Sketch, a python based drawing program
herzog@online.de | http://www.online.de/home/sketch/
fork() [ In reply to ]
Arne Mueller <a.mueller@icrf.icnet.uk> writes:

> All that works fine now (after I spent a long time fiddling with the pipe
> and select stuff to synchronize I/O). However, the forking is a waste of
> time and memory because all the children get their own large dictionary!
> Is there a way to use shared memory in Python? I mean, how can different
> processes access (read-only) one object without passing the keys of
> the dictionary object through a pipe (the children need _quick_ access
> to that dictionary)?

If you've got a modern operating system, fork() doesn't copy the
whole address space at once; the memory is shared as long as it is
not modified. Unfortunately, even if you only fetch an object from a
dictionary, Python is writing to previously shared memory behind the
scenes (because the reference counts are updated). This means that the
memory sharing mechanism of the operating system doesn't work.

I hope others will have better suggestions than I do. First of all, you
should be sure that you cannot solve this problem by throwing more RAM
at it. RAM is so cheap these days that it might be too expensive to spend
hours optimizing your program. (This obviously depends on the purpose
of your program.)

My second, more serious suggestion: Put the data contained in the
objects stored in the dictionary into a C struct, write a simple C
implementation of a hash table which stores those structs, and a lookup
function which creates suitable Python objects on the fly. This way,
there are no reference counts being perpetually updated and thus
breaking the copy-on-write scheme, so the main data structure can be
shared efficiently among the processes.
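
A rough Python-flavoured sketch of the same idea (all names and the
two-doubles-per-key record layout are invented for illustration): keep
the bulk data in one flat, immutable buffer so there is one refcount for
the whole blob rather than one per record, and materialize Python
objects only at lookup time:

```python
import struct

# One fixed-size record per key: two doubles, network byte order.
RECORD = struct.Struct("!dd")

def build(table):
    """Pack {key: (a, b)} into an offset index plus one bytes blob."""
    index, chunks = {}, []
    for off, (key, values) in enumerate(sorted(table.items())):
        index[key] = off * RECORD.size
        chunks.append(RECORD.pack(*values))
    return index, b"".join(chunks)

def lookup(index, blob, key):
    """Materialize a fresh tuple from the flat buffer on demand."""
    return RECORD.unpack_from(blob, index[key])

index, blob = build({"a": (1.0, 2.0), "b": (3.5, 4.5)})
print(lookup(index, blob, "b"))  # → (3.5, 4.5)
```

In CPython the blob and the index dict still have refcounts of their
own, so a few pages still get dirtied after fork(), but touching a
couple of objects dirties far fewer pages than touching 250,000.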

Another solution is probably a Python version using the Boehm garbage
collector, which gets rid of the reference counts as well. There are
some patches floating around, but I don't know whether they are suited
to recent Python releases or production use.
fork() [ In reply to ]
Arne Mueller <a.mueller@icrf.icnet.uk> writes:

> > If you don't change the dictionary, the memory will not be copied.
[...]
> Huh, that's really good news! But I don't understand how that works,

It works by MMU (memory management unit) magic. The OS doesn't really
"watch" the system activity -- it's more that when the child writes to
one of the pages in question, a trap is invoked which copies the page
and remaps the child's memory address to point to the copy.

> However my children don't change anything in the dictionary ;-)

Then the memory should be kept. You can test this by running a script
like this:

import os, time

largestring = "x" * 10000000

for i in range(50):
    if os.fork() != 0:
        time.sleep(60)
        os._exit(0)

time.sleep(60)

This creates fifty processes, each of which sleeps a minute and exits,
and their parent does the same. Each process can access a 10M string.
Needless to say, I don't have 500M of virtual memory on my Linux box, so
copy-on-write definitely works here. Also, `free' reports:

{pc-hrvoje}[~]$ free
             total       used       free     shared    buffers     cached
Mem:         63272      62572        700     534736         88       9844
-/+ buffers/cache:      52640      10632
Swap:       120924      22112      98812

Of the 534M of memory shared between the processes (under the "shared"
column), about 500M comes from the 50 Python processes.
fork() [ In reply to ]
Bernhard Herzog <herzog@online.de> writes:

> I fear that in reality the memory is copied after all. If they use
> the dictionary in any way they are modifying refcounts of at least
> some of the objects in the dictionary. From the OS's point of view
> this counts as modifying the memory.

Eek. You're right. Refcounting truly sucks. :-(
fork() [ In reply to ]
Hrvoje Niksic wrote:
>
> Bernhard Herzog <herzog@online.de> writes:
>
> > I fear that in reality the memory is copied after all. If they use
> > the dictionary in any way they are modifying refcounts of at least
> > some of the objects in the dictionary. From the OS's point of view
> > this counts as modifying the memory.
>
> Eek. You're right. Refcounting truly sucks. :-(

Hm, that means as long as I really don't look at the data of the parent,
the data is shared between parent and child. That's funny - it's a
useless feature. As soon as a child references that dictionary it gets
its own copy.

What about shared memory: is it possible to put objects into a shared
memory segment? Most advanced languages can do that.

greetings,

Arne
fork() [ In reply to ]
>>>>> "HN" == Hrvoje Niksic <hniksic@srce.hr> writes:

>> I fear that in reality the memory is copied after all. If they
>> use the dictionary in any way they are modifying refcounts of
>> at least some of the objects in the dictionary. From the OS's
>> point of view this counts as modifying the memory.

HN> Eek. You're right. Refcounting truly sucks. :-(

It needn't, though. IIRC, NeXTSTEP didn't keep the refcounts in the
objects. Also, since most objects in that system were transient and
only had a refcount of 1, such objects didn't have entries in the
object refcount dictionary until their refcounts were increased to 2.

-Barry
fork() [ In reply to ]
Arne Mueller <a.mueller@icrf.icnet.uk> writes:

> Hrvoje Niksic wrote:
> >
> > Bernhard Herzog <herzog@online.de> writes:
> >
> > > I fear that in reality the memory is copied after all. If they use
> > > the dictionary in any way they are modifying refcounts of at least
> > > some of the objects in the dictionary. From the OS's point of view
> > > this counts as modifying the memory.
> >
> > Eek. You're right. Refcounting truly sucks. :-(
>
> Hm, that means as long as I really don't look at the data of the parent,
> the data is shared between parent and child. That's funny - it's a
> useless feature. As soon as a child references that dictionary it gets
> its own copy.

It's still not completely useless; you don't have to explicitly say
which memory pages will be copied.

> What about shared memory: is it possible to put objects into a shared
> memory segment? Most advanced languages can do that.

I don't know if anyone has written a Python wrapper around the POSIX
shared memory stuff. However, you probably don't want/need to do that.
The reason is that there's basically no (portable) way to get malloc()
to allocate new objects in the shared memory, so you can't create Python
objects in the shared memory. So you can probably only access the shared
memory as a special array a la the "array" module.

In which case this buys you little, since if you stored the data in
an "array" module array, the data would be shared automagically.
So that's the approach I recommend:

1. Build your shared data, painfully, in one or more "raw" arrays as provided
by the "array" module.
2. Don't change your arrays after that.
3. Do the fork and access the arrays from all the child processes.

This should work. (I think)
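
Under those assumptions, the three steps might look like this (a
minimal sketch; the numeric payload, the child count, and the sanity
check in the children are invented):

```python
import array
import os

# Step 1: build the shared data once, before forking.
data = array.array("d", (float(i) for i in range(1000)))

# Step 2: never write to it after this point.

# Step 3: fork and read the array from the children.
pids = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        total = sum(data)  # read-only access; pages stay COW-shared
        os._exit(0 if total == sum(range(1000)) else 1)
    pids.append(pid)

statuses = [os.waitpid(p, 0)[1] for p in pids]
print(all(s == 0 for s in statuses))  # → True
```

The array's payload lives in one flat C buffer, so a read in the child
doesn't update any per-element refcounts the way a dict of Python
objects would.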

Greetings,

Stephan
fork() [ In reply to ]
Stephan Houben wrote:
...
> Don't know if anyone wrote a Python wrapper around the POSIX shared
> memory stuff. However, you probably don't want/need to do that. The
> reason is that there's basically no (portable) way to get malloc()
> to allocate new objects in the shared memory, so you can't create
> Python objects in the shared memory. So you can probably only access
> the shared memory as a special array a la the "array" module.

Vladimir Marangozov has a SysV shared memory / semaphore module at
http://sirac.inrialpes.fr/~marangoz/python/shm/.

Like Mark Hammond's Win32 shared memory extension, it uses a file-like
API.

Probably not what's needed in the context of this thread, but handy
nonetheless.

- Gordon
fork() [ In reply to ]
[Bernhard Herzog]
> I fear that in reality the memory is copied after all. If they use
> the dictionary in any way they are modifying refcounts of at least
> some of the objects in the dictionary. From the OS's point of view
> this counts as modifying the memory.

[Hrvoje Niksic]
> Eek. You're right. Refcounting truly sucks. :-(

OTOH, in a straightforward mark-and-sweep GC scheme there will at least be a
"mark bit" in each object header. So under that, or under any form of
compacting GC, the *entire live object space* will get copied, regardless of
whether *you* reference anything in it or not. So if RC truly sucks, what
does that make GC <wink>?

easy-answers-don't-always-work-ly y'rs - tim
fork() [ In reply to ]
Tim Peters (tim_one@email.msn.com) wrote:
: OTOH, in a straightforward mark-and-sweep GC scheme there will at least be a
: "mark bit" in each object header. So under that, or under any form of
: compacting GC, the *entire live object space* will get copied, regardless of
: whether *you* reference anything in it or not. So if RC truly sucks, what
: does that make GC <wink>?

But the reality is that copying memory can in fact speed applications
up! Think about locality of reference after you have copied all live
blocks to the start of memory.

graham

--
Je suis pour le communisme
Je suis pour le socialisme
Je suis pour le capitalisme
Parce que je suis opportuniste
fork() [ In reply to ]
Graham Matthews wrote:
>
> Tim Peters (tim_one@email.msn.com) wrote:
> : OTOH, in a straightforward mark-and-sweep GC scheme there will
> : at least be a "mark bit" in each object header. So under that,
> : or under any form of compacting GC, the *entire live object space*
> : will get copied, regardless of whether *you* reference anything in
> : it or not. So if RC truly sucks, what does that make GC <wink>?
>
> But the reality is that copying memory can in fact speed applications
> up! Think about locality of reference after you have copied all live
> blocks to the start of memory.
>

True enough. At least, it sounds true and promising ;-)
However, locality of reference (LOR) is something quite hard to measure
in practice. One has to tweak the kernel to collect some stats on this,
and even if one achieves LOR improvements on a particular system, say
Solaris, it could result in a LOR degradation on Linux or Windows.
Not to mention that nowadays nobody wants to play this game...

I also thought that Python suffers from very bad LOR, but this is and
will remain speculation (as well as statements regarding RC, GC or
RC+GC) until someone proves that one strategy is better or worse
than the other, in the average case, on the average system.

Graham, you said that you won't contribute a RC+GC scheme to Python, despite
your positive experience. If you change your mind and consider
devoting some spare time to give it a try, I'll devote some spare
time to help you with Python's internals and we'll see whether we
could come up with something viable, which could compete with the
actual RC scheme alone. Does this sound constructive enough? :-)

--
Vladimir MARANGOZOV | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
fork() [ In reply to ]
Vladimir Marangozov wrote:
>
> Graham Matthews wrote:
> >
> > Tim Peters (tim_one@email.msn.com) wrote:
<snip/>

> Graham, you said that you won't contribute a RC+GC scheme to Python, despite
> your positive experience. If you change your mind and consider
> devoting some spare time to give it a try, I'll devote some spare
> time to help you with Python's internals and we'll see whether we
> could come up with something viable, which could compete with the
> actual RC scheme alone. Does this sound constructive enough? :-)

Just an idea here:
If we have knowledge of the internals of every Python structure,
and if there were no hidden internal references to Python
objects, then we could quite easily build a non-pessimistic
GC, without the need to be hardware-specific, scan the stack,
and so on.
You know that I have an (early) Python with no stack.
Would this help?

Let's assume that we use our own allocator. Something which
allows to create a couple of heaps which can be identified.
Now, with every new interpreter incarnation, a new heap
could be associated.
Under the assumptions that
- only known types are garbage collected
- only known (well-behaved) functions are called,
it seems to be safe to me to run a garbage collector
for all these objects which were created in the
current interpreter.
And since stackless Python has very few interpreters
(after the imports, there is almost just one),
very many objects would have this nice property.

I don't know how to handle calls to unknown functions.
Perhaps parameters to such functions must be recorded
in some structure which marks them as "unsafe to collect".

What do you think?

--
Christian Tismer :^) <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home
fork() [ In reply to ]
Graham Matthews wrote:
> But the reality is that copying memory can in fact speed applications
> up! Think about locality of reference after you have copied all live
> blocks to the start of memory.
Vladimir Marangozov (Vladimir.Marangozov@inrialpes.fr) wrote:
: True enough. At least, it sounds true and promising ;-)
: However, locality of reference (LOR) is something quite hard to measure
: in practice. One has to tweak the kernel to collect some stats on this,
: and even if one achieves LOR improvements on a partuclar system, say
: Solaris, it could result in a LOR degradation on Linux or Windows.
: Not to mention that nowadays nobody wants to play this game...

Sure, I agree with all this. I am just saying that it's quite simplistic
nowadays to assume that copying objects when collecting is a bad idea.
It used to be the case that copying was bad - that's why mark-and-sweep
collectors were invented before copying collectors. But on modern CPUs
and modern OSes it's no longer uniformly true.

Vladimir Marangozov (Vladimir.Marangozov@inrialpes.fr) wrote:
: Graham, you said that you won't contribute a RC+GC scheme to Python, despite
: your positive experience. If you change your mind and consider
: devoting some spare time to give it a try, I'll devote some spare
: time to help you with Python's internals and we'll see whether we
: could come up with something viable, which could compete with the
: actual RC scheme alone. Does this sound constructive enough? :-)

Very constructive indeed! I am indeed sorry that I don't have the time
to contribute code for a collector for Python. I think it would be greatly
beneficial. But between working on a language of my own, and working on
my PhD, well there is only a finite amount of time in a week (damn shame
that!).

I was actually wondering if anyone had done any work putting the Boehm
collector under Python. The Boehm collector was designed to run in a
C environment.

graham
--
As you grow up and leave the playground
where you kissed your prince and found your frog
Remember the jester that showed you tears
the script for tears
fork() [ In reply to ]
Graham Matthews wrote:
> ...
> I was actually wondering if anyone had done any work putting the Boehm
> collector under Python. The Boehm collector was designed to run in a
> C environment.
>

FAQ 6.14

--
Vladimir MARANGOZOV | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
fork() [ In reply to ]
[Tim]
> OTOH, in a straightforward mark-and-sweep GC scheme there will
> at least be a "mark bit" in each object header. So under that,
> or under any form of compacting GC, the *entire live object space*
> will get copied, regardless of whether *you* reference anything in
> it or not. So if RC truly sucks, what does that make GC <wink>?

[Graham]
> But the reality is that copying memory can in fact speed applications
> up! Think about locality of reference after you have copied all live
> blocks to the start of memory.

Context: this offshoot of the thread was talking about memory use after a
fork() call in a copy-on-write implementation of fork. So long as
bookkeeping info is stored in object headers, "real GC" is at a real
disadvantage, in that respect, compared to RC. Simply because RC won't
touch something unless *you* do.

LOR is an unrelated issue, and harder to analyze. Yes, copying can speed
applications. It can also slow them. "Although copying garbage collectors
have predominated in the past, recent studies suggest that the choice
between mark-sweep and copying garbage collection may well depend as much on
the behavior of the client program as on the inherent properties of the
garbage collection algorithm" [Jones & Lins, "Garbage Collection", Wiley,
1996]. The hoary old theoretical arguments really aren't worth crap in this
field <0.5 wink>.

modern-hardware-is-too-hard-to-predict-ly y'rs - tim
fork() [ In reply to ]
"Tim Peters" <tim_one@email.msn.com> writes:

|Context: this offshoot of the thread was talking about memory use after a
|fork() call in a copy-on-write implementation of fork. So long as
|bookkeeping info is stored in object headers, "real GC" is at a real
|disadvantage, in that respect, compared to RC. Simply because RC won't
|touch something unless *you* do.

Indeed. Though some "real GC" implementations keep mark bits in separate
pages to avoid copying a bunch of pages in such a situation.

BTW, ref counting modifies objects on mere referencing, so it causes
copying too. Also, it would be a performance pain if Python's global
thread lock were removed someday, since ref counting would then require
mutual exclusion to protect ref count updates. I'm not sure whether this
problem can be solved or not.
matz.
fork() [ In reply to ]
[Graham]
> But the reality is that copying memory can in fact speed applications
> up! Think about locality of reference after you have copied all live
> blocks to the start of memory.
Tim Peters (tim_one@email.msn.com) wrote:
: LOR is an unrelated issue, and harder to analyze. Yes, copying can speed
: applications. It can also slow them. "Although copying garbage collectors
: have predominated in the past, recent studies suggest that the choice
: between mark-sweep and copying garbage collection may well depend as much on
: the behavior of the client program as on the inherent properties of the
: garbage collection algorithm" [Jones & Lins, "Garbage Collection", Wiley,
: 1996]. The hoary old theoretical arguments really aren't worth crap in this
: field <0.5 wink>.

Which "hoary old theoretical arguments" are you talking about, Tim?
I said "copying memory *can* speed applications up". That is an
observable reality. Where are the hoary theoretical arguments? Or
is this just more of the same old bluff-and-bluster arguing?

graham
--
As you grow up and leave the playground
where you kissed your prince and found your frog
Remember the jester that showed you tears
the script for tears
fork() [ In reply to ]
On 9 Jun 1999 18:22:57 GMT, Graham Matthews <graham@sloth.math.uga.edu> wrote:

> observable reality. Where are the hoary theoretical arguments? Or
> is this just more of the same old bluff and bluster arguing.

Uh oh... This is starting to look like comp.lang.perl.misc. :-)

--
John Klassa / Alcatel USA / Raleigh, NC, USA
fork() [ In reply to ]
John Klassa <klassa@aur.alcatel.com> wrote:
: On 9 Jun 1999 18:22:57 GMT, Graham Matthews <graham@sloth.math.uga.edu> wrote:
:
: > observable reality. Where are the hoary theoretical arguments? Or
: > is this just more of the same old bluff and bluster arguing.
:
: Uh oh... This is starting to look like comp.lang.perl.misc. :-)

Yes, some of us have noticed. That may be a main reason why the person
to actually make the final decisions (Guido) often does not follow
these threads.

-Arcege
fork() [ In reply to ]
[Graham]
> Which "hoary old theoretical arguments" are you talking about Tim.

The historical literature is rampant with them. If you feel I was accusing
*you* of making them in this particular leaf of the thread, sorry, that's
just a misunderstanding.

> I said "copying memory *can* speed applications up". That is an
> observable reality.

I agreed. Also said it can slow them down, and gave a pointer to the most
helpful accessible reference I know of, for the benefit of anyone
interested.

> Where are the hoary theoretical arguments?

See the Jones & Lins book for a lucid presentation-- and debunking --of
them.

> Or is this just more of the same old bluff and bluster arguing.

Nope -- still too busy beating my wife, I guess <wink>.

oh-damn-now-i-have-to-get-married-ly y'rs - tim
fork() [ In reply to ]
[Tim]
> Context: this offshoot of the thread was talking about memory
> use after a fork() call in a copy-on-write implementation of fork.
> So long as bookkeeping info is stored in object headers, "real GC"
> is at a real disadvantage, in that respect, compared to RC. Simply
> because RC won't touch something unless *you* do.

[Matz]
> Indeed. Though some "real GC" implementations keep mark bits in separate
> pages to avoid copying a bunch of pages in such a situation.

And, as at least Barry pointed out earlier, some refcounting systems
separate the counts from the objects they're counting, and largely for the
same reason.

> BTW, ref counting modifies objects for mere referencing, it causes
> copying too.

That's actually the observation that *started* "this offshoot of the
thread" -- but as if it were unique to RC, which it isn't.

> Also, it would be performance pain if Python's global thread lock is
> removed someday, since ref counting requires mutual lock to protect
> ref count update. I'm not sure this problem can be solved or not.

Back at Python 1.4, a set of patches was developed (by Greg Stein) that
*did* remove the global lock. This meant fine-grained locking of other
stuff as needed, refcounts included. The interpreter has changed quite a
bit since then, so old numbers aren't quantitatively relevant anymore; but
it was indeed a significant slowdown for single-threaded Python programs.

It's particularly acute for Python because *everything* "is boxed"; nothing
is exempt, and refcounting is never delayed.

OTOH, people who gripe about needing to keep refcounts straight in
extensions really haven't lived until they've wrestled with slamming in
exactly the right macros in exactly the right places to stop threaded
non-conservative GC from finding things in an inconsistent state <0.1 wink>.

what-end-users-don't-see-would-kill-them<wink>-ly y'rs - tim
