Mailing List Archive

Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
On 8/7/20 3:54 PM, Marco Sulla wrote:
> On Fri, 7 Aug 2020 at 19:48, Richard Damon <Richard@damon-family.org> wrote:
>> The difference is that the two languages define 'expression' differently. [...]
> I don't know if this is interesting or pertinent to the topic.
>
> Christian Seberino just expressed a doubt about how a clear separation
> between a statement and an expression is quite desirable in the
> "real" programming world. And I tried to explain it with the
> assignment operation, since a ton of programmers feel very frustrated
> reading other programmers' code with an assignment in an if
> statement. I'm quite sure they thought, as I thought: "What does
> this do?"
> Worse when their program failed and they discovered that they wrote
> `if (a=b)` instead of `if (a==b)`.
>
> I'm just more curious about why Lisp programmers think that it's
> better to not make a hard distinction between statements and
> expressions.

Actually, they might draw a fairly hard distinction between statements
and expressions: a statement is a list that begins with a programmatic
atom, while an expression is a list that begins with an operator atom.

The fact that EVERYTHING is a list makes those not used to the idea see
it all as the same (which I suppose in a way it is).

One side effect of this is that a program isn't just a bunch of
statements in sequence, but is actually a LIST of those statements, so
the whole program becomes effectively a single list.

The really interesting part is that since Lisp programs manipulate lists
as data, and the program is just a list, Lisp programs have the
theoretical ability to edit themselves (assuming the implementation
gives the program access to its own list).

Now, for the general separation of expressions from statements, which
isn't really as applicable to Lisp (since, if I remember right,
assignment doesn't use the = token, so you are less apt to make the
mistake), there are several arguments.

The confusion of assignment for equality comparison is likely one of the
big issues.

Some languages solve the problem by making assignment a special type of
statement, so you can't make the mistake; some of these even use = for
both, so they have to make the distinction.

Some languages (like C) make assignment just a 'normal' operator, and
either just trust the programmer or use conventions to help generate
warnings (either at compile time or in a separate lint phase). People
using these languages will either like the additional power it gives, or
curse the language for the additional opportunities to make mistakes (or
both). Some languages make the mistake harder to make by using some
symbol other than = for assignment, like Python's new := symbol.

One advantage of blurring the line between statements and expressions is
power: putting assignments in the middle of an expression can allow
code to be more compact. Some extensions to C are even trying to take
this further, letting the programmer embed a { } block into an
expression, which is perhaps taking the idea to an extreme.
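The compactness argument can be sketched with Python's := operator (the log lines below are made up purely for illustration):

```python
import re

lines = ["error: disk full", "ok", "error: timeout"]

# Without the walrus operator: the match must be computed on its own line.
for line in lines:
    m = re.match(r"error: (.*)", line)
    if m:
        print(m.group(1))

# With the walrus operator: the assignment happens inside the condition.
for line in lines:
    if m := re.match(r"error: (.*)", line):
        print(m.group(1))
```

Both loops print the same thing; the walrus form just folds the assignment into the test, for better or worse.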

--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
On Sat, 8 Aug 2020 at 00:28, Richard Damon <Richard@damon-family.org> wrote:
> The really interesting part is that since Lisp programs manipulate lists
> as data, and the program is just a list, Lisp programs have the
> theoretical ability to edit themselves (assuming the implementation
> gives the program access to its own list).

This is a bit hard to understand for me.
I know that code can be translated to an AST, that is a tree. It's
quite difficult for me to imagine the code organized as a list. Do you
have some links about it?

On Sat, 8 Aug 2020 at 00:28, Richard Damon <Richard@damon-family.org> wrote:
> One advantage of blurring the line between statements and expressions is
> power: putting assignments in the middle of an expression can allow
> code to be more compact.

I agree with you. I experimented a little with CPython code and I saw
assignments inside if statements. The code without them was less
readable. I also found this example:
https://stackoverflow.com/a/151920/1763602
My only fear is the abuse. How many people will really use the walrus
operator to make code more readable? My fear is that the majority of
programmers will use it out of laziness and because it's "cool" ^^
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
On 8/7/20 6:55 PM, Marco Sulla wrote:
> On Sat, 8 Aug 2020 at 00:28, Richard Damon <Richard@damon-family.org> wrote:
>> The really interesting part is that since Lisp programs manipulate lists
>> as data, and the program is just a list, Lisp programs have the
>> theoretical ability to edit themselves (assuming the implementation
>> gives the program access to its own list).
> This is a bit hard to understand for me.
> I know that code can be translated to an AST, that is a tree. It's
> quite difficult for me to imagine the code organized as a list. Do you
> have some links about it?

Lisp is built on nested lists: lists where some (many) of the nodes are
other lists. (Somewhat like how you might build an AST in Python.)

Generally the first element of the list defines what the list is (the
type of statement or operation), and the rest are the parameters for it.
Many of these will themselves be lists, for subexpressions or dependent
statements.

Perhaps the best option would be to search for the Lisp language and
see how programs are written in it.
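To make the "program is a list" idea concrete in Python terms, here is a hypothetical mini-evaluator (not real Lisp; the operator set is invented for illustration). The "program" is just a nested list whose first element names the operation:

```python
def evaluate(expr):
    # Atoms (numbers) evaluate to themselves.
    if not isinstance(expr, list):
        return expr
    # The first element names the operation; the rest are its arguments.
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

# (* 2 (+ 3 4)) written as a nested Python list:
program = ["*", 2, ["+", 3, 4]]
print(evaluate(program))  # prints 14
```

Because `program` is ordinary list data, the program could also inspect or rewrite it before evaluating it, which is the self-modification point made above.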

>
> On Sat, 8 Aug 2020 at 00:28, Richard Damon <Richard@damon-family.org> wrote:
>> One advantage of blurring the line between statements and expressions is
>> power: putting assignments in the middle of an expression can allow
>> code to be more compact.
> I agree with you. I experimented a little with CPython code and I saw
> assignments inside if statements. The code without them was less
> readable. I also found this example:
> https://stackoverflow.com/a/151920/1763602
> My only fear is the abuse. How many people will really use the walrus
> operator to make code more readable? My fear is that the majority of
> programmers will use it out of laziness and because it's "cool" ^^

There is always the danger that, as you give the programmer more
expressive power, they can use it for 'good', or they can misuse it to
make code harder to read. The question becomes how much you are willing
to trust the programmer, or whether you are just going to give them
enough rope to do themselves in.

--
Richard Damon

Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
>> Readability of programming languages was measured
>> using an objective method, and Python was one of
>> the most readable.

Do you have a source for this?
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
On Sat, 8 Aug 2020 at 03:46, Christian Seberino <cseberino@gmail.com> wrote:
> >> Readability of programming languages was measured
> >> using an objective method, and Python was one of
> >> the most readable.
>
> Do you have a source for this?

This question suggests you have not read my suggestions at all :-D
Anyway, this is one: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3933420/

I do not agree with the entire work, but generally speaking it seems
to me they used a good approach. What really surprises me is the
absence of languages like JS and PHP.
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
Terry Reedy schreef op 7/08/2020 om 22:08:
> On 8/7/2020 11:46 AM, Chris Angelico wrote:
>
>> My point is that doing Fibonacci recursively is arguably more elegant
>> while being materially worse at performance.
>
> This is a common misconception. Linear iteration and tail recursion are
> equivalent. The issue is calculating values once versus multiple times.
> Here is the fast recursion equivalent to the fast iteration.
>
> def fib(n, pair=(1,0)):
>     previous, current = pair
>     if n:
>         return fib(n-1, (current, previous + current))
>     else:
>         return current

Of course, but now you've lost the elegance of the recursive version
being a near-literal translation of the mathematical definition.

It's a gripe I have with many introductory texts on functional
programming. Recursion is super cool, they say; it lets you decompose
problems into smaller problems in an elegant, natural way. But then they
show examples like Fibonacci, where the elegant natural way is a bad
solution, so they introduce accumulators and such, and then Fibonacci
does indeed work much better. But they fail to notice that the new
solution is not elegant and natural at all anymore. It has just become
iteration in recursive disguise.

I'm not saying there is nothing useful in functional programming and the
use of recursion; there most certainly is. But the way many texts
introduce it IMO doesn't help at all to understand the elegance that can
be achieved.
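For what it's worth, memoization is one way to keep the near-literal mathematical definition while fixing the repeated computation; a sketch using functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Near-literal translation of the mathematical definition;
    # the cache removes the exponential recomputation.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # prints 832040, fast despite the naive-looking recursion
```

Whether this counts as "elegant" is debatable, but at least the function body still reads like the mathematical definition rather than iteration in disguise.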

--
"Honest criticism is hard to take, particularly from a relative, a
friend, an acquaintance, or a stranger."
-- Franklin P. Jones

Roel Schroeven

Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
On Tue, Aug 11, 2020 at 5:48 AM Roel Schroeven <roel@roelschroeven.net> wrote:
> I'm not saying there is nothing useful in functional programming and the
> use of recursion; there most certainly is. But the way many texts
> introduce it IMO doesn't help at all to understand the elegance that can
> be achieved.

Indeed. When I'm talking to students about recursion, often the
question "why bother" comes up... but when they finally 'get it', it's
usually because of an example far more elegant than Fibonacci numbers.
One of my favourites is: Given a binary tree, calculate its height.

def height(tree):
    if not tree: return 0
    return max(height(tree.left), height(tree.right)) + 1

THIS is the sort of thing that shows off the beauty of recursion.
Convoluted code with accumulator parameters just shows off that you
can write bad code in any style.

(Though the accumulator parameter form does have its place. Ever
written callback-based asynchronous code that needs to iterate over
something? Such fun.)
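For concreteness, the height function above can be exercised with a minimal tree class (the Node class here is invented for illustration; any object with left/right attributes would do):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def height(tree):
    if not tree: return 0
    return max(height(tree.left), height(tree.right)) + 1

# An unbalanced tree: the left branch is two levels deep, the right is one.
tree = Node(left=Node(left=Node()), right=Node())
print(height(tree))  # prints 3
```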

ChrisA
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
Christian Seberino wrote:
> A beginner I think could learn Lisp much faster than Python.

For talented beginners, Lisp rocks much like Python, in that easy assignments are easy enough to implement. On the high end, Lisp rocks again: Lisp masters are astonishingly productive. In between, beyond pedagogical exercises but short of our Turing Award masterwork, Lisp's extreme flexibility has a significant downside in violating the middle third of the Zen of Python's (PEP 20) guiding principle: "There should be one -- and preferably only one -- obvious way to do it."

Flexibility is good. In a programming language it's great, and Python is super flexible. Lisp is beyond. There are many Lisps, the big two of which are Common Lisp (CL) and Scheme. Common Lisp has been more industrially important, but Scheme out-Lisps CL. As we deal with Python's recent additions, such as "enum", the walrus operator, type hints, and async/await, I remain in awe of [call-with-current-continuation](https://en.wikipedia.org/wiki/Call-with-current-continuation), which in Scheme is abbreviated "call/cc".
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
Op 7/08/20 om 18:45 schreef Chris Angelico:
> On Sat, Aug 8, 2020 at 2:21 AM <2QdxY4RzWzUUiLuE@potatochowder.com> wrote:
>>
>> On 2020-08-07 at 17:55:45 +0200,
>> Marco Sulla <Marco.Sulla.Python@gmail.com> wrote:
>>> @Chris: note that "real" recursion in Python is not possible, since
>>> there's no support for tail recursion. Maybe something similar can be
>>> done using async functions.
>>
>> Python has real recursion. Whether or not there's tail recursion on the
>> inside is an implementation detail.
>
> More specifically: Python has real recursion. Whether or not tail
> recursion is optimized away is an implementation detail.
>
> Tail call optimization (there's no reason to restrict it to recursion
> alone) is something a Python implementation could choose to do, but
> the trouble is that full optimization tends to destroy traceback
> information, so it's often not worth doing.

I don't understand this argument. The traceback information that is
destroyed by this optimization is information that isn't available
anyway if you write the code in an iterative fashion.

So people are advised to rewrite tail-recursive code in an iterative
fashion, which results in less traceback information, and the reason
for not doing tail call optimization is that it destroys traceback
information they wouldn't have anyway after transforming the code into
an iterative version.

> And the cases where
> partial optimization is of value just aren't compelling enough for
> anyone to implement it into CPython, nor any other major
> implementation (to my knowledge). The biggest uses for TCO tend to be
> the situations where recursion is the wrong solution to the problem.

I think writing code that is at heart a state machine would be a lot
easier if Python had TCO. Then each state could be implemented by
a function, and transitioning from one state to the next would be just
calling the next function.

Sure, you can resolve this by writing your state functions trampoline
style, but it forces you into writing some boilerplate that otherwise
wouldn't be necessary.

--
Antoon Pardon.
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
On Sat, Aug 15, 2020 at 10:45 PM Antoon Pardon
<antoon.pardon@rece.vub.ac.be> wrote:
>
> Op 7/08/20 om 18:45 schreef Chris Angelico:
> > On Sat, Aug 8, 2020 at 2:21 AM <2QdxY4RzWzUUiLuE@potatochowder.com> wrote:
> >>
> >> On 2020-08-07 at 17:55:45 +0200,
> >> Marco Sulla <Marco.Sulla.Python@gmail.com> wrote:
> >>> @Chris: note that "real" recursion in Python is not possible, since
> >>> there's no support for tail recursion. Maybe something similar can be
> >>> done using async functions.
> >>
> >> Python has real recursion. Whether or not there's tail recursion on the
> >> inside is an implementation detail.
> >
> > More specifically: Python has real recursion. Whether or not tail
> > recursion is optimized away is an implementation detail.
> >
> > Tail call optimization (there's no reason to restrict it to recursion
> > alone) is something a Python implementation could choose to do, but
> > the trouble is that full optimization tends to destroy traceback
> > information, so it's often not worth doing.
>
> I don't understand this argument. The trace back information that is
> destroyed with this optimization, is information that isn't available
> anyway if you write the code in an iterative fashion.

Your counter-argument applies only to recursion, but TCO applies to
*any* tail call. Consider this:

@some_deco
def spam(n):
    ...
    return spam(n // 2)

Not a recursive tail call and cannot be rewritten as a loop, unless
you know for sure that some_deco returns the original function. But
TCO can still optimize this - by collapsing the stack frames. Which
loses traceback info, unless you deliberately preserve it.

> > And the cases where
> > partial optimization is of value just aren't compelling enough for
> > anyone to implement it into CPython, nor any other major
> > implementation (to my knowledge). The biggest uses for TCO tend to be
> > the situations where recursion is the wrong solution to the problem.
>
> I think writing code that is at heart a state machine would be a lot more
> easy if python would have TCO. Then each state could be implemented by
> a function and transitioning from one state to the next would be just
> calling the next function.

I'm honestly not sure how much you'd gain by writing the transitions
as additional calls. But then, I don't tend to write pure state
machines in Python. If anything, they're "state machines with
resumption" or something, and the additional wrinkles mean it's best
to maintain state in a data structure and maintain code in a function,
instead of trying to do both on the call stack.

ISTM that everyone who begs for tail recursion optimization is trying
to write loops as recursion. Maybe that's why Python has never
bothered - because it's an optimization for an inferior way to write
the same logic. Prove me wrong. Show me something where it actually
couldn't be better written iteratively, yet it still benefits from the
optimization.

ChrisA
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
@Chris: you're quite right, but, I repeat, you can't have real TCO
(asyncio aside):

(venv_3_10) marco@buzz:~$ python
Python 3.10.0a0 (heads/master-dirty:ba18c0b13b, Aug 14 2020, 17:52:45)
[GCC 10.1.1 20200718] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def factorial(n):
...     if n in (1, 2):
...         return n
...     return n * factorial(n-1)
...
>>> factorial(6)
720
>>> factorial(1000)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in factorial
  File "<stdin>", line 4, in factorial
  File "<stdin>", line 4, in factorial
  [Previous line repeated 995 more times]
  File "<stdin>", line 2, in factorial
RecursionError: maximum recursion depth exceeded in comparison

Anyway, tail calls were introduced in Python, but only for asyncio:

"If a Future.set_exception() is called but the Future object is never
awaited on, the exception would never be propagated to the user code.
Enable the debug mode to get the traceback where the task was created"
https://docs.python.org/3.10/library/asyncio-dev.html#detect-never-retrieved-exceptions

So, in theory, nothing prevents Python from having tail call
optimization... because it already has it! The only problem is that
it's applied to asyncio only. IMHO it would not be a big problem to
support generic TCO, and to disable it for debugging purposes when
Python dev mode is enabled.
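For comparison, if the goal is just to compute large factorials rather than to demonstrate recursion, an iterative version sidesteps the recursion limit entirely (named factorial_iter here to avoid clashing with the recursive definition above):

```python
import math

def factorial_iter(n):
    # Plain loop: no call stack growth, so no RecursionError.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Works far past the default recursion limit of ~1000 frames.
assert factorial_iter(1000) == math.factorial(1000)
print(len(str(factorial_iter(1000))))  # prints 2568 (the number of digits)
```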
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
Op 15/08/20 om 15:02 schreef Chris Angelico:
> On Sat, Aug 15, 2020 at 10:45 PM Antoon Pardon
> <antoon.pardon@rece.vub.ac.be> wrote:
>>
>>
>> I don't understand this argument. The trace back information that is
>> destroyed with this optimization, is information that isn't available
>> anyway if you write the code in an iterative fashion.
>
> Your counter-argument applies only to recursion, but TCO applies to
> *any* tail call. Consider this:
>
> @some_deco
> def spam(n):
>     ...
>     return spam(n // 2)
>
> Not a recursive tail call and cannot be rewritten as a loop, unless
> you know for sure that some_deco returns the original function. But
> TCO can still optimize this - by collapsing the stack frames. Which
> loses traceback info, unless you deliberately preserve it.

And how often does this kind of situation come up and destroy important
traceback information? Sure, you can come up with artificial examples
like the one above, but if that code gets into trouble because you run
into the recursion limit, you will still have to transform it into a
loop somehow.

>> I think writing code that is at heart a state machine would be a lot more
>> easy if python would have TCO. Then each state could be implemented by
>> a function and transitioning from one state to the next would be just
>> calling the next function.
>
> I'm honestly not sure how much you'd gain by writing the transitions
> as additional calls. But then, I don't tend to write pure state
> machines in Python. If anything, they're "state machines with
> resumption" or something, and the additional wrinkles mean it's best
> to maintain state in a data structure and maintain code in a function,
> instead of trying to do both on the call stack.

Why are you talking about a stack?

I don't know what best suits your code but my code often enough doesn't
have enough wrinkles to bother with extra data structures.

>
> ISTM that everyone who begs for tail recursion optimization is trying
> to write loops as recursion. Maybe that's why Python has never
> bothered - because it's an optimization for an inferior way to write
> the same logic. Prove me wrong. Show me something where it actually
> couldn't be better written iteratively, yet it still benefits from the
> optimization.

What standard do you use to measure what is inferior or superior?
Sometimes a state machine is most easily defined as a number of mutually
recursive functions that just tail-call each other. So with TCO I could
just write it like the following.

def state1(...):
    ...
    if ready_for_2:
        return state2(...)
    elif ready_for_7:
        return state7(...)
    elif finished:
        return result

def state2(...):
    ...

and you just call it like:

result = state1(...)

Without TCO I am forced to write it as follows:

def state1(...):
    ...
    if ready_for_2:
        return state2, (...)  # notice the comma between function and args
    elif ready_for_7:
        return state7, (...)
    elif finished:
        return None, result

def state2(...):
    ...

and then call it like:

state = state0
args = ...
while state is not None:
    state, args = state(*args)
result = args

I really don't see what I gain by having this loop, or what I could gain
from some extra data structure.
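The trampoline sketch above can be made runnable; a minimal example with an invented single-state "countdown" machine, just to show the mechanism:

```python
def countdown(n):
    # Each 'state' returns (next_state, args) instead of making a tail call.
    if n == 0:
        return None, "done"
    return countdown, (n - 1,)

def trampoline(state, *args):
    # Drive the machine: loop until a state returns (None, result).
    while state is not None:
        state, args = state(*args)
    return args

print(trampoline(countdown, 100000))  # prints 'done', far past the recursion limit
```

A directly recursive countdown would hit RecursionError around 1000; the trampoline keeps the stack flat at the cost of exactly the boilerplate Antoon describes.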

--
Antoon Pardon.
Re: How explain why Python is easier/nicer than Lisp which has a simpler grammar/syntax?
A few comments come to mind about this discussion about TCO.

First, TCO, Tail Call Optimization, is, as the name says, an
optimization.

Optimizations are generally some OPTIONAL improvement in the method of
executing the code that doesn't alter its DEFINED meaning.

First big point: they are optional. Yes, some languages may define
certain circumstances where there is a promise that a given
optimization will be done, but this is unusual. Some things that might
seem like optimizations really aren't, but are defined behaviors, like
the short-circuiting operators 'and' and 'or', where the fact that the
second operand isn't always evaluated could be thought of as an
optimization.

Second, optimizations are generally only allowed to be performed if
they don't change any defined behavior of the program. I am not so sure
that is possible for TCO in Python.

for example, given:

def foo(x):
    if x == 0: return 1
    else: return foo(x-1)

def bar(x):
    if x == 0: return 2
    else: return bar(x-1)

t = foo
foo = bar
bar = t

foo(1)


By the twiddling done at the end, we have changed the self
tail-recursive functions into mutually tail-recursive functions. The
fact that we can do this says that when compiling foo and bar into
byte-code, the recursive call to foo can't just automatically jump to
the beginning of the current function, but needs to look up the name and
enter via a possibly new operation, something like a tail call that
becomes more of a jump, as it doesn't create a new stack frame but
somehow replaces the current frame with what will be the new frame,
binding the 'new x' to the old 'x-1'.

Further, because Python has defined things like tracebacks, the presence
of TCO is observable, and thus violates one of the basic guidelines of
an optimization: that it shouldn't change defined behavior.

In my mind, this says that for Python to properly support TCO, it may
need to make it an explicit request, or at least define specifically
when it will be done and when not.

Perhaps it could be defined that a return statement whose top-level
expression is a function call becomes an optimized tail call, ALWAYS (at
least as of that version of Python), or maybe some sort of flag needs to
be on the statement to avoid making it a backwards-incompatible change.
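Such an explicit request can even be prototyped today in pure Python with a trampoline-style decorator (a well-known technique, sketched here with invented names; this is an illustration, not a proposal for CPython):

```python
class _Recurse:
    # Sentinel object carrying the arguments for the next 'tail call'.
    def __init__(self, *args):
        self.args = args

def tail_recursive(func):
    # Explicit opt-in: the function returns _Recurse(...) instead of
    # calling itself, and this wrapper turns that into a loop, so the
    # stack never grows and no traceback frames are silently lost.
    def wrapper(*args):
        while True:
            result = func(*args)
            if isinstance(result, _Recurse):
                args = result.args
            else:
                return result
    return wrapper

@tail_recursive
def factorial(n, acc=1):
    if n <= 1:
        return acc
    return _Recurse(n - 1, acc * n)

print(factorial(5))  # prints 120; factorial(10000) also works, no RecursionError
```

Because the programmer writes `return _Recurse(...)` explicitly, the behavior change is opted into at each call site, which is exactly the kind of explicit marking suggested above.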

