Mailing List Archive

Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
Hello,

On Wed, 13 Jan 2021 05:04:36 -0000
"Jim J. Jewett" <jimjjewett@gmail.com> wrote:

> Paul Sokolovsky wrote:
> > Ok, let's take "module attribute" as an example. Why do you think
> > there's anything wrong with this code:
> > ======
> > import config
> > from .types import *
> > if config.SUPPORT_BIGINT:
> >     var: bigint = 1
> > else:
> >     var: int64 = 1
>
> "Wrong" is too strong, but it would be better as
>
> mybigint = bigint if config.SUPPORT_BIGINT else int64
> ...
> var:mybigint = 1

What's the explanation of why the above is better?

It seems the following is ok with PEP649:

if config.LAYOUT_INT:
    @dataclass
    class MyData:
        val: int
else:
    @dataclass
    class MyData:
        val: float


So, how do we explain to people that using a normal "if" statement is ok when
defining classes/dataclasses, but suddenly not ok when defining just
variables, where they should switch to the "if" expression instead?

> so asking people to rewrite it that way over the course of a major
> release is probably an acceptable price.

But why the haste to ask people to rewrite their code? Why not start by
saying that PEP649 is not backward compatible, and ask it to explain
why it has pretty arbitrary limitations and discrepancies like the above?
Then ask how it could achieve backward compatibility? And that way is
obvious - the smart code objects which PEP649 creates should store
annotations just like PEP563 does, in a serialized form. Then those
smart code objects would deserialize and evaluate them on access. They
may even cache the end result.
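
As a rough illustration only (this is nothing like CPython internals, and
the class name is made up), the kind of lazy, cached resolution described
here could look like:

```
class LazyAnnotations:
    """Holds annotations in serialized (string) form and evaluates them
    on demand, caching the result."""

    def __init__(self, serialized, globalns):
        self._serialized = serialized       # e.g. {"var": "bigint"} as strings
        self._globalns = globalns
        self._cache = None

    def resolve(self):
        if self._cache is None:
            self._cache = {name: eval(expr, self._globalns)
                           for name, expr in self._serialized.items()}
        return self._cache


lazy = LazyAnnotations({"var": "int"}, globals())
print(lazy.resolve())                       # {'var': <class 'int'>}
```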

But wait, PEP563 already has all that! It provides a public API to get
annotations, typing.get_type_hints(), which already does all the
deserialization (maybe it doesn't do caching - *yet*), and effectively
treats __annotations__ as an implementation detail. Clearly, the
format of the information stored there already depends on the particular
CPython version and, if you believe a thread running in parallel, is
going to keep changing going forward.

Seen like that, PEP649 is just a quest to make __annotations__ the
"public API", instead of the public API that is already defined - something
__annotations__ already can't be, as its format already varies widely
(and will likely keep varying going forward). And while questing for
that elusive goal, it even adds arbitrary restrictions on the usage of
annotations which were never there before, truly breaking backward
compatibility and some existing annotation usages.


--
Best regards,
Paul mailto:pmiscml@gmail.com
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/PWMEB3LWM6WMEEA5ZTZUPA3JHRLDSF5R/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 1/11/21 6:34 PM, Paul Bryan wrote:
> On Mon, 2021-01-11 at 17:56 -0800, Larry Hastings wrote:
>
>> On 1/11/21 5:02 PM, Paul Bryan wrote:
>>
>>> I'm probably naive, but is there a reason that one could not just
>>> store a callable in __annotations__, and use the descriptor to
>>> resolve it to a dictionary and store it when it is accessed? It
>>> would be one less dunder in the Python data model.
>>
>> That would work, but I think the API is a bit of a code smell. 
>> __annotations__ would no longer be stable:
>>
>>> a.__annotations__ = o
>>> assert a.__annotations__ == o
>>>
>> Would that assert fail?  It depends on what type(o) is, which is
>> surprising.
>>
>
> Equally surprising?:
>
> a.__co_annotations__ = o
> a.__annotations__
> assert a.__co_annotations__ == o


I've ruminated about this a bit over the past few days, and I finally
realized exactly why, yes, I think behavior is more surprising.  It's
because __annotations__ is now 12 years old (!), and never in that
entire time has it silently changed its value. It's always been
completely stable, and we have twelve years' worth of installed base
that may rely on that assumption.  In comparison, __co_annotations__ is
a new attribute.  While it's also surprising that __co_annotations__ can
be automatically unset, at least this would be a documented part of its
behavior from day 1.
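
To make the instability concrete, here is a toy, pure-Python model of
storing a callable in __annotations__ itself - not CPython's real machinery,
and the names are made up, just an illustration of why the value you store
need not be the value you read back:

```
class Annotated:
    """Toy model: __annotations__ may hold a callable, resolved on access."""

    def __init__(self):
        self._ann = None

    @property
    def __annotations__(self):
        if callable(self._ann):
            self._ann = self._ann()        # silently replaced on first access
        return self._ann

    @__annotations__.setter
    def __annotations__(self, value):
        self._ann = value


o = lambda: {"a": int}
a = Annotated()
a.__annotations__ = o
print(a.__annotations__ == o)              # False: it now holds {'a': <class 'int'>}
```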

Relatedly, __co_annotations__ is behaving somewhat like a cached value,
in that cached values get deleted when they're out-of-date.  (An
observation that may provide some guidance if we decide to rename
__co_annotations__.)  This idiom may be familiar to the user--unlike
your proposed semantics, which I don't recall ever seeing used in an API.

I admit it's only a small difference between what you proposed and what
I propose, but in the end I definitely prefer my approach.

Cheers,


//arry/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
OK. Makes sense to think of __annotations__ as being the location of
the final, stable, "affixed" type hints.

I wonder if then the __co_annotations__ call and overwriting of
__annotations__ should be explicitly caused by a call to get_type_hints
instead of (mysteriously) occurring on an attempt to getattr
__annotations__. I know this changes the descriptor behavior you
documented, but at least it would occur explicitly in a function call
and may be easier for developers to reason about?  It would also
address my other question of trying to access __annotations__, only to
be confronted with an exception raised within __co_annotations__.


On Fri, 2021-01-15 at 09:47 -0800, Larry Hastings wrote:
>
> On 1/11/21 6:34 PM, Paul Bryan wrote:
>
> On Mon, 2021-01-11 at 17:56 -0800, Larry Hastings wrote:
>
>
> > On 1/11/21 5:02 PM, Paul Bryan wrote:
> >
>
> >
> > > I'm probably naive, but is there a reason that one could not just
> > > store a callable in __annotations__, and use the descriptor to
> > > resolve it to a dictionary and store it when it is accessed? It
> > > would be one less dunder in the Python data model.
> > That would work, but I think the API is a bit of a code smell. 
> > __annotations__ would no longer be stable:
> >
> > > a.__annotations__ = o
> > > assert a.__annotations__ == o
> > Would that assert fail?  It depends on what type(o) is, which is
> > surprising.
>
> Equally surprising?:
>
> a.__co_annotations__ = o
> a.__annotations__
> assert a.__co_annotations__ == o
>
> I've ruminated about this a bit over the past few days, and I finally
> realized exactly why, yes, I think your proposed behavior is more surprising.  It's
> because __annotations__ is now 12 years old (!), and never in that
> entire time has it silently changed its value.  It's always been
> completely stable, and we have twelve years' worth of installed base
> that may rely on that assumption.  In comparison, __co_annotations__
> is a new attribute.  While it's also surprising that
> __co_annotations__ can be automatically unset, at least this would be
> a documented part of its behavior from day 1.
> Relatedly, __co_annotations__ is behaving somewhat like a cached
> value, in that cached values get deleted when they're out-of-date. 
> (An observation that may provide some guidance if we decide to rename
> __co_annotations__.)  This idiom may be familiar to the user--unlike
> your proposed semantics, which I don't recall ever seeing used in an
> API.
> I admit it's only a small difference between what you proposed and
> what I propose, but in the end I definitely prefer my approach.
> Cheers,
>
> /arry
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
Sorry it took me 3+ days to reply--I had a lot to think about here.  But
I have good things to report!


On 1/11/21 8:42 PM, Guido van Rossum wrote:
> On Mon, Jan 11, 2021 at 1:20 PM Larry Hastings <larry@hastings.org
> <mailto:larry@hastings.org>> wrote:
>
> PEP 563 states:
>
> For code that uses type hints, the typing.get_type_hints(obj,
> globalns=None, localns=None) function correctly evaluates
> expressions back from its string form.
>
> So, if you are passing in a localns argument that isn't None,
> okay, but you're not using them "correctly" according to the
> language.  Also, this usage won't be compatible with static type
> checkers.
>
> I think you're misreading PEP 563 here. The mention of globalns=None,
> localns=None refers to the fact that these parameters have defaults,
> not that you must pass None. Note that the next paragraph in that PEP
> mentions eval(ann, globals, locals) -- it doesn't say eval(ann, {}, {}).

I think that's misleading, then.  The passage is telling you how to
"correctly evaluate[s] expressions", and how I read it was, it's telling
me I have to supply globalns=None and localns=None for it to work
correctly--which, I had to discover on my own, were the default values. 
I don't understand why PEP 563 feels compelled to define a function that
it's not introducing, and in fact had already shipped with Python two
versions ago.


> Later in that same section, PEP 563 points out a problem with
> annotations that reference class-scoped variables, and claims that the
> implementation would run into problems because methods can't "see" the
> class scope. This is indeed a problem for PEP 563, but *you* can
> easily generate correct code, assuming the containing class exists in
> the global scope (and your solution requires that anyway). So in this case
> ```
> class Outer:
>     class Inner:
>        ...
>     def method(self, a: Inner, b: Outer) -> None:
>         ...
> ```
> The generated code for the `__annotations__` property could just have
> a reference to `Outer.Inner` for such cases:
> ```
> def __annotations__():
>     return {"a": Outer.Inner, "b": Outer, "return": None}
> ```

This suggestion was a revelation for me.  Previously, a combination of
bad experiences early on when hacking on compile and symtable, and my
misunderstanding of exactly what was being asserted in the November 2017
thread, led me to believe that all I could support was globals.  But
I've been turning this over in my head for several days now, and I
suspect I can support... just about anything.


I can name five name resolution scenarios I might encounter. I'll
discuss them below, in increasing order of difficulty.


*First* is references to globals / builtins.  That's already working,
it's obvious how it works, and I need not elaborate further.


*Second* is local variables in an enclosing function scope:

def outer_fn():
    class C: pass
    def inner_fn(a:C=None): pass
    return inner_fn

As you pointed out elsewhere in un-quoted text, I could make the
annotation a closure, so it could retain a reference to the value of
(what is from its perspective) the free variable "C".


*Third* is local variables in an enclosing class scope, as you describe
above:

class OuterCls:
    class InnerCls:
        def method(a:InnerCls=None): pass

If I understand what you're suggesting, I could notice inside the
compiler that Inner is being defined in a class scope, walk up the
enclosing scopes until I hit the outermost class, then reconstruct the
chain of pulling out attributes until it resolves globally. Thus I'd
rewrite this example to:

class OuterCls:
    class InnerCls:
        def method(a:OuterCls.InnerCls=None): pass

We've turned the local reference into a global reference, and we already
know globals work fine.
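
By hand, the rewritten lookup is just an ordinary qualified reference through
the global OuterCls - again, the synthesized function below is only a sketch:

```
class OuterCls:
    class InnerCls:
        def method(a=None): pass


def __co_annotations__():
    return {"a": OuterCls.InnerCls}        # resolves via the global OuterCls

print(__co_annotations__())                # {'a': <class '__main__.OuterCls.InnerCls'>}
```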


*Fourth* is local variables in an enclosing class scope, which are
themselves local variables in an enclosing function scope:

def outerfn():
    class OuterCls:
        class InnerCls:
            def method(a:InnerCls=None): pass
    return OuterCls.InnerCls

Even this is solvable, I just need to combine the "second" and "third"
approaches above.  I walk up the enclosing scopes to find the outermost
class scope, and if that's a function scope, I create a closure and
retain a reference to /that/ free variable.  Thus this would turn into

def outerfn():
    class OuterCls:
        class InnerCls:
            def method(a:OuterCls.InnerCls=None): pass

and method.__co_annotations__ would reference the free variable
"OuterCls" defined in outerfn.


*Fifth* is the nasty one.  Note that so far every definition we've
referred to in an annotation has been /before/ the definition of the
annotation.  What if we want to refer to something defined /after/ the
annotation?

def outerfn():
    class OuterCls:
        class InnerCls:
            def method(a:zebra=None): pass
            ...

We haven't seen the definition of "zebra" yet, so we don't know what
approach to take.  It could be any of the previous four scenarios.  What
do we do?

This is solvable too: we simply delay the compilation of
__co_annotations__ code objects until the very last possible moment. 
First, at the time we bind the class or function, we generate a stub
__co_annotations__ object, just to give the compiler what it expects. 
The compiler inserts it into the const table for the enclosing construct
(function / class / module), and we remember what index it went into. 
Then, after we've finished processing the entire AST tree for this
module, but before we exit the compiler, we reconstruct the required
context for evaluating each __co_annotations__ function--the nested
chain of symbol tables, the compiler blocks if needed, etc--and evaluate
the annotations for real.  We assemble the correct __co_annotations__
code object and overwrite the stub in the const table with this
now-correct value.

I can't think of any more scenarios.  So, I think I can handle basically
anything!


However, there are two scenarios where the behavior of evaluations will
change in a way the user might find surprising. The first is when they
redefine a variable used in an annotation:

x = str
def fn(a:x="345"):  pass
x = int

With stock semantics, the annotation to "a" will be "str".  With PEP 563
or my PEP, the annotation to "a" will be "int".  (It gets even more
exciting if you said "del x".)
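
The PEP 563 half of that is easy to check today (stock semantics can't be
shown in the same module, since the __future__ import changes the whole
file); the evaluated annotation reflects the final value of x:

```
from __future__ import annotations
import typing

x = str
def fn(a: x = "345"): pass
x = int

print(fn.__annotations__)          # {'a': 'x'} -- the source text is stored
print(typing.get_type_hints(fn))   # {'a': <class 'int'>} -- evaluated against the final x
```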

Similarly, delaying the annotations so that we make everything visible
means defining variables with the same name in multiple scopes may lead
to surprising behavior.

x = str
class Outer:
    def method(a:x="345"):  pass
    x = int

Again, stock gets you an annotation of "str", but PEP 563 and my PEP
get you "int", because they'll see the /final/ result of evaluating the
body of Outer.

Sadly this is the price you pay for delayed evaluation of annotations. 
Delaying the evaluation of annotations is the goal, and the whole point
is to make changes, observable by the user, in how annotations are
evaluated.  All we can do is document these behaviors and hope our users
forgive us.


I think this is a vast improvement over the first draft of my PEP, and
assuming nobody points out major flaws in this approach (and,
preferably, at least a little encouragement), I plan to redesign my
prototype along these lines.  (Though not right away--I want to take a
break and attend to some other projects first.)


Thanks for the mind-blowing suggestions, Guido!  I must say, you're
pretty good at this Python stuff.


Cheers,


//arry/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 1/15/21 10:12 AM, Paul Bryan wrote:
> I wonder if then the __co_annotations__ call and overwriting of
> __annotations__ should be explicitly caused by a call to get_type_hints
> instead of (mysteriously) occurring on an attempt to getattr
> __annotations__.


I would say: absolutely not.  While all "type hints" are annotations,
not all annotations are "type hints".  As mentioned previously in this
thread, typing.get_type_hints() is opinionated in ways that users of
annotations may not want.  And personally I bristle at the idea of
gating a language feature behind a library function.

Besides, most users will never know or care about __co_annotations__. 
If you're not even aware that it exists, it's not mysterious ;-)


Cheers,


//arry/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On Fri, Jan 15, 2021 at 10:53 AM Larry Hastings <larry@hastings.org> wrote:

>
> Sorry it took me 3+ days to reply--I had a lot to think about here. But I
> have good things to report!
>
>
> On 1/11/21 8:42 PM, Guido van Rossum wrote:
>
> On Mon, Jan 11, 2021 at 1:20 PM Larry Hastings <larry@hastings.org> wrote:
>
>> PEP 563 states:
>>
>> For code that uses type hints, the typing.get_type_hints(obj,
>> globalns=None, localns=None) function correctly evaluates expressions back
>> from its string form.
>>
>> So, if you are passing in a localns argument that isn't None, okay, but
>> you're not using them "correctly" according to the language. Also, this
>> usage won't be compatible with static type checkers.
>>
> I think you're misreading PEP 563 here. The mention of globalns=None,
> localns=None refers to the fact that these parameters have defaults, not
> that you must pass None. Note that the next paragraph in that PEP mentions
> eval(ann, globals, locals) -- it doesn't say eval(ann, {}, {}).
>
> I think that's misleading, then. The passage is telling you how to
> "correctly evaluate[s] expressions", and how I read it was, it's telling me
> I have to supply globalns=None and localns=None for it to work
> correctly--which, I had to discover on my own, were the default values. I
> don't understand why PEP 563 feels compelled to define a function that it's
> not introducing, and in fact had already shipped with Python two versions
> ago.
>

I suppose PEP 563 is ambiguous because on the one hand global symbols are
the only things that work out of the box, on the other hand you can make
other things work by passing the right scope (and there's lots of code now
that does so), and on the third hand, it claims that get_type_hints() adds
the class scope, which nobody noticed or implemented until this week
(there's a PR, can't recall the number).

But I think all this is irrelevant given what comes below.

>
> Later in that same section, PEP 563 points out a problem with annotations
> that reference class-scoped variables, and claims that the implementation
> would run into problems because methods can't "see" the class scope. This
> is indeed a problem for PEP 563, but *you* can easily generate correct
> code, assuming the containing class exists in the global scope (and your
> solution requires that anyway). So in this case
> ```
> class Outer:
>     class Inner:
>         ...
>     def method(self, a: Inner, b: Outer) -> None:
>         ...
> ```
> The generated code for the `__annotations__` property could just have a
> reference to `Outer.Inner` for such cases:
> ```
> def __annotations__():
>     return {"a": Outer.Inner, "b": Outer, "return": None}
> ```
>
> This suggestion was a revelation for me. Previously, a combination of bad
> experiences early on when hacking on compile and symtable, and my
> misunderstanding of exactly what was being asserted in the November 2017
> thread, led me to believe that all I could support was globals. But I've
> been turning this over in my head for several days now, and I suspect I can
> support... just about anything.
>
>
> I can name five name resolution scenarios I might encounter. I'll discuss
> them below, in increasing order of difficulty.
>
>
> *First* is references to globals / builtins. That's already working,
> it's obvious how it works, and I need not elaborate further.
>

Yup.

>
> *Second* is local variables in an enclosing function scope:
>
> def outer_fn():
>     class C: pass
>     def inner_fn(a:C=None): pass
>     return inner_fn
>
> As you pointed out elsewhere in un-quoted text, I could make the
> annotation a closure, so it could retain a reference to the value of (what
> is from its perspective) the free variable "C".
>

Yup.

>
> *Third* is local variables in an enclosing class scope, as you describe
> above:
>
> class OuterCls:
>     class InnerCls:
>         def method(a:InnerCls=None): pass
>
> If I understand what you're suggesting, I could notice inside the compiler
> that Inner is being defined in a class scope, walk up the enclosing scopes
> until I hit the outermost class, then reconstruct the chain of pulling out
> attributes until it resolves globally. Thus I'd rewrite this example to:
>
> class OuterCls:
>     class InnerCls:
>         def method(a:OuterCls.InnerCls=None): pass
>
> We've turned the local reference into a global reference, and we already
> know globals work fine.
>

I think this is going too far. A static method defined in InnerCls does not
see InnerCls (even after the class definitions are complete). E.g.
```
class Outer:
    class Inner:
        @staticmethod
        def foo(): return Inner
```
If you then call Outer.Inner.foo() you get "NameError: name 'Inner' is not
defined".


>
> *Fourth* is local variables in an enclosing class scope, which are
> themselves local variables in an enclosing function scope:
>
> def outerfn():
>     class OuterCls:
>         class InnerCls:
>             def method(a:InnerCls=None): pass
>     return OuterCls.InnerCls
>
> Even this is solvable, I just need to combine the "second" and "third"
> approaches above. I walk up the enclosing scopes to find the outermost
> class scope, and if that's a function scope, I create a closure and retain
> a reference to *that* free variable. Thus this would turn into
>
> def outerfn():
>     class OuterCls:
>         class InnerCls:
>             def method(a:OuterCls.InnerCls=None): pass
>
> and method.__co_annotations__ would reference the free variable "OuterCls"
> defined in outerfn.
>

Probably also not needed.

>
> *Fifth* is the nasty one. Note that so far every definition we've
> referred to in an annotation has been *before* the definition of the
> annotation. What if we want to refer to something defined *after* the
> annotation?
>
> def outerfn():
>     class OuterCls:
>         class InnerCls:
>             def method(a:zebra=None): pass
>             ...
>
> We haven't seen the definition of "zebra" yet, so we don't know what
> approach to take. It could be any of the previous four scenarios. What do
> we do?
>

If you agree with me that (3) and (4) are unnecessary (or even
undesirable), the options here are either that zebra is a local in
outerfn() (then just make it a closure), and if it isn't you should treat
it as a global.


> This is solvable too: we simply delay the compilation of
> __co_annotations__ code objects until the very last possible moment.
> First, at the time we bind the class or function, we generate a stub
> __co_annotations__ object, just to give the compiler what it expects. The
> compiler inserts it into the const table for the enclosing construct
> (function / class / module), and we remember what index it went into.
> Then, after we've finished processing the entire AST tree for this module,
> but before we exit the compiler, we reconstruct the required context for
> evaluating each __co_annotations__ function--the nested chain of symbol
> tables, the compiler blocks if needed, etc--and evaluate the annotations
> for real. We assemble the correct __co_annotations__ code object and
> overwrite the stub in the const table with this now-correct value.
>
> I can't think of any more scenarios. So, I think I can handle basically
> anything!
>
>
> However, there are two scenarios where the behavior of evaluations will
> change in a way the user might find surprising. The first is when they
> redefine a variable used in an annotation:
>
> x = str
> def fn(a:x="345"): pass
> x = int
>
> With stock semantics, the annotation to "a" will be "str". With PEP 563
> or my PEP, the annotation to "a" will be "int". (It gets even more
> exciting if you said "del x".)
>

This falls under the Garbage in, Garbage out principle. Mypy doesn't even
let you do this. Another type checker which is easy to install, pyright,
treats it as str. I wouldn't worry too much about it. If you strike the
first definition of x, then pyright complains and mypy treats it as int.


> Similarly, delaying the annotations so that we make everything visible
> means defining variables with the same name in multiple scopes may lead to
> surprising behavior.
>
> x = str
> class Outer:
>     def method(a:x="345"): pass
>     x = int
>
> Again, stock gets you an annotation of "str", but PEP 563 and my PEP get
> you "int", because they'll see the *final* result of evaluating the body
> of Outer.
>
> Sadly this is the price you pay for delayed evaluation of annotations.
> Delaying the evaluation of annotations is the goal, and the whole point is
> to make changes, observable by the user, in how annotations are evaluated.
> All we can do is document these behaviors and hope our users forgive us.
>

Agreed.

>
> I think this is a vast improvement over the first draft of my PEP, and
> assuming nobody points out major flaws in this approach (and, preferably,
> at least a little encouragement), I plan to redesign my prototype along
> these lines. (Though not right away--I want to take a break and attend to
> some other projects first.)
>
>
> Thanks for the mind-blowing suggestions, Guido! I must say, you're pretty
> good at this Python stuff.
>

You're not so bad yourself -- without your wakeup call we would have
immortalized PEP 563's limitations.


--
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*
<http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
I knew I was missing something. Agree, annotations are not necessarily
type hints.

On Fri, 2021-01-15 at 10:56 -0800, Larry Hastings wrote:
>
> On 1/15/21 10:12 AM, Paul Bryan wrote:
>
> > I wonder if then the __co_annotations__ call and overwriting of
> > __annotations__ should be explicitly caused by a call to get_type_hints
> > instead of (mysteriously) occurring on an attempt to getattr
> > __annotations__.
>
> I would say: absolutely not.  While all "type hints" are annotations,
> not all annotations are "type hints".  As mentioned previously in
> this thread, typing.get_type_hints() is opinionated in ways that
> users of annotations may not want.  And personally I bristle at the
> idea of gating a language feature behind a library function.
> Besides, most users will never know or care about
> __co_annotations__.  If you're not even aware that it exists, it's
> not mysterious ;-)
>
> Cheers,
>
> /arry
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 16/01/21 7:56 am, Larry Hastings wrote:
>
> As mentioned previously in this
> thread, typing.get_type_hints() is opinionated in ways that users of
> annotations may not want.

This brings us back to my idea of introducing a new
annotations() function to hide the details. It wouldn't
be the same as get_type_hints(), since it wouldn't make
any assumptions about what the annotations mean.

--
Greg
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/OHMQLQXGJDZLYTACJBGZIPJV45H2LF5H/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
Would annotations() just access the dunder, like other builtins (and
then result in the descriptor resolving __co_annotations__ as
proposed), or would calling it be required to actually resolve
__co_annotations__? I think it should probably be the former.

On Sat, 2021-01-16 at 12:29 +1300, Greg Ewing wrote:
> On 16/01/21 7:56 am, Larry Hastings wrote:
> >
> > As mentioned previously in this
> > thread, typing.get_type_hints() is opinionated in ways that users
> > of
> > annotations may not want.
>
> This brings us back to my idea of introducing a new
> annotations() function to hide the details. It wouldn't
> be the same as get_type_hints(), since it wouldn't make
> any assumptions about what the annotations mean.
>
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 1/15/21 3:29 PM, Greg Ewing wrote:
> On 16/01/21 7:56 am, Larry Hastings wrote:
>>
>> As mentioned previously in this thread, typing.get_type_hints() is
>> opinionated in ways that users of annotations may not want.
>
> This brings us back to my idea of introducing a new
> annotations() function to hide the details. It wouldn't
> be the same as get_type_hints(), since it wouldn't make
> any assumptions about what the annotations mean.


I think it's simpler and nicer for the user to preserve the existing
interface, so I'm sticking with that approach.  If you feel strongly
about this, I encourage you to write your own competing PEP.


//arry/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 16/01/21 9:38 am, Guido van Rossum wrote:
> On Fri, Jan 15, 2021 at 10:53 AM Larry Hastings <larry@hastings.org
> <mailto:larry@hastings.org>> wrote:
>
> class OuterCls:
>     class InnerCls:
>         def method(a:OuterCls.InnerCls=None): pass
>
> We've turned the local reference into a global reference, and we
> already know globals work fine.
>
>
> I think this is going too far. A static method defined in InnerCls does
> not see InnerCls (even after the class definitions are complete). E.g.
> ```
> class Outer:
>     class Inner:
>         @staticmethod
>         def foo(): return Inner
> ```
> If you then call Outer.Inner.foo() you get "NameError: name 'Inner' is
> not defined".

I'm not so sure about that. Conceptually, annotations are evaluated
in the environment existing when the class scope is being constructed.
The fact that we're moving them into a closure is an implementation
detail that I don't think should be exposed.

> What if we want to refer to something defined /after/
> the annotation?
>
> def outerfn():
>     class OuterCls:
>         class InnerCls:
>             def method(a:zebra=None): pass
>             ...
>
> We haven't seen the definition of "zebra" yet, so we don't know what
> approach to take.

I don't think that should be a problem. The compiler already knows
about all the assignments occurring in a scope before starting to
generate code for it.

--
Greg
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/SX66DYOEZSQCLY6JJMIDBS4LFHPB76Y3/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On Fri, Jan 15, 2021 at 4:45 PM Greg Ewing <greg.ewing@canterbury.ac.nz>
wrote:

> On 16/01/21 9:38 am, Guido van Rossum wrote:
> > On Fri, Jan 15, 2021 at 10:53 AM Larry Hastings <larry@hastings.org
> > <mailto:larry@hastings.org>> wrote:
> >
> > class OuterCls:
> >     class InnerCls:
> >         def method(a:OuterCls.InnerCls=None): pass
> >
> > We've turned the local reference into a global reference, and we
> > already know globals work fine.
>
[Above was what Larry wrote, the rest is me. I guess Greg's mailer is
having trouble with the GMail-style quoting. :-( ]

> I think this is going too far. A static method defined in InnerCls does
> > not see InnerCls (even after the class definitions are complete). E.g.
> > ```
> > class Outer:
> >     class Inner:
> >         @staticmethod
> >         def foo(): return Inner
> > ```
> > If you then call Outer.Inner.foo() you get "NameError: name 'Inner' is
> > not defined".
>

[Greg]

> I'm not so sure about that. Conceptually, annotations are evaluated
> in the environment existing when the class scope is being constructed.
> The fact that we're moving them into a closure is an implementation
> detail that I don't think should be exposed.
>

Yeah, that wasn't very clear, and I'm not 100% sure I got it right. But
consider this:
```
class Outer:
    foo = 1
    class Inner:
        print(foo)
```
This gives "NameError: name 'foo' is not defined". And here there is no
forward reference involved, and foo lives in the exactly the same
scope/namespace as Inner.

The reason for the NameError is that class scopes don't participate in the
closure game (an intentional design quirk to avoid methods referencing
unqualified class variables).
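
The quirk is easy to demonstrate without any annotations involved:

```
class C:
    foo = 1
    def method(self):
        return foo                 # class body is not a closure scope

try:
    C().method()
except NameError as e:
    print(e)                       # name 'foo' is not defined
```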

So I still think that Larry's example shouldn't (have to) work.

(I agree with Greg on the 'zebra' example.)

--
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*
<http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 16/01/21 2:09 pm, Guido van Rossum wrote:
> Yeah, that wasn't very clear, and I'm not 100% sure I got it right. But
> consider this:
> ```
> class Outer:
>     foo = 1
>     class Inner:
>         print(foo)

That's true. So maybe the user should have to be explicit in
cases like this:

class Outer:
    class Inner:
        def f(x: Outer.Inner): ...

However, I think cases like this should work:

class C:
    t = List[int]
    def f(x: t): ...

even though the closure placed in C.__co_annotations__ wouldn't
normally have access to t without qualification.
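
For reference, this is exactly what stock semantics give today, because the
annotation is evaluated while the class body is executing; the question is
how a deferred __co_annotations__ function can keep producing the same answer:

```
from typing import List, get_type_hints

class C:
    t = List[int]
    def f(x: t): ...               # t is in scope while the class body runs

print(C.f.__annotations__)         # {'x': typing.List[int]}
print(get_type_hints(C.f))         # same thing
```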

--
Greg
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/YTZ3QRG3V6URZ3FDOZ6QON5DSYC52HGI/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On Fri, Jan 15, 2021 at 18:15 Greg Ewing <greg.ewing@canterbury.ac.nz>
wrote:

> On 16/01/21 2:09 pm, Guido van Rossum wrote:
> > Yeah, that wasn't very clear, and I'm not 100% sure I got it right. But
> > consider this:
> > ```
> > class Outer:
> >     foo = 1
> >     class Inner:
> >         print(foo)
>
> That's true. So maybe the user should have to be explicit in
> cases like this:
>
> class Outer:
>     class Inner:
>         def f(x: Outer.Inner): ...
>
> However, I think cases like this should work:
>
> class C:
>     t = List[int]
>     def f(x: t): ...
>
> even though the closure placed in C.__co_annotations__ wouldn't
> normally have access to t without qualification.


Yes, the immediately surrounding scope should be accessible for annotations
even if it’s a class.

>
> --
--Guido (mobile)
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
Given your comments below, I'd summarize the semantics you want as:

Looking up names for annotations should work exactly as it does
today with "stock" semantics, except annotations should also see
names that haven't been declared yet.

Thus an annotation should be able to see names set in the following
scopes, in order of most-preferred to least-preferred:

* names in the current scope (whether the current scope is a class
body, function body, or global),
* names in enclosing /function/ scopes, up to but not including the
first enclosing /class/ scope, and
* global scope,

whether they are declared before or after the annotation.

If the same name is defined multiple times, annotations will prefer the
definition from the "nearest" scope, even if that definition hasn't been
evaluated yet.  For example:

x = int
def foo():
    def bar(a:x): pass
    x = str

Here a would be annotated with "str".

Ambiguous conditions (referring to names that change value, referring to
names that may be deleted) will result in undefined behavior.


Does that sound right?


Thanks for the kind words,


//arry/

On 1/15/21 12:38 PM, Guido van Rossum wrote:
> On Fri, Jan 15, 2021 at 10:53 AM Larry Hastings <larry@hastings.org
> <mailto:larry@hastings.org>> wrote:
>
>
> Sorry it took me 3+ days to reply--I had a lot to think about
> here.  But I have good things to report!
>
>
> On 1/11/21 8:42 PM, Guido van Rossum wrote:
>> On Mon, Jan 11, 2021 at 1:20 PM Larry Hastings
>> <larry@hastings.org <mailto:larry@hastings.org>> wrote:
>>
>> PEP 563 states:
>>
>> For code that uses type hints, the
>> typing.get_type_hints(obj, globalns=None, localns=None)
>> function correctly evaluates expressions back from its
>> string form.
>>
>> So, if you are passing in a localns argument that isn't None,
>> okay, but you're not using them "correctly" according to the
>> language.  Also, this usage won't be compatible with static
>> type checkers.
>>
>> I think you're misreading PEP 563 here. The mention of
>> globalns=None, localns=None refers to the fact that these
>> parameters have defaults, not that you must pass None. Note that
>> the next paragraph in that PEP mentions eval(ann, globals,
>> locals) -- it doesn't say eval(ann, {}, {}).
>
> I think that's misleading, then.  The passage is telling you how
> to "correctly evaluate[s] expressions", and how I read it was,
> it's telling me I have to supply globalns=None and localns=None
> for it to work correctly--which, I had to discover on my own, were
> the default values.  I don't understand why PEP 563 feels
> compelled to define a function that it's not introducing, and in
> fact had already shipped with Python two versions ago.
>
>
> I suppose PEP 563 is ambiguous because on the one hand global symbols
> are the only things that work out of the box, on the other hand you
> can make other things work by passing the right scope (and there's
> lots of code now that does so), and on the third hand, it claims that
> get_type_hints() adds the class scope, which nobody noticed or
> implemented until this week (there's a PR, can't recall the number).
>
> But I think all this is irrelevant given what comes below.
>
>
>> Later in that same section, PEP 563 points out a problem with
>> annotations that reference class-scoped variables, and claims
>> that the implementation would run into problems because methods
>> can't "see" the class scope. This is indeed a problem for PEP
>> 563, but *you* can easily generate correct code, assuming the
>> containing class exists in the global scope (and your solution
>> requires that anyway). So in this case
>> ```
>> class Outer:
>>     class Inner:
>>        ...
>>     def method(self, a: Inner, b: Outer) -> None:
>>         ...
>> ```
>> The generated code for the `__annotations__` property could just
>> have a reference to `Outer.Inner` for such cases:
>> ```
>> def __annotations__():
>>     return {"a": Outer.Inner, "b": Outer, "return": None}
>> ```
>
> This suggestion was a revelation for me.  Previously, a
> combination of bad experiences early on when hacking on compile
> and symtable, and my misunderstanding of exactly what was being
> asserted in the November 2017 thread, led me to believe that all I
> could support was globals.  But I've been turning this over in my
> head for several days now, and I suspect I can support... just
> about anything.
>
>
> I can name five name resolution scenarios I might encounter.  I'll
> discuss them below, in increasing order of difficulty.
>
>
> *First* is references to globals / builtins. That's already
> working, it's obvious how it works, and I need not elaborate further.
>
>
> Yup.
>
>
> *Second* is local variables in an enclosing function scope:
>
> def outer_fn():
>     class C: pass
>     def inner_fn(a:C=None): pass
>     return inner_fn
>
> As you pointed out elsewhere in un-quoted text, I could make the
> annotation a closure, so it could retain a reference to the value
> of (what is from its perspective) the free variable "C".
>
>
> Yup.
>
>
> *Third* is local variables in an enclosing class scope, as you
> describe above:
>
> class OuterCls:
>     class InnerCls:
>         def method(a:InnerCls=None): pass
>
> If I understand what you're suggesting, I could notice inside the
> compiler that Inner is being defined in a class scope, walk up the
> enclosing scopes until I hit the outermost class, then reconstruct
> the chain of pulling out attributes until it resolves globally. 
> Thus I'd rewrite this example to:
>
> class OuterCls:
>     class InnerCls:
>         def method(a:OuterCls.InnerCls=None): pass
>
> We've turned the local reference into a global reference, and we
> already know globals work fine.
>
>
> I think this is going too far. A static method defined in InnerCls
> does not see InnerCls (even after the class definitions are complete).
> E.g.
> ```
> class Outer:
>     class Inner:
>         @staticmethod
>         def foo(): return Inner
> ```
> If you then call Outer.Inner.foo() you get "NameError: name 'Inner' is
> not defined".
>
>
> *Fourth* is local variables in an enclosing class scope, which are
> themselves local variables in an enclosing function scope:
>
> def outerfn():
>     class OuterCls:
>         class InnerCls:
>             def method(a:InnerCls=None): pass
>     return OuterCls.InnerCls
>
> Even this is solvable, I just need to combine the "second" and
> "third" approaches above.  I walk up the enclosing scopes to find
> the outermost class scope, and if that's a function scope, I
> create a closure and retain a reference to /that/ free variable. 
> Thus this would turn into
>
> def outerfn():
>     class OuterCls:
>         class InnerCls:
>             def method(a:OuterCls.InnerCls=None): pass
>
> and method.__co_annotations__ would reference the free variable
> "OuterCls" defined in outerfn.
>
>
> Probably also not needed.
>
>
> *Fifth* is the nasty one.  Note that so far every definition we've
> referred to in an annotation has been /before/ the definition of
> the annotation.  What if we want to refer to something defined
> /after/ the annotation?
>
> def outerfn():
>     class OuterCls:
>         class InnerCls:
>             def method(a:zebra=None): pass
>             ...
>
> We haven't seen the definition of "zebra" yet, so we don't know
> what approach to take.  It could be any of the previous four
> scenarios.  What do we do?
>
>
> If you agree with me that (3) and (4) are unnecessary (or even
> undesirable), the options here are either that zebra is a local in
> outerfn() (then just make it a closure), and if it isn't you should
> treat it as a global.
>
> This is solvable too: we simply delay the compilation of
> __co_annotations__ code objects until the very last possible
> moment.  First, at the time we bind the class or function, we
> generate a stub __co_annotations__ object, just to give the
> compiler what it expects.  The compiler inserts it into the const
> table for the enclosing construct (function / class / module), and
> we remember what index it went into.  Then, after we've finished
> processing the entire AST tree for this module, but before we
> exit the compiler, we reconstruct the required context for
> evaluating each __co_annotations__ function--the nested chain of
> symbol tables, the compiler blocks if needed, etc--and evaluate
> the annotations for real.  We assemble the correct
> __co_annotations__ code object and overwrite the stub in the const
> table with this now-correct value.
>
> I can't think of any more scenarios.  So, I think I can handle
> basically anything!
>
>
> However, there are two scenarios where the behavior of evaluations
> will change in a way the user might find surprising.  The first is
> when they redefine a variable used in an annotation:
>
> x = str
> def fn(a:x="345"):  pass
> x = int
>
> With stock semantics, the annotation to "a" will be "str".  With
> PEP 563 or my PEP, the annotation to "a" will be "int".  (It gets
> even more exciting if you said "del x".)
>
>
> This falls under the Garbage in, Garbage out principle. Mypy doesn't
> even let you do this. Another type checker which is easy to install,
> pyright, treats it as str. I wouldn't worry too much about it. If you
> strike the first definition of x, then pyright complains and mypy
> treats it as int.
>
> Similarly, delaying the annotations so that we make everything
> visible means defining variables with the same name in multiple
> scopes may lead to surprising behavior.
>
> x = str
> class Outer:
>     def method(a:x="345"):  pass
>     x = int
>
> Again, stock gets you an annotation of "str", but PEP 563 and my
> PEP get you "int", because they'll see the /final/ result of
> evaluating the body of Outer.
>
> Sadly this is the price you pay for delayed evaluation of
> annotations.  Delaying the evaluation of annotations is the goal,
> and the whole point is to make changes, observable by the user, in
> how annotations are evaluated.  All we can do is document these
> behaviors and hope our users forgive us.
>
>
> Agreed.
>
>
> I think this is a vast improvement over the first draft of my PEP,
> and assuming nobody points out major flaws in this approach (and,
> preferably, at least a little encouragement), I plan to redesign
> my prototype along these lines.  (Though not right away--I want to
> take a break and attend to some other projects first.)
>
>
> Thanks for the mind-blowing suggestions, Guido!  I must say,
> you're pretty good at this Python stuff.
>
>
> You're not so bad yourself -- without your wakeup call we would have
> immortalized PEP 563's limitations.
>
>
> --
> --Guido van Rossum (python.org/~guido <http://python.org/~guido>)
> /Pronouns: he/him //(why is my pronoun here?)/
> <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
This PEP doesn't cover what happens when __co_annotations__()
fails (e.g. NameError).

Forward references are a major reason, but not the only reason, for using
string annotations. There are two other reasons:

* Avoid importing heavy module.
* Avoid circular imports.

In these cases, this pattern is used:

```
from __future__ import annotations
import typing
from dataclasses import dataclass

if typing.TYPE_CHECKING:
    import other_mod  # do not want to import actually

@dataclass
class Foo:
    a: other_mod.spam
    b: other_mod.ham

def fun(a: other_mod.spam, b: other_mod.ham) -> None: ...
```

Of course, mypy works well with string annotations because it is a static checker.
IPython shows the signature well too:

```
In [3]: sample.Foo?
Init signature: sample.Foo(a: 'other_mod.spam', b: 'other_mod.ham') -> None
Docstring: Foo(a: 'other_mod.spam', b: 'other_mod.ham')
```

PEP 563 works fine in this scenario. How does PEP 649 work?

Regards,
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/YMMYKST2B4IJJOHAQIFIBAT57MKBBG56/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
If you never examine __annotations__, then you can refer to symbols that
are never defined and nothing bad happens.  It's just like writing a
function that refers to undefined symbols, then never calling that function.

If you examine __annotations__, and the annotations refer to values that
aren't defined, the evaluation fails.  This too works like you'd expect:
the __co_annotations__ function raises NameError.  So your IPython use
case would raise a NameError.

Note that the code is deliberately written to allow you to fix the name
errors and try again.  (The __co_annotations__ attribute is only cleared
if calling it succeeds and it returns a legal value.)  So, if you
examine an annotation in IPython, and it fails with a NameError, you
could import the missing module--or otherwise do what is needed to fix
the problem--and try again.

If your imports are complicated, you could always hide them in a
function.  I just tried this and it seems to work fine:

def my_imports():
    global other_mod
    import other_mod

So, you could put all your imports in such a function, run it from
inside a "if typing.TYPE_CHECKING" block, and you'd have a convenient
way of doing all your imports from inside IPython too.
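
A sketch of the combined pattern (other_mod standing in for the hypothetical
heavy or circular import; note the caveat about static checkers raised just
below):

```
import typing

def my_imports():
    global other_mod
    import other_mod               # the hypothetical heavy / circular module

if typing.TYPE_CHECKING:
    my_imports()                   # never actually runs at runtime

# In IPython you would call my_imports() yourself before looking at
# __annotations__, so the deferred annotations can resolve other_mod.
```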

One final note: with PEP 649, you can still use strings as annotations
if you prefer.  You just have to write them as strings manually.  So the
IPython use case could continue to work correctly that way.  I realize
that this itself causes minor problems--it means no syntax checking is
done on the annotation, and it causes a little extra work for the
user--but I assume this is a rare use case and most users won't need to
bother.


Cheers,


//arry/


On 1/16/21 11:43 PM, Inada Naoki wrote:
> This PEP doesn't cover what happens when __co_annotations__()
> fails (e.g. NameError).
>
> Forward references are a major reason, but not the only reason, for using
> string annotations. There are two other reasons:
>
> * Avoid importing heavy module.
> * Avoid circular imports.
>
> In these cases, this pattern is used:
>
> ```
> from __future__ import annotations
> import typing
> from dataclasses import dataclass
>
> if typing.TYPE_CHECKING:
>     import other_mod  # do not want to import actually
>
> @dataclass
> class Foo:
>     a: other_mod.spam
>     b: other_mod.ham
>
> def fun(a: other_mod.spam, b: other_mod.ham) -> None: ...
> ```
>
> Of course, mypy works well with string annotation because it is static checker.
> IPython shows signature well too:
>
> ```
> In [3]: sample.Foo?
> Init signature: sample.Foo(a: 'other_mod.spam', b: 'other_mod.ham') -> None
> Docstring: Foo(a: 'other_mod.spam', b: 'other_mod.ham')
> ```
>
> PEP 563 works fine in this scenario. How does PEP 649 work?
>
> Regards,
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On Sun, Jan 17, 2021 at 7:33 AM Larry Hastings <larry@hastings.org> wrote:

> If your imports are complicated, you could always hide them in a
> function. I just tried this and it seems to work fine:
>
> def my_imports():
>     global other_mod
>     import other_mod
>
> So, you could put all your imports in such a function, run it from inside
> a "if typing.TYPE_CHECKING" block, and you'd have a convenient way of doing
> all your imports from inside IPython too.
>

But static type checkers won't understand such imports. (Or is this about
annotations used for other purposes? Then I suppose it's fine, but only as
long as you completely give up static type checks for modules that use this
idiom.)

--
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*
<http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 1/18/21 12:24 PM, Guido van Rossum wrote:
> On Sun, Jan 17, 2021 at 7:33 AM Larry Hastings <larry@hastings.org
> <mailto:larry@hastings.org>> wrote:
>
> If your imports are complicated, you could always hide them in a
> function.  I just tried this and it seems to work fine:
>
> def my_imports():
>     global other_mod
>     import other_mod
>
> So, you could put all your imports in such a function, run it from
> inside a "if typing.TYPE_CHECKING" block, and you'd have a
> convenient way of doing all your imports from inside IPython too.
>
>
> But static type checkers won't understand such imports. (Or is this
> about annotations used for other purposes? Then I suppose it's fine,
> but only as long as you completely give up static type checks for
> modules that use this idiom.)


Oh, okay.  I haven't used the static type checkers, so it's not clear to
me what powers they do and don't have.  It was only a minor suggestion
anyway.  Perhaps PEP 649 will be slightly inconvenient to people
exploring their code inside IPython.

Or maybe it'd work if they gated the if statement on running in ipython?

if typing.TYPE_CHECKING or os.path.split(sys.argv[0])[1] == "ipython3":
    import other_mod


Cheers,


//arry/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On Tue, Jan 19, 2021 at 6:02 AM Larry Hastings <larry@hastings.org> wrote:
>
>
> Oh, okay. I haven't used the static type checkers, so it's not clear to me what powers they do and don't have. It was only a minor suggestion anyway. Perhaps PEP 649 will be slightly inconvenient to people exploring their code inside IPython.
>

Not only IPython, but many REPLs. In particular, Jupyter notebook is the
same as IPython.
We can see string annotations even in the CPython REPL via pydoc.

```
>>> def func(a: "Optional[int]") -> "Optional[str]":
... ...
...
>>> help(func)

func(a: 'Optional[int]') -> 'Optional[str]'
```

Since this signature with type hints came from
inspect.signature(func), all tools using inspect.signature() will be
affected too.
I think Sphinx autodoc will be affected, but I am not sure.
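
For example, with string annotations the signature already renders like
this, and any tool built on inspect.signature() will show the same thing:

```
import inspect

def func(a: "Optional[int]") -> "Optional[str]":
    ...

print(inspect.signature(func))     # (a: 'Optional[int]') -> 'Optional[str]'
```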


> Or maybe it'd work if they gated the if statement on running in ipython?
>
> if typing.TYPE_CHECKING or os.path.split(sys.argv[0])[1] == "ipython3":
>     import other_mod
>

That is possible for heavy modules, but it cannot avoid circular imports.
Additionally, there are cases where modules are not importable at runtime:

* Optional dependencies the user may not have installed.
* Dummy modules that have only "pyi" files.

If PEP 563 becomes the default, we can provide a faster way to get the
text signature without eval()ing the annotated strings, so eval() performance
is not a problem here.
Many type hinting use cases don't need type objects at runtime.
So I think PEP 563 is better for the type hinting user experience.

Regards,

--
Inada Naoki <songofacandy@gmail.com>
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/QR2KOGAXR5T4GTLGL5NLPWSVWPGVFQAI/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 1/18/21 3:42 PM, Inada Naoki wrote:
> Many type hinting use cases don't need type objects in runtime.
> So I think PEP 563 is better for type hinting user experience.


You mean, in situations where the user doesn't want to import the types,
because of heavyweight imports or circular imports?  I didn't think
those were very common.


//arry/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On Tue, Jan 19, 2021 at 8:54 AM Larry Hastings <larry@hastings.org> wrote:
>
> On 1/18/21 3:42 PM, Inada Naoki wrote:
>
> Many type hinting use cases don't need type objects in runtime.
> So I think PEP 563 is better for type hinting user experience.
>
> You mean, in situations where the user doesn't want to import the types, because of heavyweight imports or circular imports? I didn't think those were very common.
>

Personally, I dislike any runtime overhead caused by type hints. That
is one reason I don't use type hinting much for now.
I don't want to import modules used only in type hints. I don't want
to import even "typing".

I planned to start using type hinting once I can drop Python 3.6 support and
use `from __future__ import annotations`.
And I love the lightweight function annotation implementation (*) very much.

(*) https://github.com/python/cpython/pull/23316

I expect we can start to write type hints even in the stdlib, because it
wouldn't require extra imports and the overhead becomes very cheap.
Maybe I am in the minority. But I dislike any runtime overhead and extra
dependencies.

Regards,
--
Inada Naoki <songofacandy@gmail.com>
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/WLGZULJRK7PLQ37HJDJZPIZL5SM3NGF2/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On 19/01/21 1:13 pm, Inada Naoki wrote:
> I don't want to import modules used only in type hints. I don't want
> to import even "typing".

How about having a pseudo-module called __typing__ that is
ignored by the compiler:

from __typing__ import ...

would be compiled to a no-op, but recognised by type checkers.
If you want to do run-time typing stuff, you would use

from typing import ...

--
Greg
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/24MR5KVWP7LR3PCE7V44OTLJNXDLLNZX/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
Hello,

On Tue, 19 Jan 2021 14:31:49 +1300
Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:

> On 19/01/21 1:13 pm, Inada Naoki wrote:
> > I don't want to import modules used only in type hints. I don't want
> > to import even "typing".
>
> How about having a pseudo-module called __typing__ that is
> ignored by the compiler:
>
> from __typing__ import ...
>
> would be compiled to a no-op, but recognised by type checkers.
> If you want to do run-time typing stuff, you would use

Please don't limit it to just "typing". There's a need for a module
which would handle "language-level" features, to not put newly added
things in the global namespace. By analogy with __future__, such a
module could be named __present__. Alternative names would be "lang" or
"python".

But the analogy with __future__ is helpful, as there should be a place for
"pragma imports", which would change the behavior of programs, like
imports from __future__ do, except that features in __future__ are
destined to be "switchable" only temporarily and to become the default later.
Breaking backward compatibility with each version has already become the
norm, but going further, even more radical changes would be required,
and so it should be possible to either enable or disable them, as part
of the standard, not temporary, language semantics - hence the idea of
__present__ as an alternative to __future__.

>
> from typing import ...
>
> --
> Greg

[]

--
Best regards,
Paul mailto:pmiscml@gmail.com
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/LX7EB4CT5PCGNAE64RWQFI2OP63FYGR6/
Code of Conduct: http://python.org/psf/codeofconduct/
Re: PEP: Deferred Evaluation Of Annotations Using Descriptors [ In reply to ]
On Mon, 18 Jan 2021 15:54:32 -0800
Larry Hastings <larry@hastings.org> wrote:
> On 1/18/21 3:42 PM, Inada Naoki wrote:
> > Many type hinting use cases don't need type objects in runtime.
> > So I think PEP 563 is better for type hinting user experience.
>
> You mean, in situations where the user doesn't want to import the types,
> because of heavyweight imports or circular imports?  I didn't think
> those were very common.

Probably not very common, but annoying anyway. For example, a library
(say PyArrow) may expose a function for importing Pandas data without
mandating a Pandas dependency.
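
A sketch of that optional-dependency pattern under PEP 563 (the names
pandas, Table and from_pandas are only illustrative); since the annotation
is never evaluated, the module imports and runs without pandas installed:

```
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import pandas as pd            # never imported at runtime

class Table: ...

def from_pandas(df: pd.DataFrame) -> Table:
    ...
```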

Note: I don't use type hinting, I'm just responding to this particular
aspect (optional / heavy dependencies).

Regards

Antoine.

_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/MV4R3BQCDJZNL6SN2CAAN43EBW5Q6UMF/
Code of Conduct: http://python.org/psf/codeofconduct/
