Mailing List Archive

When is logging.getLogger(__name__) needed?
Hi,

In my top level program file, main.py, I have

def main_function():

    parser = argparse.ArgumentParser(description="my prog")

    ...

    args = parser.parse_args()
    config = configparser.ConfigParser()

    if args.config_file is None:
        config_file = DEFAULT_CONFIG_FILE
    else:
        config_file = args.config_file

    config.read(config_file)

    logging.config.fileConfig(fname=config_file)
    logger = logging.getLogger(__name__)

    do_some_stuff()

    my_class_instance = myprog.MyClass()


def do_some_stuff():

    logger.info("Doing stuff")

This does not work, because 'logger' is not known in the function
'do_some_stuff'.

However, if in 'my_prog/my_class.py' I have

class MyClass:

    def __init__(self):

        logger.debug("created instance of MyClass")

this 'just works'.

I can add

logger = logging.getLogger(__name__)

to 'do_some_stuff', but why is this necessary in this case but not in
the class?

Or should I be doing this entirely differently?

Cheers,

Loris

--
This signature is currently under construction.
--
https://mail.python.org/mailman/listinfo/python-list
Re: When is logging.getLogger(__name__) needed? [ In reply to ]
On 31/03/2023 15:01, Loris Bennett wrote:
> Hi,
>
> In my top level program file, main.py, I have
>
> def main_function():
>
> parser = argparse.ArgumentParser(description="my prog")
>
> ...
>
> args = parser.parse_args()
> config = configparser.ConfigParser()
>
> if args.config_file is None:
> config_file = DEFAULT_CONFIG_FILE
> else:
> config_file = args.config_file
>
> config.read(config_file)
>
> logging.config.fileConfig(fname=config_file)
> logger = logging.getLogger(__name__)
>
> do_some_stuff()
>
> my_class_instance = myprog.MyClass()
>
> def do_some_stuff():
>
> logger.info("Doing stuff")
>
> This does not work, because 'logger' is not known in the function
> 'do_some_stuff'.
>
> However, if in 'my_prog/my_class.py' I have
>
> class MyClass:
>
> def __init__(self):
>
> logger.debug("created instance of MyClass")
>
> this 'just works'.

Take another look at your code -- you'll probably find

> logger = logging.getLogger(__name__)

on the module level in my_class.py.
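
For example, my_class.py presumably looks something like this (a
sketch -- guessing at the exact layout of your file):

# my_prog/my_class.py  (assumed layout)
import logging

# Bound once, at module level, when the module is imported --
# so it is visible inside every method defined in this file.
logger = logging.getLogger(__name__)


class MyClass:

    def __init__(self):
        logger.debug("created instance of MyClass")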

> to 'do_some_stuff', but why is this necessary in this case but not in
> the class?

Your problem has nothing to do with logging -- it's about visibility
("scope") of names:

>>> def use_name():
        print(name)


>>> def define_name():
        name = "Loris"


>>> use_name()
Traceback (most recent call last):
File "<pyshell#56>", line 1, in <module>
use_name()
File "<pyshell#52>", line 2, in use_name
print(name)
NameError: name 'name' is not defined

Binding (=assigning to) a name inside a function makes it local to that
function. If you want a global (module-level) name you have to say so:

>>> def define_name():
        global name
        name = "Peter"


>>> define_name()
>>> use_name()
Peter
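
Applied to your logging code, the usual arrangement is to bind the
logger once at module level in main.py as well -- a minimal sketch
(the config-file name is only a placeholder):

# main.py  (sketch)
import logging
import logging.config

# Module-level binding: visible to every function defined below.
logger = logging.getLogger(__name__)


def main_function():
    # disable_existing_loggers=False keeps the logger created above
    # enabled even if its name is not mentioned in the config file.
    logging.config.fileConfig(fname="my_prog.conf",
                              disable_existing_loggers=False)
    do_some_stuff()


def do_some_stuff():
    logger.info("Doing stuff")   # resolves to the module-level name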

--
https://mail.python.org/mailman/listinfo/python-list
Re: When is logging.getLogger(__name__) needed? [ In reply to ]
On 01/04/2023 02.01, Loris Bennett wrote:
> Hi,
>
> In my top level program file, main.py, I have
>
> def main_function():
>
> parser = argparse.ArgumentParser(description="my prog")
>
> ...
>
> args = parser.parse_args()
> config = configparser.ConfigParser()
>
> if args.config_file is None:
> config_file = DEFAULT_CONFIG_FILE
> else:
> config_file = args.config_file
>
> config.read(config_file)
>
> logging.config.fileConfig(fname=config_file)
> logger = logging.getLogger(__name__)
>
> do_some_stuff()
>
> my_class_instance = myprog.MyClass()
>
> def do_some_stuff():
>
> logger.info("Doing stuff")
>
> This does not work, because 'logger' is not known in the function
> 'do_some_stuff'.
>
> However, if in 'my_prog/my_class.py' I have
>
> class MyClass:
>
> def __init__(self):
>
> logger.debug("created instance of MyClass")
>
> this 'just works'.
>
> I can add
>
> logger = logging.getLogger(__name__)
>
> to 'do_some_stuff', but why is this necessary in this case but not in
> the class?
>
> Or should I be doing this entirely differently?

Yes: differently.

To complement @Peter's response, two items for consideration:

1 once main_function() has completed, have it return logger and other
such values/constructs. The target-identifiers on the LHS of the
function-call will thus be within the global scope (see the sketch
after this list).

2 if the purposes of main_function() are condensed-down to a few
(English, or ..., language) phrases, the word "and" will feature, eg
- configure env according to cmdLN args,
- establish log(s),
- do_some_stuff(), ** AND **
- instantiate MyClass.
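
A minimal sketch of item 1 (argument-parsing elided; names as in the
original post):

import logging

def main_function():
    # ... argparse/configparser work, as in the original ...
    logger = logging.getLogger(__name__)
    return logger

# At module level the assignment target ('logger') is a global name,
# so other functions defined in this file can see it:
logger = main_function()

def do_some_stuff():
    logger.info("Doing stuff")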

If these (and do_some_stuff(), like MyClass' methods) were split into
separate functions* might you find it easier to see them as separate
sub-solutions? Each sub-solution would be able to contribute to the
whole - the earlier ones as creating (outputting) a description,
constraint, or basis; which becomes input to a later function/method.


* there is some debate amongst developers about whether "one function,
one purpose" should be a rule, a convention, or tossed in the trash. YMMV!

Personal view: SOLID's "Single" principle applies: there should be only
one reason (hanging over the head of each method/function, like the
Sword of Damocles) for it to change - or one 'user' who could demand a
change to that function. In other words, an updated cmdLN option
shouldn't affect a function which establishes logging, for example.


Web.Refs:
https://people.engr.tamu.edu/choe/choe/courses/20fall/315/lectures/slide23-solid.pdf
https://www.hanselminutes.com/145/solid-principles-with-uncle-bob-robert-c-martin
https://idioms.thefreedictionary.com/sword+of+Damocles
https://en.wikipedia.org/wiki/Damocles

--
Regards,
=dn
--
https://mail.python.org/mailman/listinfo/python-list
Re: When is logging.getLogger(__name__) needed? [ In reply to ]
Peter Otten <__peter__@web.de> writes:

> On 31/03/2023 15:01, Loris Bennett wrote:
[snip (53 lines)]

> Your problem has nothing to do with logging -- it's about visibility
> ("scope") of names:
>
>>>> def use_name():
> print(name)
>
>
>>>> def define_name():
> name = "Loris"
>
>
>>>> use_name()
> Traceback (most recent call last):
> File "<pyshell#56>", line 1, in <module>
> use_name()
> File "<pyshell#52>", line 2, in use_name
> print(name)
> NameError: name 'name' is not defined
>
> Binding (=assigning to) a name inside a function makes it local to that
> function. If you want a global (module-level) name you have to say so:
>
>>>> def define_name():
> global name
> name = "Peter"
>
>
>>>> define_name()
>>>> use_name()
> Peter

Thanks for the example and reminding me about Python's scopes.

With

global name

def use_name():
    print(name)

def define_name():
    name = "Peter"

define_name()
use_name()

I was initially surprised by the following error:

~/tmp $ python3 global.py
Traceback (most recent call last):
File "/home/loris/tmp/global.py", line 10, in <module>
use_name()
File "/home/loris/tmp/global.py", line 4, in use_name
print(name)
NameError: name 'name' is not defined

but I was misinterpreting

global name

to mean

define a global variable 'name'

whereas it actually seems to mean more like

use the global variable 'name'

Correct?
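
i.e. presumably the declaration only takes effect inside the function
that does the assignment, so it would have to be something like:

def use_name():
    print(name)

def define_name():
    global name       # declare here, in the assigning function
    name = "Peter"

define_name()
use_name()            # now prints "Peter"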

--
This signature is currently under construction.
--
https://mail.python.org/mailman/listinfo/python-list
Re: When is logging.getLogger(__name__) needed? [ In reply to ]
dn <PythonList@DancesWithMice.info> writes:

> On 01/04/2023 02.01, Loris Bennett wrote:
>> Hi,
>> In my top level program file, main.py, I have
>> def main_function():
>> parser = argparse.ArgumentParser(description="my prog")
>> ...
>> args = parser.parse_args()
>> config = configparser.ConfigParser()
>> if args.config_file is None:
>> config_file = DEFAULT_CONFIG_FILE
>> else:
>> config_file = args.config_file
>> config.read(config_file)
>> logging.config.fileConfig(fname=config_file)
>> logger = logging.getLogger(__name__)
>> do_some_stuff()
>> my_class_instance = myprog.MyClass()
>> def do_some_stuff():
>> logger.info("Doing stuff")
>> This does not work, because 'logger' is not known in the function
>> 'do_some_stuff'.
>> However, if in 'my_prog/my_class.py' I have
>> class MyClass:
>> def __init__(self):
>> logger.debug("created instance of MyClass")
>> this 'just works'.
>> I can add
>> logger = logging.getLogger(__name__)
>> to 'do_some_stuff', but why is this necessary in this case but not
>> in
>> the class?
>> Or should I be doing this entirely differently?
>
> Yes: differently.
>
> To complement @Peter's response, two items for consideration:
>
> 1 once main_function() has completed, have it return logger and other
> such values/constructs. The target-identifiers on the LHS of the
> function-call will thus be within the global scope.
>
> 2 if the purposes of main_function() are condensed-down to a few
> (English, or ..., language) phrases, the word "and" will feature, eg
> - configure env according to cmdLN args,
> - establish log(s),
> - do_some_stuff(), ** AND **
> - instantiate MyClass.
>
> If these (and do_some_stuff(), like MyClass' methods) were split into
> separate functions* might you find it easier to see them as separate
> sub-solutions? Each sub-solution would be able to contribute to the
> whole - the earlier ones as creating (outputting) a description,
> constraint, or basis; which becomes input to a later function/method.

So if I want to modify the logging via the command line I might have the
following:

---------------------------------------------------------------------

#!/usr/bin/env python3

import argparse
import logging


def get_logger(log_level):
    """Get global logger"""

    logger = logging.getLogger('example')
    logger.setLevel(log_level)
    ch = logging.StreamHandler()
    formatter = logging.Formatter('%(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    logger.addHandler(ch)

    return logger


def do_stuff():
    """Do some stuff"""

    # logger.info("Doing stuff!")


def main():
    """Main"""

    parser = argparse.ArgumentParser()
    parser.add_argument("--log-level", dest="log_level", type=int)
    args = parser.parse_args()

    print(f"log level: {args.log_level}")

    logger = get_logger(args.log_level)
    logger.debug("Logger!")
    do_stuff()


if __name__ == "__main__":
    main()

---------------------------------------------------------------------

How can I get logging for 'do_stuff' in this case without explicitly
passing 'logger' as an argument or using 'global'?

Somehow I am failing to understand how to get 'logger' defined
sufficiently high up in the program that all references 'lower down' in
the program will be automatically resolved.

> * there is some debate amongst developers about whether "one function,
> one purpose" should be a rule, a convention, or tossed in the
> trash. YMMV!
>
> Personal view: SOLID's "Single" principle applies: there should be
> only one reason (hanging over the head of each method/function, like
> the Sword of Damocles) for it to change - or one 'user' who could
> demand a change to that function. In other words, an updated cmdLN
> option shouldn't affect a function which establishes logging, for
> example.
>
>
> Web.Refs:
> https://people.engr.tamu.edu/choe/choe/courses/20fall/315/lectures/slide23-solid.pdf
> https://www.hanselminutes.com/145/solid-principles-with-uncle-bob-robert-c-martin
> https://idioms.thefreedictionary.com/sword+of+Damocles
> https://en.wikipedia.org/wiki/Damocles

I don't really get the "one reason" idea and the Sword of Damocles
analogy. The latter to me is more like "there's always a downside",
since the perks of being king may mean someone might try to usurp the
throne and kill you. Where is the "single principle" aspect?

However, the idea of "one responsibility" in the sense of "do only one
thing" seems relatively clear, especially if I think in terms of writing
unit tests.

Cheers,

Loris

--
This signature is currently under construction.
--
https://mail.python.org/mailman/listinfo/python-list
Re: When is logging.getLogger(__name__) needed? [ In reply to ]
"Loris Bennett" <loris.bennett@fu-berlin.de> writes:

> dn <PythonList@DancesWithMice.info> writes:
>
>> On 01/04/2023 02.01, Loris Bennett wrote:
>>> Hi,
>>> In my top level program file, main.py, I have
>>> def main_function():
>>> parser = argparse.ArgumentParser(description="my prog")
>>> ...
>>> args = parser.parse_args()
>>> config = configparser.ConfigParser()
>>> if args.config_file is None:
>>> config_file = DEFAULT_CONFIG_FILE
>>> else:
>>> config_file = args.config_file
>>> config.read(config_file)
>>> logging.config.fileConfig(fname=config_file)
>>> logger = logging.getLogger(__name__)
>>> do_some_stuff()
>>> my_class_instance = myprog.MyClass()
>>> def do_some_stuff():
>>> logger.info("Doing stuff")
>>> This does not work, because 'logger' is not known in the function
>>> 'do_some_stuff'.
>>> However, if in 'my_prog/my_class.py' I have
>>> class MyClass:
>>> def __init__(self):
>>> logger.debug("created instance of MyClass")
>>> this 'just works'.
>>> I can add
>>> logger = logging.getLogger(__name__)
>>> to 'do_some_stuff', but why is this necessary in this case but not
>>> in
>>> the class?
>>> Or should I be doing this entirely differently?
>>
>> Yes: differently.
>>
>> To complement @Peter's response, two items for consideration:
>>
>> 1 once main_function() has completed, have it return logger and other
>> such values/constructs. The target-identifiers on the LHS of the
>> function-call will thus be within the global scope.
>>
>> 2 if the purposes of main_function() are condensed-down to a few
>> (English, or ..., language) phrases, the word "and" will feature, eg
>> - configure env according to cmdLN args,
>> - establish log(s),
>> - do_some_stuff(), ** AND **
>> - instantiate MyClass.
>>
>> If these (and do_some_stuff(), like MyClass' methods) were split into
>> separate functions* might you find it easier to see them as separate
>> sub-solutions? Each sub-solution would be able to contribute to the
>> whole - the earlier ones as creating (outputting) a description,
>> constraint, or basis; which becomes input to a later function/method.
>
> So if I want to modify the logging via the command line I might have the
> following:
>
> ---------------------------------------------------------------------
>
> #!/usr/bin/env python3
>
> import argparse
> import logging
>
>
> def get_logger(log_level):
> """Get global logger"""
>
> logger = logging.getLogger('example')
> logger.setLevel(log_level)
> ch = logging.StreamHandler()
> formatter = logging.Formatter('%(levelname)s - %(message)s')
> ch.setFormatter(formatter)
> logger.addHandler(ch)
>
> return logger
>
>
> def do_stuff():
> """Do some stuff"""
>
> # logger.info("Doing stuff!")

Looks like I just need

logger = logging.getLogger('example')
logger.info("Doing stuff!")

>
> def main():
> """Main"""
>
> parser = argparse.ArgumentParser()
> parser.add_argument("--log-level", dest="log_level", type=int)
> args = parser.parse_args()
>
> print(f"log level: {args.log_level}")
>
> logger = get_logger(args.log_level)
> logger.debug("Logger!")
> do_stuff()
>
>
> if __name__ == "__main__":
> main()
>
> ---------------------------------------------------------------------
>
> How can I get logging for 'do_stuff' in this case without explicitly
> passing 'logger' as an argument or using 'global'?
>
> Somehow I am failing to understand how to get 'logger' defined
> sufficiently high up in the program that all references 'lower down' in
> the program will be automatically resolved.
>
>> * there is some debate amongst developers about whether "one function,
>> one purpose" should be a rule, a convention, or tossed in the
>> trash. YMMV!
>>
>> Personal view: SOLID's "Single" principle applies: there should be
>> only one reason (hanging over the head of each method/function, like
>> the Sword of Damocles) for it to change - or one 'user' who could
>> demand a change to that function. In other words, an updated cmdLN
>> option shouldn't affect a function which establishes logging, for
>> example.
>>
>>
>> Web.Refs:
>> https://people.engr.tamu.edu/choe/choe/courses/20fall/315/lectures/slide23-solid.pdf
>> https://www.hanselminutes.com/145/solid-principles-with-uncle-bob-robert-c-martin
>> https://idioms.thefreedictionary.com/sword+of+Damocles
>> https://en.wikipedia.org/wiki/Damocles
>
> I don't really get the "one reason" idea and the Sword of Damocles
> analogy. The later to me is more like "there's always a downside",
> since the perks of being king may mean someone might try to usurp the
> throne and kill you. Where is the "single principle" aspect?
>
> However, the idea of "one responsibility" in the sense of "do only one
> thing" seems relatively clear, especially if I think in terms of writing
> unit tests.
>
> Cheers,
>
> Loris
--
Dr. Loris Bennett (Herr/Mr)
ZEDAT, Freie Universität Berlin
--
https://mail.python.org/mailman/listinfo/python-list
Re: When is logging.getLogger(__name__) needed? [ In reply to ]
Thank you for carefully considering suggestions (and implications) - and
which will 'work' for you.

Further comment below (and with apologies that, unusually for me, there
are many personal opinions mixed-in):-


On 06/04/2023 01.06, Loris Bennett wrote:
> "Loris Bennett" <loris.bennett@fu-berlin.de> writes:
>> dn <PythonList@DancesWithMice.info> writes:
>>> On 01/04/2023 02.01, Loris Bennett wrote:
>>>> Hi,
>>>> In my top level program file, main.py, I have
>>>> def main_function():
>>>> parser = argparse.ArgumentParser(description="my prog")
>>>> ...
>>>> args = parser.parse_args()
>>>> config = configparser.ConfigParser()
>>>> if args.config_file is None:
>>>> config_file = DEFAULT_CONFIG_FILE
>>>> else:
>>>> config_file = args.config_file
>>>> config.read(config_file)
>>>> logging.config.fileConfig(fname=config_file)
>>>> logger = logging.getLogger(__name__)
>>>> do_some_stuff()
>>>> my_class_instance = myprog.MyClass()
>>>> def do_some_stuff():
>>>> logger.info("Doing stuff")
>>>> This does not work, because 'logger' is not known in the function
>>>> 'do_some_stuff'.
>>>> However, if in 'my_prog/my_class.py' I have
>>>> class MyClass:
>>>> def __init__(self):
>>>> logger.debug("created instance of MyClass")
>>>> this 'just works'.
>>>> I can add
>>>> logger = logging.getLogger(__name__)
>>>> to 'do_some_stuff', but why is this necessary in this case but not
>>>> in
>>>> the class?
>>>> Or should I be doing this entirely differently?
>>>
>>> Yes: differently.
>>>
>>> To complement @Peter's response, two items for consideration:
>>>
>>> 1 once main_function() has completed, have it return logger and other
>>> such values/constructs. The target-identifiers on the LHS of the
>>> function-call will thus be within the global scope.
>>>
>>> 2 if the purposes of main_function() are condensed-down to a few
>>> (English, or ..., language) phrases, the word "and" will feature, eg
>>> - configure env according to cmdLN args,
>>> - establish log(s),
>>> - do_some_stuff(), ** AND **
>>> - instantiate MyClass.
>>>
>>> If these (and do_some_stuff(), like MyClass' methods) were split into
>>> separate functions* might you find it easier to see them as separate
>>> sub-solutions? Each sub-solution would be able to contribute to the
>>> whole - the earlier ones as creating (outputting) a description,
>>> constraint, or basis; which becomes input to a later function/method.
>>
>> So if I want to modify the logging via the command line I might have the
>> following:
>>
>> ---------------------------------------------------------------------
>>
>> #!/usr/bin/env python3
>>
>> import argparse
>> import logging
>>
>>
>> def get_logger(log_level):
>> """Get global logger"""
>>
>> logger = logging.getLogger('example')
>> logger.setLevel(log_level)
>> ch = logging.StreamHandler()
>> formatter = logging.Formatter('%(levelname)s - %(message)s')
>> ch.setFormatter(formatter)
>> logger.addHandler(ch)
>>
>> return logger
>>
>>
>> def do_stuff():
>> """Do some stuff"""
>>
>> # logger.info("Doing stuff!")
>
> Looks like I just need
>
> logger = logging.getLogger('example)
> logger.info("Doing stuff!")
>
>>
>> def main():
>> """Main"""
>>
>> parser = argparse.ArgumentParser()
>> parser.add_argument("--log-level", dest="log_level", type=int)
>> args = parser.parse_args()
>>
>> print(f"log level: {args.log_level}")
>>
>> logger = get_logger(args.log_level)
>> logger.debug("Logger!")
>> do_stuff()
>>
>>
>> if __name__ == "__main__":
>> main()
>>
>> ---------------------------------------------------------------------
>>
>> How can I get logging for 'do_stuff' in this case without explicitly
>> passing 'logger' as an argument or using 'global'?
>>
>> Somehow I am failing to understand how to get 'logger' defined
>> sufficiently high up in the program that all references 'lower down' in
>> the program will be automatically resolved.

At the risk of 'heresy', IMHO the idea of main() is (almost always)
unnecessary in Python, and largely a habit carried-over from other
languages (need for an entry-/end-point).

NB be sure of the difference between a "script" and a "module"...


My script template-overview:

''' Script docstring. '''
- author, license, etc docs

global constants such as import-s
set environment

if __name__ == "__main__":
    do_this()
    do_that()
    # ie the business of the script
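
One concrete (and purely illustrative) rendering of that template:

''' Do something-or-other with the data. '''
# author, license, etc

import logging

# set environment
logging.basicConfig(level=logging.INFO)


def do_this():
    logging.info("doing this")


def do_that():
    logging.info("doing that")


if __name__ == "__main__":
    # the business of the script
    do_this()
    do_that()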


Despite its frequent use, I'm somewhat amused by the apparent
duplication within:

if __name__ == "__main__":
    main()

ie if this .py file is being executed as a script, call main() - where
main() is the purpose of the script. Whereas if the file is an import-ed
module, do not execute any of the contained-code.

Thus, the if-statement achieves the same separation as the main()
function encapsulation. Also, the word main() conveys considerably less
meaning than (say) establish_logging().


NB others may care to offer alternate advice for your consideration...


There is a good argument for main(), in that it might enable tests to be
written which assure the integration of several tasks called by the
script. Plus, if one excludes non-TDD test-able code, such as user
interfaces, even 'fancy headings' and printing of conclusions; there's
little left. Accordingly, I don't mind the apparent duplication involved
in coding an integration test which checks that do_this() and do_that()
are playing-nicely together. YMMV!


I'm not sure why, but my 'mainlines' never seem to be very long. Once I
had been introduced to "Modular Programming" (1970s?), it seemed that
the purpose of a mainline was merely calling one do-it function after
another.

To me, mainlines seem highly unlikely to be re-usable. Thus, the
possibility that main() might need to be import-able to some other
script, is beyond my experience/recollection.

Further, my .py file scripts/mainlines tend to be short and often don't
contain any of the functions or classes which actually do-the-work - all
are import-ed (and thus, they are easier to re-use!?)


Returning to 'set environment': the code-examples include argparse and
logger. Both (in my thinking) are part of creating the environment in
which the code will execute.

Logging, for example, is the only choice when we need to be aware of how
a web-based system is running (or running into problems) - otherwise, as
some would argue, we might as well use print(). Similarly, argparse will
often influence the way a script executes, the data it should use, etc.

Much of such forms the (global) environment in which (this run of) the
code will execute. Hence locating such setting of priorities and
constraints, adjacent to the import statements.

These must be established before getting on with 'the business'. That
said, if someone prefers to put it under if __main__, I won't burst into
tears. (see earlier comment about the .py file as a script cf module)


You have answered your own question about logging. Well done!

The logging instance can either be explicitly passed into each
(relevant) function, or it can be treated as a global and thus available
implicitly. (see also: "Zen of Python")
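
Side by side, the two options look something like this (sketch only;
'do_other_stuff' is just an invented name):

import logging

# Option 1: pass the logger explicitly
def do_stuff(logger):
    logger.info("Doing stuff!")

# Option 2: treat the logger as a module-level global
logger = logging.getLogger('example')

def do_other_stuff():
    logger.info("Doing other stuff!")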

I prefer your approach of get_logger() - even if that name doesn't quite
describe the establishment of a logger and its settings. All of that
part of setting the environment is collected into one place, which in
turn makes it easy to work on any changes, or to work in any parameters
which may come from other environment-setting activity.


As well as 'the documentation' there is a HowTo
(https://docs.python.org/3/howto/logging.html). Other logging-learners
have found the DigitalOcean tutorial helpful
(https://www.digitalocean.com/community/tutorials/how-to-use-logging-in-python-3)


>>> * there is some debate amongst developers about whether "one function,
>>> one purpose" should be a rule, a convention, or tossed in the
>>> trash. YMMV!
>>>
>>> Personal view: SOLID's "Single" principle applies: there should be
>>> only one reason (hanging over the head of each method/function, like
>>> the Sword of Damocles) for it to change - or one 'user' who could
>>> demand a change to that function. In other words, an updated cmdLN
>>> option shouldn't affect a function which establishes logging, for
>>> example.
>>>
>>>
>>> Web.Refs:
>>> https://people.engr.tamu.edu/choe/choe/courses/20fall/315/lectures/slide23-solid.pdf
>>> https://www.hanselminutes.com/145/solid-principles-with-uncle-bob-robert-c-martin
>>> https://idioms.thefreedictionary.com/sword+of+Damocles
>>> https://en.wikipedia.org/wiki/Damocles
>>
>> I don't really get the "one reason" idea and the Sword of Damocles
>> analogy. The later to me is more like "there's always a downside",
>> since the perks of being king may mean someone might try to usurp the
>> throne and kill you. Where is the "single principle" aspect?
>>
>> However, the idea of "one responsibility" in the sense of "do only one
>> thing" seems relatively clear, especially if I think in terms of writing
>> unit tests.

+1

Users are notoriously unable to write clear specifications. Ultimately,
and not necessarily unkindly, they often do not know what they want -
and particularly not specified at the level of detail we need. This is
the downfall of the "waterfall" SDLC development model, and the reason
why short 'Agile' sprints are (often/in-theory) more successful. The
sooner a mistake, misunderstanding, or omission is realised, the better!

Not wanting to do more than state the above, things change, life goes
on, etc, etc. So, when one first demonstrates code to a user, it is rare
that they won't want to change something. Similarly, over time, it is
highly unlikely that someone won't dream up some 'improvement' - or
that a similar need to amend the code won't be imposed by an
externality, eg government legislation or regulation. Such changes may
be trivial, eg
cosmetic; others may be more far-reaching, eg imposition of a tax where
previously there was none (or v-v).

Accordingly, add the fourth dimension (time) to one's code and
program[me]-design - and heed the advice about being kind to those who
will maintain the code after you, or to your six-months-time self.

So, the 'sword of Damocles' is knowing that our code should not be
considered secure (or static). That change is inevitable. We don't need
to be worrying about danger afflicting us when we least expect it - your
users are unlikely to actually kill you. However, it is a good idea to
be aware that change may be required, and that it could come from any
random direction or "stakeholder".

Perhaps worse, change implies risk. We make a change to suit one aspect,
and something else 'breaks'. That is the reason many programmers
actively resist 'change'. That is also one of the promises of TDD - if
we can quickly run tests which assure (if not "prove") code-correctness,
then the risks of change decrease. Whereas, if there are no tests to
ensure 'life goes on' after a change, well, unhappiness reigns!

Hence the allusion.
(apologies if it was too oblique)


Accordingly, the idea that if a function does one job (and does it
well), should you decide/be required to change that function, then the
impacts, any side-effects, etc, of such will only affect that one
function (and its tests), and whatever is 'downstream' (integrating)
from there.

Which also reduces the chance that some change 'over here', will have
some unforeseen impact 'over there'. The structure and the flow work
together to achieve both a degree of separation and a bringing-together
or unity. The tension of team-work if you like, cf the dysfunction of
disorganisation.

NB a 'small change' does not mean running only that one test in TDD -
still run the entire suite of tests (auto-magically) to ensure that
there are no surprises...


Change is inevitable. Thus, consider that your throne as code-author is
illusory - the user/client is able/likely to want change.

Then, if the code is organised according to functionality, the logging
environment (for example) can be changed without any need to re-code
somewhere else - and/or that tinkering with cmdLN arguments and such
code-changes can be done quite separately from the establishment of logging.

Ultimately, the smaller the function (think of establishing logging),
the more likely it will be able to be re-used in the next assignment
which requires a log! (whereas if it is mixed-in with argparse, re-use
is unlikely because the cmdLN args will be different)


There's plenty of references about such on the web. Will be happy to
discuss whichever sub-topics might be of-interest, further...

--
Regards,
=dn
--
https://mail.python.org/mailman/listinfo/python-list