@kingspp
Created April 22, 2017 07:14
Python Comprehensive Logging using YAML Configuration
import os
import yaml
import logging.config
import logging
import coloredlogs


def setup_logging(default_path='logging.yaml', default_level=logging.INFO, env_key='LOG_CFG'):
    """
    | **@author:** Prathyush SP
    | Logging Setup
    """
    path = default_path
    value = os.getenv(env_key, None)
    if value:
        path = value
    if os.path.exists(path):
        with open(path, 'rt') as f:
            try:
                config = yaml.safe_load(f.read())
                logging.config.dictConfig(config)
                coloredlogs.install()
            except Exception as e:
                print(e)
                print('Error in Logging Configuration. Using default configs')
                logging.basicConfig(level=default_level)
                coloredlogs.install(level=default_level)
    else:
        logging.basicConfig(level=default_level)
        coloredlogs.install(level=default_level)
        print('Failed to load configuration file. Using default configs')
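A minimal usage sketch (file and module names here are illustrative, not part of the gist): the config path can come from the LOG_CFG environment variable, or fall back to logging.yaml in the working directory.

# main.py -- illustrative entry point
# run as:  LOG_CFG=conf/logging.yaml python main.py   (or plain `python main.py` to use ./logging.yaml)
import logging

from logging_setup import setup_logging  # hypothetical module containing the function above

setup_logging()                        # configures the root logger via dictConfig (or basicConfig on failure)
logger = logging.getLogger(__name__)
logger.info("logging configured")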
version: 1
disable_existing_loggers: true

formatters:
  standard:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  error:
    format: "%(levelname)s <PID %(process)d:%(processName)s> %(name)s.%(funcName)s(): %(message)s"

handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: standard
    stream: ext://sys.stdout

  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: standard
    filename: /tmp/info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: ERROR
    formatter: error
    filename: /tmp/errors.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: standard
    filename: /tmp/debug.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: CRITICAL
    formatter: standard
    filename: /tmp/critical.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  warn_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: WARN
    formatter: standard
    filename: /tmp/warn.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

root:
  level: NOTSET
  handlers: [console]
  propagate: yes

loggers:
  <module>:
    level: INFO
    handlers: [console, info_file_handler, error_file_handler, critical_file_handler, debug_file_handler, warn_file_handler]
    propagate: no
  <module.x>:
    level: DEBUG
    handlers: [info_file_handler, error_file_handler, critical_file_handler, debug_file_handler, warn_file_handler]
    propagate: yes
@default-work commented Aug 11, 2021

Actually, LOG_CFG is not needed if we use a local search for logging.yaml:

file = os.path.join(str(Path().absolute()), *__path__.split("/"), "logging.yaml")
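A minimal sketch of that local-search idea, preferring a logging.yaml that sits next to the calling file (the helper-module name is an assumption, not from the original gist):

# illustrative: look for logging.yaml next to this file before relying on LOG_CFG
import os

from logging_setup import setup_logging  # hypothetical module holding setup_logging()

local_cfg = os.path.join(os.path.dirname(os.path.abspath(__file__)), "logging.yaml")
setup_logging(default_path=local_cfg)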

@cocojim commented Aug 25, 2021

I still struggle to make this work, though it looks promising.

I did try to digest https://kingspp.github.io/design/2017/11/06/the-head-and-tail-of-logging.html which the OP has posted.

We have defined a function which can be used for configuring the Root Logger, which stands atop the logging hierarchy. Now let's see how we can use this function: open __init__.py of the main module. Why we need to use __init__.py will be discussed in upcoming posts; for the time being, bear with me,

and the code therein,
# __init__.py

import os
from logging_config_manager import setup_logging

setup_logging(default_path=os.path.join("/".join(__file__.split('/')[:-1]), 'config', 'logging_config.yaml'))

These seem to indicate that if one calls setup_logging() in each module one wishes to log from, it will treat each module as "root", hence all the lovely customisation wouldn't kick in at all (which is what I have seen on my side).

Does anyone have a true "working" example that one can share, please?

https://gist.github.com/glenfant/4358668
is a simple example online. There you can see "logging" being imported only once.
But for a layered multi-module package, how would one possibly achieve that, please?

Any thoughts are much appreciated.

thanks
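A minimal sketch of the pattern usually recommended for a layered package: call setup_logging() once at the entry point, and have every other module only request a named logger (the package and module names below are illustrative):

# mypkg/__main__.py -- configure logging exactly once, at the entry point
import logging

from logging_config_manager import setup_logging

setup_logging(default_path="config/logging_config.yaml")
logging.getLogger(__name__).info("logging configured")


# mypkg/worker.py -- no setup_logging() here; just ask for a named logger
import logging

logger = logging.getLogger(__name__)  # "mypkg.worker" matches a loggers: entry, or propagates to root

def run():
    logger.debug("routed by whichever handlers the YAML attaches to this logger (or to root)")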

@cocojim commented Aug 26, 2021

My version of the code needs the line logger.setLevel(logging.DEBUG) after getting the logger.

I have the same issue of the "missing DEBUG" level. I have tried many, many different ways.
More importantly, here is my YAML file:

version: 1
disable_existing_loggers: True



formatters:
    standard:
        format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    error:
        format: "%(levelname)s <PID %(process)d:%(processName)s> %(name)s.%(funcName)s(): %(message)s"

filters:
    infoFilter:
        (): setup.logging_config_manager.infoFilter
    debugFilter:
        (): setup.logging_config_manager.debugFilter
    errorFilter:
        (): setup.logging_config_manager.errorFilter
    warningFilter:
        (): setup.logging_config_manager.warningFilter            
    criticalFilter:
        (): setup.logging_config_manager.criticalFilter

handlers:
    console:
        class: logging.StreamHandler
        level: INFO
        formatter: standard
        stream: ext://sys.stdout

    info_file_handler:
        class: logging.handlers.RotatingFileHandler
        level: INFO
        formatter: standard
        filename: /tmp/info.log
        maxBytes: 10485760 # 10MB
        backupCount: 20
        filters: [infoFilter]
        encoding: utf8

    error_file_handler:
        class: logging.handlers.RotatingFileHandler
        level: ERROR
        formatter: error
        filename: /tmp/errors.log
        maxBytes: 10485760 # 10MB
        backupCount: 20
        filters: [errorFilter]
        encoding: utf8

    debug_file_handler:
        class: logging.handlers.RotatingFileHandler
        level: DEBUG
        formatter: standard
        filename: /tmp/debug.log
        maxBytes: 10485760 # 10MB
        backupCount: 20
        encoding: utf8

    critical_file_handler:
        class: logging.handlers.RotatingFileHandler
        level: CRITICAL
        formatter: standard
        filename: /tmp/critical.log
        maxBytes: 10485760 # 10MB
        backupCount: 20
        filters: [criticalFilter]
        encoding: utf8

    warn_file_handler:
        class: logging.handlers.RotatingFileHandler
        level: WARN
        formatter: standard
        filename: /tmp/warn.log
        maxBytes: 10485760 # 10MB
        backupCount: 20
        filters: [warningFilter]
        encoding: utf8

root:
    level: DEBUG
    handlers: [console, info_file_handler]
    propagate: no

loggers:

    <__main__>:
        level: INFO
        handlers:  [ info_file_handler, debug_file_handler, error_file_handler, critical_file_handler, warn_file_handler]
        propagate: no


    <module1>:
        level: INFO
        handlers:  [ info_file_handler, debug_file_handler, error_file_handler, critical_file_handler, warn_file_handler]
        propagate: no

    <module1.x>:
        level: INFO
        handlers:  [error_file_handler]
        propagate: no   

So at the "root" level, I should have DEBUG as the starting point.
And I was trying to achieve that:
for main, module1, and module1.x the loggers will have different handlers. But those handlers do not seem to kick in at all; in fact, all that matters seems to be what I put in at the "root" level. The logs are produced on the console and in the info.log file.

P.S. I have played around with propagate and disable_existing_loggers, and see no effect.
P.P.S. I have done this logging setup ONLY at the main level, as suggested on other Stack Overflow threads, to avoid loading logging.config.dictConfig() multiple times at different module levels (which is what Icmtcf has done in the previous post).
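For reference, the `(): setup.logging_config_manager.infoFilter` entries in the filters: section tell dictConfig to call those callables and use the returned objects as filters. A minimal sketch of what level-specific filters like that might look like; the class and factory functions below are a guess at what setup.logging_config_manager could contain, not the commenter's actual code:

# illustrative level-specific filters for use with the "()" factory syntax in dictConfig
import logging

class _LevelOnlyFilter(logging.Filter):
    """Let through only records at exactly one level."""
    def __init__(self, level):
        super().__init__()
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

def infoFilter():
    return _LevelOnlyFilter(logging.INFO)

def debugFilter():
    return _LevelOnlyFilter(logging.DEBUG)

def warningFilter():
    return _LevelOnlyFilter(logging.WARNING)

def errorFilter():
    return _LevelOnlyFilter(logging.ERROR)

def criticalFilter():
    return _LevelOnlyFilter(logging.CRITICAL)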

@default-work commented Jan 12, 2022

version: 1
disable_existing_loggers: no

formatters:
  standard:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  error:
    format: "%(asctime)s - %(name)s - %(levelname)s <PID %(process)d:%(processName)s> %(name)s.%(funcName)s(): %(message)s"

handlers:
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: standard
    filename: logs/info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  warn_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: WARN
    formatter: standard
    filename: logs/warn.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: ERROR
    formatter: error
    filename: logs/errors.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: CRITICAL
    formatter: standard
    filename: logs/critical.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: standard
    filename: logs/debug.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  root_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: standard
    filename: logs/logs.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8

  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: standard
    stream: ext://sys.stdout

  error_console:
    class: logging.StreamHandler
    level: ERROR
    formatter: error
    stream: ext://sys.stderr

root:
  level: DEBUG
  handlers: [console, error_console, root_file_handler]
  propagate: yes

loggers:
  main:
    level: DEBUG
    handlers: [info_file_handler, warn_file_handler, error_file_handler, critical_file_handler, debug_file_handler]
    propagate: no
  werkzeug:
    level: DEBUG
    handlers: [info_file_handler, warn_file_handler, error_file_handler, critical_file_handler, debug_file_handler]
    propagate: yes
  api.app_server:
    level: DEBUG
    handlers: [info_file_handler, warn_file_handler, error_file_handler, critical_file_handler, debug_file_handler]
    propagate: yes
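One thing that can bite with the relative filename: logs/... paths above: the RotatingFileHandler instances are created when dictConfig() runs, and they will fail if the logs/ directory does not exist yet. A small sketch of creating it first (the config path is illustrative):

# illustrative: make sure the logs/ directory exists before dictConfig() opens the file handlers
import os
import logging.config
import yaml

os.makedirs("logs", exist_ok=True)

with open("logging.yaml", "rt") as f:  # path is an assumption; use your own config file
    logging.config.dictConfig(yaml.safe_load(f))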

@saikrishnaksbs

Can you please give an example of how to write a YAML file for aiologger?
