
8 Tips for Using Python Logging

The Python logging module is a built-in module typically used for capturing runtime events for diagnostic purposes. If you are using Python to build a tool for yourself or just for a proof of concept, you may not need any logging at all. But if the code is going to run in a production environment, it is a good idea to add proper logging so that your life is easier later when you need to troubleshoot production issues.

In this article, I will be sharing some tips for using the Python logging module in your production code.

Basic Python Logging With basicConfig

When you have a very simple script and only need a basic logging mechanism to capture some runtime information, Python's basicConfig is the best fit, since everything comes with a sensible default value.

Below is the code to create a root logger that writes logs to the console:

import logging
logging.basicConfig()
logging.warning('Authentication failure: %s', 'username or password incorrect.')

When running the above code, you shall see the following printed in your console:

WARNING:root:Authentication failure: username or password incorrect.

The output indicates the severity of the log record, the logger name in the hierarchy, as well as the actual log message. Usually you will also want to capture the timestamp of the log event. To specify which details you want to include in your log record (refer to the available LogRecord attributes in the official documentation), you can provide your own logging format when initializing the root logger.

For instance, with the code below:

FORMAT = '[{asctime}] [{name}] [{levelname}] - {message}'
logging.basicConfig(format=FORMAT, style="{")

logging.warning('Authentication failure: %s', 'username or password incorrect.')

With the above custom formatting, you shall see the output as per below:

[2021-03-25 21:37:09,158] [root] [WARNING] - Authentication failure: username or password incorrect.

To capture the logs in a file rather than printing them to the console (sys.stderr), a minor change is all you need: specify the filename parameter for the basicConfig method, and logs will be written to the file you've specified.

logging.basicConfig(filename="a.log", format=FORMAT, style="{")

When the filename parameter is provided, Python logging will create a FileHandler object to handle the file I/O operations.
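To see what that convenience is doing, the snippet below is a minimal sketch of the rough equivalent: it builds a FileHandler by hand and attaches it to the root logger (reusing the same a.log file name).

import logging

FORMAT = '[{asctime}] [{name}] [{levelname}] - {message}'

# roughly what basicConfig(filename="a.log", ...) sets up behind the scenes
file_handler = logging.FileHandler("a.log")
file_handler.setFormatter(logging.Formatter(FORMAT, style="{"))
logging.getLogger().addHandler(file_handler)

logging.warning('Authentication failure: %s', 'username or password incorrect.')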

Create Logger at Module Level

The Python logging module organizes loggers into a hierarchy, so that ancestors and descendants can share configuration and pass messages around. In the above examples, the root logger is initialized when basicConfig is called.

A good practice is to create a module-level logger, so that you have the flexibility to handle the log messages differently for each module and can track which module generated the logs.

Below is an example of creating a module-level logger with the getLogger method:

logger = logging.getLogger('module.x')
logger.setLevel(logging.DEBUG)
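In a real code base, a common convention is to derive the logger name from the module itself so that the hierarchy mirrors your package layout; the rest of this article keeps the explicit "module.x" name for clarity.

# placed at the top of each module; __name__ resolves to the module's dotted path,
# e.g. "mypackage.db" for mypackage/db.py
logger = logging.getLogger(__name__)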

You can also create your own formatter and specify the handlers you need for your module-level logger. Below is an example of creating a StreamHandler with your own logging format:

FORMAT = '[{asctime}] [{name}] [{levelname}] - {message}'
formatter = logging.Formatter(FORMAT, style="{")

ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
ch.setFormatter(formatter)
logger.addHandler(ch)

To verify that your logger is working as expected, you can log some messages at different levels as per below:

def log_func(msg):
    logger.debug("Debugging now: %s", msg)
    logger.info("For your info only : %s", msg)
    logger.warning("This is a warning : %s", msg)
    logger.error("Error is happening : %s", msg)
    logger.critical("App is going to crash : %s", msg)

log_func("yolo")

You may see output similar to the below; note that the DEBUG message does not appear because the StreamHandler's level was set to INFO:

[2021-03-25 15:17:12,638] [module.x] [INFO] - For your info only : yolo
[2021-03-25 15:17:12,639] [module.x] [WARNING] - This is a warning : yolo
[2021-03-25 15:17:12,642] [module.x] [ERROR] - Error is happening : yolo
[2021-03-25 15:17:12,642] [module.x] [CRITICAL] - App is going to crash : yolo

Similar to the Python package hierarchy, by specifying the logger name as "module.x.y", a child logger is created under "module.x".
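As a quick illustration of this propagation (reusing the logger and StreamHandler configured above), the child logger below has no handler of its own, yet its record is emitted by the parent's handler:

child = logging.getLogger('module.x.y')
# no handler is attached to the child; the record propagates up to 'module.x'
child.warning("This message is handled by the parent's StreamHandler")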

Pass Extra Information to LogRecord

When you call the logger to log an event, a LogRecord instance is created automatically and passed to the log handlers. If you need some additional information to be added to the LogRecord, you can make use of the extra parameter together with the formatter.

d = {'clientip': '192.168.0.1'}
FORMAT = '[{clientip}] [{asctime}] [{name}] [{levelname}] - {message}'

logging.basicConfig(format=FORMAT, style="{")
logging.warning('Protocol problem: %s', 'connection reset', extra=d)

You shall see the output as per below:

[192.168.0.1] [2021-03-25 15:36:31,449] [root] [WARNING] - Protocol problem: connection reset
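Note that with a formatter like the above, every logging call has to supply the clientip value via extra, otherwise formatting that record will fail. One way to avoid repeating it on each call, shown here as a hedged sketch, is to wrap the logger in a LoggerAdapter that injects the context automatically (the IP address is just an example value):

import logging

FORMAT = '[{clientip}] [{asctime}] [{name}] [{levelname}] - {message}'
logging.basicConfig(format=FORMAT, style="{")

# the adapter merges this dict into the extra of every record it emits
adapter = logging.LoggerAdapter(logging.getLogger(), {'clientip': '192.168.0.1'})
adapter.warning('Protocol problem: %s', 'connection reset')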

Create Multiple Log Files with RotatingFileHandler

There are cases where you need to switch to a new log file when the file size reaches a certain limit, to avoid a particular log file growing too big. The RotatingFileHandler from the logging.handlers module provides an easy way to limit the maximum file size and the number of backup files to keep.

Below is an example that caps the file size at 2 KB and keeps at most 5 backup files:

from logging.handlers import RotatingFileHandler

logger = logging.getLogger("simple app")
logger.setLevel(logging.DEBUG)
fh = RotatingFileHandler(filename="app.log", maxBytes=2000, backupCount=5)
fh.setLevel(logging.DEBUG)
formatter = logging.Formatter('[{asctime}] - [{name}] - [{levelname}] - {message}', style="{")
fh.setFormatter(formatter)

logger.addHandler(fh)

When the app.log file grows to 2 KB, new logs are automatically directed to a fresh app.log, and the old files are renamed to app.log.1 through app.log.5, with the oldest backup dropped once all five exist. Note that if backupCount (or maxBytes) is set to 0, rollover never occurs and the log file just keeps growing.
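A quick way to see the rotation in action with the logger configured above is to write well over 2 KB of records; afterwards you should find app.log plus app.log.1 to app.log.5 in the working directory:

# each record is far smaller than 2 KB, so this loop forces several rollovers
for i in range(500):
    logger.debug("Log line number %d", i)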

Similarly, you can use TimedRotatingFileHandler to rotate the log files at certain time intervals, such as every X hours or days.
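For instance, a minimal sketch that rotates the file at midnight and keeps the last 7 rotated files could look like the below (the file name is a placeholder, and the logger and formatter are reused from the previous example):

from logging.handlers import TimedRotatingFileHandler

# rotate at midnight and keep roughly one week of backups
tfh = TimedRotatingFileHandler(filename="timed_app.log", when="midnight", backupCount=7)
tfh.setFormatter(formatter)
logger.addHandler(tfh)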

Send Log to Email via SMTPHandler

If you would like to send log events via email, e.g. to notify the system administrator or support team about critical runtime errors, you can achieve it with SMTPHandler. Below is an example using Gmail to send out the emails (you will need to replace the email address and password with your own Gmail account):

from logging.handlers import SMTPHandler

smtph = SMTPHandler(mailhost=("smtp.gmail.com", 587),
                    fromaddr="from@gmail.com",
                    toaddrs="to@gmail.com",
                    subject="Test Error from Python Script",
                    credentials=("from@gmail.com", "password"),
                    secure=())
smtph.setLevel(logging.ERROR)
logger.addHandler(smtph)

logger.critical("App is going to crash")

Note that I have passed an empty tuple for the secure parameter so that SMTPHandler uses TLS encryption for the SMTP connection.

(You will also need to enable less secure app access for your Google account in order to get through the authentication.)

Use Queue to Process Log Asynchronously with QueueHandler

The previous example demonstrated how to use the SMTPHandler to send out email. If you have tried the code yourself, you may have noticed the sluggish behavior, especially at the last line of the code. This is because sending the email blocks the main thread until it finishes, which may not be what you want when you need an immediate response, for instance in a web application.

To make sure the performance-critical threads are not blocked by these slow operations, you should consider using a queue and processing them in a separate thread. The QueueHandler and QueueListener are designed for this purpose. Below is an example showing how to use the queue handler and listener:

import queue
from logging.handlers import SMTPHandler, QueueHandler, QueueListener

log_queue = queue.Queue(-1)
queue_handler = QueueHandler(log_queue)

#smtph is the SMTPHandler created from previous example
listener = QueueListener(log_queue, smtph, respect_handler_level=True)
logger.addHandler(queue_handler)

listener.start()

#a log function created in previous example
log_func("Bingoooo")

listener.stop()

Take note that respect_handler_level has been set to True so that the listener respects the severity level set on each handler. E.g. the smtph handler has been set to handle messages at ERROR level and above; with respect_handler_level as True, the listener only dispatches records at ERROR level and above to that handler. If respect_handler_level is False, every record taken from the queue is passed to every handler, regardless of severity level.
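In a long-running application you would normally start the listener once at startup and stop it on shutdown so that any queued records are flushed. One simple way, sketched below, is to register the stop call with atexit:

import atexit

listener.start()
# flush and stop the listener automatically when the interpreter exits
atexit.register(listener.stop)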

Use dictConfig for Log Configuration

Setting up the logger configuration line by line in code can be tedious, especially when you have more than one handler or formatter to set up. The logging module also lets you keep the configuration in a dictionary and apply it by calling the dictConfig method.

Below is an example of using a dictionary to create a logger with a StreamHandler and an SMTPHandler:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(levelname)s %(asctime)s %(module)s %(message)s'
        }
    },
    'handlers': {
        'console':{
            'level':'DEBUG',
            'class':'logging.StreamHandler',
            'formatter': 'standard'
        },
        'email': {
            'level': 'ERROR',
            'formatter': 'standard',
            'class': 'logging.handlers.SMTPHandler',
            'mailhost': ("smtp.gmail.com", 587),
            'fromaddr': "from@gmail.com", 
            'toaddrs': "to@gmail.com", 
            'subject': "Test Error from Python Script",
            'credentials': ("from@gmail.com", "password"),
            'secure': ()            
        }
    },
    'loggers': {
        'app_log': {
            'handlers': ['console', 'email'],
            'level': 'INFO',
        }
    }
}

import logging.config

logging.config.dictConfig(LOGGING)
log = logging.getLogger("app_log")

In your project, you may put the configuration into a JSON or YAML file, load it as a dictionary and pass it into the dictConfig method (INI-style files can be handled with fileConfig instead), so that you do not need to modify any source code whenever you want to change the logging configuration.
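As a minimal sketch, assuming the same dictionary is saved in a file named logging.json (a hypothetical file name), the configuration could be loaded like this:

import json
import logging
import logging.config

# logging.json holds the same dictionary shown above (hypothetical file name)
with open("logging.json") as f:
    logging.config.dictConfig(json.load(f))

log = logging.getLogger("app_log")

Since JSON has no tuple type, values such as mailhost would be written as two-element lists, which SMTPHandler accepts as well.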

Suppress Errors in the Production Environment

Last but not least, in a production environment you should consider suppressing exceptions raised during logging rather than letting them get in the way of your application. For instance, if your logs are written to a file and the disk is full, or someone accidentally changed the file permission to read-only, you would not want such errors to bring down the entire application.

To prevent such issues, you can set raiseExceptions to False, so that exceptions raised while handling log records will not impact your application's functionality.

logging.raiseExceptions = False

With the above set to False, you shall not see any error even when there is a TypeError like the one below:

logging.warning("For your info only : %s %s", "yolo")
# the missing second argument would raise a TypeError during formatting,
# but no traceback is shown and nothing is logged

But make sure to set it back to True (the default value) in your test environment, so that you can catch these errors before deploying your code to production (see also the separate topic on suppressing stdout and stderr in Python).

Conclusion

In this article, we have summarized 8 tips for using the Python logging module, with some sample code that hopefully gives you ideas on how to use this module when developing code for production use.

A lot of details are not covered here, such as the full list of attributes for Formatter, LogRecord, Handler, Filter, etc.; you may need to go through the official documentation if a detail you need is not discussed in this article.
