logging_helper – Logging extensions and helpers

Backports, extensions and customisations of the logging system

Usage examples for QueueHandler and QueueListener can be found in the Examples section below.

QueueHandler and QueueListener are copied from the CPython 3.5 branch (commit 9aee273bf8b7) and slightly modified (mostly in the documentation).

Examples

First the imports:

>>> import multiprocessing as mp
>>> import logging
>>> import sys
>>> import pyhetdex.tools.logging_helper as phlog
>>> import pyhetdex.tools.processes as p

and some utility functions

>>> # initialise the logger and the QueueHandler
>>> def init_logger(queue, name):
...     qhandler = phlog.QueueHandler(queue)
...     qhandler.setLevel(logging.INFO)
...     logger = logging.getLogger(name)
...     logger.setLevel(logging.INFO)
...     logger.addHandler(qhandler)
...     return logger
>>> # create the stdout handler
>>> def stdout_handler(fmt=None):
...     shandler = logging.StreamHandler(stream=sys.stdout)
...     if fmt is None:
...         fmt = "[%(levelname)s] %(message)s"
...     shandler.setFormatter(logging.Formatter(fmt=fmt))
...     shandler.setLevel(logging.INFO)
...     return shandler
>>> # function to execute in the worker pool
>>> def log_func(name, level, message):
...     logger = logging.getLogger(name)
...     logger.log(level, message)

Logging from the main process

This example shows how to set up the QueueHandler and QueueListener to log from the main process to standard output via a queue. Although this is overkill for the current example, it can be useful if any of the handlers can block, e.g. because it needs an internet connection.

>>> # create the logger and add the QueueHandler
>>> logger_name = "single_proc"
>>> q_sp = mp.Queue()
>>> logger = init_logger(q_sp, logger_name)
>>> # start the QueueListener and log two messages
>>> with phlog.SetupQueueListener(q_sp, handlers=[stdout_handler()],
...                               use_process=False):
...     logger.info("this is a demonstration")
...     logger.error("this is an error")
[INFO] this is a demonstration
[ERROR] this is an error
>>> q_sp.close()

Logging from subprocesses

This way of logging becomes even more useful when you want to log to a common place from multiple processes. First, it avoids multiple messages being written at the same time, which can corrupt them; second, it prevents several handlers, e.g. logging.handlers.RotatingFileHandler instances, from trying to move/rename/remove the same log file at the same time.

>>> logger_name = "multi_proc"
>>> q_mp = mp.Queue()
>>> # create a multiprocessing worker
>>> worker = p.get_worker(name="multip", multiprocessing=True,
...                       initializer=init_logger,
...                       initargs=(q_mp, logger_name))
>>> # start the QueueListener and run two jobs
>>> with phlog.SetupQueueListener(q_mp, handlers=[stdout_handler()],
...                               use_process=True):
...     worker(log_func, logger_name, logging.INFO, "this is a demonstration")
...     worker(log_func, logger_name, logging.ERROR, "this is an error")
...     worker.get_results()  
...     worker.close()
[INFO] this is a demonstration
[ERROR] this is an error
>>> q_mp.close()

QueueHandler – Put log records to a queue

class pyhetdex.tools.logging_helper.QueueHandler(queue)[source]

Bases: logging.Handler

This handler sends events to a queue. Typically, it would be used together with a multiprocessing Queue to centralise logging to file in one process (in a multi-process application), so as to avoid file write contention between processes.

This code is new in Python 3.2, but this class can be copy-pasted into user code for use with earlier Python versions.

Parameters:

queue : queue-like instance

Initialise an instance, using the passed queue.

emit(record)[source]

Emit a record.

Writes the LogRecord to the queue, preparing it for pickling first.

enqueue(record)[source]

Enqueue a record.

The base implementation uses put_nowait. You may want to override this method if you want to use blocking, timeouts or custom queue implementations.
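
For instance, a subclass could use a blocking put so that records are not dropped when the queue is full. This is only an illustrative sketch; the BlockingQueueHandler name and the 5 second timeout are made up, not part of the module:

>>> import pyhetdex.tools.logging_helper as phlog
>>> class BlockingQueueHandler(phlog.QueueHandler):
...     def enqueue(self, record):
...         # block (up to 5 seconds) for a free slot instead of failing
...         # immediately with queue.Full
...         self.queue.put(record, timeout=5)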

prepare(record)[source]

Prepares a record for queuing. The object returned by this method is enqueued.

The base implementation formats the record to merge the message and arguments, and removes unpickleable items from the record in-place.

You might want to override this method if you want to convert the record to a dict or JSON string, or send a modified copy of the record while leaving the original intact.
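
As a sketch, a hypothetical subclass could enqueue a plain dictionary instead of the LogRecord itself (the listener on the other side would then need to know how to handle such dictionaries):

>>> import pyhetdex.tools.logging_helper as phlog
>>> class DictQueueHandler(phlog.QueueHandler):
...     def prepare(self, record):
...         # send a small, picklable dict instead of the LogRecord;
...         # getMessage() merges the message with its arguments
...         return {"name": record.name,
...                 "level": record.levelname,
...                 "message": record.getMessage()}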

QueueListener – Log records from a queue

class pyhetdex.tools.logging_helper.QueueListener(queue, handlers=[], respect_handler_level=False)[source]

This class implements an internal threaded listener which watches for LogRecords being added to a queue, removes them and passes them to a list of handlers for processing.

This code is new in Python 3.2.

Parameters:

queue : queue-like instance

handlers : list of logging.Handler child instances

respect_handler_level : bool, optional

if True, the handler's level is respected
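
The listener can also be driven directly, without SetupQueueListener. A minimal sketch follows; the logger name and handler choice are arbitrary, and the handler writes to standard error from the listener thread, so no output is shown here:

>>> import logging
>>> import multiprocessing as mp
>>> import pyhetdex.tools.logging_helper as phlog
>>> q = mp.Queue()
>>> listener = phlog.QueueListener(q, handlers=[logging.StreamHandler()],
...                                respect_handler_level=True)
>>> listener.start()
>>> logger = logging.getLogger("direct_listener")
>>> logger.addHandler(phlog.QueueHandler(q))
>>> logger.warning("sent through the queue")
>>> listener.stop()
>>> q.close()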

_monitor()[source]

Monitor the queue for records, and ask the handler to deal with them.

This method runs on a separate, internal thread. The thread will terminate if it sees a sentinel object in the queue.

dequeue(block)[source]

Dequeue a record and return it, optionally blocking.

The base implementation uses get. You may want to override this method if you want to use timeouts or work with custom queue implementations.
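
As an illustration (the subclass name and the 0.5 second interval are arbitrary), dequeue could poll the queue in short intervals instead of issuing a single indefinite get:

>>> import queue
>>> import pyhetdex.tools.logging_helper as phlog
>>> class PollingQueueListener(phlog.QueueListener):
...     def dequeue(self, block):
...         # poll in short intervals; retry on timeout so an idle queue
...         # is not mistaken for an empty one on blocking calls
...         while True:
...             try:
...                 return self.queue.get(block, timeout=0.5)
...             except queue.Empty:
...                 if not block:
...                     raise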

enqueue_sentinel()[source]

This is used to enqueue the sentinel record.

The base implementation uses put_nowait. You may want to override this method if you want to use timeouts or work with custom queue implementations.

handle(record)[source]

Handle a record.

This just loops through the handlers offering them the record to handle.

prepare(record)[source]

Prepare a record for handling.

This method just returns the passed-in record. You may want to override this method if you need to do any custom marshalling or manipulation of the record before passing it to the handlers.
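
For example, a hypothetical subclass could tag each message so that the handlers can tell it travelled through the queue:

>>> import pyhetdex.tools.logging_helper as phlog
>>> class TaggingQueueListener(phlog.QueueListener):
...     def prepare(self, record):
...         # prefix the (already merged) message and drop the arguments
...         record.msg = "[via queue] %s" % record.getMessage()
...         record.args = None
...         return record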

start()[source]

Start the listener.

This starts up a background thread to monitor the queue for LogRecords to process.

stop()[source]

Stop the listener.

This asks the thread to terminate, and then waits for it to do so. Note that if you don’t call this before your application exits, there may be some records still left on the queue, which won’t be processed.

_sentinel = None

SetupQueueListener – Setup the QueueListener

class pyhetdex.tools.logging_helper.SetupQueueListener(queue_, handlers=[], respect_handler_level=True, use_process=True)[source]

Start the QueueListener, in a separate process if required.

Adapted from the logging cookbook.

The SetupQueueListener instance can be used as a context manager in a with statement. Upon exiting the statement, the process and the QueueListener are stopped. If an exception occurs, it is logged as critical before stopping; to avoid possible errors with missing formatter keywords, the handler formatters are temporarily replaced with "%(levelname)s %(message)s".

Parameters:

queue_ : queue-like object

queue which contains messages to log

handlers : list of logging.Handler child instances

respect_handler_level : bool, optional

if True, the handler's level is respected

use_process : bool, optional

if True start the listener in a separate process

Attributes

queue, handlers : as above

stop_event : multiprocessing.Event instance

event used to signal the listener to stop

lp : multiprocessing.Process instance

process running the listener, if use_process is True

listener : QueueListener instance

_listener_process()[source]

This initialises logging with the given handlers.

To be used in a separate process.

Starts the listener and waits for the main process to signal completion via the event. The listener is then stopped.

_start_listener()[source]

Create, start and return the listener

stop()[source]

Stop the listener and, if it’s running in a process, join it. Should be called before the main process finishes to avoid losing logs.