Fabulous Python Decorators

@cache

@functools.cache(user_function)

Simple lightweight unbounded function cache. Sometimes called "memoize".

Returns the same as lru_cache(maxsize=None), creating a thin wrapper around a dictionary lookup for the function arguments. Because it never needs to evict (remove) old values, this is smaller and faster than lru_cache() with a size limit.

example:

from functools import cache

@cache
def factorial(n):
    return n * factorial(n-1) if n else 1

>>> factorial(10)      # no previously cached result, makes 11 recursive calls
3628800
>>> factorial.cache_info()
CacheInfo(hits=0, misses=11, maxsize=None, currsize=11)

>>> factorial(5)       # just looks up cached value result
120
>>> factorial.cache_info()
CacheInfo(hits=1, misses=11, maxsize=None, currsize=11)

>>> factorial(12)      # makes two new recursive calls, the other 10 are cached
479001600
>>> factorial.cache_info()
CacheInfo(hits=2, misses=13, maxsize=None, currsize=13)

@lru_cache

This decorator comes to us from the functools module, which is included in the standard library and is incredibly easy to use. It can speed up repeated runs of functions and operations by caching their results. The @lru_cache decorator wraps a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O-bound function is periodically called with the same arguments.

@functools.lru_cache(user_function)
@functools.lru_cache(maxsize=128, typed=False)

Distinct argument patterns may be considered to be distinct calls with separate cache entries. For example, f(a=1, b=2) and f(b=2, a=1) differ in their keyword argument order and may have two separate cache entries.
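As a minimal sketch of what that means in practice (f here is just a toy function, and the exact keying is an implementation detail, which is why the docs say "may"), on CPython the two orderings each register as a miss:

from functools import lru_cache

@lru_cache(maxsize=128)
def f(a, b):
    return a + b

>>> f(a=1, b=2)
3
>>> f(b=2, a=1)       # same logical call, different keyword order
3
>>> f.cache_info()    # on CPython: two misses, i.e. two separate cache entries
CacheInfo(hits=0, misses=2, maxsize=128, currsize=2)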

If typed is set to true, function arguments of different types will be cached separately. For example, f(3) and f(3.0) will always be treated as distinct calls with distinct results. If typed is false, the implementation will usually but not always regard them as equivalent calls and only cache a single result.
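A small sketch of the typed flag (half is a made-up example function): with typed=True, the int 3 and the float 3.0 each get their own cache entry.

from functools import lru_cache

@lru_cache(maxsize=128, typed=True)
def half(x):
    return x / 2

>>> half(3)           # int argument
1.5
>>> half(3.0)         # float argument, cached separately because typed=True
1.5
>>> half.cache_info()
CacheInfo(hits=0, misses=2, maxsize=128, currsize=2)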

The wrapped function is instrumented with a cache_parameters() function that returns a new dict showing the values for maxsize and typed. This is for information purposes only. Mutating the values has no effect.

To help measure the effectiveness of the cache and tune the maxsize parameter, the wrapped function is instrumented with a cache_info() function that returns a named tuple showing hits, misses, maxsize and currsize.

The decorator also provides a cache_clear() function for clearing or invalidating the cache. The original underlying function is accessible through the __wrapped__ attribute. This is useful for introspection, for bypassing the cache, or for rewrapping the function with a different cache.

example:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

>>> [fib(n) for n in range(16)]
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]

>>> fib.cache_info()
CacheInfo(hits=28, misses=16, maxsize=None, currsize=16)

>>> fib.cache_parameters()
{'maxsize': None, 'typed': False}

>>> fib.cache_clear()
>>> fib.cache_info()
CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
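The __wrapped__ attribute is not shown in that example, so as a small illustration (continuing the same fib session): it calls the undecorated function directly, although the recursive calls inside the body still go through the decorated fib.

>>> fib.__wrapped__(10)    # the outermost call bypasses the cache
55
>>> fib(10)                # normal call, served through the cache
55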

@jit

JIT is short for Just-In-Time compilation. Normally, when we run code in Python, it is first compiled to bytecode and then executed by the interpreter, which resolves types and dispatches every operation dynamically at run time, adding overhead to each step. With Just-In-Time compilation, a function is instead translated into optimized machine code the first time it is called, so subsequent calls skip most of that interpreter overhead and run at near-native speed.

The Numba JIT compiler is famous for bringing exactly this to Python. Much like @lru_cache, the decorator is easy to apply and can give an immediate boost to the performance of your code. The Numba package provides the jit decorator, which makes speeding up numerically intensive code a lot easier without having to drop down into C.

example:

from numba import jit
import random

@jit(nopython=True)
def monte_carlo_pi(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples
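A hedged usage note rather than part of the example itself: the first call compiles monte_carlo_pi for the argument type it receives, so it is noticeably slower, while later calls reuse the generated machine code.

# First call triggers compilation; later calls reuse the compiled machine code.
estimate = monte_carlo_pi(1_000_000)
print(estimate)   # roughly 3.14; the exact value varies from run to run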

@use_unit

A decorator that can come in handy for scientific computing is the self-made use_unit decorator. It is useful when you don't want to attach units of measurement to your data itself, but still want anyone reading the output to know what those units are.

import functools

def use_unit(unit):
    '''Have a function return its result annotated with the given unit'''
    def decorator_use_unit(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            value = func(*args, **kwargs)
            return f'{value} {unit}'
        return wrapper
    return decorator_use_unit


@use_unit("meters per second")
def average_speed(distance, duration):
    return distance / duration

average_speed(100, 20)

# output:
# '5.0 meters per second'
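Because the wrapper applies functools.wraps, the decorated function keeps its original name, and the undecorated function stays reachable through __wrapped__ if you ever need the bare number:

>>> average_speed.__name__
'average_speed'
>>> average_speed.__wrapped__(100, 20)   # bypass the decorator, get the raw value
5.0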

@register

The register function comes from the atexit module. This decorator registers a function to be run at normal interpreter termination. For example, this works well with software that needs to save its state whenever you exit.

>>> import atexit
>>> 
>>> @atexit.register
... def goodbye(name="Danny", adjective="nice"):
...     print(f'Goodbye {name}, it was {adjective} to meet you.')
... 
>>> # press CTRL+D to exit the Python shell
Goodbye Danny, it was nice to meet you.
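register can also be called as a plain function, forwarding positional and keyword arguments to the callback at exit time; a minimal sketch with a made-up save_state function:

import atexit

def save_state(path, *, verbose=False):
    # Hypothetical save routine, run automatically at normal interpreter exit.
    if verbose:
        print(f'Saving state to {path}')

# The arguments supplied here are passed to save_state when the program exits.
atexit.register(save_state, 'state.json', verbose=True)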