Incredible Python Code Snippets

I’ve been writing Python professionally for a little over four years now. Long enough to stop being impressed by flashy libraries — and start obsessing over small, boring-looking functions that quietly save hours every week.

These aren’t “hello world” tricks.
They’re not the usual requests, pandas, or rich flexes either.

These are low-level, sharp-edged utilities I actually reach for when building real systems, debugging production issues, or automating things that shouldn’t need a full framework.

Every function below exists because it solved a problem I hit more than once.

Let’s get into it.

safe_open() – Atomic File Writes Without Corruption

Ever had a process crash mid-write and leave a file half-written?
Yeah. Me too. Once was enough.

This uses a temporary file + atomic rename to make writes crash-safe.

import os
import tempfile
from contextlib import contextmanager

@contextmanager
def safe_open(path, mode='w', **kwargs):
    # Create the temp file in the target's directory so the final rename
    # stays on the same filesystem (a requirement for atomicity).
    dir_name = os.path.dirname(path) or '.'
    tmp = tempfile.NamedTemporaryFile(mode=mode, dir=dir_name, delete=False, **kwargs)
    try:
        with tmp:
            yield tmp
        # Atomic rename: readers see the old file or the new one, never a mix.
        os.replace(tmp.name, path)
    except BaseException:
        # On failure, remove the temp file instead of leaving it behind.
        os.unlink(tmp.name)
        raise

Usage:

with safe_open("data.json") as f:
    f.write("important data")

Fact: os.replace is an atomic rename on POSIX as long as source and destination are on the same filesystem — which is exactly why the temp file is created in the target's own directory.
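A quick way to convince yourself the rename really protects you — simulate a crash mid-write and check that the old contents survive. (Helper repeated so the snippet runs standalone; the file paths are just scratch locations.)

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def safe_open(path, mode='w', **kwargs):
    dir_name = os.path.dirname(path) or '.'
    with tempfile.NamedTemporaryFile(mode=mode, dir=dir_name, delete=False, **kwargs) as tmp:
        temp_name = tmp.name
        yield tmp
    os.replace(temp_name, path)

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "demo.txt")

# A successful write lands normally.
with safe_open(target) as f:
    f.write("v1")

# A "crash" mid-write never reaches os.replace, so the old contents survive.
try:
    with safe_open(target) as f:
        f.write("half-writ")
        raise RuntimeError("simulated crash")
except RuntimeError:
    pass

with open(target) as f:
    print(f.read())  # still "v1"
```

The failed attempt does leave a stray temp file in the directory, but the data you care about is intact.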

chunks() – Memory-Safe Iteration Over Anything

I don’t like loading things into memory unless I absolutely have to. This function chunks any iterable, not just lists.

from itertools import islice

def chunks(iterable, size):
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            break
        yield chunk

Usage:

for batch in chunks(range(10_000_000), 1000):
    process(batch)

Why it matters: This is the difference between code that works in dev… and code that survives production data.
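Because it only ever pulls `size` items at a time, it handles one-shot generators and file handles just as well as lists. A small standalone check (function repeated so it runs on its own):

```python
from itertools import islice

def chunks(iterable, size):
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            break
        yield chunk

# One-shot generators work too -- nothing is materialized up front.
rows = (f"row-{i}" for i in range(7))
batches = list(chunks(rows, 3))
print(batches)
# [['row-0', 'row-1', 'row-2'], ['row-3', 'row-4', 'row-5'], ['row-6']]
```

Note the last batch is allowed to be short — exactly what you want when batching database inserts or API calls.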

timed() – Measure Real Execution Time (Not Wall-Clock Drift)

Fact: time.time() follows the system clock, which NTP can nudge forwards or backwards mid-measurement. If you care about performance, use a monotonic clock like time.perf_counter().

import time
from contextlib import contextmanager

@contextmanager
def timed(label="block"):
    start = time.perf_counter()
    try:
        yield
    finally:
        # finally: report even when the block raises, so failed runs still get timed.
        print(f"{label}: {time.perf_counter() - start:.6f}s")

Usage:

with timed("db query"):
    expensive_query()

I use this constantly during refactors. Micro-optimizations without measurements are just vibes.

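The same timing logic also works as a decorator when you want a function timed on every call — a minimal sketch (`timed_fn` is just my name for this variant):

```python
import time
from functools import wraps

def timed_fn(fn):
    """Decorator flavour of timed(): prints how long each call took."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # finally: report even when the call raises.
            print(f"{fn.__name__}: {time.perf_counter() - start:.6f}s")
    return wrapper

@timed_fn
def slow_sum(n):
    return sum(range(n))

print(slow_sum(1_000_000))
```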

deep_get() – Safe Access for Ugly Nested Data

APIs love returning dictionaries nested like Russian dolls. This avoids KeyError, TypeError, and your sanity breaking.

def deep_get(data, path: list, default=None):
    for key in path:
        try:
            data = data[key]
        except (KeyError, TypeError, IndexError):
            return default
    return data

Usage:

email = deep_get(payload, ["user", "profile", "email"])

Why I prefer this over .get() chains: It works with dicts and lists — and it reads like intent, not defensive noise.
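Here's the dict-and-list mixing in action (function repeated so the snippet is self-contained; the payload shape is made up for illustration):

```python
def deep_get(data, path, default=None):
    for key in path:
        try:
            data = data[key]
        except (KeyError, TypeError, IndexError):
            return default
    return data

payload = {"user": {"emails": [{"address": "dev@example.com"}]}}

# Integer keys index into lists; string keys index into dicts.
print(deep_get(payload, ["user", "emails", 0, "address"]))      # dev@example.com
print(deep_get(payload, ["user", "phones", 0], default="n/a"))  # n/a
```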

trace_memory() – Find Silent Memory Leaks

Memory leaks don’t announce themselves. They just slowly kill your process.

This uses tracemalloc (stdlib, criminally underused).

import tracemalloc
from contextlib import contextmanager

@contextmanager
def trace_memory(label=""):
    tracemalloc.start()
    try:
        yield
    finally:
        # finally: always stop tracing, even if the block raises.
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"{label} | Current: {current/1e6:.2f}MB, Peak: {peak/1e6:.2f}MB")

Usage:

with trace_memory("image processing"):
    process_images()

This has saved me from “why does this service need 2GB RAM?” more than once.
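The peak number is the interesting one: it catches short-lived allocation spikes that are already freed by the time the block exits. A sketch demonstrating that (context manager repeated so the snippet runs standalone):

```python
import tracemalloc
from contextlib import contextmanager

@contextmanager
def trace_memory(label=""):
    tracemalloc.start()
    try:
        yield
    finally:
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"{label} | Current: {current/1e6:.2f}MB, Peak: {peak/1e6:.2f}MB")

with trace_memory("temp buffer"):
    buf = [0] * 1_000_000  # roughly 8MB of list slots
    del buf                # freed before the block ends, but Peak still remembers it
```

Current comes back near zero; Peak shows the spike your process actually had to survive.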

fail_fast() – Crash Loud, With Context

Silent failures are worse than crashes. faulthandler makes hard crashes — segfaults, fatal signals, aborts — dump a Python traceback for every thread instead of dying with nothing in the logs. (It's for fatal low-level errors, not ordinary Python exceptions, which already print tracebacks.)

import faulthandler
import sys

def fail_fast():
    faulthandler.enable(file=sys.stderr, all_threads=True)

Call it once at startup.

Why this matters: In production, partial logs are useless. This makes crashes honest.
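Beyond crash handling, faulthandler can also dump every thread's stack on demand — handy for diagnosing a hung process. A small sketch; note faulthandler writes through a real file descriptor, so an in-memory StringIO won't work and we use a scratch file instead:

```python
import faulthandler
import os
import tempfile

# Dump all thread stacks to a real file (faulthandler needs file.fileno()).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)

with open(path) as f:
    dump = f.read()
print(dump)  # one "(most recent call first)" stack per thread
```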

disk_cache() – Persistent Caching Without Redis

Sometimes you want caching… but not infrastructure. shelve gives you a persistent key-value store using nothing but files.

import shelve
from functools import wraps

def disk_cache(path):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args):
            # Positional-args-only key -- fine for simple lookups like get_price("AAPL").
            key = str(args)
            # Opening the shelf per call is slow but simple; good enough for
            # caching genuinely expensive work.
            with shelve.open(path) as db:
                if key not in db:
                    db[key] = fn(*args)
                return db[key]
        return wrapper
    return decorator

Usage:

@disk_cache("prices.db")
def get_price(symbol):
    return expensive_api_call(symbol)
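To see the cache doing its job, count how many times the wrapped function actually runs (decorator repeated so the snippet is standalone; the shelf lives in a scratch directory here):

```python
import os
import shelve
import tempfile
from functools import wraps

def disk_cache(path):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args):
            key = str(args)
            with shelve.open(path) as db:
                if key not in db:
                    db[key] = fn(*args)
                return db[key]
        return wrapper
    return decorator

cache_path = os.path.join(tempfile.mkdtemp(), "squares")
calls = {"count": 0}

@disk_cache(cache_path)
def square(x):
    calls["count"] += 1
    return x * x

print(square(12), square(12))  # 144 144
print(calls["count"])          # 1 -- the second call came straight from disk
```

Because the shelf is a file, the cache survives process restarts — the part Redis usually gets dragged in for.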
