Django Caching Strategies
A few years ago, I watched a Django application handle 10x its normal traffic during a product launch. No crashes. No slowdowns. Response times stayed under 200ms.
The secret? Aggressive caching.
Without caching, every request hits your database. Your database becomes the bottleneck. When traffic spikes, response times spike. Users leave. Sales drop.
With proper caching, you serve most requests from memory. Your database stays calm. Traffic spikes become non-events.
Today, I’ll show you how to implement caching in Django — from basic setup to advanced invalidation strategies. By the end, you’ll know exactly what to cache, how to cache it, and when to invalidate.
Let’s make your app fast.
Why Redis?
Django supports multiple cache backends:
- Memcached — Fast, simple, memory-only
- Redis — Fast, feature-rich, persistent
- Database — Slow, but no additional infrastructure
- File-based — Slow, good for development
- Local memory — Fast, but not shared across processes
For production, Redis is the clear winner:
- Persistence (survives restarts)
- Data structures (lists, sets, sorted sets)
- Pub/sub (for real-time features)
- Atomic operations
- Replication and clustering
- Used for caching AND Celery broker AND sessions
One service, multiple uses.
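As a sketch of that layout (the URLs and Celery settings below are illustrative assumptions, not part of the setup covered in this article — adjust to your own stack), one Redis instance can back all three concerns through separate logical databases:

```python
# settings.py -- one Redis instance, one logical DB per concern
REDIS_BASE = 'redis://127.0.0.1:6379'  # illustrative host/port

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': f'{REDIS_BASE}/1',  # DB 1: cache (and sessions, via the cache backend)
    }
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CELERY_BROKER_URL = f'{REDIS_BASE}/0'      # DB 0: Celery broker
CELERY_RESULT_BACKEND = f'{REDIS_BASE}/2'  # DB 2: Celery results
```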
Setting Up Redis with Django
Install Dependencies
pip install redis django-redis
Configure Settings
# config/settings/base.py
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': config('REDIS_URL', default='redis://127.0.0.1:6379/1'),
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'SOCKET_CONNECT_TIMEOUT': 5,
            'SOCKET_TIMEOUT': 5,
            'RETRY_ON_TIMEOUT': True,
            'CONNECTION_POOL_KWARGS': {'max_connections': 50},
        },
        'KEY_PREFIX': 'myapp',
    }
}
# Use Redis for sessions too. The pure cache backend is fastest, but
# sessions are lost if Redis evicts them; use
# 'django.contrib.sessions.backends.cached_db' if that's unacceptable.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
Development vs Production Settings
# config/settings/development.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake',
    }
}
# config/settings/production.py
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': config('REDIS_URL'),
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'COMPRESSOR': 'django_redis.compressors.zlib.ZlibCompressor',
            'SERIALIZER': 'django_redis.serializers.json.JSONSerializer',
        },
        'KEY_PREFIX': 'prod',
        'TIMEOUT': 60 * 15,  # 15 minutes default
    }
}
Verify Connection
# In Django shell
from django.core.cache import cache
# Test set/get
cache.set('test_key', 'test_value', timeout=30)
print(cache.get('test_key')) # 'test_value'
# Test deletion
cache.delete('test_key')
print(cache.get('test_key')) # None
The Low-Level Cache API
Django’s cache API is simple but powerful:
from django.core.cache import cache
# Basic operations
cache.set('key', 'value', timeout=300) # 5 minutes
value = cache.get('key')
value = cache.get('key', default='fallback')
cache.delete('key')
# Check existence
cache.has_key('key') # or 'key' in cache
# Multiple keys
cache.set_many({'key1': 'val1', 'key2': 'val2'}, timeout=300)
values = cache.get_many(['key1', 'key2']) # {'key1': 'val1', 'key2': 'val2'}
cache.delete_many(['key1', 'key2'])
# Atomic increment/decrement
cache.set('counter', 0)
cache.incr('counter') # 1
cache.incr('counter', 10) # 11
cache.decr('counter', 5) # 6
# Get or set pattern
def expensive_computation():
    # ... slow operation
    return result
value = cache.get_or_set('key', expensive_computation, timeout=300)
# Clear everything (use carefully!)
cache.clear()
Cache Key Best Practices
# ❌ BAD: Generic keys
cache.set('user', user_data) # ❌ Which user?
cache.set('products', products) # ❌ All products? Filtered?
# ✅ GOOD: Specific, namespaced keys
cache.set(f'user:{user_id}', user_data)
cache.set(f'user:{user_id}:profile', profile_data)
cache.set(f'products:category:{category_id}:page:{page}', products)
cache.set(f'stats:daily:{date.isoformat()}', stats)
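One way to keep keys consistent across a codebase is a small helper that joins namespace parts with a colon. This is a sketch — the make_key name is mine, not a Django API:

```python
def make_key(*parts):
    """Join key parts into a namespaced cache key like 'user:42:profile'.

    Colons inside a part are replaced so they can't collide
    with the separator.
    """
    return ':'.join(str(p).replace(':', '_') for p in parts)

# make_key('user', 42, 'profile') -> 'user:42:profile'
# make_key('products', 'category', 7, 'page', 2) -> 'products:category:7:page:2'
```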
Versioned Keys
Handle cache invalidation with versions:
from django.core.cache import cache
# Read the version on each call, not at import time,
# so long-running workers pick up changes
def get_product_cache_version():
    return cache.get_or_set('product_cache_version', 1, timeout=None)

def get_product(product_id):
    version = get_product_cache_version()
    key = f'product:{product_id}:v{version}'
    product = cache.get(key)
    if product is None:
        product = Product.objects.get(id=product_id)
        cache.set(key, product, timeout=3600)
    return product

def invalidate_all_products():
    # Increment the version instead of deleting keys; old keys simply
    # stop being read and expire naturally. get_or_set guards the first
    # call, when the version key doesn't exist yet (incr on a missing
    # key raises ValueError).
    get_product_cache_version()
    cache.incr('product_cache_version')
What to Cache
Cache These
1. Database query results
def get_featured_products():
    key = 'products:featured'
    products = cache.get(key)
    if products is None:
        products = list(
            Product.objects
            .filter(is_featured=True)
            .select_related('category')
            .order_by('-created_at')[:10]
        )
        cache.set(key, products, timeout=60 * 15)  # 15 minutes
    return products
2. Computed/aggregated data
def get_dashboard_stats(user):
    key = f'dashboard:stats:{user.id}'
    stats = cache.get(key)
    if stats is None:
        stats = {
            'total_orders': Order.objects.filter(user=user).count(),
            'total_spent': Order.objects.filter(user=user).aggregate(
                total=Sum('total')
            )['total'] or 0,
            'pending_orders': Order.objects.filter(
                user=user, status='pending'
            ).count(),
        }
        cache.set(key, stats, timeout=60 * 5)  # 5 minutes
    return stats
3. External API responses
import requests

def get_exchange_rate(from_currency, to_currency):
    key = f'exchange_rate:{from_currency}:{to_currency}'
    rate = cache.get(key)
    if rate is None:
        response = requests.get(
            f'https://api.exchangerate.host/latest?base={from_currency}'
        )
        rate = response.json()['rates'][to_currency]
        cache.set(key, rate, timeout=60 * 60)  # 1 hour
    return rate
4. Rendered HTML fragments
def get_navigation_html(user):
    key = f'nav:html:{user.id}:{user.updated_at.timestamp()}'
    html = cache.get(key)
    if html is None:
        html = render_to_string('partials/navigation.html', {
            'user': user,
            'permissions': user.get_all_permissions(),
        })
        cache.set(key, html, timeout=60 * 30)  # 30 minutes
    return html
5. Configuration and settings
def get_site_settings():
    key = 'site:settings'
    site_settings = cache.get(key)  # avoid shadowing django.conf.settings
    if site_settings is None:
        site_settings = {s.key: s.value for s in SiteSetting.objects.all()}
        cache.set(key, site_settings, timeout=60 * 60)  # 1 hour
    return site_settings
Don’t Cache These
1. User-specific data that changes frequently
# ❌ BAD: Caching data that changes every request
cache.set(f'user:{user.id}:last_seen', timezone.now()) # Pointless
2. Data that must be real-time
# ❌ BAD: Caching inventory when accuracy matters
# ❌ If you cache stock=5 and someone buys 6 items...
cache.set(f'product:{id}:stock', product.stock) # Dangerous
3. Large objects
# ❌ BAD: Caching entire querysets or large files
cache.set('all_users', list(User.objects.all())) # Memory explosion
4. Sensitive data without encryption
# ❌ BAD: Caching sensitive data in plain text
cache.set(f'user:{id}:ssn', user.ssn) # Security risk
View-Level Caching
cache_page Decorator
Cache entire view responses:
from django.views.decorators.cache import cache_page
# Cache for 15 minutes
@cache_page(60 * 15)
def product_list(request):
    products = Product.objects.filter(is_active=True)
    return render(request, 'products/list.html', {'products': products})
Problem: This caches the same response for ALL users. Not suitable for personalized content.
Vary Headers
Cache different versions based on headers:
from django.views.decorators.vary import vary_on_headers, vary_on_cookie
# Different cache for different Accept-Language
@cache_page(60 * 15)
@vary_on_headers('Accept-Language')
def product_list(request):
    ...

# Different cache per user (via session cookie)
@cache_page(60 * 15)
@vary_on_cookie
def dashboard(request):
    ...
Conditional Caching
Only cache for anonymous users:
from django.views.decorators.cache import cache_page
from functools import wraps
def cache_for_anonymous(timeout):
    def decorator(view_func):
        @wraps(view_func)
        def wrapper(request, *args, **kwargs):
            if request.user.is_authenticated:
                return view_func(request, *args, **kwargs)
            return cache_page(timeout)(view_func)(request, *args, **kwargs)
        return wrapper
    return decorator

@cache_for_anonymous(60 * 15)
def homepage(request):
    ...
Template Fragment Caching
Cache parts of templates:
{% load cache %}
<!-- Cache for 15 minutes -->
{% cache 900 sidebar %}
<div class="sidebar">
{% for item in expensive_sidebar_items %}
...
{% endfor %}
</div>
{% endcache %}
<!-- Cache with varying key -->
{% cache 900 user_sidebar user.id %}
<div class="sidebar">
{{ user.name }}'s sidebar
</div>
{% endcache %}
<!-- Cache with version -->
{% cache 900 product_list category.id category.updated_at %}
{% for product in products %}
...
{% endfor %}
{% endcache %}
Fragment Caching Best Practices
<!-- DON'T: Cache forms with CSRF tokens -->
{% cache 900 login_form %}
<form method="post">
{% csrf_token %} <!-- This will be stale! -->
...
</form>
{% endcache %}
<!-- DO: Cache static content -->
{% cache 3600 footer %}
<footer>
<p>© 2025 MyCompany</p>
<nav>...</nav>
</footer>
{% endcache %}
<!-- DO: Cache expensive computations with proper keys -->
{% cache 900 product_recommendations user.id product.id %}
{% for rec in recommendations %}
...
{% endfor %}
{% endcache %}
Cache Invalidation Strategies
“There are only two hard things in Computer Science: cache invalidation and naming things.” — Phil Karlton
Strategy 1: Time-Based Expiration
Simplest approach. Let caches expire naturally.
# Data refreshes every 15 minutes
cache.set('featured_products', products, timeout=60 * 15)
- Pros: Simple, no invalidation logic
- Cons: Stale data until expiration
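A common refinement is to add random jitter to the timeout, so keys written in the same batch don't all expire in the same instant and dogpile the database together. A sketch — the helper name is mine:

```python
import random

def jittered_timeout(base_seconds, jitter_ratio=0.1):
    """Return base_seconds +/- up to jitter_ratio of it, so keys
    written together don't all expire at the same moment."""
    jitter = base_seconds * jitter_ratio
    return int(base_seconds + random.uniform(-jitter, jitter))

# cache.set('featured_products', products, timeout=jittered_timeout(60 * 15))
```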
Strategy 2: Event-Based Invalidation
Invalidate when data changes:
# signals.py
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from django.core.cache import cache
@receiver([post_save, post_delete], sender=Product)
def invalidate_product_cache(sender, instance, **kwargs):
    # Invalidate specific product
    cache.delete(f'product:{instance.id}')
    # Invalidate product lists
    cache.delete('products:featured')
    cache.delete(f'products:category:{instance.category_id}')
    # Invalidate by pattern (requires django-redis). Note that KEYS
    # blocks Redis while it scans everything; on large datasets prefer
    # cache.delete_pattern('products:*'), which iterates with SCAN.
    from django_redis import get_redis_connection
    conn = get_redis_connection('default')
    keys = conn.keys('myapp:products:*')
    if keys:
        conn.delete(*keys)
- Pros: Always fresh data
- Cons: Complex invalidation logic, potential race conditions
Strategy 3: Version-Based Invalidation
Change the cache key instead of deleting:
class Product(models.Model):
    name = models.CharField(max_length=255)
    cache_version = models.PositiveIntegerField(default=1)

    def save(self, *args, **kwargs):
        self.cache_version += 1
        super().save(*args, **kwargs)

    @property
    def cache_key(self):
        return f'product:{self.id}:v{self.cache_version}'

def get_product_data(product):
    data = cache.get(product.cache_key)
    if data is None:
        data = {
            'id': product.id,
            'name': product.name,
            'price': str(product.price),
            # ... expensive computations
        }
        cache.set(product.cache_key, data, timeout=60 * 60)
    return data
- Pros: No explicit invalidation needed, old keys expire naturally
- Cons: Requires model changes, multiple versions temporarily in cache
Strategy 4: Write-Through Cache
Update cache immediately when data changes:
class ProductService:
    @staticmethod
    def update_product(product, **data):
        for key, value in data.items():
            setattr(product, key, value)
        product.save()
        # Immediately update cache
        cache_data = ProductService._build_cache_data(product)
        cache.set(f'product:{product.id}', cache_data, timeout=60 * 60)
        return product

    @staticmethod
    def _build_cache_data(product):
        return {
            'id': product.id,
            'name': product.name,
            'price': str(product.price),
            'category': product.category.name,
        }
- Pros: Cache always fresh immediately after write
- Cons: Write operations slower, complexity in service layer
Strategy 5: Cache Stampede Prevention
When cache expires, many requests hit the database simultaneously. Prevent this:
import time
from django.core.cache import cache
def get_with_lock(key, compute_func, timeout=300, lock_timeout=10):
    """Get from cache or compute with a lock to prevent a stampede."""
    value = cache.get(key)
    if value is not None:
        return value
    lock_key = f'{key}:lock'
    # Try to acquire the lock (add is atomic: it only succeeds
    # if the key doesn't already exist)
    if cache.add(lock_key, '1', lock_timeout):
        try:
            # We got the lock, compute the value
            value = compute_func()
            cache.set(key, value, timeout)
            return value
        finally:
            cache.delete(lock_key)
    else:
        # Another process is computing; wait and retry
        for _ in range(lock_timeout * 10):
            time.sleep(0.1)
            value = cache.get(key)
            if value is not None:
                return value
        # Timed out waiting, compute anyway
        return compute_func()

# Usage
def get_expensive_data():
    return get_with_lock(
        'expensive_data',
        lambda: compute_expensive_data(),
        timeout=300,
    )
Caching Patterns for Common Scenarios
Pattern 1: Cached Property with Timeout
from django.core.cache import cache
from functools import wraps
def cached_method(timeout=300, key_func=None):
    def decorator(method):
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            if key_func:
                cache_key = key_func(self, *args, **kwargs)
            else:
                cache_key = f'{self.__class__.__name__}:{self.pk}:{method.__name__}'
            result = cache.get(cache_key)
            if result is None:
                result = method(self, *args, **kwargs)
                cache.set(cache_key, result, timeout)
            return result
        return wrapper
    return decorator

class User(models.Model):
    @cached_method(timeout=300)
    def get_order_stats(self):
        return {
            'total': self.orders.count(),
            'total_value': self.orders.aggregate(Sum('total'))['total__sum'],
        }
Pattern 2: Cached QuerySet
class CachedQuerySet:
    def __init__(self, queryset, cache_key, timeout=300):
        self.queryset = queryset
        self.cache_key = cache_key
        self.timeout = timeout

    def get(self):
        result = cache.get(self.cache_key)
        if result is None:
            result = list(self.queryset)
            cache.set(self.cache_key, result, self.timeout)
        return result

# Usage
def get_featured_products():
    return CachedQuerySet(
        Product.objects.filter(is_featured=True).select_related('category'),
        'products:featured',
        timeout=60 * 15,
    ).get()
Pattern 3: Memoization with Cache
from django.core.cache import cache
from functools import wraps
from hashlib import md5
import json

def cache_result(timeout=300, prefix='func'):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Build the cache key from the function name and arguments
            # (arguments must be JSON-serializable)
            key_data = json.dumps({'args': args, 'kwargs': kwargs}, sort_keys=True)
            key_hash = md5(key_data.encode()).hexdigest()
            cache_key = f'{prefix}:{func.__name__}:{key_hash}'
            result = cache.get(cache_key)
            if result is None:
                result = func(*args, **kwargs)
                cache.set(cache_key, result, timeout)
            return result
        return wrapper
    return decorator

@cache_result(timeout=3600, prefix='calculations')
def calculate_shipping(weight, destination, carrier):
    # Expensive API call or calculation
    ...
Monitoring Cache Performance
Cache Hit Rate
from django_redis import get_redis_connection
def get_cache_stats():
    conn = get_redis_connection('default')
    info = conn.info()
    hits = info.get('keyspace_hits', 0)
    misses = info.get('keyspace_misses', 0)
    total = hits + misses
    return {
        'hits': hits,
        'misses': misses,
        'hit_rate': (hits / total * 100) if total > 0 else 0,
        'memory_used': info.get('used_memory_human'),
        'connected_clients': info.get('connected_clients'),
    }
Add to Admin Dashboard
# apps/core/admin.py
from django.contrib import admin
from django.template.response import TemplateResponse
from django.urls import path
class CacheAdminSite(admin.AdminSite):
    def get_urls(self):
        urls = super().get_urls()
        custom_urls = [
            path('cache-stats/', self.admin_view(self.cache_stats_view), name='cache-stats'),
        ]
        return custom_urls + urls

    def cache_stats_view(self, request):
        context = {
            **self.each_context(request),
            'stats': get_cache_stats(),
        }
        return TemplateResponse(request, 'admin/cache_stats.html', context)
Key Takeaways
- Use Redis for production — It’s fast, persistent, and serves multiple purposes.
- Cache expensive operations — Database queries, API calls, computed data, rendered fragments.
- Don’t cache everything — Skip frequently-changing data, real-time requirements, and sensitive information.
- Use proper cache keys — Namespaced, specific, and include versions or timestamps.
- Choose the right invalidation strategy — Time-based for simple cases, event-based for accuracy.
- Prevent cache stampedes — Use locks or staggered expiration for high-traffic keys.
- Monitor hit rates — A cache with low hit rate isn’t helping.
Caching is the difference between an application that crumbles under load and one that handles traffic spikes effortlessly. Start with the high-impact, low-frequency-change data, measure your hit rates, and expand from there.