Response time matters for any application: a faster response makes the application feel more responsive and improves the user experience. In one of my previous posts we used Memcached to cache view responses and improved the site's response time by almost 10x. That caching was entirely server-side. However, we can go further with some smart client-side (browser) caching, and Django's Conditional View Processing is a very good fit for this scenario.
The decorators in django.views.decorators.http provide an easy way to handle conditional requests. ETags, i.e. content-based caching, are very useful in cases where time-based caching is hard to get right. Once a response carries an ETag, the browser sends an If-None-Match request header with the ETag of its cached copy on every subsequent request. If the current version of the resource still has the same ETag value, meaning it matches the browser's cached copy, the server returns HTTP 304 Not Modified and the content is served from the browser cache, boosting response time because the body never has to be fetched from the server (or the server-side cache).
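The flow above can be demonstrated end-to-end without Django at all. Below is a minimal, framework-free sketch: a tiny stdlib HTTP server that honours If-None-Match, and a client that repeats the request the way a browser would. All names here are illustrative; this only mimics what the Django view will do later in this post.

```python
# Minimal ETag/If-None-Match demo using only the standard library.
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ETAG = '"v1"'  # stand-in for the content identifier

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("If-None-Match") == ETAG:
            # Unchanged: answer 304, no body is transferred.
            self.send_response(304)
            self.end_headers()
        else:
            body = b"full payload"
            self.send_response(200)
            self.send_header("ETag", ETAG)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port

# First request: full 200 response carrying the ETag header.
r1 = urllib.request.urlopen(url)
etag = r1.headers["ETag"]
r1.read()

# Repeat with If-None-Match: the server answers 304 Not Modified.
req = urllib.request.Request(url, headers={"If-None-Match": etag})
try:
    status2 = urllib.request.urlopen(req).status
except urllib.error.HTTPError as err:  # urllib reports 304 as an error
    status2 = err.code

server.shutdown()
```

Only the headers travel on the second round trip, which is exactly where the response-time win at the end of this post comes from.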
The advantage of ETags is that caching is based on the response content. This does mean generating the ETag from the content, which can be a hash or any other string identifier. Web servers (like Nginx) can also generate ETags nowadays, but in this post we will see how to set them from the application side in Django.
The @etag decorator really makes things simple in Django. You can read more about it in the official Django docs, but here I'll focus on the implementation.
In my scenario I cache the ETag itself as well, so that the same ETag is reused across requests; it also gives me a manual way to clear the cache and reset the ETag (via django-clearcache) if required, without any code change or server restart.
The first thing to do is write a get_etag function which returns the ETag.
# Get etag (for client side caching)
def get_etag(request, **kwargs):
    etag_key = request.path.split('/')[2]
    return cache.get(etag_key, None)
The snippet above returns the etag from cache. The cache key is based on the request path, so that the etag decorator can use the same get_etag function for multiple view functions. Once this is done we just need to specify the decorator in our view function.
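As a quick illustration of that key derivation, here is what the path split yields for a hypothetical URL (the real paths depend on your URLconf):

```python
# Splitting "/reinvent/server_data/" on "/" gives
# ['', 'reinvent', 'server_data', ''], so index 2 is the view segment,
# which doubles as the etag cache key.
path = "/reinvent/server_data/"  # hypothetical request path
etag_key = path.split('/')[2]
print(etag_key)
```

Because the key comes from the path, one get_etag function serves every view, each with its own cache entry.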
@etag(get_etag)
def server_data(request, **kwargs):
    url = reverse('reinvent_rest_api:server_api-list') + '?format=datatables'
    # Check cache & get response
    response = cache_response(ServerViewSet.as_view({'get': 'list'}), url,
                              'server_cache', 'server_data')
    return HttpResponse(response)

@etag(get_etag)
def uri_data(request, **kwargs):
    url = reverse('reinvent_rest_api:uri_api-list') + '?format=datatables'
    # Check cache & get response
    response = cache_response(UriViewSet.as_view({'get': 'list'}), url,
                              'uri_cache', 'uri_data')
    return HttpResponse(response)
It's as simple as that. Now the ETag is set along with the view's response cache, as below:
# Handle caching
def cache_response(view, url, cache_name, etag_cache_name):
    # Get response from cache
    response = cache.get(cache_name)
    # Invoke POST call to get data from DRF if cache is not set
    if not response:
        status, response = send_api_request(url, view, None, None)
        if status != 200:
            raise Exception('Error fetching data from API: ' + str(response.content))
        else:
            # Set cache
            cache.set(cache_name, response, None)
            cache.set(etag_cache_name, str(datetime.datetime.now()), None)
    return response
This sets both the response and the ETag in cache. Here I am using a datetime stamp as the ETag value.
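The pattern at work here is cache-aside with the ETag written alongside the response, so a fresh ETag is minted exactly when the response is regenerated. A framework-free sketch of that idea (a dict stands in for Django's cache, fetch() stands in for the DRF call):

```python
import datetime

cache = {}  # stand-in for Django's cache backend

def fetch():
    return "payload"  # stand-in for the DRF API call

def cache_response(cache_name, etag_cache_name):
    response = cache.get(cache_name)
    if response is None:
        response = fetch()            # cache miss: hit the API
        cache[cache_name] = response  # store the response...
        # ...and mint a new etag alongside it (timestamp as identifier)
        cache[etag_cache_name] = str(datetime.datetime.now())
    return response

first = cache_response("server_cache", "server_data")
second = cache_response("server_cache", "server_data")  # served from cache
```

Because both entries are written in the same branch, the cached response and its ETag can never drift apart: deleting one without the other is the only failure mode, which is why the invalidation below removes both keys together.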
Now, ETag cache invalidation should follow the same process as response cache invalidation: whenever the model changes, the cache should be invalidated. It is not time- or user-based, so once the cache is set all users benefit from it, and it is only invalidated when the model changes. Cache invalidation is a crucial factor and depends entirely on your application and requirements, and on how static or dynamic your content is. Be sure to spend some time understanding which approach fits best before finalizing the design.
In my case, the invalidation logic lives in the model's post-save signal, so it is triggered whenever the model changes.
# Signal to handle cache invalidation
@receiver(post_create_historical_record)
def invalidate_cache(sender, **kwargs):
    model_name = kwargs.get('instance').__class__.__name__
    if model_name == 'Server' and cache.has_key('server_cache') and cache.has_key('server_data'):
        cache.delete('server_cache')
        cache.delete('server_data')
    if model_name == 'Uri' and cache.has_key('uri_cache') and cache.has_key('uri_data'):
        cache.delete('uri_cache')
        cache.delete('uri_data')
I used django-simple-history's post_create_historical_record signal because I have history enabled for multiple models; instead of wiring up separate signals for different models, this single signal covers them all.
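If more models join later, the per-model if-chain above can be generalized with a model-name to cache-keys mapping. A small sketch of that variation (a dict stands in for Django's cache; the key names are taken from the snippets above):

```python
# Map each model name to the cache keys that must be dropped together
# (response cache + etag cache), so invalidation stays in one place.
CACHE_KEYS = {
    "Server": ("server_cache", "server_data"),
    "Uri": ("uri_cache", "uri_data"),
}

def invalidate_cache_for(cache, model_name):
    for key in CACHE_KEYS.get(model_name, ()):
        cache.pop(key, None)  # equivalent of cache.delete(key)

store = {"server_cache": "...", "server_data": "etag", "other": 1}
invalidate_cache_for(store, "Server")
```

Adding a new cached model then only means adding one entry to the mapping, rather than another branch in the signal handler.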
Note – In case you have deployed your application on Kubernetes and use the Nginx Ingress Controller, make sure gzip is turned off, otherwise Nginx will strip the ETag from the response headers. This can be done with the annotation below in your Ingress manifest:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: reinvent-ingress
  annotations:
    ingress.kubernetes.io/configuration-snippet: "gzip off;"
spec:
  ...
And that's about it. Here are the response times before and after this change 🙂

1. First request (no caching at all) – 17.1 s
2. Second request (Memcached, without ETag) – 474 ms
3. Third request (Memcached, with ETag) – 3 ms