I'm trying to optimize my ORM queries in Django. I use connection.queries to view the queries that Django generates for me.
Assuming I have these models:
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=50)

class Book(models.Model):
    name = models.CharField(max_length=50)
    author = models.ForeignKey(Author, related_name='books', on_delete=models.DO_NOTHING)
Book.objects.select_related("author")
is good enough; there is no need for Author.objects.all().
{{ book.author.name }}
won't hit the database, because book.author
has been prepopulated already.
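This is easy to check from a shell with connection.queries (Django only records queries while DEBUG is True). A rough sketch, assuming the app is called library and at least one Book exists:

from django.db import connection, reset_queries
from library.models import Book

reset_queries()  # clear any previously recorded queries
book = Book.objects.select_related("author").first()  # one query, with a JOIN
print(book.author.name)  # no extra query; the author object was prepopulated
print(len(connection.queries))  # 1
print(connection.queries[-1]["sql"])  # the SQL that was actually executed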
Django doesn't know about other queries! Author.objects.all()
and Book.objects.all()
are totally different querysets. So if you have both in your view and pass them to the template context, but in your template you do something like:
{% for book in books %} {{ book.author.name }} {% endfor %}
and have N books, this will result in N extra database queries (beyond the queries that fetch all books and authors)!
If instead you had done Book.objects.all().select_related("author")
no extra queries would be issued by the above template snippet.
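To make that concrete, a rough sketch of the two versions of such a view (the view and template names are placeholders):

from django.shortcuts import render
from library.models import Book

def book_list_slow(request):
    # the template loop over book.author.name issues one extra query per book
    return render(request, "books.html", {"books": Book.objects.all()})

def book_list_fast(request):
    # the JOIN fetches each author together with its book, so the loop adds no queries
    return render(request, "books.html", {"books": Book.objects.select_related("author")})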
Now, select_related()
of course adds some overhead to the queries. What happens is that when you do a Book.objects.all()
Django will return the result of SELECT * FROM BOOKS
. If instead you do a Book.objects.all().select_related("author")
Django will return the result of
SELECT * FROM BOOKS B INNER JOIN AUTHORS A ON B.AUTHOR_ID = A.ID
. So for each book it will return both the book's columns and those of its corresponding author. However, this overhead is much smaller than the overhead of hitting the database N times (as explained before).
So, even though select_related
creates a small performance overhead (each query returns more fields from the database), it is usually beneficial to use it, except when you are sure that you'll only need the columns of the specific model you are querying.
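If you want to see this trade-off as actual query counts, Django's test utility CaptureQueriesContext can capture them for you; a rough sketch (the Book import path is a placeholder for wherever your models live):

from django.db import connection
from django.test.utils import CaptureQueriesContext
from library.models import Book

with CaptureQueriesContext(connection) as ctx:
    for book in Book.objects.all():
        book.author.name  # one extra query per book
print(len(ctx.captured_queries))  # roughly N + 1

with CaptureQueriesContext(connection) as ctx:
    for book in Book.objects.select_related("author"):
        book.author.name  # no extra queries
print(len(ctx.captured_queries))  # 1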
Finally, a great way to see how many (and exactly which) queries are actually executed against your database is to use django-debug-toolbar (https://github.com/django-debug-toolbar/django-debug-toolbar).
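If you go that route, the basic wiring looks roughly like the sketch below; check the toolbar's own documentation, since the details change between versions:

# settings.py (sketch)
INSTALLED_APPS += ["debug_toolbar"]
MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")
INTERNAL_IPS = ["127.0.0.1"]  # the toolbar is only shown to these IPs

# urls.py (sketch)
from django.urls import include, path
urlpatterns += [path("__debug__/", include("debug_toolbar.urls"))]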
select_related
is an optional performance booster: after using it, further access to foreign-key attributes on the objects in a QuerySet won't hit the database.
Design philosophies
This is also why the select_related() QuerySet method exists. It’s an optional performance booster for the common case of selecting “every related object.”
Official Django docs:
Returns a QuerySet that will “follow” foreign-key relationships, selecting additional related-object data when it executes its query. This is a performance booster which results in a single more complex query but means later use of foreign-key relationships won’t require database queries.
As pointed out in the definition, using select_related
is only allowed on foreign-key (relational) relationships. Violating this rule raises the exception below:
In [21]: print(Book.objects.select_related('name').all().query)
FieldError: Non-relational field given in select_related: 'name'. Choices are: author
Here is my models.py (it's the same as in the question):
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=50)

    def __str__(self):
        return self.name

    __repr__ = __str__

class Book(models.Model):
    name = models.CharField(max_length=50)
    author = models.ForeignKey(Author, related_name='books', on_delete=models.DO_NOTHING)

    def __str__(self):
        return self.name

    __repr__ = __str__
The select_related booster:

In [25]: print(Book.objects.select_related('author').all().explain(verbose=True, analyze=True))
Hash Join  (cost=328.50..548.39 rows=11000 width=54) (actual time=3.124..8.013 rows=11000 loops=1)
  Output: library_book.id, library_book.name, library_book.author_id, library_author.id, library_author.name
  Inner Unique: true
  Hash Cond: (library_book.author_id = library_author.id)
  ->  Seq Scan on public.library_book  (cost=0.00..191.00 rows=11000 width=29) (actual time=0.008..1.190 rows=11000 loops=1)
        Output: library_book.id, library_book.name, library_book.author_id
  ->  Hash  (cost=191.00..191.00 rows=11000 width=25) (actual time=3.086..3.086 rows=11000 loops=1)
        Output: library_author.id, library_author.name
        Buckets: 16384  Batches: 1  Memory Usage: 741kB
        ->  Seq Scan on public.library_author  (cost=0.00..191.00 rows=11000 width=25) (actual time=0.007..1.239 rows=11000 loops=1)
              Output: library_author.id, library_author.name
Planning Time: 0.234 ms
Execution Time: 8.562 ms
In [26]: print(Book.objects.select_related('author').all().query)
SELECT "library_book"."id", "library_book"."name", "library_book"."author_id", "library_author"."id", "library_author"."name" FROM "library_book" INNER JOIN "library_author" ON ("library_book"."author_id" = "library_author"."id")
As you can see, using select_related causes an INNER JOIN on the provided foreign key (here, author). The execution time of this query is 8.562 ms.
On the other hand:
In [31]: print(Book.objects.all().explain(verbose=True, analyze=True))
Seq Scan on public.library_book (cost=0.00..191.00 rows=11000 width=29) (actual time=0.017..1.349 rows=11000 loops=1)
  Output: id, name, author_id
Planning Time: 1.135 ms
Execution Time: 2.536 ms
In [32]: print(Book.objects.all().query)
SELECT "library_book"."id", "library_book"."name", "library_book"."author_id" FROM "library_book
As you can see, it's just a simple SELECT on the book table, which only contains author_id (not the author's columns). The execution time in this case is 2.536 ms.
As mentioned in the Django doc:
Further access to the foreign-key attributes will cause another hit on the database (because we don't already have them):
In [33]: books = Book.objects.all()
In [34]: for book in books:
    ...:     print(book.author)  # hits the database once per book
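For contrast, the select_related version of the same loop stays at a single query:

books = Book.objects.select_related('author')
for book in books:
    print(book.author)  # no extra query; the author came along with the JOIN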
See also: Database access optimization and explain() in the QuerySet API reference.
Django Database Caching:
Django comes with a robust cache system that lets you save dynamic pages so they don’t have to be calculated for each request. For convenience, Django offers different levels of cache granularity: You can cache the output of specific views, you can cache only the pieces that are difficult to produce, or you can cache your entire site.
Django also works well with “downstream” caches, such as Squid and browser-based caches. These are the types of caches that you don’t directly control but to which you can provide hints (via HTTP headers) about which parts of your site should be cached, and how.
You should read those docs to find out which of them suits you best.
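As one concrete example of the per-view granularity, Django's cache_page decorator caches a view's whole response; a minimal sketch (the 15-minute timeout and the names are arbitrary):

from django.views.decorators.cache import cache_page
from django.shortcuts import render
from library.models import Book

@cache_page(60 * 15)  # cache the rendered response for 15 minutes
def book_list(request):
    books = Book.objects.select_related("author")
    return render(request, "books.html", {"books": books})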
PS1: For more information about the planner and how it works, see Why Planning time and Execution time are so different in Postgres? and Using EXPLAIN.
You are actually asking two different questions:
You should look at the documentation about QuerySet caching and evaluation:
Understand QuerySet evaluation
To avoid performance problems, it is important to understand:
that QuerySets are lazy.
when they are evaluated.
how the data is held in memory.
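The laziness part in concrete terms (a small sketch):

books = Book.objects.select_related("author")  # no database access yet: QuerySets are lazy
books = books.filter(name__startswith="A")  # still no database access
for book in books:  # the query runs here, when the QuerySet is evaluated
    print(book.author.name)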
So, in summary, Django caches in memory the results evaluated within the same QuerySet object; that is, if you do something like this:
books = Book.objects.all().select_related("author")
for book in books:
print(book.author.name) # Evaluates the query set, caches in memory results
first_book = books[1] # Does not hit db
print(first_book.author.name) # Does not hit db
This will only hit the db once: since you pulled in the authors with select_related, all of the above results in a single database query with an INNER JOIN.
BUT this does not cache anything between different QuerySet objects, not even for the same query:
books = Book.objects.all().select_related("author")
books2 = Book.objects.all().select_related("author")
first_book = books[1] # Does hit db
first_book = books2[1] # Does hit db
This is actually pointed out in the docs:
We will assume you have done the obvious things above. The rest of this document focuses on how to use Django in such a way that you are not doing unnecessary work. This document also does not address other optimization techniques that apply to all expensive operations, such as general purpose caching.
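So, within a single request, if you need the same results in more than one place, evaluate the QuerySet once and reuse it (or an explicit list), for example:

books = list(Book.objects.select_related("author"))  # one query; the results now live in memory
first_book = books[0]  # no db hit
first_again = books[0]  # no db hit, it's just a list lookup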
What you are actually asking is whether Django does ORM query caching, which is a very different matter. ORM query caching means that if you run a query and later run the same query again, and the database hasn't changed, the result comes from a cache instead of an expensive database lookup.
The answer is: not in Django itself, it is not officially supported, but yes, unofficially, through third-party apps. The most relevant third-party apps that enable this type of caching are:
Take a look at those if you are looking for query caching, and remember: first profile, find the bottlenecks, and only optimize if they are actually causing a problem.
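If you only need this in a couple of places, a hand-rolled version with Django's low-level cache API can be enough; a minimal sketch (the key name and timeout are arbitrary, and it does nothing about invalidation when books change):

from django.core.cache import cache
from library.models import Book

def cached_books():
    books = cache.get("all_books_with_authors")
    if books is None:
        books = list(Book.objects.select_related("author"))  # hit the database once
        cache.set("all_books_with_authors", books, timeout=60 * 5)  # keep for 5 minutes
    return books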
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming. (Donald Knuth)