> In some Django views, I used a pattern like this to save changes to a model, and then to do some asynchronous updating (such as generating images, further altering the model) ba…
As @dotz mentioned, there is little point in spawning an asynchronous task only to immediately block and wait until it finishes.
Moreover, if you attach to it this way (the `.get()` at the end), you can be sure that the `mymodel` instance changes you just made won't be seen by your worker, because they won't have been committed yet: remember, you're still inside the `atomic` block.
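For illustration, the problematic shape looks roughly like this (a sketch, assuming a Celery task `mytask` that re-fetches the instance by id, as in the original pattern):

```python
from django.db import transaction

with transaction.atomic():
    mymodel.save()
    # Anti-pattern: the task is queued (and .get() blocks the request)
    # while the transaction is still open, so the worker's query cannot
    # see the changes saved above.
    result = mytask.delay(mymodel.id).get()
```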
What you could do instead (since Django 1.9) is delay the task until after the current active transaction is committed, using the `django.db.transaction.on_commit` hook:
```python
from django.db import transaction

with transaction.atomic():
    mymodel.save()
    transaction.on_commit(lambda: mytask.delay(mymodel.id))
```
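Note that if `on_commit()` is called while there is no active transaction (e.g. under autocommit), Django executes the callback immediately, so the pattern degrades gracefully outside of `atomic` blocks.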
I use this pattern quite often in my `post_save` signal handlers that trigger some processing of new model instances. For example:
```python
from django.db import transaction
from django.db.models.signals import post_save
from django.dispatch import receiver

from . import models  # your models, defining some Order model
from . import tasks   # your tasks, defining a routine to process new instances


@receiver(post_save, sender=models.Order)
def new_order_callback(sender, instance, created, **kwargs):
    """Automatically triggers processing of a new Order."""
    if created:
        transaction.on_commit(lambda: tasks.process_new_order.delay(instance.pk))
```
This also means, however, that your task won't be executed if the database transaction fails. That is usually the desired behavior, but keep it in mind.
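As a minimal sketch of that behavior (reusing the `Order` model and `process_new_order` task from the example above, with field values omitted):

```python
from django.db import transaction
from . import models

try:
    with transaction.atomic():
        # post_save fires here and registers the on_commit callback
        models.Order.objects.create()  # field values omitted for brevity
        raise RuntimeError("force a rollback")
except RuntimeError:
    pass

# The atomic block rolled back, so the registered callback was discarded
# and process_new_order was never queued.
```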
Edit: It's actually nicer to register the `on_commit` Celery task without a lambda, by passing a task signature's `delay` method directly:

```python
transaction.on_commit(tasks.process_new_order.s(instance.pk).delay)
```
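This works because `tasks.process_new_order.s(instance.pk)` creates a Celery signature with the argument already bound, so its `.delay` method is a zero-argument callable that queues the task, which is exactly the kind of callback `on_commit` expects.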