How to efficiently do many tasks a “little later” in Python?

心在旅途 2021-01-30 11:59

I have a process that needs to perform a bunch of actions "later" (usually after 10-60 seconds). The problem is that there can be a lot of those "later" actions (1000s), so running a separate timer thread per task is not viable.

10 Answers
  • 2021-01-30 12:04

    Simple: subclass Thread and create each instance with a timeout parameter; each instance's thread then waits that long before performing its action.
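
    A minimal sketch of this idea (the DelayedTask class and its parameter names are our own, not from the answer):

    import threading
    import time

    class DelayedTask(threading.Thread):
        # run `action(*args)` once, `timeout` seconds after start()
        def __init__(self, timeout, action, *args):
            super().__init__()
            self.timeout = timeout
            self.action = action
            self.args = args

        def run(self):
            time.sleep(self.timeout)   # each thread waits its own timeout
            self.action(*self.args)

    DelayedTask(10, print, "ran 10 seconds later").start()

    Note that this spawns one thread per pending action, which is exactly the cost the question is trying to avoid with thousands of tasks.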

  • 2021-01-30 12:17

    If you have a bunch of tasks that need to get performed later, and you want them to persist even if you shut down the calling program or your workers, you should really look into Celery, which makes it super easy to create new tasks, have them executed on any machine you'd like, and wait for the results.

    From the Celery page, "This is a simple task adding two numbers:"

    from celery.task import task
    
    @task
    def add(x, y):
        return x + y
    

    You can execute the task in the background, or wait for it to finish:

    >>> result = add.delay(8, 8)
    >>> result.wait() # wait for and return the result
    16
    
  • 2021-01-30 12:24

    Another option is to use the Python GLib bindings, in particular its timeout functions.

    It's a good choice as long as you don't need to use multiple cores and the dependency on GLib is not a problem. It handles all events in a single thread, which prevents synchronization issues. Its event framework can also be used to watch and handle I/O-based events (e.g. sockets).

    UPDATE:

    Here's a live session using GLib:

    >>> import time
    >>> import glib
    >>> 
    >>> def workon(thing):
    ...     print("%s: working on %s" % (time.time(), thing))
    ...     return True # use True for repetitive and False for one-time tasks
    ... 
    >>> ml = glib.MainLoop()
    >>> 
    >>> glib.timeout_add(1000, workon, "this")
    2
    >>> glib.timeout_add(2000, workon, "that")
    3
    >>> 
    >>> ml.run()
    1311343177.61: working on this
    1311343178.61: working on that
    1311343178.61: working on this
    1311343179.61: working on this
    1311343180.61: working on this
    1311343180.61: working on that
    1311343181.61: working on this
    1311343182.61: working on this
    1311343182.61: working on that
    1311343183.61: working on this
    
  • 2021-01-30 12:26

    Pyzmq has an ioloop implementation with an API similar to the Tornado ioloop. It implements a DelayedCallback, which may help you.
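
    A minimal sketch, assuming an older pyzmq release where DelayedCallback is still available (later versions deprecate it in favour of the loop's own timeout methods); the task body here is our own:

    from zmq.eventloop.ioloop import DelayedCallback, IOLoop

    loop = IOLoop.instance()

    def task():
        print("ran about 10 seconds later")
        loop.stop()                        # one-shot demo: stop the loop afterwards

    DelayedCallback(task, 10000).start()   # fire once, 10000 ms from now
    loop.start()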

  • 2021-01-30 12:27

    Presuming your process has a run loop that can receive signals, and each action is short enough to run sequentially, use signals and the POSIX alarm():

        signal.alarm(time)
        If time is non-zero, this function requests that a
        SIGALRM signal be sent to the process in time seconds.

    This depends on what you mean by "those 'later' actions can be a lot" and on whether your process already uses signals. Given the phrasing of the question, it's unclear why an external Python package would be needed; a stdlib-only sketch follows.
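
    A minimal sketch of this approach (the schedule/on_alarm helpers are our own, and alarm() is POSIX-only): keep a heap of deadlines and re-arm a single alarm for the earliest one.

    import heapq
    import itertools
    import signal
    import time

    pending = []               # heap of (due_time, seq, action)
    seq = itertools.count()    # tie-breaker so callables are never compared

    def on_alarm(signum, frame):
        now = time.time()
        while pending and pending[0][0] <= now:
            heapq.heappop(pending)[2]()          # run every action that is due
        if pending:                              # re-arm for the next deadline
            signal.alarm(max(1, int(pending[0][0] - now)))

    def schedule(delay, action):
        heapq.heappush(pending, (time.time() + delay, next(seq), action))
        signal.alarm(max(1, int(pending[0][0] - time.time())))

    signal.signal(signal.SIGALRM, on_alarm)
    schedule(10, lambda: print("ran about 10 seconds later"))
    signal.pause()             # stand-in for your run loop: sleep until a signal arrives

    Because alarm() is always re-armed from the top of the heap, scheduling a new task cannot clobber an earlier deadline.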

  • 2021-01-30 12:28

    Have you looked at the multiprocessing module? It comes standard with Python. It is similar to the threading module, but runs each task in a separate process. You can use a Pool() object to set up a worker pool, then use its .map() method to call a function with the various queued task arguments, as sketched below.
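
    A minimal sketch of that pattern (the job tuples and the run_later function are our own); note that a sleeping worker occupies a pool slot, so size the pool to the number of tasks you expect to be waiting concurrently:

    from multiprocessing import Pool
    import time

    def run_later(job):
        delay, payload = job
        time.sleep(delay)             # wait inside the worker, then act
        return "done: %s" % payload

    if __name__ == "__main__":
        jobs = [(10, "a"), (15, "b"), (30, "c")]
        with Pool(4) as pool:         # four worker processes
            print(pool.map(run_later, jobs))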
