In order to download files, I'm creating a urlopen object (urllib2 class) and reading it in chunks. I would like to connect to the server several times and download the file in a different section per connection, in parallel.
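For clarity, here is a minimal sketch of the chunked-read loop I describe (the URL, filename, and chunk size are placeholders, not my actual values):

```python
import urllib2

# Placeholder URL; read the response in fixed-size chunks
# instead of pulling the whole body into memory at once.
url = 'http://example.com/bigfile.bin'
response = urllib2.urlopen(url)

with open('bigfile.bin', 'wb') as out:
    while True:
        chunk = response.read(64 * 1024)  # 64 KiB per read
        if not chunk:
            break
        out.write(chunk)
```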
Sounds like you want to use one of the flavors of HTTP Range requests that are available: you send a Range header naming the byte span you want, and a server that supports it responds with 206 Partial Content and just those bytes, so several connections can each fetch a different slice of the file.
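As a sketch (placeholder URL; the server has to actually honor Range, which you can check via an Accept-Ranges: bytes response header), a ranged request with urllib2 looks like this:

```python
import urllib2

# Ask for only the first 500 bytes of the resource.
request = urllib2.Request('http://example.com/bigfile.bin')
request.add_header('Range', 'bytes=0-499')
response = urllib2.urlopen(request)

status = response.getcode()  # 206 if the server honored the range
data = response.read()       # at most 500 bytes
```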
As for running parallel requests, you might want to use urllib3 or requests.
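A hedged sketch of how parallel ranged downloads could look with requests plus plain threads (the URL, part count, and filename are placeholders; it assumes the server reports Content-Length on HEAD and honors Range):

```python
import threading
import requests

URL = 'http://example.com/bigfile.bin'  # placeholder
PARTS = 4

# Total size from a HEAD request; some servers omit this header.
size = int(requests.head(URL).headers['Content-Length'])
step = size // PARTS
results = [None] * PARTS

def fetch(i):
    # Each thread requests one disjoint byte range.
    start = i * step
    end = size - 1 if i == PARTS - 1 else start + step - 1
    headers = {'Range': 'bytes=%d-%d' % (start, end)}
    results[i] = requests.get(URL, headers=headers).content

threads = [threading.Thread(target=fetch, args=(i,)) for i in range(PARTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Reassemble the parts in order.
with open('bigfile.bin', 'wb') as out:
    for part in results:
        out.write(part)
```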
I took some time to make a list of similar questions. Searching Stack Overflow for each of these turns up interesting ones:

- [python] +download +concurrent
- [python] +http +concurrent
- [python] +urllib2 +slow
- [python] +download +many
As we've been discussing, I made such a downloader using PycURL. The one, and only one, thing I had to do was pycurl_instance.setopt(pycurl_instance.NOSIGNAL, 1) to prevent crashes, since libcurl otherwise uses signals for timeouts, which is unsafe with threads.
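Roughly, each worker's handle is set up like this (a sketch, not my exact code; the helper name, URL, and output path are placeholders):

```python
import pycurl

def download(url, path):
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.NOSIGNAL, 1)  # required when running handles in threads
    with open(path, 'wb') as out:
        c.setopt(c.WRITEDATA, out)  # stream the body straight to disk
        c.perform()
    c.close()

download('http://example.com/bigfile.bin', 'bigfile.bin')
```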
I did use APScheduler to fire the requests in separate threads. Thanks to your advice to change the busy-waiting while True: pass in the main thread to while True: time.sleep(3), the code behaves quite nicely, and with the Runner module from the python-daemon package the application is almost ready to be used as a typical UN*X daemon.
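The resulting main loop looks roughly like this (a sketch against APScheduler's 3.x BackgroundScheduler API; fire_request is a hypothetical stand-in for the real download job):

```python
import time
from apscheduler.schedulers.background import BackgroundScheduler

def fire_request():
    pass  # issue one download request here (placeholder job)

scheduler = BackgroundScheduler()
scheduler.add_job(fire_request, 'interval', seconds=10)
scheduler.start()  # jobs run in the scheduler's own threads

try:
    # Polite main-thread wait instead of the old busy loop.
    while True:
        time.sleep(3)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()
```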