Urllib2 & BeautifulSoup: Nice couple but too slow - urllib3 & threads?

醉酒成梦 2021-01-31 06:38

I was looking for a way to optimize my code when I heard some good things about threads and urllib3. Apparently, people disagree about which solution is best.

The pro

3 Answers
  • 2021-01-31 07:15

    Hey Guys,

    Some news on the problem! I've found this script, which might be useful! I'm currently testing it and it's promising (6.03 seconds to run the script below).

    My idea is to find a way to mix that with urllib3. Indeed, I'm making requests to the same host a lot of times.

    The PoolManager will take care of reusing connections for you whenever you request the same host. This should cover most scenarios without significant loss of efficiency, but you can always drop down to a lower level component for more granular control. (urllib3 doc site)
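
    For reference, basic PoolManager usage looks roughly like this (a minimal sketch based on the urllib3 docs, untested on my side; the URL is just the first page from my list):

    import urllib3

    # one PoolManager shared by the whole program; connections to the
    # same host are reused automatically between requests
    http = urllib3.PoolManager()
    r = http.request('GET', 'http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All')
    print r.status, len(r.data)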

    Anyway, it seems very interesting, and even if I can't yet see how to mix these two functionalities (urllib3 and the threading script below), I guess it's doable! :-) (see the sketch after the script)

    Thank you very much for taking the time to give me a hand with that, it smells good!

    import Queue
    import threading
    import urllib2
    import time
    from bs4 import BeautifulSoup

    hosts = ["http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All",
             "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=1",
             "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=2",
             "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=3",
             "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=4",
             "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=5",
             "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=6"]
    
    queue = Queue.Queue()
    out_queue = Queue.Queue()
    
    class ThreadUrl(threading.Thread):
        """Threaded Url Grab"""
        def __init__(self, queue, out_queue):
            threading.Thread.__init__(self)
            self.queue = queue
            self.out_queue = out_queue
    
        def run(self):
            while True:
                #grabs host from queue
                host = self.queue.get()
    
                #grabs urls of hosts and then grabs chunk of webpage
                url = urllib2.urlopen(host)
                chunk = url.read()
    
                #place chunk into out queue
                self.out_queue.put(chunk)
    
                #signals to queue job is done
                self.queue.task_done()
    
    class DatamineThread(threading.Thread):
        """Threaded Url Grab"""
        def __init__(self, out_queue):
            threading.Thread.__init__(self)
            self.out_queue = out_queue
    
        def run(self):
            while True:
                #grabs a chunk of html from the out queue
                chunk = self.out_queue.get()
    
                #parse the chunk
                soup = BeautifulSoup(chunk)
                #print soup.findAll(['table'])
    
                tableau = soup.find('table')
                rows = tableau.findAll('tr')
                for tr in rows:
                    cols = tr.findAll('td')
                    for td in cols:
                        texte_bu = td.text
                        texte_bu = texte_bu.encode('utf-8')
                        print texte_bu

                #signals to queue job is done
                self.out_queue.task_done()
    
    start = time.time()
    def main():
    
        #spawn a pool of threads, and pass them queue instance
        for i in range(5):
            t = ThreadUrl(queue, out_queue)
            t.setDaemon(True)
            t.start()
    
        #populate queue with data
        for host in hosts:
            queue.put(host)
    
        for i in range(5):
            dt = DatamineThread(out_queue)
            dt.setDaemon(True)
            dt.start()
    
    
        #wait on the queue until everything has been processed
        queue.join()
        out_queue.join()
    
    main()
    print "Elapsed Time: %s" % (time.time() - start)
    
  • 2021-01-31 07:19

    Consider using something like workerpool. Referring to the Mass Downloader example, combining it with urllib3 would look something like this:

    import workerpool
    import urllib3
    
    URL_LIST = [] # Fill this from somewhere
    
    NUM_SOCKETS = 3
    NUM_WORKERS = 5
    
    # We want a few more workers than sockets so that they have extra
    # time to parse things and such.
    
    http = urllib3.PoolManager(maxsize=NUM_SOCKETS)
    workers = workerpool.WorkerPool(size=NUM_WORKERS)
    
    class MyJob(workerpool.Job):
        def __init__(self, url):
            self.url = url
    
        def run(self):
            r = http.request('GET', self.url)
            # ... do parsing stuff here
    
    
    for url in URL_LIST:
        workers.put(MyJob(url))
    
    # Send shutdown jobs to all threads, and wait until all the jobs have been completed
    # (If you don't do this, the script might hang due to a rogue undead thread.)
    workers.shutdown()
    workers.wait()
    

    You may note from the Mass Downloader examples that there are multiple ways of doing this. I chose this particular example just because it's less magical, but any of the other strategies are also valid.
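
    Since the original goal is to parse each page with BeautifulSoup, the run method of such a job might look roughly like this (just a sketch; ParseJob is only an illustrative name, and it assumes the same single-table layout as in the question):

    from bs4 import BeautifulSoup

    class ParseJob(workerpool.Job):
        def __init__(self, url):
            self.url = url

        def run(self):
            # fetch through the shared urllib3 pool defined above
            r = http.request('GET', self.url)

            # parse the page and print every table cell, as in the question
            soup = BeautifulSoup(r.data)
            table = soup.find('table')
            if table is None:
                return
            for tr in table.findAll('tr'):
                for td in tr.findAll('td'):
                    print td.text.encode('utf-8')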

    Disclaimer: I am the author of both urllib3 and workerpool.

  • 2021-01-31 07:27

    I don't think urllib or BeautifulSoup is slow. I ran your code on my local machine with a modified version (removed the Excel stuff). It took around 100 ms to open the connection, download the content, parse it, and print it to the console for one country.

    About 10 ms of that is the total time BeautifulSoup spent parsing the content and printing it to the console per country. That is fast enough.

    Nor do I believe that using Scrapy or threading is going to solve the problem, because the problem is the expectation that it is going to be fast.

    Welcome to the world of HTTP. It is going to be slow sometimes, and sometimes it will be very fast. A couple of reasons a connection can be slow:

    • the server handling your request (it sometimes returns 404),
    • DNS resolution,
    • the HTTP handshake,
    • your ISP's connection stability,
    • your bandwidth,
    • the packet loss rate,

    etc.

    Don't forget, you are trying to make 121 HTTP requests to a server consecutively, and you don't know what kind of servers they have. They might also ban your IP address because of the consecutive calls.

    Take a look at the Requests lib. Read its documentation. If you're doing this to learn more Python, don't jump into Scrapy directly.
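
    For instance, a requests.Session keeps the underlying connection alive between calls (requests is built on top of urllib3), so repeated requests to the same host get cheaper. A minimal sketch, with the URL list rebuilt from the question:

    import requests

    session = requests.Session()  # reuses the TCP connection between calls

    base = "http://www.bulats.org//agents/find-an-agent?field_continent_tid=All&field_country_tid=All"
    urls = [base] + [base + "&page=%d" % i for i in range(1, 7)]

    for url in urls:
        resp = session.get(url)
        print resp.status_code, len(resp.text)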
