Web Crawler To Get Links From a News Website

Backend · Unresolved · 3 replies · 1607 views
忘了有多久 2021-01-26 10:49

I am trying to get the links from a news website page (from one of its archives). I wrote the following lines of code in Python:

main.py contains:



        
3 Answers
  •  余生分开走
    2021-01-26 11:32

    I believe you may want to try accessing the text inside the list item like so:

    for tag in soup.findAll('li', attrs={"data-section":"Business"}):
        articletext += tag.string
    
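    One caveat worth knowing: `tag.string` is `None` whenever the `<li>` contains more than one child node, which makes the `+=` above raise a `TypeError`. A minimal sketch of the safer accessor, using made-up sample HTML, might look like:

    ```python
    # Sketch: .string is None for tags with nested children, so
    # .get_text() is the safer way to collect the text. The HTML
    # below is a hypothetical stand-in for the real archive page.
    from bs4 import BeautifulSoup

    html = '<li data-section="Business"><a href="/a">Markets <b>up</b></a></li>'
    soup = BeautifulSoup(html, "html.parser")

    articletext = ""
    for tag in soup.find_all("li", attrs={"data-section": "Business"}):
        # tag.string would be None here (the <li> has nested tags),
        # but get_text() concatenates all descendant strings.
        articletext += tag.get_text()
    ```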

    Edited: General Comments on getting links from a page

    Probably the easiest data type to use to gather a bunch of links and retrieve them later is a dictionary.

    To get links from a page using BeautifulSoup, you could do something like the following:

    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    link_dictionary = {}
    with urlopen(url_source) as f:
        soup = BeautifulSoup(f, "html.parser")
        for link in soup.findAll('a'):
            link_dictionary[link.string] = link.get('href')
    

    This will provide you with a dictionary named link_dictionary, where every key is a string holding the text between the anchor tags and every value is the value of the href attribute.
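    For example, once the dictionary is built you can look up an href by its link text. A small sketch using hypothetical sample HTML:

    ```python
    # Hypothetical example: build the dictionary from a small HTML
    # string, then retrieve a stored link later by its anchor text.
    from bs4 import BeautifulSoup

    html = '<a href="/world">World News</a><a href="/sport">Sport</a>'
    soup = BeautifulSoup(html, "html.parser")

    link_dictionary = {}
    for link in soup.find_all("a"):
        link_dictionary[link.string] = link.get("href")

    # Look up the href by the anchor text
    print(link_dictionary["World News"])  # -> /world
    ```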


    How to combine this with your previous attempt

    Now, if we combine this with the problem you were having before, we could try something like the following:

    link_dictionary = {}
    for tag in soup.findAll('li', attrs={"data-section":"Business"}):
        for link in tag.findAll('a'):
            link_dictionary[link.string] = link.get('href') 
    

    If this doesn't make sense, or you have more questions, experiment first and try to come up with a solution before asking a new, clearer question.
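    Putting the pieces together, a self-contained sketch of the combined approach might look like this; the HTML here is a made-up stand-in for the archive page, since the original URL isn't shown:

    ```python
    # Sketch: filter <li> tags by the data-section attribute, then
    # collect each nested link's text and href into a dictionary.
    # The HTML below is hypothetical sample data, not the real page.
    from bs4 import BeautifulSoup

    html = """
    <ul>
      <li data-section="Business"><a href="/biz/1">Shares rally</a></li>
      <li data-section="Sport"><a href="/sport/1">Cup final</a></li>
      <li data-section="Business"><a href="/biz/2">Rates hold</a></li>
    </ul>
    """
    soup = BeautifulSoup(html, "html.parser")

    link_dictionary = {}
    for tag in soup.find_all("li", attrs={"data-section": "Business"}):
        for link in tag.find_all("a"):
            link_dictionary[link.string] = link.get("href")

    print(link_dictionary)
    ```

    Only the two Business items end up in the dictionary; the Sport item is skipped by the attribute filter.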
