soup.select('.r a') on f'https://google.com/search?q={query}' returns an empty list in Python BeautifulSoup. **NOT A DUPLICATE**

失恋的感觉 · asked 2020-11-30 14:42

The \"I\'m Feeling Lucky!\" project in the \"Automate the boring stuff with Python\" ebook no longer works with the code he provided.

Specifically, the linkElems = soup.select('.r a') call returns an empty list.
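
For reference, a minimal sketch that reproduces the symptom (the query value below is just an example):

    # Minimal reproduction sketch - the query is an example value.
    import requests, bs4

    query = 'python tutorial'
    res = requests.get(f'https://google.com/search?q={query}')
    res.raise_for_status()

    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    print(soup.select('.r a'))   # prints [] instead of the expected result links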

3 Answers
  • 2020-11-30 15:12

    Different websites (Google, for instance) serve different HTML to different User-Agents (the User-Agent string is how the website identifies your browser). One solution to your problem is to send a browser User-Agent, so that the HTML you get back is the same as what you would see with "view page source" in your browser. The following code just prints the list of Google search result URLs; it is not identical to the example in the book you referenced, but it still illustrates the point.

    #! python3
    # lucky.py - Opens several Google search results.
    
    import requests, sys, webbrowser, bs4
    print('Please enter your search term:')
    searchTerm = input()
    print('Googling...')    # display text while downloading the Google page
    
    url = 'http://google.com/search?q=' + searchTerm.replace(' ', '+')
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
    
    res = requests.get(url, headers=headers)
    res.raise_for_status()
    
    # Retrieve top search result links.
    soup = bs4.BeautifulSoup(res.content, 'html.parser')
    
    # Open a browser tab for each result.
    linkElems = soup.select('.r > a')   # Used '.r > a' instead of '.r a' because
    numOpen = min(5, len(linkElems))    # there are many href values after div class="r"
    for i in range(numOpen):
        # webbrowser.open('http://google.com' + linkElems[i].get('href'))
        print(linkElems[i].get('href'))
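
    A small aside, not part of the original answer: instead of concatenating the query into the URL by hand, you can let requests build and URL-encode the query string via its params argument. A minimal standalone sketch (the User-Agent value here is just a placeholder for a browser-like string):

    # Sketch: let requests encode the query string via params (standard requests usage).
    import requests

    headers = {'User-Agent': 'Mozilla/5.0'}   # placeholder; use a full browser User-Agent string
    res = requests.get('https://www.google.com/search',
                       params={'q': 'python tutorial'},   # requests URL-encodes this for you
                       headers=headers)
    res.raise_for_status()
    print(res.url)   # the final, fully encoded request URL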
    
  • 2020-11-30 15:17

    I took a different route. I saved the HTML from the request, opened that page, and inspected the elements (a minimal sketch of that save-and-inspect step follows the code below). It turns out the page Google serves to my Python request is different from the one I see when I open it natively in the Chrome browser. I identified the div class that appears to denote a result and substituted it for the .r - in my case it was .kCrYT

    #! python3
    # lucky.py - Opens several Google Search results.
    
    import requests, sys, webbrowser, bs4
    
    print('Googling...')    # display text while the Google page is downloading
    
    url = 'http://www.google.com.au/search?q=' + ' '.join(sys.argv[1:])
    url = url.replace(' ', '+')
    
    res = requests.get(url)
    res.raise_for_status()
    
    # Retrieve top search result links.
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    
    # Get all of the 'a' tags after an element with the class 'kCrYT' (these are the results).
    linkElems = soup.select('.kCrYT > a')
    
    # Open a browser tab for each result.
    numOpen = min(5, len(linkElems))
    for i in range(numOpen):
        webbrowser.open_new_tab('http://google.com.au' + linkElems[i].get('href'))
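
    For reference, a minimal sketch of the save-and-inspect step mentioned above (the query and file name are arbitrary); open the saved file in a browser or editor to see which class names the scraped page really contains:

    # Sketch: fetch the page and dump the HTML that requests actually received,
    # so it can be opened and inspected locally.
    import requests

    res = requests.get('http://www.google.com.au/search?q=python+tutorial')
    res.raise_for_status()
    with open('google_results.html', 'w', encoding='utf-8') as f:
        f.write(res.text)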
    
  • 2020-11-30 15:31

    I had the same problem while reading that book and found a solution for it.

    Replacing

    soup.select('.r a')
    

    with

    soup.select('div#main > div > div > div > a')
    

    will solve the issue.

    The following is the full code that works:

    import webbrowser, requests, bs4, sys
    
    print('Googling...')
    res = requests.get('https://google.com/search?q=' + ' '.join(sys.argv[1:]))
    res.raise_for_status()
    
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    
    linkElems = soup.select('div#main > div > div > div > a')
    numOpen = min(5, len(linkElems))
    for i in range(numOpen):
        webbrowser.open('http://google.com' + linkElems[i].get('href'))
    

    The above code takes the search term from command-line arguments.
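
    As an aside (an assumption on my part, not something stated in the answer): when Google is scraped without JavaScript, the result hrefs are typically relative links of the form /url?q=<destination>&..., which is why the snippets above prepend http://google.com. If you want the plain destination URL instead, urllib.parse can extract it; a minimal sketch:

    # Sketch (assumption): hrefs look like '/url?q=<destination>&sa=...'.
    from urllib.parse import urlparse, parse_qs

    def destination(href):
        # Return the target URL embedded in a '/url?q=...' href, or the raw href if there is no q= parameter.
        qs = parse_qs(urlparse(href).query)
        return qs.get('q', [href])[0]

    print(destination('/url?q=https://example.com/page&sa=U'))   # https://example.com/page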
