Issue: when I try to execute the script, BeautifulSoup(html, ...) gives the error "TypeError: object of type 'Response' has no len()". I tried passing …
If you're using requests.get('https://example.com') to get the HTML, you should use requests.get('https://example.com').text instead: requests.get() returns a Response object, not a string.
import requests
from bs4 import BeautifulSoup

url = "https://fortnitetracker.com/profile/all/DakshRungta123"
html = requests.get(url).text  # .text is the HTML as a str; the Response itself is not
soup = BeautifulSoup(html, "html.parser")
print(soup.text)  # soup.text is already a str, so don't call .text on it again
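The traceback makes sense once you know that BeautifulSoup calls len() on the markup it receives, and requests' Response object does not define __len__. A minimal sketch of that failure mode, using a hypothetical stand-in class (FakeResponse is not part of requests):

```python
# BeautifulSoup calls len() on its input; a requests.Response has no
# __len__, hence the TypeError. FakeResponse is a hypothetical stand-in.
class FakeResponse:
    text = "<html><body>hello</body></html>"  # the decoded body lives on .text

resp = FakeResponse()
try:
    len(resp)  # the same kind of call BeautifulSoup makes internally
except TypeError as exc:
    print(exc)  # object of type 'FakeResponse' has no len()

print(len(resp.text))  # .text is a str, so len() works fine
```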
requests.get() puts a Response object in response, not the page HTML. Also, always send a browser User-Agent header; many sites reject the default requests client. You can copy your browser's User-Agent string from the developer tools, under the Network tab's request headers.
Try:
import requests
from bs4 import BeautifulSoup

url = 'http://www.google.com'
headers = {'User-Agent': ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) '
                          'AppleWebKit/537.36 (KHTML, like Gecko) '
                          'Chrome/71.0.3578.98 Safari/537.36')}
response = requests.get(url, headers=headers).text  # was quote_page, which is undefined
soup = BeautifulSoup(response, 'html.parser')
print(soup.prettify())
It worked for me:
soup = BeautifulSoup(requests.get("your_url").text, 'html.parser')
This version is better (the lxml parser is faster, but needs pip install lxml):
import requests
from bs4 import BeautifulSoup
soup = BeautifulSoup(requests.get("your_url").text, 'lxml')
Use .text to get the content of the response:
import requests
url = 'http://www ... '
response = requests.get(url)
print(response.text)
Or use it with BeautifulSoup:
import requests
from bs4 import BeautifulSoup
url = 'http://www ... '
response = requests.get(url)
msg = response.text
print(BeautifulSoup(msg, 'html.parser'))
You are getting response.content, but that returns the response body as bytes (docs). You should pass a str to the BeautifulSoup constructor (docs), so use response.text instead.
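A minimal illustration of the bytes-vs-str distinction, using a hard-coded body instead of a live request (the byte string below is an assumption for the example, not fetched from anywhere):

```python
# .content holds raw bytes; .text is those bytes decoded to str using
# the response's encoding. Simulated here with a hard-coded body.
body_bytes = b"<html><body>caf\xc3\xa9</body></html>"  # what response.content returns
body_text = body_bytes.decode("utf-8")                 # what response.text gives you

print(type(body_bytes))  # <class 'bytes'>
print(type(body_text))   # <class 'str'>
print(body_text)         # <html><body>café</body></html>
```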