I have two Python scripts. One uses the urllib2 library and one uses the Requests library.
I have found Requests easier to implement, but I can't find an equivalent for urllib2's read().
If the response is in JSON, you could do something like this (Python 3):
import json
import requests as reqs
# Make the HTTP request.
response = reqs.get('http://demo.ckan.org/api/3/action/group_list')
# Use the json module to load CKAN's response into a dictionary.
response_dict = json.loads(response.text)
for key in response_dict:
    print("key: ", key, "val: ", response_dict[key])
To see everything in the response you can use .__dict__:
print(response.__dict__)
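That dumps internal attributes too; for everyday use, a few public attributes usually tell you what you need:
print(response.status_code)  # numeric HTTP status, e.g. 200
print(response.headers)      # case-insensitive dict of response headers
print(response.url)          # final URL after any redirects
print(response.encoding)     # encoding used to decode response.text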
Requests doesn't have an equivalent to urllib2's read(); the response body is exposed directly, as response.content (the raw bytes) or response.text (the decoded string).
>>> import requests
>>> response = requests.get("http://www.google.com")
>>> print(response.text)
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage"><head>....
>>> type(response.content), type(response.text)
(<class 'bytes'>, <class 'str'>)
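The distinction matters for binary data: response.content is the raw bytes, which is what you want when saving, say, an image to disk. A minimal sketch (the URL here is just an example):
import requests
response = requests.get("https://www.python.org/static/img/python-logo.png")
with open("logo.png", "wb") as f:
    f.write(response.content)  # write the raw bytes, no text decoding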
It looks like the POST request you are making is returning no content, which is often the case with a POST request. Perhaps it set a cookie? The status code tells you that the POST succeeded.
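A quick way to check all of this, assuming url and payload stand in for your own request:
import requests
response = requests.post(url, data=payload)  # url/payload: your own values
print(response.status_code)                  # a 2xx code means the POST succeeded
print(len(response.content))                 # 0 if the body is empty
print(response.cookies.get_dict())           # any cookies the server set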
If you upload, for example, an image to an API and want the resulting address back from the response, you could do:
import requests

url = 'https://uguu.se/api.php?d=upload-tool'
filename = 'example.png'                  # name to report to the API (placeholder)
full_file_path = '/path/to/example.png'   # local file to upload (placeholder)

data = {"name": filename}
with open(full_file_path, 'rb') as f:     # with-block closes the file afterwards
    files = {'file': f}
    response = requests.post(url, data=data, files=files)

current_url = response.text  # the service returns the uploaded file's URL as the body
print(response.text)
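Before treating response.text as the upload URL, it's worth confirming the request succeeded; raise_for_status() raises an exception for 4xx/5xx responses:
response.raise_for_status()          # requests.HTTPError on a 4xx/5xx status
current_url = response.text.strip()  # trim any trailing newline from the returned URL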