urllib2

Python: Login on a website

倾然丶 夕夏残阳落幕 submitted on 2019-12-23 02:38:15

Question: I'm trying to log in to a website and do automated clean-up jobs. The site where I need to log in is http://site.com/Account/LogOn. I tried various code samples I found on Stack Overflow, like "Login to website using python" (but I'm stuck on the line session = requests.session(config={'verbose': sys.stderr}), which my JetBrains IDE flags over 'verbose', telling me I need to do something but not explaining exactly what). I also tried "Browser simulation - Python", but no luck with that either. Can anyone…
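A minimal sketch of the usual approach: POST the login form while keeping cookies, so later clean-up requests stay authenticated. This uses urllib.request (the Python 3 successor of urllib2); the form field names 'UserName' and 'Password' are assumptions, so inspect the real login form for the actual names. The request is built but not sent, since the site is not reachable here.

```python
# Cookie-aware login sketch with the standard library. Assumptions are
# marked: the form field names below are hypothetical.
import urllib.request
import urllib.parse
import http.cookiejar

# An opener with a cookie jar, so the session cookie set at login is
# sent automatically on the follow-up clean-up requests.
cookie_jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(cookie_jar))

login_url = "http://site.com/Account/LogOn"
form = urllib.parse.urlencode(
    {"UserName": "me", "Password": "secret"}).encode("ascii")  # hypothetical field names

# Passing data= makes urllib issue a POST instead of a GET.
request = urllib.request.Request(login_url, data=form)
print(request.get_method())   # POST
print(request.full_url)
# opener.open(request) would perform the actual login (network required).
```

The same shape works in Python 2 with urllib2.build_opener, cookielib.CookieJar, and urllib.urlencode.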

unbuffered urllib2.urlopen

烂漫一生 submitted on 2019-12-23 02:32:32

Question: I have a client for a web interface to a long-running process. I'd like the output from that process to be displayed as it comes. This works great with urllib.urlopen(), but that function doesn't take a timeout parameter. With urllib2.urlopen(), on the other hand, the output is buffered. Is there an easy way to disable that buffering? Answer 1: A quick hack that occurred to me is to use urllib.urlopen() with threading.Timer() to emulate a timeout. But that's only a quick-and-dirty hack. Answer 2: urllib2 is buffered when you…
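One way to get output as it arrives without giving up the timeout parameter: urllib2.urlopen (urllib.request.urlopen on Python 3) returns a file-like object, so the response can be consumed line by line instead of with a single read(). A sketch, exercised with an in-memory stream so the loop is visible without a live server:

```python
# Incremental reading from a file-like HTTP response. The fake stream
# stands in for the real response object, which supports the same
# readline() interface.
import io

def stream_lines(response):
    """Yield output as each line arrives, instead of buffering the body."""
    for line in iter(response.readline, b""):
        yield line.rstrip(b"\n")

# Real code: response = urllib.request.urlopen(url, timeout=5)
fake_response = io.BytesIO(b"line 1\nline 2\nline 3\n")
print(list(stream_lines(fake_response)))  # [b'line 1', b'line 2', b'line 3']
```

Whether each line appears promptly still depends on the server flushing its output; the client-side loop only removes the "read everything at once" buffering.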

urllib2 runs fine if I run the program independently but throws an error when I add it to a cron job

人走茶凉 submitted on 2019-12-23 01:28:17

Question: url = "www.someurl.com" request = urllib2.Request(url, headers={"User-agent" : "Mozilla/5.0"}) contentString = urllib2.urlopen(request).read() contentFile = StringIO.StringIO(contentString) for i in range(0,2): html = contentFile.readline() print html The above code runs fine from the command line, but if I add it to a cron job it throws the following error: File "/usr/lib64/python2.6/urllib2.py", line 409, in _open '_open', req) File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain result =…
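A corrected sketch of the snippet in Python 3 terms, with the details that commonly differ between an interactive shell and cron called out in comments: the URL needs an explicit scheme (urllib raises ValueError on "www.someurl.com" alone), and cron runs with a minimal environment and a different working directory, so any proxy variables or relative paths that worked interactively may be missing.

```python
# Rebuilt version of the question's snippet. 'headers' (not 'header')
# is the keyword urllib expects, and the call is urlopen.
import urllib.request
import io

url = "http://www.someurl.com"   # scheme required; bare "www..." fails
request = urllib.request.Request(url, headers={"User-agent": "Mozilla/5.0"})
print(request.get_header("User-agent"))

# urllib.request.urlopen(request).read() would fetch the body; a
# stand-in stream shows the readline loop without the network:
content_file = io.StringIO("first line\nsecond line\nthird line\n")
for _ in range(2):
    print(content_file.readline().rstrip("\n"))
```

In a crontab, also prefer absolute paths for the interpreter and the script, e.g. `*/5 * * * * /usr/bin/python /home/me/fetch.py`.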

Multiple requests using urllib2.urlopen() at the same time

浪尽此生 submitted on 2019-12-22 14:55:09

Question: My question is this: is it possible to request two different URLs at the same time? What I'm trying to do is use a Python script to issue requests to two different URLs simultaneously, making two PHP scripts run at the same time (on different servers, each running a terminal command). My issue is that I can't run them one right after the other, because each takes a certain amount of time to do its work, and they need to start at the same time and end at the same time. Is this possible using urllib2.urlopen?
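urlopen itself is blocking, but two calls can be in flight at once by putting each in its own thread, which is the standard answer to this question. A sketch with the fetch function injected, so the start-together/finish-together shape runs without live URLs:

```python
# Two concurrent "requests" via threads. In real code do_request would
# be: lambda: urllib.request.urlopen(url).read()
import threading

results = {}

def fetch(name, do_request):
    results[name] = do_request()

threads = [
    threading.Thread(target=fetch, args=("server1", lambda: "response A")),
    threading.Thread(target=fetch, args=("server2", lambda: "response B")),
]
for t in threads:
    t.start()   # both requests are now running at the same time
for t in threads:
    t.join()    # block until both have finished
print(sorted(results))  # ['server1', 'server2']
```

"Start at the same time" is satisfied by the two start() calls; "end at the same time" cannot be forced from the client side, but join() guarantees the script continues only after both are done.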

How to implement a timeout control for urllib2.urlopen

元气小坏坏 submitted on 2019-12-22 06:15:50

Question: How do I implement timeout control for urllib2.urlopen in Python? I just want to monitor the connection so that if no XML data is returned within 5 seconds, it cuts the connection and connects again. Should I use some timer? Thanks. Answer 1: urllib2.urlopen("http://www.example.com", timeout=5) Answer 2: From the urllib2 documentation: The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only…
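The "cut the connection and connect again" part the asker describes is a retry loop around urlopen's timeout: when the timeout fires, socket.timeout is raised, and catching it lets you reconnect. A sketch with the fetch function injected so the retry logic runs without a server; the flaky stand-in times out once and then succeeds:

```python
# Timeout-and-retry loop. Real code for do_fetch would be:
#   lambda: urllib.request.urlopen(url, timeout=5).read()
import socket

def fetch_with_retry(do_fetch, retries=3):
    for attempt in range(1, retries + 1):
        try:
            return do_fetch()
        except socket.timeout:   # raised when the 5-second timeout expires
            print("attempt %d timed out, reconnecting" % attempt)
    raise RuntimeError("all attempts timed out")

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 2:
        raise socket.timeout("timed out")
    return "<xml>data</xml>"

print(fetch_with_retry(flaky))  # <xml>data</xml>
```

Note the documentation's caveat quoted above: the timeout covers blocking operations such as the connection attempt, not the total wall-clock time of a slow but steadily responding server.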

Is there a library for urllib2 for Python that we can download?

五迷三道 submitted on 2019-12-22 05:38:39

Question: I need to use urllib2 with BeautifulSoup. I found the download file for BeautifulSoup and installed it; however, I couldn't find any download files for urllib2. Is there another way to install that module? Answer 1: The module comes with Python; simply import it: import urllib2. If you're using Python 3, urllib2 was replaced by urllib.request. The urllib PEP (Python 3): http://www.python.org/dev/peps/pep-3108/#urllib-package. Source: https://stackoverflow.com/questions/16597865/is-there-a-library-for
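Since urllib2 is part of Python 2's standard library and its contents moved to urllib.request in Python 3, code that must run on both versions typically uses a guarded import, along these lines:

```python
# Cross-version import: urllib2 on Python 2, urllib.request on Python 3.
# Nothing needs to be downloaded in either case.
try:
    import urllib2 as request_lib           # Python 2
except ImportError:
    import urllib.request as request_lib    # Python 3

# Either way, urlopen is available under the same local name:
print(hasattr(request_lib, "urlopen"))  # True
```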

Pass a JSON object to a URL with requests

蹲街弑〆低调 submitted on 2019-12-22 05:28:19

Question: So, I want to use Kenneth's excellent requests module. I stumbled on this problem while trying to use the Freebase API. Basically, their API looks like this: https://www.googleapis.com/freebase/v1/mqlread?query=... As the query, they expect a JSON object; here's one that will return a list of wines with their country and percentage of alcohol: [{ "country": null, "name": null, "percentage_alcohol": null, "percentage_alcohol>": 0, "type": "/food/wine" }] Of course, we'll have to escape the hell…
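The escaping the asker dreads can be delegated entirely to the library: serialize the query with json.dumps and let URL encoding handle the rest. With requests that would be `requests.get(base, params={"query": json.dumps(query)})`; a standard-library sketch of the same idea (note the Freebase API itself was shut down in 2016, so the URL is purely illustrative):

```python
# Building a URL with a JSON-valued query parameter, all escaping done
# by json.dumps + urlencode rather than by hand.
import json
import urllib.parse

query = [{
    "country": None,
    "name": None,
    "percentage_alcohol": None,
    "percentage_alcohol>": 0,
    "type": "/food/wine",
}]

base = "https://www.googleapis.com/freebase/v1/mqlread"
url = base + "?" + urllib.parse.urlencode({"query": json.dumps(query)})
print(url)
```

Python's None serializes to JSON null, and urlencode percent-escapes the brackets, quotes, and the `>` in the key name automatically.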

Fetching url in python with google app engine

三世轮回 submitted on 2019-12-22 01:12:11

Question: I'm using this code to send an HTTP request inside my app and then show the result: def get(self): url = "http://www.google.com/" try: result = urllib2.urlopen(url) self.response.out.write(result) except urllib2.URLError, e: I expect to get the HTML code of the google.com page, but I get this sign ">". What is wrong with that? Answer 1: You need to call the read() method to read the response. It's also good practice to check the HTTP status, and to close when you're done. Example: url = "http://www.google.com/"
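The bug in the question is writing the response *object* instead of its body: urlopen returns a file-like object, and its read() method yields the HTML. A sketch of the answer's advice (read, check status, close), exercised with a stand-in response class since the real object from urllib.request.urlopen behaves the same way but needs a network connection:

```python
# Read-check-close pattern for a urlopen-style response object.
class FakeResponse:
    """Stand-in for what urllib.request.urlopen(url) returns."""
    status = 200
    def read(self):
        return b"<html>hello</html>"
    def close(self):
        pass

result = FakeResponse()   # real code: urllib.request.urlopen(url)
try:
    if result.status == 200:
        body = result.read()   # the actual page bytes
finally:
    result.close()
print(body)  # b'<html>hello</html>'
```

In modern code the try/finally pair collapses to `with urllib.request.urlopen(url) as result:`, which closes the response automatically.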

Python, checking if a proxy is alive?

一曲冷凌霜 submitted on 2019-12-22 00:29:01

Question: The code: for item in pxfile.readlines(): if is_OK(item): sys.stdout.write(item + "is not OK.") item = make(item) item = "#" + item resfile.write(item) else: sys.stdout.write(item) sys.stdout.write("is OK.") line = make(item) resfile.write(item) If is_OK is true, it means that the proxy doesn't exist; I should fix that. def is_OK(ip): try: proxy_handler = urllib2.ProxyHandler({'http': ip}) opener = urllib2.build_opener(proxy_handler) opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib2…
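A sketch of the proxy liveness check the asker is building, in urllib.request terms (urllib2's Python 3 name). Note the naming trap in the question: a function called is_OK reads most naturally as "True when the proxy works", which is the opposite of the asker's convention and likely the source of the inverted branches. Only the opener construction runs here; the open() call needs a live proxy, so it is wrapped in a function rather than executed.

```python
# Proxy check sketch. is_alive returns True only when a request
# through the proxy succeeds; URLError is a subclass of OSError, so
# one except clause covers DNS failures, refusals, and timeouts.
import urllib.request

def make_proxy_opener(ip):
    proxy_handler = urllib.request.ProxyHandler({"http": ip})
    opener = urllib.request.build_opener(proxy_handler)
    opener.addheaders = [("User-agent", "Mozilla/5.0")]
    return opener

def is_alive(ip, test_url="http://example.com", timeout=5):
    try:
        make_proxy_opener(ip).open(test_url, timeout=timeout)
        return True
    except OSError:
        return False

opener = make_proxy_opener("127.0.0.1:8080")
print(opener.addheaders)  # [('User-agent', 'Mozilla/5.0')]
```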

Accepting File Argument in Python (from Send To context menu)

試著忘記壹切 submitted on 2019-12-21 20:24:06

Question: I'll start by noting that I have next to no Python experience. (image: http://www.aquate.us/u/9986423875612301299.jpg) As you may know, by simply dropping a shortcut into the Send To folder on your Windows PC, you can allow a program to take a file as an argument. How would I write a Python program that takes this file as an argument? And, as a bonus if anyone gets a chance: how would I use urllib2 to POST the file to a PHP script on my server? Thanks in advance.
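When Windows invokes a Send To shortcut, it passes the dropped file's path as the first command-line argument, so sys.argv[1] is the file. A sketch of both halves, with argv simulated so the argument handling runs outside the Send To menu; the upload.php endpoint name is hypothetical, and the POST uses urllib.request with the file's bytes as the request body:

```python
# Send To handler sketch: take the file path from argv, build a POST
# request carrying the file's contents. The endpoint is hypothetical.
import sys
import urllib.request

def build_upload_request(argv):
    path = argv[1]                 # file path Windows passed to the shortcut
    # Real code: data = open(path, "rb").read()
    data = b"file contents"        # stand-in so this runs without the file
    req = urllib.request.Request(
        "http://myserver.example/upload.php",   # hypothetical PHP script
        data=data,                              # data= makes it a POST
        headers={"Content-Type": "application/octet-stream"})
    return path, req

fake_argv = ["sendto.py", r"C:\Users\me\file.txt"]   # simulated sys.argv
path, req = build_upload_request(fake_argv)
print(path)              # C:\Users\me\file.txt
print(req.get_method())  # POST
```

In the real script you would call `build_upload_request(sys.argv)` and then `urllib.request.urlopen(req)`; the PHP side reads the raw body from `php://input`.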