Proxies with Python 'Requests' module

Backend · open · 10 answers · 809 views

Asked by 傲寒, 2020-11-22 12:13

Just a short, simple one about the excellent Requests module for Python.

I can't seem to find in the documentation what the variable 'proxies' should contain. Whe

10 answers
  • 2020-11-22 12:18

    I have found that urllib has some really good code to pick up the system's proxy settings and they happen to be in the correct form to use directly. You can use this like:

    import requests
    import urllib.request
    
    ...
    r = requests.get('http://example.org', proxies=urllib.request.getproxies())
    

    It works really well and urllib knows about getting Mac OS X and Windows settings as well.
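    For example, setting a proxy environment variable first (with a hypothetical address) shows the kind of dict `getproxies()` returns, which is already the shape requests expects:

```python
import os
import urllib.request

# Hypothetical proxy address, for illustration only.
os.environ["http_proxy"] = "http://10.10.1.10:3128"

# getproxies() reads the environment (and, on macOS/Windows, the system
# settings) and returns a dict like {'http': 'http://10.10.1.10:3128'}.
proxies = urllib.request.getproxies()
print(proxies.get("http"))
```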

  • 2020-11-22 12:18

    The accepted answer was a good start for me, but I kept getting the following error:

    AssertionError: Not supported proxy scheme None
    

    The fix was to specify the http:// scheme in the proxy URL, like so:

    http_proxy  = "http://194.62.145.248:8080"
    https_proxy = "https://194.62.145.248:8080"
    ftp_proxy   = "ftp://10.10.1.10:3128"

    proxyDict = {
        "http"  : http_proxy,
        "https" : https_proxy,
        "ftp"   : ftp_proxy,
    }
    

    I'd be interested as to why the original works for some people but not me.

    Edit: I see the main answer is now updated to reflect this :)
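    A quick way to sanity-check a proxy dict before handing it to requests is to confirm every URL carries a scheme, since a missing scheme is exactly what produced the AssertionError above. A small sketch reusing the addresses from this answer:

```python
from urllib.parse import urlparse

proxyDict = {
    "http": "http://194.62.145.248:8080",
    "https": "https://194.62.145.248:8080",
}

# A bare "IP:PORT" string parses with an empty scheme, which is what
# older versions of requests rejected.
for key, url in proxyDict.items():
    assert urlparse(url).scheme, "missing scheme for " + key
print("all proxy URLs have a scheme")
```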

  • 2020-11-22 12:19

    Here is some code that fetches proxies from https://free-proxy-list.net and stores them in a file compatible with tools like "Elite Proxy Switcher" (format IP:PORT):

    # PROXY_UPDATER - get free proxies from https://free-proxy-list.net/

    from lxml.html import fromstring
    import requests

    # ---------------- find proxies ----------------
    def get_proxies():
        url = 'https://free-proxy-list.net/'
        response = requests.get(url)
        parser = fromstring(response.text)
        proxies = set()
        for i in parser.xpath('//tbody/tr')[:299]:   # at most 299 proxies
            ip = i.xpath('.//td[1]/text()')[0]
            port = i.xpath('.//td[2]/text()')[0]
            proxies.add(ip + ":" + port)
        return proxies

    # ---------------- write to file in IP:PORT format ----------------
    try:
        proxies = get_proxies()
        with open('proxy_list.txt', 'w') as f:
            for proxy in proxies:
                f.write(proxy + '\n')
        print("DONE")
    except Exception:
        print("MAJOR ERROR")
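    The `cycle` import in the original snippet hints at the usual next step: rotating through the saved proxies one request at a time. A minimal sketch, with placeholder addresses standing in for the contents of proxy_list.txt:

```python
from itertools import cycle

# Placeholder proxies standing in for the contents of proxy_list.txt.
proxies = ["10.10.1.10:3128", "10.10.1.11:3128", "10.10.1.12:3128"]
proxy_pool = cycle(proxies)

# Each request takes the next proxy in round-robin order, wrapping around.
picked = [next(proxy_pool) for _ in range(5)]
print(picked)
```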
    
  • 2020-11-22 12:24

    It's a bit late, but here is a wrapper class that simplifies scraping proxies and then making an HTTP POST or GET:

    ProxyRequests

    https://github.com/rootVIII/proxy_requests
    
  • 2020-11-22 12:27

    I just made a proxy grabber that can also connect through the same grabbed proxy, without any manual input. Here it is:

    # Import modules

    from termcolor import colored
    from selenium import webdriver
    import requests
    import os
    import time

    # Grab proxies

    options = webdriver.ChromeOptions()
    options.add_argument('headless')
    driver = webdriver.Chrome(options=options)
    driver.get("https://www.sslproxies.org/")
    tbody = driver.find_element_by_tag_name("tbody")
    rows = tbody.find_elements_by_tag_name("tr")
    for row in rows:
        column = row.text.split(" ")
        print(colored(column[0] + ":" + column[1], 'yellow'))
    driver.quit()
    print("")

    # Clear the screen (pick the command for the current OS)
    os.system('cls' if os.name == 'nt' else 'clear')

    # Proxy connection (uses the last grabbed proxy, still held in `column`)

    print(colored('Getting proxies from grabber...', 'green'))
    time.sleep(2)
    os.system('cls' if os.name == 'nt' else 'clear')
    proxy = {"http": "http://" + column[0] + ":" + column[1]}
    url = 'https://mobile.facebook.com/login'
    r = requests.get(url, proxies=proxy)
    print("")
    print(colored('Connecting using proxy', 'green'))
    print("")
    sts = r.status_code
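    One caveat about the snippet above: after the loop, `column` holds only the last scraped row, and the dict only covers plain http URLs. A small sketch, using a made-up sample row, showing how to cover https as well:

```python
# A made-up row in the format scraped above: "IP PORT COUNTRY ...".
row = "194.62.145.248 8080 GB elite"
column = row.split(" ")

# Map both schemes to the same proxy; HTTPS traffic is tunnelled
# through the HTTP proxy via CONNECT.
proxy = {
    "http": "http://" + column[0] + ":" + column[1],
    "https": "http://" + column[0] + ":" + column[1],
}
print(proxy["https"])
```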
    
  • 2020-11-22 12:29

    If you'd like to persist cookies and session data, you'd best do it like this:

    import requests
    
    proxies = {
        'http': 'http://user:pass@10.10.1.0:3128',
        'https': 'https://user:pass@10.10.1.0:3128',
    }
    
    # Create the session and set the proxies.
    s = requests.Session()
    s.proxies = proxies
    
    # Make the HTTP request through the session.
    r = s.get('http://www.showmemyip.com/')
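    One detail worth knowing: at request time, requests also merges in proxies from the HTTP_PROXY/HTTPS_PROXY environment variables. If the session should use only your own settings, disable that with `trust_env`. A sketch using the same placeholder credentials as above:

```python
import requests

proxies = {
    'http': 'http://user:pass@10.10.1.0:3128',
    'https': 'https://user:pass@10.10.1.0:3128',
}

s = requests.Session()
s.trust_env = False      # ignore environment settings, incl. *_proxy variables
s.proxies.update(proxies)

print(s.proxies['https'])
```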
    