Wait until page is loaded with Selenium WebDriver for Python

借酒劲吻你 2020-11-22 00:26

I want to scrape all the data of a page implemented by an infinite scroll. The following Python code works.

for i in range(100):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
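
A fuller version of this kind of scrolling loop (a sketch, not my exact code; it assumes a fixed pause and compares the page height before and after each scroll to detect when nothing new loads) would be:

import time

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(5)  # crude fixed wait; the answers below use explicit waits instead
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:  # no new content was loaded by the last scroll
        break
    last_height = new_height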


        
12 Answers
  •  悲&欢浪女
    2020-11-22 00:45

    The webdriver will wait for a page to load by default via the .get() method.

    Since you may be looking for a specific element, as @user227215 said, you should use WebDriverWait to wait for an element located on your page:

    from selenium import webdriver
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import TimeoutException
    
    browser = webdriver.Firefox()
    browser.get("url")
    delay = 3 # seconds
    try:
        myElem = WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.ID, 'IdOfMyElement')))
        print "Page is ready!"
    except TimeoutException:
        print "Loading took too much time!"
    

    I have used it for checking alerts. You can use any of the other locator strategies to find the element.
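
    For example, the same wait can target an alert or an XPath locator instead of an ID (a sketch; the XPath below is just a placeholder):

    # wait for an alert to appear
    WebDriverWait(browser, delay).until(EC.alert_is_present())
    # or wait for an element found by XPath (placeholder expression)
    WebDriverWait(browser, delay).until(
        EC.presence_of_element_located((By.XPATH, "//div[@class='content']")))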

    EDIT 1:

    I should mention that the webdriver waits for a page to load by default, but it does not wait for loading inside frames or for ajax requests. This means that when you use .get('url'), your browser will wait until the page is completely loaded and then go on to the next command in the code. But when you post an ajax request, webdriver does not wait, and it is your responsibility to wait an appropriate amount of time for the page, or a part of the page, to load; that is why there is a module named expected_conditions.
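
    For instance, if clicking a button fires an ajax request that inserts new content, you wait for that content yourself (a sketch; the 'load-more' button id and 'results' id are placeholders):

    browser.find_element(By.ID, 'load-more').click()  # triggers the ajax request
    WebDriverWait(browser, delay).until(
        EC.visibility_of_element_located((By.ID, 'results')))  # wait for the ajax-loaded part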
