Selenium driver.get(url) waits until the full page is loaded. But the page I am scraping tries to load a dead JS script, so my Python script waits for it and doesn't work.
When Selenium loads a page/URL, by default it follows the pageLoadStrategy set to normal. To make Selenium not wait for the full page load, we can configure the pageLoadStrategy. pageLoadStrategy supports 3 different values as follows:

- normal (full page load)
- eager (interactive)
- none

Here is the code block to configure the pageLoadStrategy:
Firefox:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities().FIREFOX
caps["pageLoadStrategy"] = "normal" # complete
#caps["pageLoadStrategy"] = "eager" # interactive
#caps["pageLoadStrategy"] = "none"
driver = webdriver.Firefox(desired_capabilities=caps, executable_path=r'C:\path\to\geckodriver.exe')
driver.get("http://google.com")
Chrome:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities().CHROME
caps["pageLoadStrategy"] = "normal" # complete
#caps["pageLoadStrategy"] = "eager" # interactive
#caps["pageLoadStrategy"] = "none"
driver = webdriver.Chrome(desired_capabilities=caps, executable_path=r'C:\path\to\chromedriver.exe')
driver.get("http://google.com")
Note: The pageLoadStrategy values normal, eager and none are a requirement as per the WebDriver W3C Editor's Draft, but the eager value is still a WIP (Work In Progress) within the ChromeDriver implementation. You can find a detailed discussion in “Eager” Page Load Strategy workaround for Chromedriver Selenium in Python.
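Until eager behaves reliably in ChromeDriver, one common workaround is to set pageLoadStrategy to none and then wait explicitly for the element you actually need, so a hanging or dead JS resource no longer blocks the script. A minimal sketch in the same Selenium 3 style as the snippets above; the google.com search box (name="q") is only an illustrative target, swap in a locator from the page you scrape:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

caps = DesiredCapabilities().CHROME
caps["pageLoadStrategy"] = "none"  # return control as soon as navigation starts

driver = webdriver.Chrome(desired_capabilities=caps, executable_path=r'C:\path\to\chromedriver.exe')
driver.get("http://google.com")

# Wait only for the element you need instead of the full page load
search_box = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.NAME, "q"))
)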