I've set up a simple web-scraping script in Python with Selenium and PhantomJS. I've got about 200 URLs in total to scrape. The script runs fine at first, then after about 20-30 UR
I had the same problem. To fix it, I installed PhantomJS from source.
For Linux (Debian):
sudo apt-get update
sudo apt-get install build-essential chrpath git-core libssl-dev libfontconfig1-dev libxft-dev
git clone git://github.com/ariya/phantomjs.git
cd phantomjs
git checkout 1.9
./build.sh
For Mac OS:
git clone git://github.com/ariya/phantomjs.git
cd phantomjs
git checkout 1.9
./build.sh
For other systems, see the build instructions at http://phantomjs.org/build.html
Optional: to install the binary system-wide,
cd bin
chmod +x phantomjs
sudo cp phantomjs /usr/bin/
I figured it out when I read my ghostdriver.log file, which said:
[ERROR - 2014-09-04T19:33:30.842Z] GhostDriver - main.fail - {"message":"Could not start Ghost Driver","line":82,"sourceId":140145669488128,"sourceURL":":/ghostdriver/main.js","stack":"Error: Could not start Ghost Driver\n at :/ghostdriver/main.js:82","stackArray":[{"sourceURL":":/ghostdriver/main.js","line":82}]}
I suspected some files were missing that PhantomJS needs for certain edge cases, so I decided to build from source, and it's working fine now.
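Independent of the rebuild, the original failure mode (the driver dying after 20-30 URLs) can also be worked around by recycling the driver periodically and retrying once if it crashes mid-run. A minimal sketch of that loop; the helper names `make_driver` and `fetch` are my own, not from the original post:

```python
def scrape_all(urls, make_driver, fetch, restart_every=20):
    """Fetch each URL, recycling the driver every `restart_every` URLs.

    make_driver: zero-argument callable returning a fresh driver
    fetch: callable(driver, url) -> result; may raise if the driver died
    """
    results = {}
    driver = make_driver()
    try:
        for i, url in enumerate(urls):
            if i and i % restart_every == 0:
                driver.quit()           # release the old PhantomJS process
                driver = make_driver()  # start a clean one
            try:
                results[url] = fetch(driver, url)
            except Exception:
                # driver likely crashed: replace it and retry this URL once
                driver.quit()
                driver = make_driver()
                results[url] = fetch(driver, url)
    finally:
        driver.quit()
    return results
```

With Selenium this would be used as roughly `scrape_all(urls, lambda: webdriver.PhantomJS(), lambda d, u: (d.get(u), d.page_source)[1])`, adjusting `restart_every` to sit below the point where your runs start failing.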