I'm trying to extract the list of US states from a wiki URL, and for that I'm using Python pandas.
import pandas as pd
import html5lib
f_states = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')
Running Python 3.4 on a Mac, in a fresh pyvenv:
pip install pandas
pip install lxml
pip install html5lib
pip install BeautifulSoup4
Then run your example and it should work:
import pandas as pd
import html5lib
f_states = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')
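Note that read_html returns a list of DataFrames, one per <table> element it finds, so you index into the result to get the table you want. A minimal sketch using an inline HTML string (the state/capital data here is made up for illustration):

```python
import pandas as pd
from io import StringIO

# read_html returns a list of DataFrames, one per <table> found.
# Wrapping the literal HTML in StringIO avoids the deprecation of
# passing raw HTML strings directly.
html = """<table>
<tr><th>State</th><th>Capital</th></tr>
<tr><td>Ohio</td><td>Columbus</td></tr>
<tr><td>Texas</td><td>Austin</td></tr>
</table>"""

tables = pd.read_html(StringIO(html))
states = tables[0]  # the first (and here only) table
print(states["Capital"].tolist())  # ['Columbus', 'Austin']
```

The <th> row is automatically used as the column header, which is why you can address columns by name right away.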
Also consider installing your required packages with conda (https://www.continuum.io/downloads); instead of pip installing, you would conda install each package:
$ conda install html5lib
You need to install lxml using pip:
pip install lxml
This worked for me.
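If you're unsure which parser backends are present in your environment, a quick sketch to check whether the packages read_html can use (lxml, html5lib, BeautifulSoup's bs4) are importable:

```python
import importlib.util

# True for each backend that is installed and importable
parsers = {pkg: importlib.util.find_spec(pkg) is not None
           for pkg in ("lxml", "html5lib", "bs4")}
print(parsers)
```

pandas tries lxml first and falls back to html5lib/BeautifulSoup, so having at least one of these installed is enough for read_html to work.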
For that you just need to install:
pip install pandas
pip install lxml
and then import pandas and run your program:
import pandas as pd
f_states = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')
If your environment is an Anaconda Jupyter notebook, you need a different set of install commands:
conda install lxml
conda install html5lib
conda install BeautifulSoup4
Then run the Python code in the Jupyter notebook:
import pandas as pd
f_states = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')
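A Wikipedia page usually contains several tables (infoboxes, navboxes, and so on), so read_html's `match` parameter can be used to keep only tables whose text matches a regex. A sketch with two inline tables, where the text "Alabama" is just an illustrative filter:

```python
import pandas as pd
from io import StringIO

html = """
<table><tr><th>Name</th></tr><tr><td>Alabama</td></tr></table>
<table><tr><th>Other</th></tr><tr><td>x</td></tr></table>
"""

# match= drops any table whose text does not match the regex
tables = pd.read_html(StringIO(html), match="Alabama")
print(len(tables))  # 1
```

Note that read_html raises a ValueError if no table matches, so pick a string you know appears in the target table.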