Are there any free APIs for retrieving the S&P 500's component symbols? [closed]


I found http://finviz.com/export.ashx?v=152&f=idx_sp500&ft=1&ta=1&p=d&r=1&c=1 :-)

But I haven't found any Finviz API documentation. :-(
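In case it helps, here is a minimal sketch of pulling that export URL from Python and treating the response as CSV. The column name ('Ticker') and the assumption that the endpoint returns plain CSV to an anonymous client are guesses on my part; Finviz does not document this endpoint and may require an account or block scripted requests.

    import csv
    import io
    from urllib import request

    EXPORT_URL = ('http://finviz.com/export.ashx?'
                  'v=152&f=idx_sp500&ft=1&ta=1&p=d&r=1&c=1')

    def get_finviz_sp500():
        # Some sites reject the default urllib user agent, so send a browser-like one.
        req = request.Request(EXPORT_URL, headers={'User-Agent': 'Mozilla/5.0'})
        with request.urlopen(req) as resp:
            text = resp.read().decode('utf-8', errors='replace')
        # Assumes the export is CSV with a 'Ticker' column.
        reader = csv.DictReader(io.StringIO(text))
        return [row['Ticker'] for row in reader if row.get('Ticker')]

    print(len(get_finviz_sp500()))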

Bloomberg seems to have an open API. You might find the data you need if you dig around.

I also had a similar need. You could use the Wikipedia API or parse the HTML to get the list of S&P 500 symbols: http://en.wikipedia.org/wiki/List_of_S%26P_500_companies
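For the HTML-parsing route, a minimal sketch using pandas.read_html (which needs lxml or html5lib installed) could look like the snippet below. It assumes the first table on that page is the constituents list and that it has a 'Symbol' column, which is true as of this writing but may change with the page layout.

    import pandas as pd

    WIKI_URL = 'http://en.wikipedia.org/wiki/List_of_S%26P_500_companies'

    # read_html returns every <table> on the page; the first one is the
    # current constituents table (assumption based on the current layout).
    constituents = pd.read_html(WIKI_URL)[0]
    symbols = constituents['Symbol'].tolist()
    print(len(symbols), symbols[:5])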

You can now install and use the module with:

pip install finsymbols

I currently obtain the list of symbols from Wikipedia. It is not a REST API, but it can easily be wrapped in one (see the small Flask sketch at the end of this answer). It is written in Python.

>>> import sys
>>> sys.path.append('/home/skillachie/Desktop/')  # only needed if finsymbols is not installed via pip
>>> import finsymbols
>>> import pprint
>>> sp500 = finsymbols.get_sp500_symbols()
>>> pprint.pprint(sp500)

{'company': u'Xcel Energy Inc',
  'headquaters': u'Minneapolis, Minnesota',
  'industry': u'Multi-Utilities & Unregulated Power',
  'sector': u'Utilities',
  'symbol': u'XEL'},
 {'company': u'Xerox Corp.',
  'headquaters': u'Norwalk, Connecticut',
  'industry': u'IT Consulting & Services',
  'sector': u'Information Technology',
  'symbol': u'XRX'},
 {'company': u'Xilinx Inc',
  'headquaters': u'San Jose, California',
  'industry': u'Semiconductors',
  'sector': u'Information Technology',
  'symbol': u'XLNX'},
 {'company': u'XL Capital',
  'headquaters': u'Hamilton, Bermuda',
  'industry': u'Property & Casualty Insurance',
  'sector': u'Financials',
  'symbol': u'XL'},

If you are interested, you can find more information here: http://skillachie.github.io/finsymbols/
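As for exposing it as a REST API, here is a minimal sketch using Flask (my choice of framework, not part of finsymbols); it assumes get_sp500_symbols() returns the list of dicts shown above.

    import finsymbols
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/sp500')
    def sp500():
        # Re-scrapes Wikipedia on every request; cache the result in a real service.
        return jsonify(symbols=finsymbols.get_sp500_symbols())

    if __name__ == '__main__':
        app.run(port=5000)

Requesting http://localhost:5000/sp500 then returns the list as JSON.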

With Python, you can try out this snippet I just wrote (a bit ugly, but it works). It actually returns 502 tickers, which is correct: the count exceeds 500 because some companies list more than one share class.

    from urllib import request

    from bs4 import BeautifulSoup


    def get_constituents():
        # URL request, URL opener, read content
        req = request.Request('http://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
        opener = request.urlopen(req)
        content = opener.read().decode()  # convert bytes to UTF-8

        soup = BeautifulSoup(content, 'html.parser')
        tables = soup.find_all('table')  # the HTML table we actually need is tables[0]

        # The constituents table uses 'external text' anchors both for the
        # ticker quote links and for the SEC 'reports' links.
        external_class = tables[0].find_all('a', {'class': 'external text'})

        tickers = []
        for ext in external_class:
            # Skip the SEC 'reports' links; 'in' here checks the tag's contents.
            if 'reports' not in ext:
                tickers.append(ext.string)

        return tickers
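A quick way to check it, assuming the page layout has not changed:

    tickers = get_constituents()
    print(len(tickers))   # 500+, since some companies have more than one share class
    print(tickers[:5])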