I have been looking through the GitHub V4 API docs and I cannot seem to find a way to query total contributions for the year (as displayed on your GitHub profile). Has anyone managed to do this?
The ContributionsCollection object provides total contributions for each contribution type between two dates.
Note: from and to can be a maximum of one year apart; for a longer timeframe, make multiple requests.
query ContributionsView($username: String!, $from: DateTime!, $to: DateTime!) {
  user(login: $username) {
    contributionsCollection(from: $from, to: $to) {
      totalCommitContributions
      totalIssueContributions
      totalPullRequestContributions
      totalPullRequestReviewContributions
    }
  }
}
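To call this query from a script, POST it to GitHub's GraphQL endpoint with a personal access token. Below is a minimal sketch in Python, assuming the token is stored in a GITHUB_TOKEN environment variable and that 'octocat' stands in for a real username; it also works around the one-year limit by issuing one request per calendar year and summing the four totals:

import os
import requests

GRAPHQL_URL = 'https://api.github.com/graphql'

QUERY = """
query ContributionsView($username: String!, $from: DateTime!, $to: DateTime!) {
  user(login: $username) {
    contributionsCollection(from: $from, to: $to) {
      totalCommitContributions
      totalIssueContributions
      totalPullRequestContributions
      totalPullRequestReviewContributions
    }
  }
}
"""

def contributions_for_year(username, year, token):
    # One request per calendar year, since from/to may be at most one year apart.
    variables = {
        'username': username,
        'from': '{0}-01-01T00:00:00Z'.format(year),
        'to': '{0}-12-31T23:59:59Z'.format(year),
    }
    response = requests.post(
        GRAPHQL_URL,
        json={'query': QUERY, 'variables': variables},
        headers={'Authorization': 'bearer {0}'.format(token)},
    )
    response.raise_for_status()
    return response.json()['data']['user']['contributionsCollection']

token = os.environ['GITHUB_TOKEN']  # assumed: a personal access token with read access
grand_total = 0
for year in (2016, 2017, 2018):
    collection = contributions_for_year('octocat', year, token)
    grand_total += sum(collection.values())
print(grand_total)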
There is no API for this as such, so there are two ways to go about it: scraping the user's profile page, or looping through each repo the user has forked and counting the contributions. The latter will be more time-consuming; the former is much more reliable, as the page is cached by GitHub. Below is a Python approach to fetch it:
import json

import requests
from bs4 import BeautifulSoup

GITHUB_URL = 'https://github.com/'

def get_contributions(usernames):
    """
    Get a GitHub user's public contributions by scraping the profile page.

    :param usernames: A string or sequence of GitHub usernames.
    """
    contributions = {'users': [], 'total': 0}
    if isinstance(usernames, str):
        usernames = [usernames]
    for username in usernames:
        response = requests.get('{0}{1}'.format(GITHUB_URL, username))
        if not response.ok:
            # Unknown user (or failed request): record zero contributions.
            contributions['users'].append({username: dict(total=0)})
            continue
        bs = BeautifulSoup(response.content, 'html.parser')
        # The heading next to the contribution graph reads e.g.
        # "1,234 contributions in the last year". This selector depends on
        # GitHub's current page markup and may break if the markup changes.
        heading = bs.find('div', {'class': 'js-contribution-graph'}).findNext('h2')
        total = int(heading.text.split()[0].replace(',', ''))
        contributions['users'].append({username: dict(total=total)})
        contributions['total'] += total
    return json.dumps(contributions, indent=4)
PS: Taken from https://github.com/garnertb/github-contributions
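For a quick test ('octocat' is just a placeholder username):

print(get_contributions('octocat'))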
For the latter approach there is an npm package, https://www.npmjs.com/package/github-user-contributions, but I would recommend using the scraping approach only.