google-api-python-client

How do you save a Google Sheets file as CSV from Python 3 (or 2)?

Deadly, submitted on 2019-12-30 04:45:07
Question: I am looking for a simple way to save a CSV file originating from a published Google Sheets document. Since it's published, it's accessible through a direct link (modified on purpose in the example below). All my browsers will prompt me to save the CSV file as soon as I open the link. Neither:

    DOC_URL = 'https://docs.google.com/spreadsheet/ccc?key=0AoOWveO-dNo5dFNrWThhYmdYW9UT1lQQkE&output=csv'
    f = urllib.request.urlopen(DOC_URL)
    cont = f.read(SIZE)
    f.close()
    cont = str(cont, 'utf-8')
    print
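
A minimal sketch of the usual approach, assuming the sheet really is published to the web: fetch the export URL and decode the body. The key in DOC_URL below is a placeholder, not a real document.

    import urllib.request

    # Placeholder key; a real published-sheet export URL ends in &output=csv.
    DOC_URL = ('https://docs.google.com/spreadsheet/ccc'
               '?key=YOUR_SHEET_KEY&output=csv')

    with urllib.request.urlopen(DOC_URL) as resp:
        csv_text = resp.read().decode('utf-8')

    # Persist the CSV locally.
    with open('sheet.csv', 'w', encoding='utf-8', newline='') as fh:
        fh.write(csv_text)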

Specifying valid “ranges” for batch values requests

梦想与她, submitted on 2019-12-25 08:57:34
Question: I'm using the Google Sheets API quickstart for Python. I'm trying to pull multiple cells, one cell at a time, from the Google Sheets API and plug each value into a text document. I've been doing this with spreadsheets().values().get(), but I'm hitting the API too often, and the batchGet() method seems like it would be better. I read through the Google Sheets API v4 docs but was unable to find the correct formatting for the ranges parameter on spreadsheets().values().batchGet(). According to
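
For reference, the Python client accepts ranges as a list of A1-notation strings, one entry per range. A minimal sketch, where SPREADSHEET_ID and the ranges are placeholders and service is assumed to be an authorized client from build('sheets', 'v4', credentials=...):

    # Each range is plain A1 notation; the list becomes repeated
    # `ranges=` query parameters on the request.
    result = service.spreadsheets().values().batchGet(
        spreadsheetId=SPREADSHEET_ID,
        ranges=['Sheet1!A1', 'Sheet1!B2', 'Sheet1!C3'],
    ).execute()

    # One valueRange comes back per requested range, in order.
    for value_range in result.get('valueRanges', []):
        print(value_range.get('range'), value_range.get('values'))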

App Engine call to Google API Python client returns 403 with @oauth_required

风流意气都作罢, submitted on 2019-12-25 08:29:36
Question: It should be a trivial job, but I cannot work it out. I need to call the Google Calendar API from GAE, so I set everything up per the Google docs and examples. I have an /auth.py:

    CLIENT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secrets.json')
    SCOPES = [
        'https://www.googleapis.com/auth/calendar',
    ]
    decorator = appengine.OAuth2DecoratorFromClientSecrets(
        filename=CLIENT_SECRETS,
        scope=SCOPES,
        cache=memcache,
        prompt='consent',
    )

which is called by functions in main.py: class Landing(webapp2
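
A hedged sketch of how the decorator is usually wired into a handler; a common cause of the 403 is forgetting to pass the decorator's authorized Http object to execute(). The handler body below is an assumption, not the asker's actual code.

    import webapp2
    from googleapiclient.discovery import build

    from auth import decorator  # the OAuth2DecoratorFromClientSecrets above

    service = build('calendar', 'v3')

    class Landing(webapp2.RequestHandler):
        @decorator.oauth_required
        def get(self):
            # Without http=decorator.http() the call goes out
            # unauthenticated and the API answers 403.
            events = service.events().list(
                calendarId='primary').execute(http=decorator.http())
            self.response.write(repr(events))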

Trouble downloading CSV files from Google Drive

非 Y 不嫁゛, submitted on 2019-12-25 02:55:42
Question: I think I'm failing to understand what I'm doing wrong. I have connected to the Google Drive API, and now I would like to read a few documents on my Google Drive with the objective of analysing the data contained in these CSV files. This is the code I have:

    from __future__ import print_function
    import pickle
    import os.path
    from httplib2 import Http
    from googleapiclient.discovery import build
    from google_auth_oauthlib.flow import InstalledAppFlow
    from google.auth.transport.requests import
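
Once the credentials exist, downloading a plain CSV stored on Drive typically looks like the sketch below; FILE_ID is a placeholder and service is assumed to be an authorized build('drive', 'v3', credentials=...) client. For a native Google Sheets file, files().export_media(fileId=..., mimeType='text/csv') would replace get_media.

    import io
    from googleapiclient.http import MediaIoBaseDownload

    request = service.files().get_media(fileId=FILE_ID)  # FILE_ID is a placeholder
    buf = io.BytesIO()
    downloader = MediaIoBaseDownload(buf, request)
    done = False
    while not done:
        # next_chunk() returns (progress, finished_flag).
        status, done = downloader.next_chunk()

    csv_text = buf.getvalue().decode('utf-8')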

GA Management API - Custom Dimensions list() - Error 403: Insufficient Permission

若如初见., submitted on 2019-12-24 23:24:06
Question: I'm using the Management API (via the Python client library) to get the list of custom dimensions, as described here: Custom Dimensions: list.

    link = analytics.management().customDimensions().list(accountId=ACCOUNT_ID, webPropertyId=PROPERTY_ID)
    dimensions = link.execute()

But the API keeps returning Error Code: 403, Insufficient Permission. I'm pretty sure the service account email address I'm using to build the credentials object has sufficient Edit, Read & Analyse level access at the GA account level! I
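
One thing worth checking alongside the account-level permissions is the OAuth scope the credentials were built with. A minimal sketch, assuming a service-account key file; KEY_FILE, ACCOUNT_ID, and PROPERTY_ID are placeholders:

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    # customDimensions().list accepts the read-only scope; a missing or
    # narrower scope yields 403 Insufficient Permission even when the
    # service account has access in the GA admin UI.
    SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']
    creds = service_account.Credentials.from_service_account_file(
        KEY_FILE, scopes=SCOPES)
    analytics = build('analytics', 'v3', credentials=creds)

    dimensions = analytics.management().customDimensions().list(
        accountId=ACCOUNT_ID, webPropertyId=PROPERTY_ID).execute()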

Google Drive API: How can I find the path of a file?

混江龙づ霸主, submitted on 2019-12-24 12:32:35
Question: I'm trying to find the path of a file when fetching a file list with the Google Drive API. Right now, I'm able to fetch file properties (currently only the checksum, id, name, and mimeType):

    results = globalShares.service.files().list(pageSize=1000, corpora='user', fields='nextPageToken, files(md5Checksum, id, name, mimeType)').execute()
    items = results.get('files', [])
    nextPageToken = results.get('nextPageToken', False)
    for file in items:
        print("=============================================
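
Drive metadata has no path field, so one common approach is to walk each file's parents chain. A minimal sketch, assuming the same authorized v3 service; note that Drive files can have multiple parents, and this follows only the first one:

    def file_path(service, file_id):
        """Build a '/'-joined path by walking up the parents chain."""
        parts = []
        while file_id:
            meta = service.files().get(
                fileId=file_id, fields='id, name, parents').execute()
            parts.append(meta['name'])
            parents = meta.get('parents')
            # Follow the first parent only; shared files may have several.
            file_id = parents[0] if parents else None
        return '/'.join(reversed(parts))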

Write the results of the Google Api to a data lake with Databricks

a 夏天, submitted on 2019-12-24 10:55:14
Question: I am getting back user usage data from the Google Admin Reports User Usage API via the Python SDK on Databricks. The data size is around 100,000 records per day, which I fetch nightly via a batch process. The API returns a maximum page size of 1000, so I call it roughly 100 times to get the data I need for the day. This is working fine. My ultimate aim is to store the data in its raw format in a data lake (Azure Gen2, but irrelevant to this question). Later on, I will transform the data using Databricks
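
A sketch of the paging loop that collects a day's raw pages before landing them anywhere; service is assumed to be an authorized build('admin', 'reports_v1', credentials=...) client, and the date is a placeholder.

    import json

    records, page_token = [], None
    while True:
        resp = service.userUsageReport().get(
            userKey='all', date='2019-12-01',  # placeholder date
            maxResults=1000, pageToken=page_token).execute()
        records.extend(resp.get('usageReports', []))
        page_token = resp.get('nextPageToken')
        if not page_token:
            break

    # One JSON document per line keeps the raw format easy to land in
    # the lake and re-parse later.
    raw_lines = '\n'.join(json.dumps(r) for r in records)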

Google Drive API on upload: where are these extra blank lines coming from?

删除回忆录丶, submitted on 2019-12-24 01:55:27
Question: To summarize the program: I'm downloading a file from my Google Drive, then opening and reading a file ("file_a.txt") on my local machine, then opening another file on my machine ("file_b.txt") in append mode, and appending "file_a.txt" to "file_b.txt" before updating my Google Drive with this new "file_b". For some reason I'm getting these blank lines, and they expand in a very peculiar way. Here's the final output of my program with the ghost lines: == *** name class ==== ***
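
A frequent cause of multiplying blank lines in this pattern is mixing text-mode file writes (which translate newlines) with bytes fetched from Drive. A minimal sketch that stays in binary mode throughout; FILE_B_ID and the file names are placeholders:

    from googleapiclient.http import MediaFileUpload

    # Binary mode avoids newline translation on both read and append,
    # so \r\n sequences are not doubled into extra blank lines.
    with open('file_a.txt', 'rb') as src, open('file_b.txt', 'ab') as dst:
        dst.write(src.read())

    media = MediaFileUpload('file_b.txt', mimetype='text/plain')
    service.files().update(fileId=FILE_B_ID, media_body=media).execute()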

Google BigQuery Incomplete Query Replies on Odd Attempts

試著忘記壹切, submitted on 2019-12-23 17:59:48
Question: When querying BigQuery through the Python API using:

    service.jobs().getQueryResults

we're finding that the first attempt works fine: all expected results are included in the response. However, if the query is run a second time shortly after the first (roughly within 5 minutes), only a small subset of the results is returned (in powers of 2) nearly instantly, with no errors. See our complete code at: https://github.com/sean-schaefer/pandas/blob/master/pandas/io/gbq.py Any thoughts on what
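
For context, getQueryResults is paginated, and a partial reply is the usual symptom of reading only the first page or ignoring the jobComplete flag. A minimal sketch of draining all pages; PROJECT_ID and JOB_ID are placeholders and service is an authorized build('bigquery', 'v2', ...) client:

    import time

    rows, page_token = [], None
    while True:
        resp = service.jobs().getQueryResults(
            projectId=PROJECT_ID, jobId=JOB_ID,
            pageToken=page_token).execute()
        if not resp.get('jobComplete', False):
            time.sleep(1)  # results not ready yet; poll again
            continue
        rows.extend(resp.get('rows', []))
        page_token = resp.get('pageToken')
        if not page_token:
            break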

How would one convert/wrap a HTTPLib2 instance as a Session?

守給你的承諾、, submitted on 2019-12-23 04:51:43
Question: I know the title is a bit wonky and I apologize for that. The dilemma I have is that gspread uses Session, while the Google APIs client library for Python uses httplib2. I have a service account working with the Google API client, and I want to take the authenticated httplib2.Http() instance and wrap it so that gspread can use it like a Session object. UPDATE: Fixed with update 103 to gspread. Based on Jay Lee's awesome answer below, here's how to initialize the gspread Client with a
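
To illustrate the adapter idea only (this is not gspread's actual fix): wrap the authorized httplib2.Http in a class that exposes the small slice of the Session interface a caller needs, returning responses that carry both status and content.

    class HttplibSession:
        """Session-like facade over an authorized httplib2.Http."""

        def __init__(self, http):
            self.http = http  # assumed already authorized

        def request(self, method, url, data=None, headers=None):
            # httplib2 returns (Response, content); attach the body so
            # callers can read resp.content as they would on a Session.
            resp, content = self.http.request(
                url, method=method, body=data, headers=headers)
            resp.content = content
            return resp

        def get(self, url, **kwargs):
            return self.request('GET', url, **kwargs)

        def post(self, url, data=None, **kwargs):
            return self.request('POST', url, data=data, **kwargs)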