tweepy

Tweepy error with install involving parse requirements

南笙酒味 submitted on 2019-12-31 02:42:30

Question: I have been trying to install tweepy on Windows, and it returns an error. Specifically it says:

    TypeError: parse_requirements() got an unexpected keyword argument 'session'

My install command was:

    pip install tweepy

Any help would be greatly appreciated!

Answer 1: I am not sure if you solved the issue. Try upgrading pip to 6.0 or later; this resolved the same issue I had on my Ubuntu machine. You can give it a try:

    pip install --upgrade pip

Source: https://stackoverflow.com/questions/29079152/tweepy-error
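On Windows specifically, upgrading pip in place can fail because pip.exe is in use while it runs, so a commonly recommended variant (assuming Python is on PATH) is to invoke pip through the interpreter and then retry the install:

    python -m pip install --upgrade pip
    python -m pip install tweepy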

Unraised exception using Tweepy and MySQL

橙三吉。 submitted on 2019-12-30 11:54:27

Question: I am trying to use Tweepy to store tweets in a MySQL DB. The code works fine until I try to execute the SQL command to insert the data into the database. The code is as follows:

    # MySQL connection attempt
    try:
        cnx = mysql.connector.connect(**config)
        cursor = cnx.cursor()
    except mysql.connector.Error as err:
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            print("Something is wrong with your user name or password")
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            print("Database
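The excerpt breaks off before the INSERT itself, but a frequent reason an INSERT appears to do nothing without raising an error is a missing commit on the connection. A minimal sketch of a parameterized insert with mysql.connector; the connection settings, table, and column names here are hypothetical, not taken from the question:

    import mysql.connector

    # Hypothetical connection settings; replace with your own.
    config = {"user": "user", "password": "secret",
              "host": "127.0.0.1", "database": "twitterdb"}

    cnx = mysql.connector.connect(**config)
    cursor = cnx.cursor()

    # Parameterized INSERT into an assumed tweets(id, text) table.
    sql = "INSERT INTO tweets (id, text) VALUES (%s, %s)"
    cursor.execute(sql, (12345, "example tweet text"))

    cnx.commit()   # without commit(), the inserted row is never persisted
    cursor.close()
    cnx.close()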

How to get the whole tweet instead of a part of the tweet with links

限于喜欢 submitted on 2019-12-25 17:43:02

Question: I am using the Twython library for tweet acquisition, but most of the tweets are not complete and end with a short URL where the whole tweet is present. Is there any way I can get around this? Here is the sample code:

    results = twitter.search(q="python")
    all_tweets = results['statuses']
    for tweet in all_tweets:
        print(tweet['text'])

Answer 1: In order to see the extended tweet, you just need to supply this parameter to your search query: tweet_mode=extended. Then, you will find the extended tweet in
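A minimal sketch of the search described in the answer, assuming Twython credentials are already available; with tweet_mode=extended the untruncated text is normally returned under the full_text key, with text as a fallback for older-style results:

    from twython import Twython

    # Placeholder credentials; replace with your own app keys and tokens.
    twitter = Twython("APP_KEY", "APP_SECRET", "OAUTH_TOKEN", "OAUTH_TOKEN_SECRET")

    results = twitter.search(q="python", tweet_mode="extended")
    for tweet in results["statuses"]:
        # Extended results carry the whole tweet in 'full_text'.
        print(tweet.get("full_text", tweet.get("text")))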

Tweepy: How can I look up more than 100 user screen names

只愿长相守 submitted on 2019-12-25 09:41:04

Question: You can only retrieve 100 user objects per request with the api.lookup_users() method. Is there an easy way to retrieve more than 100 using Tweepy and Python? I have read this post: User ID to Username tweepy, but it does not help with the more-than-100 problem. I am pretty novice in Python, so I cannot come up with a solution myself. What I have tried is this:

    users = []
    i = 0
    num_pages = 2
    while i < num_pages:
        try:
            # Look up a collection of ids
            users.append(api.lookup_users(user_ids=ids[100*i
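One way around the 100-user ceiling, sketched below, is to slice the id list into chunks of at most 100 and call lookup_users once per chunk, extending a single result list. This assumes api is an authenticated tweepy.API instance and ids is a list of user ids, as in the question:

    def lookup_many_users(api, ids, chunk_size=100):
        """Return user objects for an arbitrarily long list of user ids."""
        users = []
        for start in range(0, len(ids), chunk_size):
            chunk = ids[start:start + chunk_size]
            # Each request stays within the 100-id limit of users/lookup.
            users.extend(api.lookup_users(user_ids=chunk))
        return users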

Tweepy Search API Writing to File Error

爱⌒轻易说出口 submitted on 2019-12-25 04:24:22

Question: Noob python user: I've created a file that extracts 10 tweets based on api.search (not the streaming api). I get results on screen, but cannot figure out how to parse the output and save it to csv. My error is TypeError: expected a character buffer object. I have tried using .join(str(x)) and get other errors. My code is:

    import tweepy
    import time
    from tweepy import OAuthHandler
    from tweepy import Cursor

    # Consumer keys and access tokens, used for Twitter OAuth
    consumer_key = ''
    consumer_secret = ''
    atoken
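The "expected a character buffer object" error usually means something other than a string (for example a Status object) was passed to file.write(). A Python 3 sketch that writes selected fields through the csv module instead; the credentials are placeholders and the chosen columns are assumptions:

    import csv
    import tweepy

    # Placeholder credentials; fill in your own keys and tokens.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    results = api.search(q="python", count=10)

    with open("tweets.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["created_at", "screen_name", "text"])
        for status in results:
            # Each row is a list of plain values; csv.writer handles conversion to text.
            writer.writerow([status.created_at, status.user.screen_name, status.text])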

Tweets streamed using tweepy, reading json file in python

≡放荡痞女 submitted on 2019-12-25 03:22:49

Question: I streamed tweets using the following code:

    class CustomStreamListener(tweepy.StreamListener):
        def on_data(self, data):
            try:
                with open('brasil.json', 'a') as f:
                    f.write(data)
                return True
            except BaseException as e:
                print("Error on_data: %s" % str(e))
                return True

Now I have a json file (brasil.json). I want to open it in Python to do sentiment analysis, but I can't find a way. I managed to open the first tweet using this:

    with open('brasil.json') as f:
        for line in f:
            tweets.append(json.loads(line
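The file written by this listener is newline-delimited JSON (one tweet object per line), and the raw stream data can include blank keep-alive lines, so a sketch for loading every tweet could skip empty lines before parsing each one:

    import json

    tweets = []
    with open('brasil.json') as f:
        for line in f:
            line = line.strip()
            if not line:      # skip blank lines the stream may have written
                continue
            tweets.append(json.loads(line))

    print(len(tweets), "tweets loaded")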

How do I filter tweets using location AND keyword?

孤街浪徒 submitted on 2019-12-25 01:42:52

Question: I'm a new Python user and have been experimenting with tweepy. I understand the twitter API does not allow filtering on both location and keywords. To get around this, I've adapted the code from here: How to add a location filter to tweepy module. While it works fine when there are only a few keywords, it stops printing statuses when I increase the number of keywords. I think it's probably because iterating over the keyword list is not the best way to do it. Does anyone have any
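A common pattern for combining the two filters, sketched below, is to stream on the location bounding box only and then match keywords client-side inside on_status; checking membership against a set avoids rescanning a long keyword list for every tweet. The keywords, bounding box, and credentials here are placeholder values:

    import tweepy

    KEYWORDS = {"python", "tweepy", "data"}          # hypothetical keyword set

    class LocationKeywordListener(tweepy.StreamListener):
        def on_status(self, status):
            text = status.text.lower()
            # Keep the tweet only if at least one keyword appears in its text.
            if any(word in text for word in KEYWORDS):
                print(status.text)

        def on_error(self, status_code):
            return status_code != 420                # disconnect on rate limiting

    # Placeholder credentials; replace with your own keys and tokens.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    stream = tweepy.Stream(auth=auth, listener=LocationKeywordListener())
    stream.filter(locations=[-122.75, 36.8, -121.75, 37.8])   # example bounding box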

Tweepy: crawl live streaming tweets and save in to a .csv file

限于喜欢 submitted on 2019-12-24 23:58:47

Question: After reading about streaming with Tweepy and going through this example, I tried to write a tweepy app to crawl live stream data with the tweepy API and save it to a .csv file. When I run my code, it returns an empty csv file ('OutputStreaming.csv') with the column names ['Date', 'Text', 'Location', 'Number_Follower', 'User_Name', 'Friends_count', 'Hash_Tag'], but not the streamed tweets. I also tried to do it this way and also this one, but I am getting the same output with my code:

    def on_status(self, status):
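A minimal sketch of an on_status handler that appends one row per incoming tweet to the CSV, following the column names listed above; the exact fields pulled from the status object are assumptions, and opening the file per row keeps partial results even if the stream crashes:

    import csv
    import tweepy

    class CSVStreamListener(tweepy.StreamListener):
        def on_status(self, status):
            row = [
                status.created_at,                                           # Date
                status.text,                                                 # Text
                status.user.location,                                        # Location
                status.user.followers_count,                                 # Number_Follower
                status.user.screen_name,                                     # User_Name
                status.user.friends_count,                                   # Friends_count
                [h["text"] for h in status.entities.get("hashtags", [])],    # Hash_Tag
            ]
            with open("OutputStreaming.csv", "a", newline="", encoding="utf-8") as f:
                csv.writer(f).writerow(row)
            return True

        def on_error(self, status_code):
            return status_code != 420

    # Attach the listener the same way as in the question's stream setup, e.g.:
    # stream = tweepy.Stream(auth, CSVStreamListener())
    # stream.filter(track=['python'])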