How do I clean twitter data in R?

Submitted by 醉酒当歌 on 2019-12-17 22:43:11

Question


I extracted tweets from Twitter using the twitteR package and saved them to a text file.

I have carried out the following transformations on the corpus:

xx <- tm_map(xx, removeNumbers, lazy=TRUE, mc.cores=1)
xx <- tm_map(xx, stripWhitespace, lazy=TRUE, mc.cores=1)
xx <- tm_map(xx, removePunctuation, lazy=TRUE, mc.cores=1)
xx <- tm_map(xx, strip_retweets, lazy=TRUE, mc.cores=1)
xx <- tm_map(xx, removeWords, stopwords("english"), lazy=TRUE, mc.cores=1)

(using mc.cores=1 and lazy=TRUE because otherwise R on Mac runs into errors)

tdm<-TermDocumentMatrix(xx)

But this term document matrix has a lot of strange symbols, meaningless words and the like. If a tweet is

 RT @Foxtel: One man stands between us and annihilation: @IanZiering.
 Sharknado 3: OH HELL NO! - July 23 on Foxtel @SyfyAU

After cleaning the tweet I want only proper, complete English words to be left, i.e. a sentence/phrase void of everything else (user names, shortened words, URLs).

example:

One man stands between us and annihilation oh hell no on 

(Note: the transformation commands in the tm package can only remove stop words, punctuation, and whitespace, and convert to lowercase.)
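For what it's worth, tm can be extended beyond its built-in transforms: content_transformer() wraps an arbitrary string function (e.g. a gsub call) so tm_map can apply it. A minimal sketch, assuming the corpus xx from above:

```r
library(tm)

# content_transformer() lifts any character -> character function into a
# tm transformation, so gsub-based cleaners (here: stripping @user names,
# which no built-in transform covers) can run inside tm_map.
removeHandles <- content_transformer(function(x) gsub("@\\w+", "", x))
xx <- tm_map(xx, removeHandles)
```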


Answer 1:


Using gsub and the stringr package, I have figured out part of the solution for removing retweets, references to screen names, hashtags, spaces, numbers, punctuation, and URLs.

  clean_tweet = gsub("&amp", "", unclean_tweet)
  clean_tweet = gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", clean_tweet)
  clean_tweet = gsub("@\\w+", "", clean_tweet)
  clean_tweet = gsub("[[:punct:]]", "", clean_tweet)
  clean_tweet = gsub("[[:digit:]]", "", clean_tweet)
  clean_tweet = gsub("http\\w+", "", clean_tweet)
  clean_tweet = gsub("[ \t]{2,}", " ", clean_tweet)
  clean_tweet = gsub("^\\s+|\\s+$", "", clean_tweet) 
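The same chain can be collapsed into one reusable function. A sketch with two deliberate tweaks of mine: the URL pattern is broadened to "http\\S+" and moved before punctuation removal so full URLs like http://t.co/x actually match, and runs of whitespace are replaced by a single space rather than deleted so words don't get glued together.

```r
# Illustrative consolidation of the gsub chain above (base R only).
clean_tweets <- function(x) {
  x <- gsub("&amp", "", x)                         # HTML-escaped ampersands
  x <- gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", x)  # retweet/via headers
  x <- gsub("http\\S+", "", x)                     # URLs, before punctuation goes
  x <- gsub("@\\w+", "", x)                        # remaining screen names
  x <- gsub("[[:punct:]]", "", x)                  # punctuation
  x <- gsub("[[:digit:]]", "", x)                  # digits
  x <- gsub("[ \t]{2,}", " ", x)                   # collapse repeated spaces/tabs
  gsub("^\\s+|\\s+$", "", x)                       # trim leading/trailing space
}

clean_tweets("RT @Foxtel: Hello http://t.co/abc 123!!")  # "Hello"
```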

ref: (Hicks, 2014). After the above I did the below:

 # get rid of unnecessary spaces
clean_tweet <- str_replace_all(clean_tweet, " +", " ")
# Get rid of URLs
clean_tweet <- str_replace_all(clean_tweet, "http://t.co/[a-z,A-Z,0-9]*{8}","")
# Take out retweet header, there is only one
clean_tweet <- str_replace(clean_tweet,"RT @[a-z,A-Z]*: ","")
# Get rid of hashtags
clean_tweet <- str_replace_all(clean_tweet,"#[a-z,A-Z]*","")
# Get rid of references to other screennames
clean_tweet <- str_replace_all(clean_tweet,"@[a-z,A-Z]*","")   

ref: (Stanton 2013)

Before doing any of the above, I collapsed all the tweets into one long string using:

paste(mytweets, collapse=" ")

This cleaning process has worked quite well for me, as opposed to the tm_map transforms.

All that I am left with now is a set of proper words and a very few improper ones. Now I only have to figure out how to remove the non-proper English words, probably by subtracting my set of words from a dictionary of words.
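That subtraction could look like the sketch below, assuming the qdapDictionaries package is installed (its GradyAugmented vector is a large English word list); any plain-text dictionary read with readLines() would work the same way.

```r
library(qdapDictionaries)  # assumed installed; provides GradyAugmented

# Split the cleaned text into lowercase tokens and keep only dictionary hits.
tokens <- unlist(strsplit(tolower(clean_tweet), "\\s+"))
proper_words <- tokens[tokens %in% GradyAugmented]
clean_tweet <- paste(proper_words, collapse = " ")
```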




Answer 2:


To remove the URLs you could try the following:

removeURL <- function(x) gsub("http[[:alnum:]]*", "", x)
xx <- tm_map(xx, content_transformer(removeURL))

You could define similar functions to transform the text further.
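For example, the non-ASCII "strange symbols" and the hashtags from the question could be handled the same way (a sketch; in tm >= 0.6 plain functions should be wrapped in content_transformer()):

```r
# Strip characters outside the ASCII range, then hashtags.
removeNonASCII <- content_transformer(function(x) gsub("[^\x01-\x7F]", "", x))
removeHashtags <- content_transformer(function(x) gsub("#\\S+", "", x))

xx <- tm_map(xx, removeNonASCII)
xx <- tm_map(xx, removeHashtags)
```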




Answer 3:


For some reason, this code did not work for me:

# Get rid of URLs
clean_tweet <- str_replace_all(clean_tweet, "http://t.co/[a-z,A-Z,0-9]*{8}","")

The error was:

Error in stri_replace_all_regex(string, pattern, fix_replacement(replacement),  : 
 Syntax error in regexp pattern. (U_REGEX_RULE_SYNTAX)

So, instead, I used

clean_tweet4 <- str_replace_all(clean_tweet3, "https://t.co/[a-z,A-Z,0-9]*","")
clean_tweet5 <- str_replace_all(clean_tweet4, "http://t.co/[a-z,A-Z,0-9]*","")

to get rid of URLs
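The two calls can also be merged into one: "https?" matches both schemes, and writing the class as [A-Za-z0-9] drops the stray commas that the original character class contained.

```r
library(stringr)

# One pattern covers both http:// and https:// t.co links.
clean_tweet4 <- str_replace_all(clean_tweet3, "https?://t.co/[A-Za-z0-9]*", "")
```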




Answer 4:


This code does some basic cleaning:

Convert to lowercase

df <- tm_map(df, content_transformer(tolower))

Remove punctuation

df <- tm_map(df, removePunctuation)

Remove numbers

df <- tm_map(df, removeNumbers)

Remove common stop words

df <- tm_map(df, removeWords, stopwords('english'))

Remove URLs

removeURL <- function(x) gsub('http[[:alnum:]]*', '', x)
df <- tm_map(df, content_transformer(removeURL))


Source: https://stackoverflow.com/questions/31348453/how-do-i-clean-twitter-data-in-r
