Troubles with encoding, pattern matching and noisy texts in R


Question


We are experiencing problems with encoding and pattern matching in texts automatically downloaded from the web.

We need some help to understand where the problem lies and how to fix it. Personally, I must confess that after having read so many posts on the topic, I am completely confused :-)

Our texts sometimes include:

1) disturbing Unicode characters (I have already read Automatically escape unicode characters, but I am not sure how it can help with regular expressions);

2) weird quotes, such as ticks and double ticks, that we do not manage to identify automatically (this page is useful: https://www.cl.cam.ac.uk/~mgk25/ucs/quotes.html, but how shall we apply these code points within our code? see the sketch right after this list).
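
As a side note on point 2, the code points listed on that page can be used directly in R regular expressions via \u escapes. Below is a minimal sketch that normalises typographic quotes to plain ASCII ones (the mapping is just one possible choice, not something the page prescribes); the example string is taken from the corpus snippet further down:

x <- "\u201dMen det \u00e4r inte som p\u00e5 film\u201d"
x <- gsub("[\u2018\u2019\u00b4\u0060]", "'", x)  # single ticks and accents -> '
x <- gsub("[\u201c\u201d]", "\"", x)             # curly double quotes -> "
x
# [1] "\"Men det är inte som på film\""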

I have already applied the suggestions given in this post: How to change the locale of R in RStudio?

To make a long story short, I will provide an example (out of many) that shows the kind of problems we are experiencing.

Read the code below (inelegant, but easy to read): the goal is to search for the pattern "CURRENT URL http://[\S]*" in the input file "_ansiktstics_corpus.txt" and save the matches to an output file.

I provide a snippet of an input file at the bottom.

The language of our texts is Swedish. I am working on Windows 10.

----- start code

library(stringr)
rm(list=ls(all=TRUE)) # clear memory

setwd("yourPath”)

seeds_eCare01 <- dir(pattern = "_ansiktstics_corpus.txt") # we have a file list, but for the sake of this example I specify only the one file attached below, to allow reproducibility

cat(file="yourPath/urls.csv", append = FALSE) # create/empty the output file

urlPattern <- "CURRENT URL http://[\\S]*" # we want all the lines containing this pattern (we are interested in extracting the URLs)

for (f in seeds_eCare01) {

  tempData = scan(f, what = "character", encoding = "UTF-8", sep = "", quote = NULL)
  urlList<-grep("urlPattern", tempData, value = TRUE, perl = TRUE) 
  # we also tried gsub(); we got the same error

cat(urlList,file="yourPath/urls.csv", sep="\n",append = TRUE)
}

----- end code

The console output is the following:

---start console output

Read 13354 items
Warning messages:
1: In grep("urlPattern", tempData, value = TRUE, perl = TRUE) :
  input string 18 is invalid UTF-8
2: In grep("urlPattern", tempData, value = TRUE, perl = TRUE) :
  input string 19 is invalid UTF-8
3: In grep("urlPattern", tempData, value = TRUE, perl = TRUE) :
  input string 4590 is invalid UTF-8
4: In grep("urlPattern", tempData, value = TRUE, perl = TRUE) :
  input string 4591 is invalid UTF-8
5: In grep("urlPattern", tempData, value = TRUE, perl = TRUE) :
  input string 4593 is invalid UTF-8

---end console output

No pattern was found in the file, although "CURRENT URL" is there.
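
One detail that may explain this (and that matches what Answer 1 below does): in the snippet above the pattern is passed to grep() as the literal string "urlPattern" rather than as the variable urlPattern, so grep() searches for the text "urlPattern" itself. A minimal contrast, using the first corpus line as input:

tempData <- "</>CURRENT URL http://www.aftonbladet.se/halsa/article11873667.ab"
grep("urlPattern", tempData, value = TRUE)             # literal string: no match
# character(0)
urlPattern <- "CURRENT URL http://[\\S]*"
grep(urlPattern, tempData, value = TRUE, perl = TRUE)  # variable: matches
# [1] "</>CURRENT URL http://www.aftonbladet.se/halsa/article11873667.ab"

The invalid-UTF-8 warnings are a separate issue from the failed match.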

My default locale is:

> Sys.getlocale()
[1] "LC_COLLATE=Swedish_Sweden.1252;LC_CTYPE=Swedish_Sweden.1252;LC_MONETARY=Swedish_Sweden.1252;LC_NUMERIC=C;LC_TIME=Swedish_Sweden.1252"

Now, let’s get to the problem:

The file that I want to read and search is loaded with encoding = "UTF-8":

scan(f, what = "character", encoding = "UTF-8", sep = "", quote = NULL)

but when I run the following checks (to verify that tempData is UTF-8), I get FALSE:

all(stri_enc_isutf8(tempData))
[1] FALSE
> stri_enc_mark(tempData)
   [1] "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "UTF-8" "ASCII" "UTF-8" "ASCII" "ASCII" "UTF-8" "UTF-8"
  [18] "UTF-8" "UTF-8" "ASCII" "ASCII" "ASCII" "ASCII" "UTF-8" "UTF-8" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII"

I also tried readLines() with a connection (as suggested by Gries 2017), but I experienced similar problems:

(tempData<-readLines(con<-file(f, encoding="UTF-8"), warn=FALSE)); close(con)

When I run the following check I get TRUE, but the grep fails exactly as with scan():

all(stri_enc_isutf8(tempData))
[1] TRUE

However, when I check the encoding marks, I get a mixture of ASCII and UTF-8:

stri_enc_mark(tempData)
  [1] "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "ASCII" "UTF-8" "UTF-8" "UTF-8" "ASCII" "ASCII" "UTF-8" "UTF-8" "UTF-8" "UTF-8" "UTF-8" "UTF-8"

Questions:

1) Pattern matching does not work: why?

2) The encoding is not consistently UTF-8 although we force it: why?

3) How can we get rid of, or convert, the disturbing characters?

Any suggestions, hints or insights are greatly appreciated.

Thanks in advance

Cheers, Marina

input file: "_ansiktstics_corpus.txt"
<>
<header>
<usergroup>lay/specialized</usergroup>
<annotator's comments>[...]</annotator's comments>
</header>
</>CURRENT URL http://www.aftonbladet.se/halsa/article11873667.ab
”Men det är inte som på film”
 Miriam Jaakola, 30, var 26 år när hon fick sin diagnos. Foto: tomas bergman
Hon varken svär eller säger fula ord — men hon drabbas ofta av tics och har svårt att tygla sina
impulser.
Miriam Jaakola, 30, har Tourettes syndrom.
”Jag har haft tics så länge jag kan minnas. När jag var barn hade jag mycket ansiktstics. Jag blinkade, gapade och gjorde grimaser. Jag brukade kalla det för mina ovanor och tänkte egentligen inte så mycket på det. Det var så mycket annat i mitt liv som var rörigt. Jag har en ganska mild form av Tourettes syndrom, så mina tics är inte så tydliga. Det är både på gott och på ont. Folk som inte vet om det märker oftast inte att jag har tourette. Själv förstod jag det inte heller förrän jag var 26 år och fick min diagnos.
Svär inte
Den vanligaste fördomen är att alla som har Tourettes syndrom svär eller säger fula ord. Men ticsen kan se väldigt olika ut. De ändrar sig också över tid. Det är bara en tredjedel av alla med tourette som säger fula ord under någon period i sitt liv. Jag har aldrig gjort det.
Tourette är en sorts känsla – som ett tryck eller en frustration – som byggs upp inom en och till slut bryter ut i tics. Ungefär som när man nyser eller känner att en nysning är på väg. Jag kan hålla tillbaka mina tics om jag är ute bland folk, men då blir de oftast
värre efteråt.Det fi nns en stark energi i tourette. I dag när jag vet hur jag ska hantera det, kan jag vända den energin till något
positivt, till exempel i mitt jobb. Men tourette kan också ställa till problem. Jag har svårt att koncentrera mig och kontrollera impulser. När jag var yngre kunde jag få blixtsnabba utbrott. Jag minns fortfarande första gången jag reflekterade över det. Jag var runt tio år och stod och pillade med något. Plötsligt kastade jag iväg alla grejer. Hade jag haft en allvarligare form av tourette hade jag säkert skrikit eller fått en spasmurladdning. Jag minns att jag tänkte: Undrar om alla har det så här, eller om det bara är jag?
Skoltiden jobbig
Skoltiden var jättejobbig.

Answer 1:


Here "readlines" is ok, not sure what happens with encoding, but I get NO errors or warnings :-)

library(stringr)
rm(list=ls(all=TRUE)) # clear memory
setwd("path")

seeds_eCare01 <- dir(pattern = "_ansiktstics_corpus.txt") # see snippet above

cat("seed;NumOfWebDoc;CumulativeSum",file="outputFile",  sep="\n", append =FALSE)


urlPattern<-"<\\/>CURRENT URL" 

totURLs<-0

for (f in seeds_eCare01) {

  (tempData<-readLines(con<-file(f, encoding="UTF-8"), warn=FALSE)); close(con)

  urlList <- grep(urlPattern, tempData, value = TRUE, perl = TRUE) # the pattern variable, not the string "urlPattern"

  countURLsPerSeed<-length(urlList)
  totURLs<-totURLs + countURLsPerSeed
  out1<-c(f, countURLsPerSeed,totURLs)
  out2<-paste(out1,collapse=";")

  cat(out2,file="outputFile", sep="\n",append = TRUE)
} 
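
On the snippet above, this should write the header line followed by _ansiktstics_corpus.txt;1;1, since the sample file contains exactly one "</>CURRENT URL" line.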



Answer 2:


Wiktor helped me with this code. The code converts a noisy text corpus into a clean dataset of strings (.csv).

rm(list=ls(all=TRUE))
library(NLP)
library(tm)

# Settings
kNonAnnotatedTextsPath <- "path"  # The folder path for the text content.
kAnnotatedTextsPath <- "path"  # The folder path for the categories. 
kOutputPath <- "path"  # The destination for the output file.
kOutputFileName <- "output.csv"  # The name and format of the output file
kOverwriteOutputFile <- TRUE  # Overwrite or append the content to the output file? TRUE = Overwrite, FALSE = Append.
kWarn <- TRUE  # Warn the user of uncommon categories. 

# Patterns
kStartPattern <- "CURRENT URL"  # The text that occurs on the line before the text content.
kBreakPattern <- "<>"  # The text that occurs on the line after the text content.
kCategoryPattern <- ".*<usergroup>(.*)</usergroup>.*"  # The text that surrounds the category: (.*)


ExtractCategories <- function(file.name){
  # Extracts the categories for a given file. Returns in form of a list vector.
  con <- file(paste(kAnnotatedTextsPath, "/", file.name, sep = ""), encoding="UTF-8")
  document.sections <- readLines(con, warn=FALSE)
  close(con)
  document.categories <- vector(mode = "list")
  document.names <- c()

  for(section in document.sections){
    if (grepl(kCategoryPattern, section)){
      document.categories <- c(document.categories, gsub(kCategoryPattern, "\\1", section))
    }
    if (grepl(kStartPattern, section)){
      document.names <- c(document.names, section)
    }
  }
  names(document.categories) <- document.names
  return(document.categories)
}

ExtractDocuments <- function(file, provided.categories){
  # Extracts the text content from a given file, and appends the correct category.
  # Returns a list of two, one with a list the text content and one list with the corresponding categories.
  collect <- FALSE
  con <- file(paste(kNonAnnotatedTextsPath, "/", file, sep = ""), encoding="UTF-8")
  document.sections <- readLines(con, warn=FALSE)
  close(con)
  document.string <- ""
  document.list <- c()
  document.categories <- c()
  document.name <- ""

  for(section in document.sections){
    if(grepl(kStartPattern, section)){
      document.name <- section
      collect <- TRUE
    } else if(collect == TRUE && grepl(kBreakPattern, section)){
      document.categories <- c(document.categories, get(document.name, provided.categories))
      document.list <- c(document.list, document.string)
      document.name <- ""
      document.string <- ""
      collect <- FALSE
    } else if(collect){
      document.string <- paste(document.string, section, sep = " ")
    }
  }
  if(nchar(document.string) != 0){
    document.categories <- c(document.categories, get(document.name, provided.categories))
    document.list <- c(document.list, document.string)
  }
  return(cbind(c(document.list), c(document.categories)))
}

RemoveMisc <- function(string){
  # Removes the following characters: ” (U+201D), — (U+2014), – (U+2013), ʹ (U+02B9).
  gsub(pattern = "[\\x{201d}\\x{2014}\\x{2013}\\x{02B9}]", replacement = "", string, perl = TRUE)
}

RemoveStartSpace <- function(string){
  # Removes space at the start of a paragraph.
  gsub(pattern = "^[ ]", replacement = "", string, perl = TRUE)
} 

RemoveMultiSpace <- function(string){
  # Removes multiple occurrences of spaces in a row.
  gsub(pattern = "[ ]{2,}", replacement = " ", string, perl = TRUE)
} 

RemoveWebsites <- function(string){
  # Removes common webpage formats (URLs) from the text.
  gsub(pattern = "(?:(?:(?:http[s]*://)|(?:www\\.))+[\\S]*)", replacement = "", string, perl = TRUE)
} 

CleanDocuments <- function(documents){
  # Cleans the documents of unwanted (combinations of) signs, and replaces uppercase letters with lowercase.
  # Returns the documents as a corpus object.
  corpus <- Corpus(VectorSource(documents[, 1]))
  meta(corpus, type="indexed", tag="Category") <- documents[, 2]
  # Custom transformations (and tolower) are wrapped in content_transformer()
  # so that tm_map() keeps the corpus structure intact.
  corpus <- tm_map(corpus, content_transformer(RemoveWebsites))
  corpus <- tm_map(corpus, removeNumbers)
  corpus <- tm_map(corpus, removePunctuation)
  corpus <- tm_map(corpus, content_transformer(RemoveMisc))
  corpus <- tm_map(corpus, content_transformer(RemoveStartSpace))
  corpus <- tm_map(corpus, content_transformer(RemoveMultiSpace))
  corpus <- tm_map(corpus, content_transformer(tolower))
  return(corpus)
}

SaveDocuments <- function(corpus, output.file, warn = FALSE){
  # Saves the documents to a csv file in the format: '<text>',<category>
  counter = 1
  while (counter <= length(corpus)){
    text <- as.character(corpus[[counter]])
    category <- as.character(meta(corpus)[[1]][counter])

    if(warn && !(category %in% c("lay", "specialized"))){
      print("Warning!")
      print(paste("Unusual classification '", category, "'", ", in the following text:", sep = ""))
      print(text)
    }

    padded_text <- paste("'", text, "',", category, sep = "")
    write.table(x = padded_text, file = output.file, append = TRUE, sep = "", row.names = FALSE, col.names = FALSE, quote = FALSE)
    counter <- counter + 1
  }
}

CreateCorpus <- function(overwrite = FALSE){
  # Iterates the files and creates the corpus, which is saved as a csv file.
  output.file <- paste(kOutputPath, "/", kOutputFileName, sep = "")
  seeds <- dir(path = kAnnotatedTextsPath, pattern = "*.txt")

  if (overwrite) {
    close(file(output.file, open="w"))
  }

  for (seed in seeds){
    document.categories <- ExtractCategories(seed)
    document.texts <- ExtractDocuments(seed, document.categories)
    corpus <- CleanDocuments(document.texts)
    SaveDocuments(corpus, output.file, kWarn)
  }
}

CreateCorpus(kOverwriteOutputFile)


Source: https://stackoverflow.com/questions/43157115/troubles-with-encoding-pattern-matching-and-noisy-texts-in-r
