I entered a text string in a .csv file that contains a Unicode escape sequence such as \U00B5 g/dL. The escape shows up literally, both in the .csv file and in the data frame after reading it into R.
Well, first understand that certain characters in an R string can only be written with an escape sequence, and the escape character is the backslash ("\"). That's why the backslash itself needs to be escaped when you write it in a string:
a <- "\" # error
a <- "\\" # ok.
The "\U" is a special indicator for unicode escaping. Note that there are no slashes or U's in the string itself when you use this escaping. It is just a shortcut to a specific character. Note:
a <- "\U00B5"
cat(a)
# µ
grep("U",a)
# integer(0)
nchar(a)
# [1] 1
This is very different from the string
a <- "\\U00B5"
cat(a)
# \U00B5
grep("U",a)
# [1] 1
nchar(a)
# [1] 6
Normally, when you import a text file, non-ASCII characters are encoded in whatever encoding the file uses (UTF-8 and Latin-1 are the most common): specific bytes represent those characters directly. It is not "normal" for a text file to spell a Unicode character out as an ASCII escape sequence. That's why R doesn't attempt to convert the text "\U00B5" into a Unicode character; it assumes that if you had wanted the character, you would have used it directly.
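For contrast, here is a minimal sketch of that usual situation, where the micro sign is stored as actual UTF-8 bytes and no unescaping is needed after import (the file name "utf8.csv" is just a placeholder):
con <- file("utf8.csv", open = "w", encoding = "UTF-8")
writeLines(c("label", "\u00B5g/dL"), con)  # writes the real character, not an escape
close(con)
ok <- read.csv("utf8.csv", encoding = "UTF-8", stringsAsFactors = FALSE)
cat(ok$label)
# µg/dL
nchar(ok$label)
# [1] 5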
The easiest way to re-interpret your ASCII escape sequences is with the stringi package. For example:
library(stringi)
a <- "\\U00B5"
stri_unescape_unicode(gsub("\\U","\\u",a, fixed=TRUE))
# [1] "µ"
(The only catch is that we first need to convert "\U" to the more common "\u" so the function properly recognizes the escape.) You can apply this to your imported data with
test$label <- stri_unescape_unicode(gsub("\\U","\\u",test$label, fixed=TRUE))
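Putting it together, a minimal end-to-end sketch (the file name "measurements.csv" is a hypothetical stand-in for your actual file; the column is assumed to be called label as above):
library(stringi)
# Write a file whose label field contains the literal six characters \U00B5
writeLines(c("label", "\\U00B5g/dL"), "measurements.csv")
test <- read.csv("measurements.csv", stringsAsFactors = FALSE)
cat(test$label)
# \U00B5g/dL
# Swap \U for \u, then unescape into the real character
test$label <- stri_unescape_unicode(gsub("\\U","\\u",test$label, fixed=TRUE))
cat(test$label)
# µg/dL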