What is the difference between UTF-8 and Unicode?

独厮守ぢ 2020-11-22 17:08

I have heard conflicting opinions from people, and from the Wikipedia UTF-8 page.

They are the same thing, aren't they? Can someone clarify?

15 answers
  • 2020-11-22 17:14

    They're not the same thing - UTF-8 is a particular way of encoding Unicode.

    There are lots of different encodings you can choose from, depending on your application and the data you intend to use. The most common are UTF-8, UTF-16 and UTF-32, as far as I know.

  • 2020-11-22 17:14

    The existing answers already explain a lot of details, but here's a very short answer with the most direct explanation and example.

    Unicode is the standard that maps characters to codepoints.
    Each character has a unique codepoint (identification number), which is a number like 9731.

    UTF-8 is an encoding of those codepoints.
    In order to store characters on disk (in a file), UTF-8 splits each character into up to 4 octets (8-bit sequences), i.e. bytes. UTF-8 is one of several such encodings (methods of representing data). For example, in Unicode the (decimal) codepoint 9731 represents a snowman (☃), which consists of 3 bytes in UTF-8: E2 98 83
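
    To see this split in practice, here is a minimal Python sketch (my own illustration, not part of the original answer); it only uses the built-in chr/encode machinery:

        # Code point 9731 (hex 2603) is the snowman's identity in Unicode.
        snowman = chr(9731)                     # Unicode: number -> character '☃'
        data = snowman.encode("utf-8")          # UTF-8: character -> bytes for a file
        print(data)                             # b'\xe2\x98\x83'  (the 3 bytes E2 98 83)
        print(data.decode("utf-8") == snowman)  # True - decoding reverses the encoding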


  • 2020-11-22 17:15

    This article explains all the details: http://kunststube.net/encoding/

    WRITING TO BUFFER

    If you write the symbol あ to a 4-byte buffer with UTF-8 encoding, your binary will look like this:

    00000000 11100011 10000001 10000010

    If you write the symbol あ to a 4-byte buffer with UTF-16 encoding, your binary will look like this:

    00000000 00000000 00110000 01000010

    As you can see, the language your content is written in affects how much memory it takes.

    e.g. for this particular symbol, UTF-16 encoding is more efficient, since we have 2 spare bytes to use for the next symbol. But that doesn't mean you must use UTF-16 for Japanese text.

    READING FROM BUFFER

    Now if you want to read the above bytes, you have to know which encoding they were written in and decode them back correctly.

    e.g. if you decode these bytes, 00000000 11100011 10000001 10000010, as UTF-16, you will end up with the characters U+00E3 and U+8182, not あ.

    Note: Encoding and Unicode are two different things. Unicode is the big table that maps each symbol to a unique code point; e.g. the symbol (letter) あ has the code point 30 42 (hex). Encoding, on the other hand, is an algorithm that converts symbols into a more suitable form for storage on hardware.

    30 42 (hex) -> UTF-8 encoding -> E3 81 82 (hex), which is the result above in binary.

    30 42 (hex) -> UTF-16 encoding -> 30 42 (hex), which is the result above in binary.
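
    Here is the same round trip as a small Python sketch (my own addition; the 00 padding bytes above are just the unused part of the 4-byte buffer, so they are left out here):

        a = "\u3042"                        # あ, code point 30 42 (hex)
        print(a.encode("utf-8").hex())      # 'e38182' -> E3 81 82
        print(a.encode("utf-16-be").hex())  # '3042'   -> 30 42 (big-endian, no BOM)

        # Decoding the UTF-8 bytes as if they were UTF-16 gives other characters, not あ:
        wrong = bytes.fromhex("00e38182").decode("utf-16-be")
        print([hex(ord(c)) for c in wrong])  # ['0xe3', '0x8182'] -> U+00E3 and U+8182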
    

  • 2020-11-22 17:16

    To expand on the answers others have given:

    We've got lots of languages with lots of characters that computers should ideally display. Unicode assigns each character a unique number, or code point.

    Computers deal with such numbers as bytes... skipping a bit of history here and ignoring memory addressing issues, 8-bit computers would treat an 8-bit byte as the largest numerical unit easily represented on the hardware, 16-bit computers would expand that to two bytes, and so forth.

    Old character encodings such as ASCII are from the (pre-) 8-bit era and try to cram the dominant language in computing at the time, i.e. English, into numbers ranging from 0 to 127 (7 bits). With 26 letters in the alphabet, in both capital and lower-case form, plus digits and punctuation signs, that worked pretty well. ASCII got extended by an 8th bit for other, non-English languages, but the additional 128 numbers/code points made available by this expansion would be mapped to different characters depending on the language being displayed. The ISO-8859 standards are the most common forms of this mapping, for example ISO-8859-1 and ISO-8859-15 (also known as ISO-Latin-1 and latin1; and yes, there are several different parts of the ISO 8859 standard).
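
    To make the "same byte, different character" point concrete, here is a short Python sketch (my own illustration, not part of the original answer) using the byte A4 (hex), which ISO-8859-1 and ISO-8859-15 map differently:

        raw = b"\xa4"                       # one byte beyond 7-bit ASCII
        print(raw.decode("iso-8859-1"))     # '¤' (generic currency sign)
        print(raw.decode("iso-8859-15"))    # '€' (the euro sign, added in -15)
        # The byte on disk is identical; the chosen mapping decides the character.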

    But that's not enough when you want to represent characters from more than one language, so cramming all available characters into a single byte just won't work.

    There are essentially two different types of encodings: one expands the value range by adding more bits. Examples of these encodings would be UCS2 (2 bytes = 16 bits) and UCS4 (4 bytes = 32 bits). They suffer from inherently the same problem as the ASCII and ISO-8859 standards, as their value range is still limited, even if the limit is vastly higher.

    The other type of encoding uses a variable number of bytes per character, and the most commonly known encodings of this kind are the UTF encodings. All UTF encodings work in roughly the same manner: you choose a unit size, which is 8 bits for UTF-8, 16 bits for UTF-16, and 32 bits for UTF-32. The standard then defines certain bit patterns as flags: if they're set, the next unit in a sequence of units is to be considered part of the same character; if they're not set, this unit represents one character fully. Thus the most common (English) characters occupy only one byte in UTF-8 (two in UTF-16, four in UTF-32), while characters from other languages can occupy up to four bytes in UTF-8 (the original design allowed sequences of up to six bytes, but Unicode code points never need more than four).
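
    Those unit counts can be checked with a short Python sketch (my addition; the sample characters are just standard one-, two-, three- and four-byte cases in UTF-8):

        for ch in ("A", "é", "€", "😀"):
            print(ch,
                  len(ch.encode("utf-8")),      # 1 / 2 / 3 / 4 bytes
                  len(ch.encode("utf-16-le")),  # 2 / 2 / 2 / 4 bytes (one or two 16-bit units)
                  len(ch.encode("utf-32-le")))  # always 4 bytes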

    Multi-byte encodings (I should say multi-unit after the above explanation) have the advantage that they are relatively space-efficient, but the downside that operations such as finding substrings, comparisons, etc. all have to decode the characters to Unicode code points before such operations can be performed (there are some shortcuts, though).

    Both the UCS standards and the UTF standards encode the code points as defined in Unicode. In theory, those encodings could be used to encode any number (within the range the encoding supports) - but of course these encodings were made to encode Unicode code points. And that's your relationship between them.

    Windows handles so-called "Unicode" strings as UTF-16 strings, while most UNIXes default to UTF-8 these days. Communications protocols such as HTTP tend to work best with UTF-8, as the unit size in UTF-8 is the same as in ASCII, and most such protocols were designed in the ASCII era. On the other hand, UTF-16 arguably gives the best average space/processing trade-off when representing text from all living languages.

    The Unicode standard defines fewer code points than can be represented in 32 bits. Thus for all practical purposes, UTF-32 and UCS4 became the same encoding, as you're unlikely to have to deal with multi-unit characters in UTF-32.

    Hope that fills in some details.

  • 2020-11-22 17:16

    Unicode is just a standard that defines a character set (UCS) and encodings (UTF) to encode this character set. But in everyday usage, "Unicode" usually refers to the character set rather than to the standard.

    Read The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) and Unicode In 5 Minutes.

  • 2020-11-22 17:17

    Unicode is a standard that, along with ISO/IEC 10646, defines the Universal Character Set (UCS), a superset of all existing characters required to represent practically all known languages.

    Unicode assigns a Name and a Number (Character Code, or Code-Point) to each character in its repertoire.

    UTF-8 encoding is a way to represent these characters digitally in computer memory. UTF-8 maps each code-point to a sequence of octets (8-bit bytes).

    For example:

    UCS Character = Unicode Han Character 𤭢

    UCS code-point = U+24B62

    UTF-8 encoding = F0 A4 AD A2 (hex) = 11110000 10100100 10101101 10100010 (bin)
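
    The same example can be verified with a short Python sketch (my addition; \U00024B62 is just the code point written as an escape):

        han = "\U00024B62"                 # the UCS character at code point U+24B62
        utf8 = han.encode("utf-8")
        print(utf8.hex(" ").upper())       # F0 A4 AD A2
        print(" ".join(f"{b:08b}" for b in utf8))
        # 11110000 10100100 10101101 10100010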
