Question
I'm currently studying Abstract Syntax Notation One (ASN.1) and reading ITU-T Recommendation X.690.
On page 15, in paragraph 8.3.2, it says:
If the contents octets of an integer value encoding consist of more than one octet, then the bits of the first octet and bit 8 of the second octet:
- shall not all be ones; and
- shall not all be zero.
NOTE – These rules ensure that an integer value is always encoded in the smallest possible number of octets.
I understand that for the integer to always be encoded in the smallest possible number of octets, the first octet shall not be all zeros.
But what about ones? If I want to encode the value 65408 (1111 1111 1000 0000) using the Basic Encoding Rules, how should I do it?
Answer 1:
I understand that for the integer to always be encoded in the smallest possible number of octets, the first octet shall not be all zeros.
Not necessarily. The encoding is two's complement, so if the highest bit of the first octet is 1, the value is interpreted as negative. To denote a positive integer whose high bit would otherwise be set, a leading zero (0x00) octet is added. That is the general rule.
Here is a good article about Integer encoding: http://msdn.microsoft.com/en-us/library/windows/desktop/bb540806(v=vs.85).aspx
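To illustrate the point about the leading zero octet, here is a quick sketch (not from either answer) using Python's built-in `int.to_bytes`, which produces exactly the big-endian two's-complement octets BER uses for INTEGER contents:

```python
# 65408 is 0xFF80. As an unsigned value it fits in 2 octets, but its high
# bit is set, so a signed (two's-complement) encoding needs a third,
# leading 0x00 octet to keep the value positive.
contents = (65408).to_bytes(3, byteorder="big", signed=True)
print(contents.hex())  # 00ff80
```

Trying `(65408).to_bytes(2, "big", signed=True)` raises `OverflowError`, which is Python's way of saying the value does not fit in two signed octets.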
Answer 2:
The encoding is two's complement. You need a leading octet of 0000 0000. Note that this does not violate the rule you quoted, since bit 8 of the second octet will be 1 (so the first nine bits are not all zeros).
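Putting both answers together, a minimal sketch of a BER INTEGER encoder (the function name `ber_encode_integer` is my own; short-form length only, so it assumes contents shorter than 128 octets):

```python
def ber_encode_integer(value: int) -> bytes:
    # Contents octets: shortest big-endian two's-complement representation.
    # Growing the length until to_bytes succeeds yields the minimal encoding
    # required by X.690 8.3.2.
    length = 1
    while True:
        try:
            contents = value.to_bytes(length, byteorder="big", signed=True)
            break
        except OverflowError:
            length += 1
    # TLV: tag 0x02 (INTEGER), short-form length, then the contents octets.
    return bytes([0x02, len(contents)]) + contents

print(ber_encode_integer(65408).hex())  # 020300ff80
```

For 65408 this yields contents `00 FF 80`: the first octet is 0x00, but bit 8 of the second octet is 1, so the nine-bit rule from 8.3.2 is satisfied and no shorter encoding exists.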
Source: https://stackoverflow.com/questions/25617796/asn-basic-encoding-rule-of-an-integer