twos-complement

How to get the 2's complement value of a BigInteger of arbitrary length

落爺英雄遲暮 submitted on 2019-12-23 06:14:37
Question: Is there a method in BigInteger to get the 2's complement value? For example, given a BigInteger with a negative value, BigInteger a = new BigInteger("-173B8EC504479C3E95DEB0460411962F9EF2ECE0D3AACD749BE39E1006FC87B8", 16); I want the 2's complement as a BigInteger: BigInteger b = E8C4713AFBB863C16A214FB9FBEE69D0610D131F2C55328B641D61EFF9037848. I can subtract the magnitude of the first BigInteger from 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF and add one to get the second.

How do I make BigInteger see the binary representation of this Hex string correctly?

旧时模样 submitted on 2019-12-23 03:05:29
Question: The problem: I have a byte[] that is converted to a hex string, and then that string is parsed like this: BigInteger.Parse(thatString, NumberStyles.HexNumber). This seems wasteful, since BigInteger can accept a byte[] directly, as long as the two's complement is accounted for. A working (inefficient) example: according to MSDN, the most significant bit of the last byte should be zero in order for the hex number to be a positive one. The following is an example of a hex number that has this…
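The question is about C#, but the underlying byte-order and sign-bit rule can be sketched in Python with int.from_bytes, which accepts the same little-endian two's-complement layout that C#'s BigInteger(byte[]) constructor uses:

```python
# C#'s BigInteger(byte[]) treats the array as little-endian two's complement:
# if the most significant bit of the LAST byte is set, the value is negative.
# Python's int.from_bytes with signed=True applies the same rule.
raw = bytes([0x80])                      # top bit of the last byte is set
negative = int.from_bytes(raw, byteorder="little", signed=True)
print(negative)                          # -128

padded = bytes([0x80, 0x00])             # append a zero byte to force a positive sign
positive = int.from_bytes(padded, byteorder="little", signed=True)
print(positive)                          # 128
```

This is why MSDN says to append a zero byte: it clears the sign bit of the last byte without changing the magnitude.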

Python - Most effective way to implement two's complement? [duplicate]

*爱你&永不变心* submitted on 2019-12-23 02:58:21
Question: This question already has answers here: Two's Complement in Python (16 answers). Closed 6 years ago. Two's complement is when you invert the bits and then add a binary 1. For example: 0011001; apply two's complement: 1. invert the bits: 1100110; 2. add a binary 1: 1100110 + 1 = 1100111. Another example, to show an overflow situation: 1001100; apply two's complement: 1. invert the bits: 0110011; 2. add a binary 1: 0110011 + 1 = 0110100. What would be the best way to implement this in…
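A compact Python implementation of the invert-and-add-one rule at a fixed bit width might look like this (the function name and the explicit width parameter are my own):

```python
def twos_complement(value: int, bits: int) -> int:
    """Return the two's complement of `value` within `bits` bits:
    invert the bits, add one, keep only the low `bits` bits."""
    mask = (1 << bits) - 1
    return (~value + 1) & mask          # equivalently: (-value) & mask

print(format(twos_complement(0b0011001, 7), "07b"))  # 1100111
print(format(twos_complement(0b1001100, 7), "07b"))  # 0110100
```

The mask is what makes the "overflow" in the second example a non-issue: any carry out of the top bit is simply discarded.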

Convert a signed decimal to hex encoded with two's complement

一个人想着一个人 submitted on 2019-12-22 03:33:09
Question: I need to encode a signed integer as hexadecimal using two's complement notation. For example, I would like to convert -24375 to 0xffffa0c9. So far I have been working along the following lines: parseInt(-24375).toString(2) > "-101111100110111". This matches what Wolfram Alpha displays, but I am not sure how to get to the signed 32-bit representation of the number (ffffa0c9). I've worked out how to take the unsigned binary number and represent it as two's complement: ~ parseInt(…
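The question is about JavaScript, but the conversion itself is just a 32-bit mask; a Python sketch of the idea:

```python
def to_hex32(n: int) -> str:
    """Encode a signed integer as 8 hex digits of 32-bit two's complement."""
    return format(n & 0xFFFFFFFF, "08x")

print(to_hex32(-24375))   # ffffa0c9
print(to_hex32(24375))    # 00005f37
```

In JavaScript itself, (-24375 >>> 0).toString(16) produces the same digits, because the unsigned right shift operator coerces its operand to an unsigned 32-bit integer.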

Two's complement binary form

天大地大妈咪最大 submitted on 2019-12-22 01:39:56
Question: In a TC++ compiler, the binary representation of 5 is (0000000000000101). I know that negative numbers are stored as 2's complement, so -5 in binary is (1111111111111011). The most significant bit (the sign bit) is 1, which tells us that it is a negative number. So how does the compiler know that it is -5? If we interpret the binary value above (1111111111111011) as an unsigned number, it turns out completely different. Also, why is the 1's complement of 5 equal to -6 (1111111111111010)? Answer 1: …
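The compiler doesn't "know" anything from the bits alone: the declared type tells it whether to apply the two's-complement decoding rule. That rule can be sketched in Python for a 16-bit word (helper name mine):

```python
def as_signed16(word: int) -> int:
    """Interpret a 16-bit pattern as a signed (two's-complement) value."""
    return word - (1 << 16) if word & 0x8000 else word

bits = 0b1111111111111011                # the pattern stored for -5
print(as_signed16(bits))                 # -5   (signed reading)
print(bits)                              # 65531 (the same bits read as unsigned)
print(as_signed16(0b1111111111111010))   # -6   (the 1's complement of 5)
```

The last line answers the second question: flipping the bits of 5 gives the pattern for -6, because two's complement is one's complement plus one, so ~x == -x - 1.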

Binary Subtraction with 2's Complement

本秂侑毒 submitted on 2019-12-21 05:32:11
Question: I need help subtracting in binary using 2's complement representation and 5 bits for each number: 1) -9 - 7 = ? Is there overflow? 9 = 01001 (2's complement = 10111) and 7 = 00111 (2's complement = 11001). Now we add, because we're using 2's complement: 10111 + 11001 = 110000. But this answer doesn't make sense. Also, I'm assuming there's overflow because there are more than 5 bits in the answer. 2) 6 - 10, same process as before. Negative binary numbers don't make sense to me. Answer 1: 1) -9 - 7…
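Working the 5-bit example in Python shows why the "extra" sixth bit is not by itself an overflow: dropping it leaves 10000, which is exactly -16, the correct value of -9 - 7. A sketch (helper names are mine):

```python
BITS = 5
MASK = (1 << BITS) - 1                   # 0b11111

def as_signed(v: int) -> int:
    """Decode a 5-bit pattern as a signed two's-complement value."""
    return v - (1 << BITS) if v & (1 << (BITS - 1)) else v

a, b = -9 & MASK, -7 & MASK              # 10111 and 11001
raw = a + b                              # 110000: six bits wide
result = raw & MASK                      # drop the carry -> 10000
print(format(raw, "b"), format(result, "05b"), as_signed(result))
# 110000 10000 -16

# Signed overflow occurs only when two same-sign operands produce a
# result of the OPPOSITE sign; here both inputs and the result are
# negative, so -9 - 7 = -16 fits the 5-bit range [-16, 15]: no overflow.
```

A carry out of the top bit (the dropped sixth bit) and signed overflow are different conditions; confusing the two is exactly what trips the asker up.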

How do I detect overflow while multiplying two 2's complement integers?

久未见 submitted on 2019-12-21 03:18:20
Question: I want to multiply two numbers and detect whether there was an overflow. What is the simplest way to do that? Answer 1: Multiplying two 32-bit numbers yields a 64-bit result, two 8-bit numbers give 16 bits, etc. Binary multiplication is simply shifting and adding. So if you have, say, two 32-bit operands, with bit 17 set in operand A and any of the bits above 15 or 16 set in operand B, you will overflow a 32-bit result: bit 17 shifted left by 16 lands on bit 33, which does not fit in 32. So the question again is: what are the sizes of your…
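Since Python integers never overflow, a detection sketch can simply compute the full product and check whether it fits the target width (the function is my own illustration, not from the answer):

```python
def mul_overflows(a: int, b: int, bits: int = 32) -> bool:
    """True if a*b does not fit in a signed `bits`-bit two's-complement integer."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return not (lo <= a * b <= hi)

print(mul_overflows(1 << 16, 1 << 16))   # True: the product needs bit 33
print(mul_overflows(3, 5))               # False: 15 fits easily
print(mul_overflows(-(1 << 30), 4))      # True: -2**32 is below INT32_MIN
```

This mirrors the answer's advice: widen, multiply, then compare against the narrow range. In C, GCC and Clang offer __builtin_mul_overflow, which does the same check without needing a wider type.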

Interpreting signed and unsigned numbers

痞子三分冷 submitted on 2019-12-20 05:52:39
Question: The teacher told us that a binary number, for example 1000 0001, has two meanings. One is the signed interpretation, -127, on a range from -128 to 127; the other is an unsigned number, on a range from 0 to 255. If I have a number in binary, for example 1000 0001, the calculator shows only the signed value (-127). How can I know what unsigned number this binary number represents? Answer 1: A signed and an unsigned number have exactly the same bits! In your calculator, you can display it as hex (0xff). It's…
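The two readings of the same 8-bit pattern can be sketched in Python:

```python
bits = 0b10000001          # the pattern in question (0x81)

unsigned = bits                                   # just read the bits as-is
signed = bits - 256 if bits & 0x80 else bits      # two's-complement decoding
print(unsigned, signed)    # 129 -127
```

When the top bit is set, the two readings always differ by exactly 256 (2^8), so converting between them is a single addition or subtraction.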

sra (shift right arithmetic) vs srl (shift right logical)

柔情痞子 submitted on 2019-12-19 08:29:18
Question: Please take a look at these two pieces of pseudo-assembly code:

1)
li $t0,53
sll $t1,$t0,2
srl $t2,$t0,2
sra $t3,$t0,2
print $t1
print $t2
print $t3

2)
li $t0,-53
sll $t1,$t0,2
srl $t2,$t0,2
sra $t3,$t0,2
print $t1
print $t2
print $t3

In the first case the output is: 212 13 13. In the second it is: -212 107374... -14. But shouldn't sra(-53) = -(srl 53)?

Answer 1:
-53   = 1111111111001011
sra 2 = 1111111111110010 (11)
        ^^                ^^
        sign extension    dropped
Because the extra bits are simply dropped for both…
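Python's >> on negative integers is already arithmetic (sign-extending), so srl has to be emulated by masking to the word width first; a 32-bit sketch of both shifts:

```python
BITS = 32
MASK = (1 << BITS) - 1

def sra(x: int, n: int) -> int:
    """Arithmetic shift right: copies of the sign bit come in from the left."""
    return x >> n                    # Python's >> already sign-extends

def srl(x: int, n: int) -> int:
    """Logical shift right: zeros come in from the left."""
    return (x & MASK) >> n           # reinterpret as unsigned, then shift

print(sra(-53, 2))                   # -14
print(srl(-53, 2))                   # 1073741810: the big value the question saw
print(sra(53, 2), srl(53, 2))        # 13 13: identical for non-negative inputs
```

The identity sra(-x) == -(srl x) fails because both shifts discard the low bits rather than rounding: sra rounds toward negative infinity (-53/4 becomes -14), while negating a logical shift of 53 would give -13.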

longitude reading measured in degrees with a 1x10^-7 degree lsb, signed 2’s complement

a 夏天 submitted on 2019-12-18 17:37:10
Question: I am receiving data from a GPS unit via a UDP packet. Lat/Lng values are in hex. Example data: 13BF71A8 = Latitude (33.1313576), BA18A506 = Longitude (-117.2790010). The documentation explains that longitude/latitude readings are measured in degrees with a 1x10^-7 degree LSB, signed 2's complement. For the latitude I can convert using this formula: 0x13BF71A8 = 331313576, and 331313576 * 0.0000001 = 33.1313576. This code works for Lat but not for Lng: function convertLat(h){ var latdec = parseInt(h,16); var lat =…
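The latitude formula works only because the sign bit happens to be clear; a conversion that handles both hemispheres must decode the hex as a signed 32-bit value before scaling. A Python sketch of the logic the asker's convertLat is missing (function name mine):

```python
def hex_to_degrees(h: str) -> float:
    """Decode an 8-digit hex string as a signed 32-bit two's-complement
    reading with a 1e-7-degree LSB."""
    v = int(h, 16)
    if v & 0x80000000:        # sign bit set -> negative reading
        v -= 1 << 32
    return v * 1e-7

print(hex_to_degrees("13BF71A8"))   # ~33.1313576  (sign bit clear)
print(hex_to_degrees("BA18A506"))   # ~-117.2790010 (sign bit set)
```

In JavaScript, the equivalent sign fix is (parseInt(h, 16) | 0) * 1e-7, since the bitwise OR coerces its operand to a signed 32-bit integer.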