two's complement

sign changes when going from int to float and back

Submitted by 谁说胖子不能爱 on 2019-11-27 11:53:55
Question: Consider the following code, which is an SSCCE of my actual problem:

    #include <iostream>
    int roundtrip(int x) {
        return int(float(x));
    }
    int main() {
        int a = 2147483583;
        int b = 2147483584;
        std::cout << a << " -> " << roundtrip(a) << '\n';
        std::cout << b << " -> " << roundtrip(b) << '\n';
    }

The output on my computer (Xubuntu 12.04.3 LTS) is:

    2147483583 -> 2147483520
    2147483584 -> -2147483648

Note how the positive number b ends up negative after the round trip. Is this behavior well-specified? I…

How does memset initialize an array of integers to -1?

Submitted by 孤人 on 2019-11-27 11:38:18
Question: The man page says this about memset:

    #include <string.h>
    void *memset(void *s, int c, size_t n);

The memset() function fills the first n bytes of the memory area pointed to by s with the constant byte c. It seems obvious that memset can't be used to initialize an int array as shown below:

    int a[10];
    memset(a, 1, sizeof(a));

because an int is represented by 4 bytes (say), and filling each byte with 1 does not give the desired value for the integers in array a. But I often see programmers use memset to set an int array…

two's complement

Submitted by 你说的曾经没有我的故事 on 2019-11-27 08:35:31
Question: As far as I know, the two's complement algorithm is:

    1. Represent the decimal number in binary.
    2. Invert all the bits.
    3. Add 1.

For the number 3, whose representation is 0000000000000011, the two's complement is 1111111111111101, which is -3. So far so good. But for the number 2, whose representation is 0000000000000010, the result I get is 1111111111111101, which isn't -2 but -3. What am I doing wrong?

Answer 1:

    0...0010  // 2
    1...1101  // Flip the…

two's complement of numbers in python

Submitted by 為{幸葍}努か on 2019-11-27 05:56:03
Question: I am writing code that works with negative and positive numbers, all 16 bits long, with the MSB being the sign bit, i.e. two's complement. This means the smallest number I can have is -32768, which is 1000 0000 0000 0000 in two's complement form. The largest number I can have is 32767, which is 0111 1111 1111 1111. The issue I am having is that Python represents negative numbers in the same binary notation as positive numbers, just with a minus sign in front, i.e. -16384 is displayed as…

How to prove that the C expressions -x, ~x+1, and ~(x-1) yield the same results?

Submitted by ╄→гoц情女王★ on 2019-11-27 04:06:40
Question: I want to know the logic behind this statement, and its proof: the C expressions -x, ~x+1, and ~(x-1) all yield the same results for any x. I can show this is true for specific examples. I think the way to prove it has something to do with the properties of two's complement. Any ideas?

Answer 1: Consider what you get when you add a number to its bitwise complement. The bitwise complement of an n-bit integer x has a 1 everywhere x has a 0, and vice versa. So it's clear to see:

    x + ~x = 0b11...11  (n…

How is overflow detected in two's complement?

Submitted by *爱你&永不变心* on 2019-11-27 02:27:27
Question: I see that when I subtract positive and negative numbers using two's complement, I get carries out. For example, if I subtract 1 from 2:

    2 = 0010
    1 = 0001 -> -1 = 1111
    2 + (-1) -> 0010 + 1111 = 10001

Here the result 10001 has a fifth bit on the left — is that an overflow? I've found these rules for detecting overflow with two's complement:

If the sum of two positive numbers yields a negative result, the sum has overflowed. If the sum of two negative numbers yields a positive result, the sum has…

Bitwise operations and shifts

Submitted by 无人久伴 on 2019-11-27 02:18:28
Question: I'm having some trouble understanding how and why this code works the way it does. My partner in this assignment finished this part, and I can't get hold of him to find out how and why it works. I've tried a few different things to understand it, but any help would be much appreciated. This code uses two's complement and a 32-bit representation.

    /*
     * fitsBits - return 1 if x can be represented as an
     *   n-bit, two's complement integer.
     *   1 <= n <= 32
     *   Examples: fitsBits(5,3) = 0, fitsBits(-4…

2's complement hex number to decimal in Java

Submitted by 元气小坏坏 on 2019-11-27 02:14:59
Question: I have a hex string that represents a two's complement number. Is there an easy way (libraries/functions) to translate the hex into a decimal without working directly with its bits? E.g., this is the expected output given the hex on the left:

    "0000" => 0
    "7FFF" => 32767 (largest positive number)
    "8000" => -32768 (most negative number)
    "FFFF" => -1

Thanks!

Answer 1: This seems to trick Java into converting the number without forcing a positive result:

    Integer.valueOf("FFFF", 16).shortValue(); // evaluates…

What is 2's Complement Number? [closed]

Submitted by 做~自己de王妃 on 2019-11-26 23:41:29
Question: What is a 2's complement number? Why do we take the 1's complement and add 1 to it? Why don't we subtract 1 after taking the 1's complement? Why do computers use 2's complement?

Answer 1: What is a 2's complement number? A complementary number system is used to represent negative numbers, and the 2's complement number system is the one used for negative binary numbers.

UPDATE
Q: What does the "2's complement system" say?
A: The negative equivalent of a binary number is its 2's complement (1's complement + 1). Note: 1 extra bit is…

How does the NEG instruction affect the flags on x86?

Submitted by 时间秒杀一切 on 2019-11-26 23:28:36
Question: The Intel Software Developer's Manual says this about the neg instruction:

    The CF flag set to 0 if the source operand is 0; otherwise it is set to 1.
    The OF, SF, ZF, AF, and PF flags are set according to the result.

I thought that AF and CF would be set as if neg %eax were replaced by

    not %eax        # bitwise negation
    add $1, %eax

but that's not the case: negating 0x6ffffef5 on a real CPU sets AF and CF.

Answer 1: neg sets all flags identically to what you'd get with a sub from 0. This sequence of…