integer-overflow

Warning: overflow in implicit constant conversion

…衆ロ難τιáo~ submitted on 2019-11-30 01:07:33
Question: In the following program, line 5 gives an overflow warning as expected, but surprisingly line 4 doesn't give any warning in GCC: http://www.ideone.com/U0BXn

    int main()
    {
        int i = 256;
        char c1 = i;    // line 4
        char c2 = 256;  // line 5
        return 0;
    }

I was thinking both lines should give an overflow warning. Or is there something I'm missing? The topic which led me to do this experiment is this: typedef type checking? There I said the following (which I deleted from my answer, because when I run…
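The excerpt stops before the accepted explanation, but the behaviour matches GCC's documented warning split: overflow in a constant conversion (line 5) is diagnosed by default via -Woverflow, while a potentially lossy conversion of a runtime value (line 4) is only diagnosed when -Wconversion is enabled. A minimal sketch, assuming a reasonably recent GCC:

    /* Compile with: gcc -Wconversion demo.c
     * c1's initializer warns only with -Wconversion ("may alter its value"),
     * because the value of i is unknown at compile time; c2's initializer
     * warns by default, because the constant 256 provably overflows char. */
    #include <stdio.h>

    int main(void)
    {
        int  i  = 256;
        char c1 = i;    /* diagnosed only with -Wconversion */
        char c2 = 256;  /* diagnosed by default (-Woverflow) */
        printf("%d %d\n", c1, c2);
        return 0;
    }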

Is this a JVM bug or “expected behavior”?

ぐ巨炮叔叔 submitted on 2019-11-29 20:14:30
I noticed some unexpected behavior (unexpected relative to my personal expectations), and I'm wondering whether there is a bug in the JVM or whether this is a fringe case where I don't understand some of the details of what exactly is supposed to happen. Suppose we had the following code in a main method by itself:

    int i;
    int count = 0;
    for (i = 0; i < Integer.MAX_VALUE; i += 2) {
        count++;
    }
    System.out.println(i++);

A naive expectation would be that this would print Integer.MAX_VALUE - 1, the largest even representable int. However, I believe integer arithmetic is supposed to "rollover"…
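Two facts the excerpt is building on, stated plainly: Java defines int arithmetic to wrap in two's complement, and since i starts at 0 and steps by 2 it only ever holds even values, so it can never equal the odd Integer.MAX_VALUE and the condition i < Integer.MAX_VALUE never becomes false; a strictly spec-following execution should never reach the println at all. This section's added examples are in C, so here is a hedged sketch of the wrap itself, simulated with unsigned arithmetic (a plain signed overflow would be undefined in C):

    /* Java-style wrapping 32-bit addition, modelled in C. The final cast
     * back to int32_t is implementation-defined in C99 but wraps on all
     * mainstream two's complement platforms. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static int32_t jadd(int32_t a, int32_t b)
    {
        return (int32_t)((uint32_t)a + (uint32_t)b);
    }

    int main(void)
    {
        /* 2147483646 + 2 wraps past the odd INT32_MAX to -2147483648 */
        printf("%" PRId32 "\n", jadd(INT32_MAX - 1, 2));
        return 0;
    }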

Incrementing an integer value beyond its integer limit - C#

我们两清 submitted on 2019-11-29 17:26:02
Question: I have a for loop which keeps incrementing an integer value until the loop completes. So if the limit number is a double variable and the incremented variable i is an integer, i can get incremented beyond its limits.

    double total = 0;
    double number = hugeValue;
    for (int i = 1; i <= number; i++)
    {
        total = total + i;
    }
    return total;

What happens to i if it exceeds its capacity? How does the value of i change? Will I get a runtime error? Thanks, NLV

Answer 1: Similar to the behaviour in some implementations of C…
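The answer is cut off, but C#'s documented behaviour is clear: in the default unchecked context, int arithmetic silently wraps, so once i passes int.MaxValue it becomes negative, i <= number stays true, and the loop never terminates; inside a checked block the same increment would instead throw System.OverflowException. Keeping this section's added examples in C, here is a sketch using GCC/Clang's checked-arithmetic builtin to detect the overflow rather than commit undefined behaviour:

    /* __builtin_add_overflow(a, b, &res) performs the addition and returns
     * true if the mathematical result did not fit (GCC >= 5, Clang). */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int i = INT_MAX - 2;
        for (;;) {
            int next;
            if (__builtin_add_overflow(i, 1, &next)) {
                printf("increment past INT_MAX detected at i = %d\n", i);
                break;
            }
            i = next;
        }
        return 0;
    }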

On-purpose int overflow

本秂侑毒 submitted on 2019-11-29 15:01:31
I'm using the hash function murmur2, which returns a uint64. I then want to store it in PostgreSQL, which only supports BIGINT (signed 64 bits). As I'm not interested in the number itself, only in the binary value (I use it as an id for detecting uniqueness, and since my set is of ~1000 values, a 64-bit hash is enough for me), I would like to convert it into an int64 by "just" changing the type. How does one do that in a way that pleases the compiler? You can simply use a type conversion:

    i := uint64(0xffffffffffffffff)
    i2 := int64(i)
    fmt.Println(i, i2)

Output:

    18446744073709551615 -1
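For readers coming from other languages: the Go conversion above keeps all 64 bits and merely reinterprets them, which is exactly what the question wants. A hedged C analogue, using memcpy type-punning (well-defined, whereas a direct cast of an out-of-range value is only implementation-defined in C99):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint64_t u = 0xffffffffffffffffULL;
        int64_t  s;
        memcpy(&s, &u, sizeof s);  /* same bits, different type */
        printf("%" PRIu64 " %" PRId64 "\n", u, s);  /* 18446744073709551615 -1 */
        return 0;
    }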

Difference between two large numbers C#

牧云@^-^@ submitted on 2019-11-29 14:29:33
There are already solutions to this problem for small numbers:

- Here: Difference between 2 numbers
- Here: C# function to find the delta of two numbers
- Here: How can I find the difference between 2 values in C#?

I'll summarise the answer to them all: Math.Abs(a - b). The problem is that when the numbers are large this gives the wrong answer (because of overflow). Worse still, if (a - b) = Int32.MinValue then Math.Abs crashes with an exception (because -Int32.MinValue is Int32.MaxValue + 1, which is not representable):

    System.OverflowException occurred
    HResult=0x80131516
    Message=Negating the minimum value of a twos complement number is invalid.
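The entry is truncated at the exception, but the standard fix follows directly from the sizes involved: widen to 64 bits before subtracting, so the difference of any two 32-bit ints (at most 2^32 - 1 in magnitude) always fits; in C# that is Math.Abs((long)a - (long)b). A hedged C equivalent:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Distance between two 32-bit ints, computed without overflow. */
    static uint32_t distance(int32_t a, int32_t b)
    {
        int64_t d = (int64_t)a - (int64_t)b;  /* cannot overflow int64_t */
        return (uint32_t)(d < 0 ? -d : d);    /* |d| <= 2^32 - 1 fits uint32_t */
    }

    int main(void)
    {
        printf("%" PRIu32 "\n", distance(INT32_MIN, INT32_MAX));  /* 4294967295 */
        return 0;
    }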

What's wrong with golang constant overflows uint64

你离开我真会死。 submitted on 2019-11-29 11:57:33
    userid := 12345
    did := (userid & ^(0xFFFF << 48))

When compiling this code, I got:

    ./xxxx.go:511: constant -18446462598732840961 overflows int

Do you know what the matter is with this and how to solve it? Thanks.

^(0xFFFF << 48) is an untyped constant, which in Go is an arbitrarily large value. 0xffff << 48 is 0xffff000000000000. When you complement it, you get -0xffff000000000001 (since with two's complement, -x = ^x + 1, or ^x = -(x + 1)). When you write userid := 12345, userid gets the type int. Then, when you try to AND (&) it with the untyped constant -0xffff000000000001, the compiler…
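The answer is cut off, but the error message already shows the rest of the story: the compiler must convert the untyped constant to userid's type, int, where -18446462598732840961 does not fit. The usual fix is to give the operands a concrete unsigned type (in Go, presumably something like uint64(userid) & ^uint64(0xFFFF<<48)). Here is the two's complement identity the answer leans on, plus the typed masking, checked in C (this section's language for added examples):

    /* ~x == -(x + 1) in two's complement; with a concretely typed 64-bit
     * value the mask is simply 0x0000ffffffffffff and nothing overflows. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t userid = 12345;
        uint64_t mask   = ~(0xFFFFULL << 48);  /* 0x0000ffffffffffff */
        uint64_t did    = userid & mask;       /* keep the low 48 bits */
        printf("mask=0x%016" PRIx64 " did=%" PRIu64 "\n", mask, did);
        return 0;
    }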

Clojure - Calculate with big numbers

孤者浪人 submitted on 2019-11-29 09:36:45
I want to calculate 1000! in Clojure; how can I do this without getting an integer-overflow exception? My factorial code right now is: (reduce * (range 1 1001)). You could use the *' operator, which supports arbitrary precision by automatically promoting the result to BigInt in case it would overflow:

    (reduce *' (range 1 1001))

Or put N at the end of a number literal, which makes it a BigInt:

    (reduce * (range 1N 1001N))

Or coerce the parameters to clojure.lang.BigInt:

    (reduce * (range (bigint 1) (bigint 1001)))

I.e., if you are working with a third-party library that doesn't use *':

    (defn factorial' [n]…

BCrypt says long, similar passwords are equivalent - problem with me, the gem, or the field of cryptography?

 ̄綄美尐妖づ submitted on 2019-11-29 07:55:02
I've been experimenting with BCrypt, and found the following. If it matters, I'm running ruby 1.9.2dev (2010-04-30 trunk 27557) [i686-linux].

    require 'bcrypt' # bcrypt-ruby gem, version 2.1.2

    @long_string_1 = 'f287ed6548e91475d06688b481ae8612fa060b2d402fdde8f79b7d0181d6a27d8feede46b833ecd9633b10824259ebac13b077efb7c24563fce0000670834215'
    @long_string_2 = 'f6ebeea9b99bcae4340670360674482773a12fd5ef5e94c7db0a42800813d2587063b70660294736fded10217d80ce7d3b27c568a1237e2ca1fecbf40be5eab8'

    def salted(string)
      @long_string_1 + string + @long_string_2
    end

    encrypted_password = BCrypt::Password.create…
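The excerpt ends before any answer, but the behaviour in the title has a well-known cause: bcrypt only uses the first 72 bytes of its input. @long_string_1 alone is 128 characters, so every salted(x) shares an identical 72-byte prefix and hashes identically, regardless of x. A hedged C sketch of just that prefix argument (no bcrypt library involved):

    /* Every salted(x) starts with the same 128-char @long_string_1, so the
     * 72 bytes bcrypt actually consumes are identical for all inputs. */
    #include <stdio.h>
    #include <string.h>

    static const char *long_string_1 =
        "f287ed6548e91475d06688b481ae8612fa060b2d402fdde8f79b7d0181d6a27d"
        "8feede46b833ecd9633b10824259ebac13b077efb7c24563fce0000670834215";

    int main(void)
    {
        char a[256], b[256];
        snprintf(a, sizeof a, "%spassword-one", long_string_1);
        snprintf(b, sizeof b, "%spassword-two", long_string_1);
        printf("first 72 bytes equal: %s\n",
               strncmp(a, b, 72) == 0 ? "yes" : "no");  /* prints "yes" */
        return 0;
    }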

Do C99 signed integer types defined in stdint.h exhibit well-defined behaviour in case of an overflow?

时光怂恿深爱的人放手 submitted on 2019-11-29 06:20:38
All operations on "standard" signed integer types in C (short, int, long, etc.) exhibit undefined behaviour if they yield a result outside of the [TYPE_MIN, TYPE_MAX] interval (where TYPE_MIN and TYPE_MAX are, respectively, the minimum and maximum values that can be stored by the specific integer type). According to the C99 standard, however, all intN_t types are required to have a two's complement representation:

    7.18.1.1 Exact-width integer types
    1. The typedef name intN_t designates a signed integer type with width N,
       no padding bits, and a two's complement representation.

Thus, int8…
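The excerpt is truncated, but the distinction the question is circling is worth spelling out: 7.18.1.1 constrains only the representation of intN_t, not the behaviour of arithmetic on it. The exact-width names are typedefs for ordinary signed types, so overflow in an arithmetic operation remains undefined, while the separate case of converting an out-of-range value to a signed type is implementation-defined (C99 6.3.1.3). A hedged sketch of the two cases:

    #include <stdint.h>

    int main(void)
    {
        int8_t a = INT8_MAX;
        /* The + happens in int after integer promotion, so nothing
         * overflows here; converting 128 back to int8_t is
         * implementation-defined in C99 (it wraps to -128 in practice). */
        a = (int8_t)(a + 1);

        int64_t b = INT64_MAX;
        /* Here the addition itself would overflow int64_t: undefined
         * behaviour, the two's complement representation guarantee
         * notwithstanding. */
        /* b = b + 1; */
        (void)a; (void)b;
        return 0;
    }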

Is there some meaningful statistical data to justify keeping signed integer arithmetic overflow undefined?

我是研究僧i submitted on 2019-11-29 01:32:01
Question: The C Standard explicitly specifies signed integer overflow as having undefined behavior. Yet most CPUs implement signed arithmetic with defined semantics for overflow (except maybe for division overflow: x / 0 and INT_MIN / -1). Compiler writers have been taking advantage of the undefinedness of such overflows to add more aggressive optimisations that tend to break legacy code in very subtle ways. For example, this code may have worked on older compilers but does not anymore on current…
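The excerpt truncates right before its example, so here is a hedged sketch of the classic pattern such optimisations break (not necessarily the snippet the question had in mind): a wrap-around check written in terms of the overflow itself, which the optimizer may fold to false precisely because signed overflow "cannot happen":

    #include <limits.h>
    #include <stdio.h>

    /* Intended as an overflow check, but x + 1 is UB when x == INT_MAX,
     * so GCC/Clang at -O2 may legally reduce this function to `return 0;`. */
    int naive_check(int x)
    {
        return x + 1 < x;
    }

    /* The portable version compares against the limit and never performs
     * the overflowing addition. */
    int portable_check(int x)
    {
        return x == INT_MAX;
    }

    int main(void)
    {
        printf("naive(0)=%d portable(INT_MAX)=%d\n",
               naive_check(0), portable_check(INT_MAX));
        return 0;
    }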