perl6: Cannot unbox 65536 bit wide bigint into native integer

广开言路 · asked 2021-01-18 03:50

I try some examples from Rosettacode and encounter an issue with the provided Ackermann example: When running it "unmodified" (I replaced the utf-8 variable names by latin-1 ones) and adding a line that computes A(4,3), I get the error in the title: Cannot unbox 65536 bit wide bigint into native integer.

2 Answers
  • 2021-01-18 04:24

    Please read JJ's answer first. It's breezy and led to this answer which is effectively an elaboration of it.

    TL;DR A(4,3) is a very big number, one that cannot be computed in this universe. But raku(do) will try. As it does, you will blow past reasonable limits related to memory allocation and indexing if you use the caching version, and limits related to numeric calculation if you don't.


    I try some examples from Rosettacode and encounter an issue with the provided Ackermann example

    Quoting the task description with some added emphasis:

    Arbitrary precision is preferred (since the function grows so quickly)

    raku's standard integer type Int is arbitrary precision. The raku solution uses it to compute the most advanced answer possible. It only fails when you make it try to do the impossible.
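
    For instance (nothing Ackermann specific, just plain Int arithmetic on any recent Rakudo):

    # Int never overflows or silently loses precision
    say 2 ** 256;                    # 115792089237316195423570985008687907853269984665640564039457584007913129639936
    say (10 ** 40 + 1) - 10 ** 40;   # 1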

    When running it "unmodified" (I replaced the utf-8 variable names by latin-1 ones)

    Replacing the variable names is not a significant change.

    But adding the A(4,3) line shifted the code from being computable in reality to not being computable in reality.

    The example you modified has just one explanatory comment:

    Here's a caching version of that ... to make A(4,2) possible

    Note that the A(4,2) solution is nearly 20,000 digits long.

    If you look at the other solutions on that page most don't even try to reach A(4,2). There are comments like this one on the Phix version:

    optimised. still no bignum library, so ack(4,2), which is power(2,65536)-3, which is apparently 19729 digits, and any above, are beyond (the CPU/FPU hardware) and this [code].

    A solution for A(4,2) is the most advanced possible.
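
    Since A(4,2) has the closed form 2**65536 - 3 quoted above, the digit count is easy to double check:

    say (2 ** 65536 - 3).chars;   # 19729 decimal digits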

    A(4,3) is not computable in practice

    To quote Academic Kids: Ackermann function:

    Even for small inputs (4,3, say) the values of the Ackermann function become so large that they cannot be feasibly computed, and in fact their decimal expansions cannot even be stored in the entire physical universe.

    So computing A(4,3).say is impossible (in this universe).

    It must inevitably lead to an overflow of even arbitrary precision integer arithmetic. It's just a matter of when and how.

    Cannot unbox 65536 bit wide bigint into native integer

    The first error message mentions this line of code:

    proto A(Int \m, Int \n) { (state @)[m][n] //= {*} }
    

    The state @ is an anonymous state array variable.
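
    For orientation, here is a cut down sketch (latin-1 names, and without the closed form shortcuts the full Rosetta Code entry adds for small m, which are what make A(4,2) reachable in practice) of how that caching proto fits together with the multi candidates implementing the Ackermann recurrence:

    # proto: memoize every result in an anonymous state array, keyed by [m][n]
    proto A(Int \m, Int \n) { (state @)[m][n] //= {*} }

    # multis: the textbook Ackermann recurrence
    multi A(0,      Int \n) { n + 1 }
    multi A(Int \m, 0     ) { A(m - 1, 1) }
    multi A(Int \m, Int \n) { A(m - 1, A(m, n - 1)) }

    say A(3, 3);   # 61

    Every call goes through the proto first, so any (m, n) pair computed once is served from the array; it is that array subscripting which later fails for A(4,3).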

    By default @ variables use the default concrete type for raku's abstract array type. This default array type provides a balance between implementation complexity and decent performance.

    While computing A(4,2) the indexes (m and n) remain small enough that the computation completes without overflowing the default array's indexing limit.

    This limit is a "native" integer (note: not a "natural" integer). A "native" integer is what raku calls the fixed width integers supported by the hardware it's running on, typically a long long which in turn is typically 64 bits.

    A 64 bit wide index can handle indices up to 9,223,372,036,854,775,807.
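
    That bound is simply 2**63 - 1, the largest signed 64 bit value:

    say 2 ** 63 - 1;   # 9223372036854775807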

    But in trying to compute A(4,3) the algorithm generates a 65536 bit (8192 byte) wide integer index. Such an integer could be as big as 2**65536 - 1, a 19,729 decimal digit number. But the biggest index allowed is a 64 bit native integer. So unless you comment out the caching line that uses an array, then for A(4,3) the program ends up throwing the exception:

    Cannot unbox 65536 bit wide bigint into native integer
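
    The same class of error can be reproduced in isolation, without any Ackermann code, by forcing a big Int into a native int container; a minimal sketch (the exact wording of the error comes from MoarVM):

    # A(4,2) is a 65536 bit wide Int; a native int holds at most 64 bits
    my int $native = 2 ** 65536 - 3;
    # dies: Cannot unbox 65536 bit wide bigint into native integer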

    Limits to allocations and indexing of the default array type

    As already explained, there is no array that could be big enough to help fully compute A(4,3). In addition, a 64 bit integer is already a pretty big index (9,223,372,036,854,775,807).

    That said, raku can accommodate other array implementations such as Array::Sparse so I'll discuss that briefly below because such possibilities might be of interest for other problems.

    But before discussing bigger arrays, running the code below on tio.run shows the practical limits for the default array type on that platform:

    my @array;
    @array[2**29]++; # works
    @array[2**30]++; # could not allocate 8589967360 bytes
    @array[2**60]++; # Unable to allocate ... 1152921504606846977 elements
    @array[2**63]++; # Cannot unbox 64 bit wide bigint into native integer
    

    (Comment out error lines to see later/greater errors.)

    The "could not allocate 8589967360 bytes" error is a MoarVM panic. It's a result of tio.run refusing a memory allocation request.

    I think the "Unable to allocate ... elements" error is a raku level exception that's thrown as a result of exceeding some internal Rakudo implementation limit.

    The last error message shows the indexing limit for the default array type even if vast amounts of memory were made available to programs.

    What if someone wanted to do larger indexing?

    It's possible to create/use other @ (does Positional) data types that support things like sparse arrays etc.

    And, using this mechanism, it's possible that someone could write an array implementation that supports larger integer indexing than is supported by the default array type (presumably by layering logic on top of the underlying platform's instructions; perhaps the Array::Sparse I linked above does).

    If such an alternative were called BigArray then the cache line could be replaced with:

    my @array is BigArray;
    proto A(Int \m, Int \n) { @array[m][n] //= {*} }
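
    To my knowledge there is no module actually named BigArray; but as a purely hypothetical sketch of the idea above, a custom Positional type can accept arbitrarily large Int subscripts by storing elements in a hash (all names here are illustrative, not an existing API):

    class BigArray does Positional {
        has %!store;   # plain hash: the (arbitrary precision) Int subscript is used, stringified, as the key

        method AT-POS(Int:D \idx)     is rw { %!store{idx}         }
        method ASSIGN-POS(Int:D \idx, \val) { %!store{idx} = val   }
        method EXISTS-POS(Int:D \idx)       { %!store{idx}:exists  }
    }

    my @cache := BigArray.new;       # bind a Positional-doing object to an @ variable
    @cache[2 ** 65536 - 3] = 42;     # the subscript is never unboxed to a native int
    say @cache[2 ** 65536 - 3];      # 42

    Binding with := is used here just to keep the sketch self contained; it sidesteps whether a given custom type supports the `is BigArray` trait form shown above. And, as explained earlier, even such a type cannot make A(4,3) itself computable.
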
  • 2021-01-18 04:31

    Array subscripts use native ints; that's why you get the error in line 3, when you use the big ints as array subscripts. You might have to define a new BigArray that uses Ints as array subscripts.

    The second problem arises in the ** operator: the result is a Real, and when the low-level operation returns a Num, it throws an exception. https://github.com/rakudo/rakudo/blob/master/src/core/Int.pm6#L391-L401

    So creating a BigArray might not be helpful anyway. You'll have to create your own ** too, one that always works with Int; but you seem to have hit the (not so infinite) limit of the infinite precision Ints.
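
    A hedged illustration of that second failure mode (the exact threshold and message depend on the Rakudo/MoarVM version): once the exponent is too large for the bigint layer, ** on two Ints no longer yields an Int, and Rakudo reports an overflow rather than silently handing back a Num:

    say 2 ** 10_000;      # fine: a 3011 digit Int
    say 2 ** (2 ** 64);   # expected to die with "Numeric overflow" (X::Numeric::Overflow)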
