Why is Erlang crashing on large sequences?

失恋的感觉 2021-02-06 13:50

I have just started learning Erlang and am trying out some Project Euler problems. However, I seem to be unable to do any operations on large sequences without crashing the Erlang shell.

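Judging by the answers, the failing expression was presumably an eager one along these lines (the exact call is an assumption; the 64000000 figure appears in the answers below):

    %% Builds the entire 64000000-element list in the shell's heap at once.
    %% On a memory-constrained system the emulator itself can abort with an
    %% eheap_alloc "Cannot allocate ... bytes of memory" error.
    1> lists:seq(1, 64000000).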
4 Answers
  • 2021-02-06 13:58

    Also, both Windows and Linux limit the maximum amount of memory a process image can occupy. As I recall, on Linux it is half a gigabyte.

    The real question is why these operations aren't being done lazily ;)

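    To see how close the emulator is to such a limit, the standard erlang:memory/1 BIF reports its current allocation from the shell; a quick check (values are in bytes):

    1> erlang:memory(total).       % total bytes allocated by the emulator
    2> erlang:memory(processes).   % portion used by Erlang processes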
  • 2021-02-06 14:15

    Your OS may have a default limit on the size of a user process. On Linux you can change this with ulimit.

    You probably want to iterate over these 64000000 numbers without needing them all in memory at once. Lazy lists let you write code similar in style to the list-all-at-once code:

    %% A lazy sequence: instead of building the list, return a zero-argument
    %% fun (a thunk). Forcing it yields [] or [Head | NextThunk].
    -module(lazy).
    -export([seq/2]).
    
    seq(M, N) when M =< N ->
        fun() -> [M | seq(M+1, N)] end;
    seq(_, _) ->
        fun() -> [] end.
    
    1> Ns = lazy:seq(1, 64000000).
    #Fun<lazy.0.26378159>
    2> hd(Ns()).
    1
    3> Ns2 = tl(Ns()).
    #Fun<lazy.0.26378159>
    4> hd(Ns2()).
    2
    
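    To consume such a lazy list in constant space, fold over it by forcing one thunk at a time; a minimal sketch of a helper you could add to the module above (foldl/3 is my name for it, not part of the code shown):

    %% Fold over a lazy list built by seq/2, forcing one cell per step.
    %% Tail-recursive, so memory stays flat however long the sequence is.
    foldl(F, Acc, Lazy) ->
        case Lazy() of
            []      -> Acc;
            [H | T] -> foldl(F, F(H, Acc), T)
        end.

    With foldl/3 exported as well, lazy:foldl(fun(X, S) -> X + S end, 0, lazy:seq(1, 64000000)) sums all 64000000 numbers without ever materializing the whole list.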
  • 2021-02-06 14:15

    Possibly a noob answer (I'm a Java dev), but the JVM artificially limits the amount of memory available in order to help detect memory leaks more easily. Perhaps Erlang has similar restrictions in place?

  • 2021-02-06 14:16

    This is a feature. We do not want one process to consume all the memory. It is like the fuse box in your house: there for the safety of us all.

    You have to know Erlang's recovery model to understand why they let the process just die.

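    If you want the fuse to blow for a single process rather than the whole node, OTP 19 and later let you cap one process's heap; a minimal sketch (the 1000000-word limit is an arbitrary example):

    %% On breaching max_heap_size the runtime kills only this process;
    %% the rest of the node keeps running.
    1> Pid = spawn(fun() ->
                       process_flag(max_heap_size,
                                    #{size => 1000000,        % limit in words
                                      kill => true,           % kill on breach
                                      error_logger => true}), % log the event
                       lists:seq(1, 64000000)                 % blows the cap
                   end).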