Question
Every time I evaluate the following expression I get the value 10.
(((lambda (x) (lambda () (set! x (+ x 10)) x)) 0))
However, if I just abstract the above procedure with a name, then every time I call foo the value increments by 10!!
(define foo ((lambda (x) (lambda () (set! x (+ x 10)) x)) 0))
Can anybody please explain this?
Answer 1:
The function you are calling is a counter that returns a number 10 higher every time it's called.
In the first case, each time you are creating a new function, immediately calling it once, and then discarding it. So every time you are calling a new instance of this counter for the first time, and it returns 10.
In the second case, you create the function once, assign it to a variable, and call that same function repeatedly. Since you are calling the same function, it returns 10, 20, ... on successive calls.
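A quick REPL sketch of the difference (results shown as comments):

;; A fresh closure is created, called once, and discarded each time:
(((lambda (x) (lambda () (set! x (+ x 10)) x)) 0))   ; => 10
(((lambda (x) (lambda () (set! x (+ x 10)) x)) 0))   ; => 10 again -- a brand-new x every time

;; The same closure, and therefore the same x, is reused:
(define foo ((lambda (x) (lambda () (set! x (+ x 10)) x)) 0))
(foo)   ; => 10
(foo)   ; => 20
(foo)   ; => 30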
Answer 2:
newacct is correct, but I would like to go into (a lot) more detail, since this is something that just blew my mind pretty recently.
I'm going to use the terms 'environment' and 'scope' pretty loosely and to mean essentially the same thing. Remember that Scheme is a lexically scoped language.
When scheme evaluates an expression it will look in its current environment for the values of any variables in the expression. If it doesn't find any in the current environment, it will look in the parent environment. If the value is not in the parent environment then it will look in the next level up and so on until it reaches the top (global) level where it will either find the value or throw an "unbound variable" error.
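A quick sketch of that lookup chain (make-adder and the variable names are just for illustration):

(define y 1)                     ; found at the top (global) level

(define make-adder
  (lambda (z)                    ; z lives in the environment created by calling make-adder
    (lambda (w) (+ w y z))))     ; w is local; y and z are found in enclosing environments

((make-adder 2) 4)   ; => 7 -- w is local, z comes from the parent frame, y from the global one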
Anytime you call define, you associate a symbol with a value in that environment's symbol table. So if you call define at the top level, an entry is added to the global symbol table. If you call define in the body of a procedure, an entry is added to the symbol table of that procedure.
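A small sketch of where those entries land (top, helper, and local-val are hypothetical names):

(define top 1)            ; top-level define: entry goes into the global symbol table

(define (helper)
  (define local-val 2)    ; define inside a procedure: entry goes into helper's own table
  (+ top local-val))

(helper)        ; => 3
;; local-val    ; => "unbound variable" error at the top level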
A good way to think about calling define on a procedure is that you are creating an entry in the symbol table that consists of the parameters, body, and environment of that procedure. For example, the procedure square would have an entry something like this:
(define a 3)

(define (square x)
  (* x x))

GLOBAL
=================
a      -|- 3
        |
square -|- {x}
        |  {(* x x)}
        |  {GLOBAL} ---> all the things defined on the global table
Then if I were to call (square a), the interpreter would first look in the environment in which square is defined, and it would find that a is associated with the value 3. Then x -> 3 within the body of square and the procedure returns 9. Cool, makes sense.
Things get a little screwier when we start defining helper procedures within procedures, but all you really need to remember is that if the interpreter can't find anything associated with a symbol in the current environment, it will move up levels of scope until it does. Also, it will always stop at the first 'match'. So if there is a local x, it will prefer it over the global x (rather, it will use the local x without ever looking for a global one).
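For example (x and g are just illustrative names here):

(define x 100)          ; global x

(define (g x)           ; the parameter x shadows the global x inside g
  x)

(g 5)   ; => 5   -- the local x wins; the global one is never consulted
x       ; => 100 -- the global binding is untouched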
Next, remember that define just adds names to the symbol table, but set! is a mutator that actually changes the value with which a symbol is associated.
So (define b "blah") puts an entry in the symbol table: b => "blah". Nothing crazy. set! will change the actual value:

(set! b "foo")
b => "foo"

but set! can't add anything to the table: (set! c "bar") => UNBOUND VARIABLE C.
This is the most important difference: set! acts like any other procedure in that if it doesn't find the variable in the current scope, it will check progressively higher levels until it finds a match (or throws an error), but define always adds a binding to the scope in which it is called.
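A short sketch of that difference in behavior (n, bump!, and shadow are hypothetical names):

(define n 0)

(define (bump!)
  (set! n (+ n 10)))   ; no local n, so set! climbs up and mutates the global n

(define (shadow)
  (define n 99)        ; define creates a brand-new binding in shadow's own scope
  n)

(bump!)
n          ; => 10
(shadow)   ; => 99
n          ; => 10 -- the global n is untouched by shadow's local define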
Alright, so you understand the difference between define and set!. Good. Now on to the question.
The expression (((lambda (x) (lambda () (set! x (+ x 10)) x)) 0)), as newacct pointed out, is going to return the same value each time because you are calling a new procedure each time. However, if you name it, you can keep track of the environment created by calling the procedure.
(define foo                  <--- associated name on the symbol table
  ((lambda (x)               <--- scope where x is defined
     (lambda ()                   \
       (set! x (+ x 10))          |--- body
       x))                        /
   0))                       <--- initial value of x
So the inner lambda exists inside the environment created by the first one, where the symbol x exists with an initial value of 0. Then set! looks for an entry in the symbol table for x and finds one in the next level up. Once it finds the entry it changes it, in this case adding 10 to whatever value it finds there. The really cool part is that since you associated the whole thing with a name in the global symbol table, that environment continues to exist after each call! This is why we can do cool things like implement message-passing objects to keep track of and manipulate data!
Also, the let special form was created for this purpose, and may be a more intuitive way to structure this. It would look like this:
(define foo              <--- associated name
  (let ((x 0))           <--- scope where x is defined & initial x value
    (lambda ()                \
      (set! x (+ x 10))       |--- body
      x)))                    /
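Either version behaves the same way: each call finds the same saved x and bumps it by 10.

(foo)   ; => 10
(foo)   ; => 20
(foo)   ; => 30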
Source: https://stackoverflow.com/questions/9955074/scheme-assignment