I've never quite gotten my head around nesting functions and passing arguments by reference. My strategy is typically to do something like get('variabletopassbyreference').
Also, pass the environment that the variables are located in. Note that parent.frame()
refers to the environment in the currently running instance of the caller.
test1 <- function(a1, b1, env = parent.frame()) {
  a <- get(a1, env)   # look up the variable whose name is in a1
  b <- get(b1, env)   # look up the variable whose name is in b1
  c <- get('c', env)  # get() searches env and its ancestors
  testvalue <- c * a * b
  testvalue
}
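Here test2 is the function from the question; for the call below to return 6 it is presumably along these lines (a sketch, since the question's exact body is not reproduced here):

test2 <- function() {
  a <- 1
  b <- 2
  test1('a', 'b')
}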
c <- 3
test2() # test2 as in question
## 6
Here a and b are in env. c is not in env, but it is in an ancestor of env, and get looks through ancestors as well.
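A minimal base R illustration of that ancestor lookup (the name e is just for demonstration):

e <- new.env(parent = globalenv())
c <- 3
exists('c', envir = e, inherits = FALSE)  # FALSE: 'c' is not in e itself
get('c', envir = e)                       # 3: found in an ancestor of e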
Added: Note that R formulas can be used to pass variable names together with the environment they belong to:
test1a <- function(formula) {
  v <- all.vars(formula)  # names of all variables appearing in the formula
  values <- sapply(v, get, environment(formula))  # look them up in the formula's environment
  prod(values)
}

test2a <- function() {
  a <- 1
  b <- 2
  test1a(~ a + b + c)
}
c <- 3
test2a()
## 6
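This works because a formula records the environment in which it was created, so it carries both the variable names and a place to look them up. A minimal sketch (the name f is just for illustration):

f <- local({ a <- 1; ~ a + b })
all.vars(f)                       # "a" "b"
get('a', envir = environment(f))  # 1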
Since you are asking: this definitely looks like bad design to me. The recommended approach is to stick to R's pass-by-value semantics and, as much as possible, make every function take everything it uses as arguments:
test1 <- function(a1, b1, c1 = 1) {
  testvalue <- c1 * a1 * b1
  testvalue
}

test2 <- function(cc = 1) {
  a <- 1
  b <- 2
  test1(a1 = a, b1 = b, c1 = cc)
}

cc <- 3
test2(cc = cc)
## 6
(I replaced c with cc since c is the name of a base function, hence a bad idea to use as a variable name.)
A less acceptable approach, but maybe closer to what you have, is to not pass all arguments and let R find the missing ones by lexical scoping, i.e. in the environment where the function was defined:
test1 <- function(a1, b1) {
  testvalue <- cc * a1 * b1  # cc is a free variable, found in test1's defining environment
  testvalue
}

test2 <- function() {
  a <- 1
  b <- 2
  test1(a, b)
}

cc <- 3
test2()
## 6
If for some reason the first approach does not work for you, please explain why so I get a chance to maybe convince you otherwise. It is the recommended way of programming in R.
Following on the discussion and your edit, I'll recommend you look at the proto package as an alternative to get and assign. Essentially, proto objects are environments, so it's nothing you can't do with base R, but it helps make things a bit cleaner:
test1 <- function(x) {
  testvalue <- x$c * x$a * x$b
  x$a <- 3.5  # environments have reference semantics: this modifies x in place
  testvalue
}

test2 <- function(x) {
  x$a <- 1
  x$b <- 2
  cat(x$a, '\n')
  test1(x)
  cat(x$a, '\n')
}

library(proto)
x <- proto(c = 3)
test2(x)
## 1
## 3.5
From a programming point of view, test1 and test2 are functions with side effects (they modify the object x). Beware that it's a risky practice.
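Since proto objects are environments, the same side effect can be reproduced with a plain environment in base R (a minimal sketch, names e and f chosen for illustration):

e <- new.env()
e$a <- 1
f <- function(env) env$a <- 99  # environments are not copied on function calls
f(e)
e$a
## 99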
Or maybe a better approach is to make test1 and test2 methods of a class; then it is acceptable for them to modify the instance they are running on:
library(proto)

x <- proto()  # defines a class
x$test1 <- function(.) {
  testvalue <- .$c * .$a * .$b
  .$a <- 3.5
  testvalue
}
x$test2 <- function(.) {
  .$a <- 1
  .$b <- 2
  cat(.$a, '\n')
  .$test1()
  cat(.$a, '\n')
}

y <- x$proto(c = 3)  # an instance of the class
y$test2()
## 1
## 3.5
If you are not interested in using a third-party package (proto), then look at R's built-in support for classes (setClass, setRefClass). I do believe using an object-oriented design is the right approach given your specs.
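For instance, here is a rough sketch of the proto version rewritten with Reference Classes (setRefClass); the class name Test and the choice to keep c as a field name are mine, for illustration only:

Test <- setRefClass('Test',
  fields = list(a = 'numeric', b = 'numeric', c = 'numeric'),
  methods = list(
    test1 = function() {
      testvalue <- c * a * b
      a <<- 3.5  # <<- assigns to the field, modifying the instance
      testvalue
    },
    test2 = function() {
      a <<- 1
      b <<- 2
      cat(a, '\n')
      test1()
      cat(a, '\n')
    }
  )
)

y <- Test$new(c = 3)
y$test2()
## 1
## 3.5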