Question
I'm not very familiar with the optim function, and I wanted to get the following information from its results: a) how many iterations were needed to reach the result? and b) how to plot the sequence of partial solutions, that is, the solution obtained at the end of each iteration?
My code so far looks like this:
f1 <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  x1^2 + 3*x2^2
}
res <- optim(c(1,1), f1, method="CG")
How can I modify it to get this further information?
Thanks in advance
Answer 1:
You could modify your function to store each set of parameter values passed into it in a global list.
i <- 0
vals <- list()
f1 <- function(x) {
  i <<- i+1
  vals[[i]] <<- x
  x1 <- x[1]
  x2 <- x[2]
  x1^2 + 3*x2^2
}
res <- optim(c(1,1), f1, method="CG")
Now if you examine i and vals after running optim, you can see what happened. If you want to see the values while optim is running, throw a print statement into the function as well.
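Putting the pieces together, the recorded points can be stacked into a matrix and overlaid on a contour plot of the objective. This is a sketch building on the tracking code above; the plotting range and colors are arbitrary choices:

```r
# Track every point optim evaluates (as in the answer above)
i <- 0
vals <- list()
f1 <- function(x) {
  i <<- i + 1
  vals[[i]] <<- x
  x[1]^2 + 3 * x[2]^2
}
res <- optim(c(1, 1), f1, method = "CG")

# Stack the recorded points into a matrix: one row per function evaluation
path <- do.call(rbind, vals)

# Overlay the trajectory on a contour plot of f1
xs <- seq(-1.5, 1.5, length.out = 100)
ys <- seq(-1.5, 1.5, length.out = 100)
zs <- outer(xs, ys, function(a, b) a^2 + 3 * b^2)
contour(xs, ys, zs, xlab = "x1", ylab = "x2")
lines(path[, 1], path[, 2], type = "b", col = "red")
points(res$par[1], res$par[2], pch = 19)
```

Note that this records every evaluation of f1 (including those used for finite-difference gradients), not one point per iteration, so the path is denser than the iteration count suggests.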
Answer 2:
Passing trace=1 as a control parameter to optim gives you more detailed information about the progress of the optimization:
res <- optim(c(1,1), f1, method="CG", control=list(trace=1))
# Conjugate gradients function minimizer
# Method: Fletcher Reeves
# tolerance used in gradient test=3.63798e-12
# 0 1 4.000000
# parameters 1.00000 1.00000
# * i> 1 4 0.480000
# parameters 0.60000 -0.20000
# i> 2 6 0.031667
# ......
# * i> 13 34 0.000000
# parameters -0.00000 0.00000
# 14 34 0.000000
# parameters -0.00000 0.00000
# Exiting from conjugate gradients minimizer
# 34 function evaluations used
# 15 gradient evaluations used
However, it seems that this information is only written to standard output, so you will have to use sink to redirect the output to a text file, and then do some editing to extract the parameter values for plotting.
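A sketch of that workflow follows. The file name optim_trace.txt is arbitrary, and the parsing assumes the "parameters ..." line format shown in the trace above, which may vary between platforms and R versions:

```r
f1 <- function(x) x[1]^2 + 3 * x[2]^2

# Redirect the trace output to a file, then restore normal output
sink("optim_trace.txt")
res <- optim(c(1, 1), f1, method = "CG", control = list(trace = 1))
sink()

# Keep only the "parameters ..." lines and parse the two numbers on each
trace_lines <- readLines("optim_trace.txt")
param_lines <- grep("^parameters", trimws(trace_lines), value = TRUE)
params <- do.call(rbind, lapply(param_lines, function(l) {
  as.numeric(strsplit(trimws(sub("parameters", "", l)), "\\s+")[[1]])
}))
```

Each row of params is then one intermediate solution, ready for plotting with lines(params[, 1], params[, 2]).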
Answer 3:
If all you want is the number of function evaluations, see the $counts element of the result. From ?optim:
counts: A two-element integer vector giving the number of calls to
‘fn’ and ‘gr’ respectively. This excludes those calls needed
to compute the Hessian, if requested, and any calls to ‘fn’
to compute a finite-difference approximation to the gradient.
For the partial solutions you'll need @Dason's solution or something like it.
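For example, with the objective from the question (the exact counts depend on the method and starting point, so none are shown here):

```r
f1 <- function(x) x[1]^2 + 3 * x[2]^2
res <- optim(c(1, 1), f1, method = "CG")

# res$counts is a named integer vector
res$counts[["function"]]  # number of calls to fn
res$counts[["gradient"]]  # number of calls to gr (finite-difference here, since no gr was supplied)
```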
Source: https://stackoverflow.com/questions/23975101/getting-more-details-from-optim-function-from-r