I'm using nlsLM() to make a model of a power function, but I need to summarize my data within the function call to find the appropriate coefficient and exponent.
What you want isn't clear from the question. Do you want to fit a separate model for every unique month in your data? Or do you want to fit one model for all of the data and then take monthly averages of the value of MW^b?
Here's one way to do the latter.
require(minpack.lm)
require(tidyverse)
require(broom)
dat <- structure(...) # provided in the question
predictions <-
  dat %>%
  ungroup() %>%
  mutate(row = row_number()) %>%
  do(augment(nlsLM(
    formula = value ~ a * MW^b + 0 * row,
    data = .,
    start = list(a = 100000, b = 1/3),
    upper = c(Inf, 1),
    lower = c(0, 1/5)
  )))
joined <-
  dat %>%
  mutate(row = row_number()) %>%
  left_join(predictions, by = c('MW', 'value', 'row')) %>%
  select(-row)
joined %>%
  group_by(mon) %>%
  mutate(monthly_avg_prediction = mean(.fitted))
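If you want one row per month instead of the average repeated on every original row, summarize() can swap in for mutate(). A minimal sketch with a hypothetical stand-in for joined (the real one comes from the pipeline above):

```r
library(dplyr)

# Hypothetical stand-in for 'joined', just to show the shape of the result:
joined <- data.frame(mon = c("Jan", "Jan", "Feb"),
                     .fitted = c(10, 20, 40))

monthly <- joined %>%
  group_by(mon) %>%
  summarize(monthly_avg_prediction = mean(.fitted))
# one row per month instead of one row per observation
```

summarize() collapses each group to a single row, whereas mutate() keeps every original row and repeats the group mean alongside it.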
Notes:

- broom converts the fitted-model objects returned by lm, nls, nlsLM, etc. into dataframes, so you don't have to memorize or re-look-up the function-specific structure of each model object (e.g. model$params[['estimate']][[1]] or similar); the model results are already in R-standard dataframe format.
- row_number() and left_join() are in there because, in the general case, augment will throw away data from the original dataframe that is not used in the model prediction, and it will not work well if there are repeated values in the data that is used. The row column (and the 0 * row term in the formula) provides a unique key for joining the predictions back.
- The .fitted column is generated by broom's augment function. It is the model prediction at the indicated datapoint.
- The last step computes the monthly_avg_prediction column of the joined dataframe. But note that this represents a single, global model, fit on all the data, with predictions from that model averaged by month.
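For completeness, if the former case was intended (a separate power fit per month), a sketch along these lines might work. The toy dat below is a hypothetical stand-in for the question's data (columns mon, MW, value), and the starting values and bounds are illustrative only:

```r
library(minpack.lm)
library(dplyr)
library(broom)

# Hypothetical stand-in for the question's dat:
set.seed(1)
dat <- data.frame(mon = rep(c("Jan", "Feb"), each = 30),
                  MW  = runif(60, 1, 100))
dat$value <- 5 * dat$MW^0.5 + rnorm(60, sd = 0.1)

# Fit one power model per month; tidy() turns each fit into
# one row per coefficient (columns term, estimate, std.error, ...).
monthly_fits <-
  dat %>%
  group_by(mon) %>%
  do(tidy(nlsLM(
    value ~ a * MW^b,
    data  = .,
    start = list(a = 1, b = 1/3),
    lower = c(0, 1/5),
    upper = c(Inf, 1)
  )))
```

In current dplyr, do() is superseded; group_modify(~ tidy(nlsLM(...))) is the equivalent pattern.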