rstan

How to efficiently parallelize brms::brm?

Submitted by 女生的网名这么多〃 on 2019-12-07 19:46:12
Problem summary: I am fitting a brms::brm_multiple() model to a large dataset in which missing data have been imputed using the mice package. The size of the dataset makes parallel processing very desirable. However, it isn't clear to me how best to use the compute resources, because I am unclear about how brms divides sampling on the imputed datasets among cores. How can I choose the following to maximize efficient use of compute resources? number of imputations (m), number of chains (
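One way to reason about this: brm_multiple() fits the model separately to each imputed dataset, so the total number of MCMC chains is m × chains, and the core count determines how many of those can run concurrently. A small back-of-the-envelope sketch (the numbers below are hypothetical, not from the question, and the "waves" framing is a simplification of how brms actually schedules work):

```r
# Hypothetical sketch: how many chains brm_multiple() launches in total,
# and how many sequential "waves" of chains a given core count implies.
m      <- 5   # number of mice imputations
chains <- 4   # chains per imputed dataset
cores  <- 8   # cores made available via brm_multiple(..., cores = cores)

total_chains <- m * chains               # one full set of chains per dataset
waves <- ceiling(total_chains / cores)   # sequential batches if chains exceed cores
c(total_chains = total_chains, waves = waves)
```

Under these assumptions, keeping total_chains an exact multiple of cores avoids a final wave that leaves most cores idle.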

Developing Hierarchical Version of Nonlinear Growth Curve Model in Stan

Submitted by ぐ巨炮叔叔 on 2019-12-06 19:52:29
The following model is model 1 of Preece and Baines (1978, Annals of Human Biology), and is used to describe human growth. My Stan code for this model is as follows:

```{stan output.var="test"}
data {
  int<lower=1> n;
  ordered[n] t; // age
  ordered[n] y; // height of human
}
parameters {
  positive_ordered[2] h;
  real<lower=0, upper=t[n-1]> theta;
  positive_ordered[2] s;
  real<lower=0> sigma;
}
model {
  h[1] ~ uniform(0, y[n]);
  h[2] ~ normal(180, 20);
  sigma ~ student_t(2, 0, 1);
  s[1] ~ normal(5, 5);
  s[2] ~ normal(5, 5);
  theta ~ normal(10, 5);
  y ~ normal(h[2] - (2*(h[2] - h[1]) * inv(exp(s[1]*(t - theta)
```
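For reference, the Preece–Baines model 1 mean curve as usually stated in the literature is h(t) = h1 − 2(h1 − hθ) / (exp(s0(t − θ)) + exp(s1(t − θ))), where h1 is adult height and hθ is height at age θ; this appears to be what the truncated sampling statement above is building. A minimal R sketch of that curve, with hypothetical parameter values chosen only for illustration:

```r
# Preece-Baines model 1 mean curve (reconstruction from the published form;
# parameter names h1, h_theta, s0, s1, theta are this sketch's own labels).
pb1 <- function(t, h1, h_theta, s0, s1, theta) {
  h1 - 2 * (h1 - h_theta) / (exp(s0 * (t - theta)) + exp(s1 * (t - theta)))
}

ages    <- seq(2, 20, by = 0.5)                      # ages in years
heights <- pb1(ages, h1 = 175, h_theta = 160,
               s0 = 0.1, s1 = 1.2, theta = 14)       # hypothetical values
```

A useful sanity check on the form: at t = θ the two exponentials both equal 1, so the curve passes exactly through h(θ) = hθ, and it increases monotonically toward the adult height h1.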

Calculating marginal effects in binomial logit using rstanarm

Submitted by 久未见 on 2019-12-06 13:15:47
I am trying to get the marginal effects, according to this post: http://andrewgelman.com/2016/01/14/rstanarm-and-more/

```r
td <- readRDS("some data")

CHAINS <- 1
CORES <- 1
SEED <- 42
ITERATIONS <- 2000
MAX_TREEDEPTH <- 9

md <- td[, .(y, x1, x2)]  # select the columns I need; y is binary

glm1 <- stan_glm(y ~ x1 + x2,
                 data = md,
                 family = binomial(link = "logit"),
                 prior = NULL, prior_intercept = NULL,
                 chains = CHAINS, cores = CORES, seed = SEED,
                 iter = ITERATIONS,
                 control = list(max_treedepth = MAX_TREEDEPTH))
# launch_shinystan(glm1)

tmp <- posterior_predict(glm1, newdata = md[, .(x1, x2)])
```

Issue: After running
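One common quantity behind "marginal effects" in a logit model is the average marginal effect (AME): for each posterior draw, average over observations the change in Pr(y = 1) when the predictor of interest flips, which yields a full posterior distribution for the AME. A self-contained sketch using simulated coefficient draws in place of as.matrix(glm1) (all names and values below are hypothetical):

```r
# Hypothetical sketch of an average marginal effect for a binary predictor x1,
# computed directly from posterior draws of logit coefficients.
# Assumes Pr(y = 1) = plogis(b0 + b1*x1 + b2*x2).
set.seed(42)
n_draws <- 1000
draws <- data.frame(b0 = rnorm(n_draws, -0.5, 0.1),   # stand-ins for
                    b1 = rnorm(n_draws,  1.0, 0.1),   # as.matrix(glm1) columns
                    b2 = rnorm(n_draws,  0.3, 0.1))
x2 <- rnorm(200)  # hypothetical observed covariate values

# For each draw, average over observations the change in Pr(y = 1)
# when x1 goes from 0 to 1, holding x2 at its observed values.
ame_draws <- sapply(seq_len(n_draws), function(i) {
  p1 <- plogis(draws$b0[i] + draws$b1[i] * 1 + draws$b2[i] * x2)
  p0 <- plogis(draws$b0[i] + draws$b1[i] * 0 + draws$b2[i] * x2)
  mean(p1 - p0)
})
c(mean = mean(ame_draws), quantile(ame_draws, c(0.025, 0.975)))
```

With real rstanarm output, the same loop would run over the rows of as.matrix(glm1) rather than simulated draws.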

Sampling from prior without running a separate model

Submitted by 独自空忆成欢 on 2019-12-05 21:25:35
I want to graph the histograms of parameter estimates from a stan model against the priors for those parameters. I have tried doing this by running a model in stan, graphing it with ggplot2, then overlaying an approximation of the prior distribution using R's random generator function (e.g. rnorm() , rbinom() ) but I have run into many scaling issues that make the graphs impossible to get looking right. I was thinking a better way to do it would be simply to sample directly from the prior distribution and then graph those samples against the parameter estimates, but running a whole separate
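One likely source of the scaling issues described above is mixing histogram counts with density curves. Drawing the same number of samples from the prior as the posterior has draws, and plotting both on a density scale, sidesteps the mismatch. A minimal base-R sketch with stand-in draws (the normal(0, 5) prior and the posterior values are hypothetical):

```r
# Hypothetical sketch: put prior and posterior draws on a common density scale.
set.seed(1)
posterior_draws <- rnorm(4000, mean = 2.1, sd = 0.3)  # stand-in for Stan draws
prior_draws     <- rnorm(4000, mean = 0,   sd = 5)    # matching normal(0, 5) prior

# With ggplot2 (assuming the package is available), the key is mapping
# after_stat(density) so both histograms integrate to 1:
# library(ggplot2)
# df <- rbind(data.frame(value = posterior_draws, dist = "posterior"),
#             data.frame(value = prior_draws,     dist = "prior"))
# ggplot(df, aes(value, after_stat(density), fill = dist)) +
#   geom_histogram(alpha = 0.5, position = "identity", bins = 50)
```

The cleaner alternative the question is reaching for, sampling the prior itself, can often be done in the same fit, e.g. brms supports sample_prior = "only"; in raw Stan one can add a data flag that switches the likelihood off.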

How to report with APA style a Bayesian Linear (Mixed) Models using rstanarm?

Submitted by 守給你的承諾、 on 2019-12-05 02:27:59
I'm currently struggling with how to report, following APA-6 recommendations, the output of rstanarm::stan_lmer(). First, I'll fit a mixed model within the frequentist approach, then will try to do the same using the Bayesian framework. Here's the reproducible code to get the data:

```r
library(tidyverse)
library(neuropsychology)
library(rstanarm)
library(lmerTest)

df <- neuropsychology::personality %>%
  select(Study_Level, Sex, Negative_Affect) %>%
  mutate(Study_Level = as.factor(Study_Level),
```
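Whatever reporting convention is chosen, the raw ingredients are the same: a point estimate (often the posterior median) and a credible interval for each coefficient, pulled from the posterior draws. A sketch with simulated draws standing in for a column of as.matrix(fit) (the coefficient name and values are hypothetical):

```r
# Hypothetical sketch: summarise one posterior draw vector into the numbers
# an APA-style sentence needs (point estimate plus 95% credible interval).
set.seed(7)
beta_draws <- rnorm(4000, mean = 0.42, sd = 0.10)  # stand-in for as.matrix(fit)[, "beta"]

report <- c(median = median(beta_draws),
            quantile(beta_draws, c(0.025, 0.975)))

# Format the sentence fragment from those numbers:
sprintf("b = %.2f, 95%% CrI [%.2f, %.2f]", report[1], report[2], report[3])
```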

How do I get standard errors of maximum-likelihood estimates in STAN?

Submitted by 邮差的信 on 2019-12-03 00:14:57
I am using maximum-likelihood optimization in Stan, but unfortunately the optimizing() function doesn't report standard errors:

```r
> MLb4c <- optimizing(get_stanmodel(fitb4c), data = win.data, init = inits)
STAN OPTIMIZATION COMMAND (LBFGS)
init = user
save_iterations = 1
init_alpha = 0.001
tol_obj = 1e-012
tol_grad = 1e-008
tol_param = 1e-008
tol_rel_obj = 10000
tol_rel_grad = 1e+007
history_size = 5
seed = 292156286
initial log joint probability = -4038.66
    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes
      13      -2772.49  9.21091e-005     0.0135987     0.07606      0.9845       15
Optimization terminated
```
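The standard route to standard errors at a mode is the observed information: invert the negative Hessian of the log density at the optimum and take square roots of the diagonal. rstan::optimizing() can return such a Hessian via hessian = TRUE (note it is computed on the unconstrained scale, so constrained parameters need a change of variables). A sketch with a toy 2×2 Hessian standing in for opt$hessian:

```r
# Hypothetical sketch: standard errors from the curvature at the mode.
# Toy Hessian of the log density for two parameters, standing in for the
# matrix returned by optimizing(..., hessian = TRUE):
H <- matrix(c(-25,  2,
                2, -4), nrow = 2, byrow = TRUE)

vcov_hat <- solve(-H)          # asymptotic covariance = inverse negative Hessian
se_hat   <- sqrt(diag(vcov_hat))
se_hat
```

For parameters with constraints (e.g. sigma > 0), the draws-based route, optimizing(..., draws = n) followed by summarising the approximate draws, avoids hand-rolling the transformation.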