diff --git a/vignettes/profiling.Rmd b/vignettes/profiling.Rmd
index 1f9eb0dc..d2161e04 100644
--- a/vignettes/profiling.Rmd
+++ b/vignettes/profiling.Rmd
@@ -29,7 +29,7 @@ However, be aware that the statistical assumptions that go into a model are
 the most important factors in overall model performance. It is often not
 possible to make up for model problems with just brute force computation.
 For ideas on how to address performance of your model from a statistical
-perspective, see Gelman (2020).
+perspective, see Gelman et al. (2020).
 
 ```{r library, message=FALSE}
 library(cmdstanr)
@@ -66,11 +66,11 @@ calculations with `profile` statements.
 
 ```
 profile("priors") {
-  target += std_normal_lpdf(beta);
-  target += std_normal_lpdf(alpha);
+  beta ~ std_normal();
+  alpha ~ std_normal();
 }
 profile("likelihood") {
-  target += bernoulli_logit_lpmf(y | X * beta + alpha);
+  y ~ bernoulli_logit(X * beta + alpha);
 }
 ```
 
@@ -92,11 +92,11 @@ parameters {
 }
 model {
   profile("priors") {
-    target += std_normal_lpdf(beta);
-    target += std_normal_lpdf(alpha);
+    beta ~ std_normal();
+    alpha ~ std_normal();
   }
   profile("likelihood") {
-    target += bernoulli_logit_lpmf(y | X * beta + alpha);
+    y ~ bernoulli_logit(X * beta + alpha);
   }
 }
')
@@ -145,7 +145,7 @@ Stan's specialized glm functions can be used to make models like this faster.
 In this case the likelihood can be replaced with
 
 ```
-target += bernoulli_logit_glm_lpmf(y | X, alpha, beta);
+y ~ bernoulli_logit_glm(X, alpha, beta);
 ```
 
 We'll keep the same `profile()` statements so that the profiling information for
@@ -165,11 +165,11 @@ parameters {
 }
 model {
   profile("priors") {
-    target += std_normal_lpdf(beta);
-    target += std_normal_lpdf(alpha);
+    beta ~ std_normal();
+    alpha ~ std_normal();
   }
   profile("likelihood") {
-    target += bernoulli_logit_glm_lpmf(y | X, alpha, beta);
+    y ~ bernoulli_logit_glm(X, alpha, beta);
   }
 }
')
@@ -184,8 +184,8 @@ fit_glm <- model_glm$sample(data = stan_data, chains = 1)
 fit_glm$profiles()
 ```
 
-We can see from the `total_time` column that this is much faster than the
-previous model.
+We can see from the `total_time` column that the likelihood computation is
+faster than in the previous model.
 
 ## Per-gradient timings, and memory usage
 