Merged
26 changes: 13 additions & 13 deletions vignettes/profiling.Rmd
@@ -29,7 +29,7 @@ However, be aware that the statistical assumptions that go into a model are
the most important factors in overall model performance. It is often not
possible to make up for model problems with just brute force computation. For
ideas on how to address performance of your model from a statistical
-perspective, see Gelman (2020).
+perspective, see Gelman et al. (2020).

```{r library, message=FALSE}
library(cmdstanr)
@@ -66,11 +66,11 @@ calculations with `profile` statements.

```
profile("priors") {
-target += std_normal_lpdf(beta);
-target += std_normal_lpdf(alpha);
+beta ~ std_normal();
+alpha ~ std_normal();
}
profile("likelihood") {
-target += bernoulli_logit_lpmf(y | X * beta + alpha);
+y ~ bernoulli_logit(X * beta + alpha);
}
```
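As a quick sketch of how these named blocks surface downstream (assuming a compiled model object `model` and a data list `stan_data`, as elsewhere in the vignette), each profile appears as a row in the per-chain data frame returned by the fitted object's `$profiles()` method:

```r
# Fit the model, then inspect per-block timings. Each list element
# (one per chain) is a data frame with one row per profile name
# ("priors", "likelihood") and timing columns such as total_time.
fit <- model$sample(data = stan_data, chains = 1)
fit$profiles()[[1]]
```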

@@ -92,11 +92,11 @@ parameters {
}
model {
profile("priors") {
-target += std_normal_lpdf(beta);
-target += std_normal_lpdf(alpha);
+beta ~ std_normal();
+alpha ~ std_normal();
}
profile("likelihood") {
-target += bernoulli_logit_lpmf(y | X * beta + alpha);
+y ~ bernoulli_logit(X * beta + alpha);
}
}
')
@@ -145,7 +145,7 @@ Stan's specialized glm functions can be used to make models like this faster. In
this case the likelihood can be replaced with

```
-target += bernoulli_logit_glm_lpmf(y | X, alpha, beta);
+y ~ bernoulli_logit_glm(X, alpha, beta);
```

We'll keep the same `profile()` statements so that the profiling information for
@@ -165,11 +165,11 @@ parameters {
}
model {
profile("priors") {
-target += std_normal_lpdf(beta);
-target += std_normal_lpdf(alpha);
+beta ~ std_normal();
+alpha ~ std_normal();
}
profile("likelihood") {
-target += bernoulli_logit_glm_lpmf(y | X, alpha, beta);
+y ~ bernoulli_logit_glm(X, alpha, beta);
}
}
')
@@ -184,8 +184,8 @@ fit_glm <- model_glm$sample(data = stan_data, chains = 1)
fit_glm$profiles()
```

-We can see from the `total_time` column that this is much faster than the
-previous model.
+We can see from the `total_time` column that the likelihood computation is
+faster than in the previous model.
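As a sketch of that comparison (assuming the first model's fit object is named `fit`, alongside `fit_glm` from above — the name `fit` is not shown in this diff), the two likelihood timings can be compared directly:

```r
# Extract the per-profile data frame for chain 1 from each fit and
# compare the total_time of the "likelihood" rows; a value above 1
# means the glm version's likelihood block ran faster.
p_base <- fit$profiles()[[1]]
p_glm  <- fit_glm$profiles()[[1]]
p_base$total_time[p_base$name == "likelihood"] /
  p_glm$total_time[p_glm$name == "likelihood"]
```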

## Per-gradient timings, and memory usage
