            |     Est. |     SE |     z |      p | σ_subj  |
(Intercept) | 251.4051 | 6.6323 | 37.91 | <1e-99 | 23.7805 |
days        |  10.4673 | 1.5022 |  6.97 | <1e-11 |  5.7168 |
Residual    |          |        |       |        | 25.5918 |
More precisely, it’s a computational trick to efficiently compute a particular type of nested model comparison under additional assumptions.
It wasn’t a bad idea back in the days of slide rules and mechanical adding machines. But the extra assumptions (e.g. sphericity) and computational shortcuts aren’t worth it with today’s hardware.
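The "nested model comparison" point can be made concrete. A minimal sketch (in Python with statsmodels rather than the MixedModels.jl used for the output above; the data are simulated purely for illustration): a one-way ANOVA is exactly a comparison between a model with group means and an intercept-only model, and with modern software we can just fit both models and compare them directly.

```python
# Sketch: one-way ANOVA as a nested model comparison.
# Simulated data; group names and effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy.stats import chi2

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat(["a", "b", "c"], 30),
    "y": np.concatenate([rng.normal(m, 1.0, 30) for m in (0.0, 0.5, 1.0)]),
})

full = smf.ols("y ~ group", df).fit()   # model with separate group means
null = smf.ols("y ~ 1", df).fit()       # grand-mean-only model

# The classic ANOVA F test is exactly this nested comparison:
comparison = anova_lm(null, full)
print(comparison)

# The same comparison done as a likelihood-ratio test, with no
# sphericity-style assumptions baked into a shortcut formula:
lr = 2 * (full.llf - null.llf)
p_lr = chi2.sf(lr, df=full.df_model - null.df_model)
print(f"LR stat = {lr:.2f}, p = {p_lr:.4g}")
```

The point is not the particular test statistic but that "ANOVA" is just one special case of comparing nested models, which we can now do directly.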
We stop learning statistics after we finish our doctorates.
But we use statistics the way our mentors did. And they learned ANOVA. So even when we advance the methods, we don’t advance the thinking and the explaining.
ANOVA encourages thinking in an awkward omnibus-test-plus-post-hoc framework
Instead, we should be thinking of specific comparisons that we care about.
Moreover, we can do all of this in a single step instead of in two stages.
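A sketch of the single-step idea (again in Python with statsmodels as a stand-in for the Julia tooling; the groups, coding, and contrast are hypothetical): choose a coding so that the model coefficients are the focal comparisons, and test any remaining comparison as a linear hypothesis on the fitted model, with no separate omnibus stage.

```python
# Sketch: encode the comparisons you care about directly in the model,
# instead of omnibus F test + post-hoc corrections in two stages.
# Simulated data; names and effects are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["control", "low", "high"], 30),
    "y": np.concatenate([rng.normal(m, 1.0, 30) for m in (0.0, 0.4, 0.9)]),
})

# Treatment coding with "control" as the reference level: each fitted
# coefficient is already a focal comparison against control.
m = smf.ols("y ~ C(group, Treatment(reference='control'))", df).fit()
print(m.summary().tables[1])

# A further comparison (high vs. low) is a single linear-hypothesis test
# on the same fit. Parameter order is: Intercept, [T.high], [T.low].
res = m.t_test([0, 1, -1])
print(res)
```

One fit, and every comparison of interest falls out of it directly.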
            |     Est. |     SE |     z |      p | σ_subj  |
(Intercept) | 251.4051 | 6.6323 | 37.91 | <1e-99 | 23.7805 |
days        |  10.4673 | 1.5022 |  6.97 | <1e-11 |  5.7168 |
Residual    |          |        |       |        | 25.5918 |
Row | subj   | (Intercept) | days    |
    | String | Float64     | Float64 |
1 | S308 | 2.81582 | 9.07551 |
2 | S309 | -40.0484 | -8.64408 |
3 | S310 | -38.4331 | -5.5134 |
4 | S330 | 22.8321 | -4.65872 |
5 | S331 | 21.5498 | -2.94449 |
6 | S332 | 8.81554 | -0.235201 |
7 | S333 | 16.4419 | -0.158809 |
8 | S334 | -6.99667 | 1.03273 |
9 | S335 | -1.03759 | -10.5994 |
10 | S337 | 34.6663 | 8.63238 |
11 | S349 | -24.558 | 1.06438 |
12 | S350 | -12.3345 | 6.47168 |
13 | S351 | 4.274 | -2.95533 |
14 | S352 | 20.6222 | 3.56171 |
15 | S369 | 3.25854 | 0.871711 |
16 | S370 | -24.7101 | 4.6597 |
17 | S371 | 0.723262 | -0.971053 |
18 | S372 | 12.1189 | 1.3107 |
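Per-subject deviations (BLUPs) like those in the table above come straight out of a fitted mixed model. A minimal sketch, in Python with statsmodels rather than MixedModels.jl, on data simulated to mimic a sleep-deprivation-style design (18 subjects, random intercept and slope for days; all names and parameter values are illustrative):

```python
# Sketch: fit a random-intercept-and-slope model and extract the
# per-subject deviations. Simulated data, illustrative parameters.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_days = 18, 10
subj_ids = np.repeat(np.arange(n_subj), n_days)
days = np.tile(np.arange(n_days), n_subj)
u0 = rng.normal(0, 24, n_subj)   # per-subject intercept deviations
u1 = rng.normal(0, 6, n_subj)    # per-subject slope deviations
reaction = ((251 + u0[subj_ids]) + (10.5 + u1[subj_ids]) * days
            + rng.normal(0, 25, n_subj * n_days))
df = pd.DataFrame({"subj": subj_ids, "days": days, "reaction": reaction})

# Random intercept and random slope for days, grouped by subject:
m = smf.mixedlm("reaction ~ days", df, groups="subj",
                re_formula="~days").fit()
print(m.params[["Intercept", "days"]])   # fixed effects
print(m.random_effects[0])               # deviations for the first subject
```

`random_effects` holds one entry per subject, which is exactly the kind of table shown above.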
Thank you for your attention!
Any questions?
https://embraceuncertaintybook.com/