progress = false

using MixedModels
using Random
For the examples here, we'll once again use a model fit to the insteval dataset.
insteval = MixedModels.dataset("insteval")
ie1 = fit(MixedModel,
          @formula(y ~ 1 + studage + lectage + service + (1|s) + (1|d) + (1|dept)),
          insteval; progress)
|             |    Est. |     SE |      z |      p |    σ_s |    σ_d | σ_dept |
|:------------|--------:|-------:|-------:|-------:|-------:|-------:|-------:|
| (Intercept) |  3.2908 | 0.0324 | 101.45 | <1e-99 | 0.3264 | 0.5106 | 0.0787 |
| studage: 4  |  0.0519 | 0.0232 |   2.24 | 0.0249 |        |        |        |
| studage: 6  |  0.0721 | 0.0240 |   3.01 | 0.0026 |        |        |        |
| studage: 8  |  0.1363 | 0.0264 |   5.17 | <1e-06 |        |        |        |
| lectage: 2  | -0.0808 | 0.0154 |  -5.25 | <1e-06 |        |        |        |
| lectage: 3  | -0.1102 | 0.0167 |  -6.59 | <1e-10 |        |        |        |
| lectage: 4  | -0.1892 | 0.0196 |  -9.65 | <1e-21 |        |        |        |
| lectage: 5  | -0.1644 | 0.0214 |  -7.68 | <1e-13 |        |        |        |
| lectage: 6  | -0.2460 | 0.0205 | -12.01 | <1e-32 |        |        |        |
| service: Y  | -0.0727 | 0.0135 |  -5.40 | <1e-07 |        |        |        |
| Residual    |  1.1762 |        |        |        |        |        |        |
One of the advantages of MixedModels.jl over its predecessors is its speed, which makes techniques that require fitting many different models viable. One such technique is the parametric bootstrap, which is implemented in the function parametricbootstrap:
@time pb1 = parametricbootstrap(MersenneTwister(42), 100, ie1; progress)
209.008750 seconds (4.35 M allocations: 309.641 MiB, 0.06% gc time, 0.70% compilation time)
MixedModelBootstrap with 100 samples
parameter min q25 median mean q75 ⋯
┌─────────────────────────────────────────────────────────────────────────
1 │ β01 3.21783 3.26358 3.28986 3.29053 3.31213 ⋯
2 │ β02 -0.0139569 0.0386959 0.0548509 0.0543033 0.069148 ⋯
3 │ β03 0.0195404 0.0601319 0.0744778 0.0737247 0.0880096 ⋯
4 │ β04 0.0660658 0.123001 0.141956 0.143059 0.161691 ⋯
5 │ β05 -0.125172 -0.0900465 -0.0821355 -0.0829556 -0.0736901 ⋯
6 │ β06 -0.158731 -0.126321 -0.114572 -0.114729 -0.104069 ⋯
7 │ β07 -0.248297 -0.203881 -0.189177 -0.192144 -0.177192 ⋯
8 │ β08 -0.229774 -0.180099 -0.168366 -0.169054 -0.156477 ⋯
9 │ β09 -0.300539 -0.26488 -0.249537 -0.247633 -0.229222 ⋯
10 │ β10 -0.105656 -0.0818338 -0.0744016 -0.0750081 -0.0670843 ⋯
11 │ σ 1.16957 1.17411 1.17636 1.17617 1.17837 ⋯
12 │ σ1 0.303495 0.322678 0.326416 0.326132 0.329608 ⋯
13 │ σ2 0.474792 0.50008 0.508224 0.508321 0.51609 ⋯
14 │ σ3 0.0 0.054114 0.0704681 0.0702903 0.0876644 ⋯
15 │ θ1 0.257715 0.274375 0.277286 0.277288 0.2808 ⋯
16 │ θ2 0.404013 0.425368 0.432304 0.43219 0.438201 ⋯
17 │ θ3 0.0 0.0458687 0.0597957 0.0597669 0.0744103 ⋯
The bootstrap object has several properties defined; perhaps the most relevant are:
# row table of fixed effect coefficient estimates, errors and p values
pb1.coefpvalues
1000-element Vector{@NamedTuple{iter::Int64, coefname::Symbol, β::Float64, se::Float64, z::Float64, p::Float64}}:
(iter = 1, coefname = Symbol("(Intercept)"), β = 3.284493190376238, se = 0.032524356049923465, z = 100.9856485808569, p = 0.0)
(iter = 1, coefname = Symbol("studage: 4"), β = 0.06545761800835158, se = 0.023497319783523526, z = 2.7857482730541414, p = 0.005340432600180254)
(iter = 1, coefname = Symbol("studage: 6"), β = 0.07436678269492078, se = 0.024299697132984736, z = 3.0603995715639725, p = 0.002210418778698702)
(iter = 1, coefname = Symbol("studage: 8"), β = 0.13441279564664554, se = 0.02674134761946562, z = 5.026403214952544, p = 4.99764527884585e-7)
(iter = 1, coefname = Symbol("lectage: 2"), β = -0.0868996632908928, se = 0.015507343974755044, z = -5.603774794211042, p = 2.0973318036586454e-8)
(iter = 1, coefname = Symbol("lectage: 3"), β = -0.11271181582296878, se = 0.01685962313429071, z = -6.685310515258478, p = 2.3043550719035788e-11)
(iter = 1, coefname = Symbol("lectage: 4"), β = -0.19020616801325693, se = 0.019766294929206236, z = -9.622752705779602, p = 6.4092559865334075e-22)
(iter = 1, coefname = Symbol("lectage: 5"), β = -0.1607378454925726, se = 0.021585081487335164, z = -7.446710154274098, p = 9.569654528260931e-14)
(iter = 1, coefname = Symbol("lectage: 6"), β = -0.24335103811670392, se = 0.020669173197583813, z = -11.773622282344181, p = 5.338212236432387e-32)
(iter = 1, coefname = Symbol("service: Y"), β = -0.05602420766813321, se = 0.013588893357126238, z = -4.122793975622246, p = 3.743044551204773e-5)
⋮
(iter = 100, coefname = Symbol("studage: 4"), β = 0.025227359263288812, se = 0.023200806904968868, z = 1.0873483567455544, p = 0.27688288524577925)
(iter = 100, coefname = Symbol("studage: 6"), β = 0.07088411189272113, se = 0.02400554628349315, z = 2.952822279302301, p = 0.0031488318995912448)
(iter = 100, coefname = Symbol("studage: 8"), β = 0.10704600246843128, se = 0.026433306479203472, z = 4.049663728321321, p = 5.129128222578269e-5)
(iter = 100, coefname = Symbol("lectage: 2"), β = -0.07319274390132172, se = 0.015423994812457901, z = -4.745381776334897, p = 2.08113437180995e-6)
(iter = 100, coefname = Symbol("lectage: 3"), β = -0.08389108316851346, se = 0.016760436930178597, z = -5.005304069219126, p = 5.577391837933923e-7)
(iter = 100, coefname = Symbol("lectage: 4"), β = -0.1817964173833714, se = 0.019656007595517798, z = -9.248898409299905, p = 2.2681815915554146e-20)
(iter = 100, coefname = Symbol("lectage: 5"), β = -0.1475554055735154, se = 0.021457355472192553, z = -6.876681787032814, p = 6.126274505141886e-12)
(iter = 100, coefname = Symbol("lectage: 6"), β = -0.22400748754566294, se = 0.02054864437010676, z = -10.9013267985473, p = 1.1358871761517386e-27)
(iter = 100, coefname = Symbol("service: Y"), β = -0.07461314138783286, se = 0.013528184936484909, z = -5.515384490834728, p = 3.480182596228577e-8)
# row table of all parameter estimates
pb1.allpars
(iter = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1 … 100, 100, 100, 100, 100, 100, 100, 100, 100, 100], type = ["β", "β", "β", "β", "β", "β", "β", "β", "β", "β" … "β", "β", "β", "β", "β", "β", "σ", "σ", "σ", "σ"], group = Union{Missing, String}[missing, missing, missing, missing, missing, missing, missing, missing, missing, missing … missing, missing, missing, missing, missing, missing, "s", "d", "dept", "residual"], names = Union{Missing, String}["(Intercept)", "studage: 4", "studage: 6", "studage: 8", "lectage: 2", "lectage: 3", "lectage: 4", "lectage: 5", "lectage: 6", "service: Y" … "lectage: 2", "lectage: 3", "lectage: 4", "lectage: 5", "lectage: 6", "service: Y", "(Intercept)", "(Intercept)", "(Intercept)", missing], value = [3.284493190376238, 0.06545761800835158, 0.07436678269492078, 0.13441279564664554, -0.0868996632908928, -0.11271181582296878, -0.19020616801325693, -0.1607378454925726, -0.24335103811670392, -0.05602420766813321 … -0.07319274390132172, -0.08389108316851346, -0.1817964173833714, -0.1475554055735154, -0.22400748754566294, -0.07461314138783286, 0.32664793547692816, 0.5286534736920326, 0.09096380721039783, 1.1778773576202244])
# row table of fixed effect estimates
pb1.β
1000-element Vector{@NamedTuple{iter::Int64, coefname::Symbol, β::Float64}}:
(iter = 1, coefname = Symbol("(Intercept)"), β = 3.284493190376238)
(iter = 1, coefname = Symbol("studage: 4"), β = 0.06545761800835158)
(iter = 1, coefname = Symbol("studage: 6"), β = 0.07436678269492078)
(iter = 1, coefname = Symbol("studage: 8"), β = 0.13441279564664554)
(iter = 1, coefname = Symbol("lectage: 2"), β = -0.0868996632908928)
(iter = 1, coefname = Symbol("lectage: 3"), β = -0.11271181582296878)
(iter = 1, coefname = Symbol("lectage: 4"), β = -0.19020616801325693)
(iter = 1, coefname = Symbol("lectage: 5"), β = -0.1607378454925726)
(iter = 1, coefname = Symbol("lectage: 6"), β = -0.24335103811670392)
(iter = 1, coefname = Symbol("service: Y"), β = -0.05602420766813321)
⋮
(iter = 100, coefname = Symbol("studage: 4"), β = 0.025227359263288812)
(iter = 100, coefname = Symbol("studage: 6"), β = 0.07088411189272113)
(iter = 100, coefname = Symbol("studage: 8"), β = 0.10704600246843128)
(iter = 100, coefname = Symbol("lectage: 2"), β = -0.07319274390132172)
(iter = 100, coefname = Symbol("lectage: 3"), β = -0.08389108316851346)
(iter = 100, coefname = Symbol("lectage: 4"), β = -0.1817964173833714)
(iter = 100, coefname = Symbol("lectage: 5"), β = -0.1475554055735154)
(iter = 100, coefname = Symbol("lectage: 6"), β = -0.22400748754566294)
(iter = 100, coefname = Symbol("service: Y"), β = -0.07461314138783286)
# summary table in wide format
pb1.tbl
Table with 18 columns and 100 rows:
obj β01 β02 β03 β04 β05 ⋯
┌─────────────────────────────────────────────────────────────────────
1 │ 2.38696e5 3.28449 0.0654576 0.0743668 0.134413 -0.0868997 ⋯
2 │ 2.37312e5 3.32203 0.0517903 0.0670179 0.140742 -0.0950906 ⋯
3 │ 2.37357e5 3.23378 0.116201 0.12476 0.207755 -0.0790354 ⋯
4 │ 2.36992e5 3.37017 0.0338394 0.045001 0.0898786 -0.0679428 ⋯
5 │ 2.37374e5 3.29213 0.0747803 0.0736114 0.123649 -0.0853712 ⋯
6 │ 2.38354e5 3.23251 0.0557169 0.0965588 0.168224 -0.110282 ⋯
7 │ 2.38595e5 3.32446 0.0599131 0.0739456 0.144132 -0.100117 ⋯
8 │ 237394.0 3.33964 0.000101607 0.0394911 0.111735 -0.0740082 ⋯
9 │ 2.37038e5 3.23416 0.0288773 0.0792364 0.143171 -0.0754491 ⋯
10 │ 2.37538e5 3.32298 0.0418344 0.0479324 0.117366 -0.0843251 ⋯
11 │ 2.37919e5 3.34143 0.00910537 0.0340326 0.109248 -0.087165 ⋯
12 │ 2.37707e5 3.24528 0.0830967 0.110315 0.193995 -0.0996083 ⋯
13 │ 2.37413e5 3.28979 0.0533019 0.0813591 0.134691 -0.0804611 ⋯
14 │ 2.37776e5 3.2454 0.00786687 0.0846663 0.11677 -0.0655534 ⋯
15 │ 2.37429e5 3.2731 0.0656087 0.0609475 0.12065 -0.0701581 ⋯
16 │ 2.37344e5 3.26359 0.0708762 0.0925205 0.148036 -0.0778408 ⋯
17 │ 237108.0 3.28631 0.0531506 0.0802238 0.136806 -0.0839233 ⋯
⋮ │ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱
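Because these properties are plain row tables (vectors of NamedTuples), ordinary Julia filtering and reduction work directly on them; for instance, the bootstrap standard error of a coefficient is just the standard deviation of its replicate estimates. A sketch with fabricated rows standing in for pb1.coefpvalues:

```julia
using Random
using Statistics

# Mock rows with the same shape as pb1.coefpvalues (iter, coefname, β, ...);
# the values here are fabricated purely for illustration.
rng = MersenneTwister(1)
rows = [(iter = i, coefname = Symbol("service: Y"), β = -0.07 + 0.01 * randn(rng))
        for i in 1:100]

# filter to one coefficient and summarize its bootstrap distribution
βs = [r.β for r in rows if r.coefname == Symbol("service: Y")]
se_boot = std(βs)   # bootstrap standard error
m_boot = mean(βs)   # bootstrap mean
```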
We can speed up the bootstrap even further by loosening the convergence criteria for the individual fits. parametricbootstrap allows passing in a NamedTuple of modifiers to the optimization process, called optsum_overrides (the internal structure for the optimization configuration and results is called OptSummary and is stored in the optsum field). The parameter ftol_rel controls the tolerance for the relative change in the objective between optimizer iterations before the model is considered converged. If we set ftol_rel=1e-8, then this is approximately equivalent to doing the comparison in single precision. More generally, loosening the tolerance reduces the quality of each individual replicate, but this may be more than compensated for by the ability to fit many more replicates in the same time.
# we would generally recommend something like 1e-8, which is approximately
# single precision; it is set looser here to speed things up for the course
optsum_overrides = (; ftol_rel=1e-4)
@time pb1a = parametricbootstrap(MersenneTwister(42), 500, ie1; optsum_overrides, progress)
213.172458 seconds (927.39 k allocations: 82.110 MiB, 0.01% gc time, 0.05% compilation time)
MixedModelBootstrap with 500 samples
parameter min q25 median mean q75 ⋯
┌─────────────────────────────────────────────────────────────────────────
1 │ β01 3.20299 3.26494 3.2899 3.29007 3.31293 ⋯
2 │ β02 -0.0283391 0.0378083 0.0532709 0.052406 0.0682966 ⋯
3 │ β03 0.00720732 0.0565692 0.0732217 0.0730987 0.0885592 ⋯
4 │ β04 0.0638129 0.121894 0.139394 0.138752 0.156194 ⋯
5 │ β05 -0.122304 -0.0912163 -0.0803581 -0.0807668 -0.0708919 ⋯
6 │ β06 -0.169496 -0.12282 -0.110649 -0.11077 -0.0987061 ⋯
7 │ β07 -0.250192 -0.204949 -0.189636 -0.190551 -0.176709 ⋯
8 │ β08 -0.225879 -0.180139 -0.165735 -0.165589 -0.150571 ⋯
9 │ β09 -0.305527 -0.262256 -0.246845 -0.246078 -0.228991 ⋯
10 │ β10 -0.111312 -0.0811709 -0.0725221 -0.0725579 -0.0623461 ⋯
11 │ σ 1.15547 1.16376 1.16765 1.1689 1.17438 ⋯
12 │ σ1 0.220703 0.325048 0.383176 0.367196 0.421881 ⋯
13 │ σ2 0.364385 0.577211 0.846798 0.789761 0.91278 ⋯
14 │ σ3 0.842896 1.18134 1.26922 1.26313 1.30106 ⋯
15 │ θ1 0.186652 0.275806 0.328033 0.314377 0.362649 ⋯
16 │ θ2 0.307893 0.491475 0.727428 0.676103 0.782392 ⋯
17 │ θ3 0.721849 1.01366 1.08493 1.08058 1.11704 ⋯
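The stopping rule controlled by ftol_rel can be sketched as follows: the optimizer halts once the objective changes by less than the tolerance in relative terms between iterations. The helper below mirrors that criterion for illustration; it is not library API:

```julia
# Roughly the relative-tolerance stopping rule used by the backend optimizer:
# stop when the objective changes by less than ftol_rel relative to its value.
# `converged` is an illustrative helper, not MixedModels.jl API.
converged(f_old, f_new; ftol_rel) =
    abs(f_new - f_old) <= ftol_rel * abs(f_new)

# single precision carries roughly 7 decimal digits:
eps(Float32)   # ≈ 1.19e-7, hence ftol_rel = 1e-8 ≈ "single-precision" comparison

converged(238696.0, 238696.001; ftol_rel=1e-8)   # change is ~4e-9 relative
converged(238696.0, 238697.0; ftol_rel=1e-8)     # change is ~4e-6 relative
```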
using CairoMakie
We can create a custom display of the bootstrap densities for the fixed effects and variance components. We’ll build this plot piecewise using AlgebraOfGraphics.
We start by grabbing all the parameter estimates and placing them in a dataframe for easier manipulation.
using AlgebraOfGraphics
using AlgebraOfGraphics: density # override Makie.density
using DataFrames
df = DataFrame(pb1a.allpars)
Row | iter | type | group | names | value |
---|---|---|---|---|---|
| Int64 | String | String? | String? | Float64 |
1 | 1 | β | missing | (Intercept) | 3.28408 |
2 | 1 | β | missing | studage: 4 | 0.0638135 |
3 | 1 | β | missing | studage: 6 | 0.0697498 |
4 | 1 | β | missing | studage: 8 | 0.130544 |
5 | 1 | β | missing | lectage: 2 | -0.0869565 |
6 | 1 | β | missing | lectage: 3 | -0.110721 |
7 | 1 | β | missing | lectage: 4 | -0.187548 |
8 | 1 | β | missing | lectage: 5 | -0.159629 |
9 | 1 | β | missing | lectage: 6 | -0.241548 |
10 | 1 | β | missing | service: Y | -0.0578332 |
11 | 1 | σ | s | (Intercept) | 0.350072 |
12 | 1 | σ | d | (Intercept) | 1.01893 |
13 | 1 | σ | dept | (Intercept) | 1.24209 |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
6989 | 500 | β | missing | studage: 6 | 0.0732302 |
6990 | 500 | β | missing | studage: 8 | 0.128215 |
6991 | 500 | β | missing | lectage: 2 | -0.0675102 |
6992 | 500 | β | missing | lectage: 3 | -0.10197 |
6993 | 500 | β | missing | lectage: 4 | -0.175881 |
6994 | 500 | β | missing | lectage: 5 | -0.158475 |
6995 | 500 | β | missing | lectage: 6 | -0.255234 |
6996 | 500 | β | missing | service: Y | -0.0724982 |
6997 | 500 | σ | s | (Intercept) | 0.378383 |
6998 | 500 | σ | d | (Intercept) | 0.447545 |
6999 | 500 | σ | dept | (Intercept) | 1.06067 |
7000 | 500 | σ | residual | missing | 1.17374 |
We then split the parameters up into the fixed effects, random effects and the residual standard deviation.
fe = subset(df, :group => ByRow(ismissing))
re = subset(df, :group => ByRow(g -> !ismissing(g) && g != "residual"))
resid = subset(df, :group => ByRow(g -> !ismissing(g) && g == "residual"))
Row | iter | type | group | names | value |
---|---|---|---|---|---|
| Int64 | String | String? | String? | Float64 |
1 | 1 | σ | residual | missing | 1.17659 |
2 | 2 | σ | residual | missing | 1.16269 |
3 | 3 | σ | residual | missing | 1.16128 |
4 | 4 | σ | residual | missing | 1.16787 |
5 | 5 | σ | residual | missing | 1.16338 |
6 | 6 | σ | residual | missing | 1.16883 |
7 | 7 | σ | residual | missing | 1.17022 |
8 | 8 | σ | residual | missing | 1.16208 |
9 | 9 | σ | residual | missing | 1.17053 |
10 | 10 | σ | residual | missing | 1.16385 |
11 | 11 | σ | residual | missing | 1.17993 |
12 | 12 | σ | residual | missing | 1.17563 |
13 | 13 | σ | residual | missing | 1.16173 |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
489 | 489 | σ | residual | missing | 1.16205 |
490 | 490 | σ | residual | missing | 1.16089 |
491 | 491 | σ | residual | missing | 1.1758 |
492 | 492 | σ | residual | missing | 1.17573 |
493 | 493 | σ | residual | missing | 1.17036 |
494 | 494 | σ | residual | missing | 1.17232 |
495 | 495 | σ | residual | missing | 1.16437 |
496 | 496 | σ | residual | missing | 1.16701 |
497 | 497 | σ | residual | missing | 1.16876 |
498 | 498 | σ | residual | missing | 1.16375 |
499 | 499 | σ | residual | missing | 1.16358 |
500 | 500 | σ | residual | missing | 1.17374 |
We plot the fixed effects:
plt_fe = data(fe) * mapping(:value; layout=:names) * density()
draw(plt_fe; facet=(; linkxaxes=:none, linkyaxes=:none))
and then tweak the layout a little:
plt_fe = data(fe) * mapping(:value; layout=:names) * density()
layout = [(1, 1),
          (2, 1), (2, 2), (2, 3),
          (3, 1), (3, 2), (3, 3),
          (4, 1), (4, 2), (4, 3)]
draw(plt_fe;
     facet=(; linkxaxes=:none, linkyaxes=:none),
     palettes=(; layout))
Next, we plot the random effects:
plt_re = data(re) * mapping(:value; row=:group, col=:names) * density()
draw(plt_re; facet=(; linkxaxes=:none, linkyaxes=:none))
and the residual SD:
plt_resid = data(resid) * mapping(:value) * density()
draw(plt_resid; axis=(; title="Residual SD"))
Finally, we put all the plots together into a single figure.
let
    f = Figure(; size=(800, 600))
    facet = (; linkxaxes=:none, linkyaxes=:none)
    axis = (; xlabel="estimate")
    layout = [(1, 1),
              (2, 1), (2, 2), (2, 3),
              (3, 1), (3, 2), (3, 3),
              (4, 1), (4, 2), (4, 3)]
    Label(f[0, 1], "Fixed effects"; tellwidth=false, fontsize=20)
    draw!(f[1:5, 1], plt_fe; facet, axis, palettes=(; layout))
    Label(f[0, 2], "Variance components"; tellwidth=false, fontsize=20)
    draw!(f[1:4, 2], plt_re; facet, axis)
    draw!(f[5, 2], plt_resid; facet, axis)
    Label(f[end+1, :], "Density of bootstrapped estimates"; fontsize=30)
    f
end
MixedModelsMakie.ridgeplot
MixedModelsMakie defines coefplot
for bootstrap objects:
using MixedModelsMakie
coefplot(pb1a; show_intercept=false)
The bootstrap, however, provides a much richer estimate of uncertainty, which we can see with ridgeplot:
ridgeplot(pb1a)
ridgeplot supports most of the same options as coefplot (and also has a mutating variant ridgeplot!):
ridgeplot(pb1a; show_intercept=false)
Ridge plots are sometimes also called joy plots because they resemble the cover of a certain Joy Division album.
MixedModels.jl uses a Wald confidence interval by default:
confint(ie1)
DictTable with 2 columns and 10 rows:
coef lower upper
────────────┬───────────────────────
(Intercept) │ 3.22721 3.35436
studage: 4 │ 0.00654109 0.0973544
studage: 6 │ 0.0251567 0.119089
studage: 8 │ 0.0845873 0.187982
lectage: 2 │ -0.110898 -0.0506082
lectage: 3 │ -0.142964 -0.0774094
lectage: 4 │ -0.227587 -0.15075
lectage: 5 │ -0.206382 -0.122478
lectage: 6 │ -0.286144 -0.205849
service: Y │ -0.099123 -0.0463103
The critical values for a given confidence level are obtained from the standard normal distribution, i.e. treating the \(t\)-values as \(z\)-values. This is a reasonable approximation for models fit to more than a few dozen observations, because the \(t(\nu)\) distribution rapidly converges to the standard normal as \(\nu\rightarrow\infty\) and is nigh indistinguishable from it for \(\nu > 30\).
A more precise definition of the residual (i.e. denominator) degrees of freedom for mixed models is somewhat challenging, but for even moderately sized datasets, the point is largely moot.
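The Wald interval is simply the estimate plus or minus the normal critical value times the standard error. A sketch using the intercept estimate and SE from the model summary above (the critical value is hard-coded here to avoid a Distributions dependency):

```julia
# Wald confidence interval: β̂ ± z * SE, with z the 97.5% quantile of the
# standard normal. Values taken from the intercept row of the summary above.
β̂, se = 3.2908, 0.0324
z = 1.959964                 # quantile(Normal(), 0.975), hard-coded
lower, upper = β̂ - z * se, β̂ + z * se
# (lower, upper) ≈ (3.2273, 3.3543), close to the (Intercept) row of confint(ie1)
```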
MixedModels.jl also supports computing confidence intervals from the bootstrapped values.
There is a third way to compute confidence intervals in MixedModels.jl: via the likelihoood profile. However, this code is substantially newer and less well tested and may fail for some models (including some of the examples in this course).
The default method for a bootstrapped confidence interval is the shortest (contiguous) coverage interval. By definition, the shortest coverage interval is also the highest-density interval. Note that the confidence interval is always a single interval and never the union of disjoint intervals, which may or may not be desirable for multimodal distributions. However, multimodal distributions should not generally arise in the parametric bootstrap.
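The shortest coverage interval can be sketched by sliding a window that covers the requested fraction of the sorted replicates and keeping the narrowest one. This mirrors the idea, not the library's exact implementation:

```julia
# Shortest (highest-density) coverage interval from bootstrap replicates:
# among all contiguous windows of the sorted values containing `level` of
# the sample, keep the narrowest. Illustrative helper, not library API.
function shortestcovint_sketch(x; level=0.95)
    s = sort(x)
    n = length(s)
    w = ceil(Int, level * n)          # number of points the window must cover
    best = (s[1], s[w])
    for i in 2:(n - w + 1)
        if s[i + w - 1] - s[i] < best[2] - best[1]
            best = (s[i], s[i + w - 1])
        end
    end
    return best
end
```

For a symmetric bootstrap distribution this agrees closely with the equal-tail interval; for skewed distributions (as for variance components near zero) the shortest interval shifts toward the mode.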
confint(pb1a)
DictTable with 2 columns and 14 rows:
par lower upper
────┬───────────────────────
β01 │ 3.2257 3.35708
β02 │ 0.00862032 0.0960707
β03 │ 0.0257096 0.117076
β04 │ 0.0913162 0.197407
β05 │ -0.111489 -0.0507315
β06 │ -0.142169 -0.0771449
β07 │ -0.235839 -0.156591
β08 │ -0.21702 -0.12986
β09 │ -0.286667 -0.204071
β10 │ -0.101496 -0.0489063
σ │ 1.15818 1.18166
σ1 │ 0.25292 0.438999
σ2 │ 0.481224 1.02709
σ3 │ 1.01486 1.62506
confint(pb1a; method=:shortest)
DictTable with 2 columns and 14 rows:
par lower upper
────┬───────────────────────
β01 │ 3.2257 3.35708
β02 │ 0.00862032 0.0960707
β03 │ 0.0257096 0.117076
β04 │ 0.0913162 0.197407
β05 │ -0.111489 -0.0507315
β06 │ -0.142169 -0.0771449
β07 │ -0.235839 -0.156591
β08 │ -0.21702 -0.12986
β09 │ -0.286667 -0.204071
β10 │ -0.101496 -0.0489063
σ │ 1.15818 1.18166
σ1 │ 0.25292 0.438999
σ2 │ 0.481224 1.02709
σ3 │ 1.01486 1.62506
The :equaltail method constructs the confidence interval that has equal tail probability, which is equivalent to the quantile-based interval. This interval is most comparable to the Wald and profile-based intervals.
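The equal-tail interval is just the \(\alpha/2\) and \(1-\alpha/2\) empirical quantiles of the replicates; a sketch (the helper name is ours, not library API):

```julia
using Statistics

# Equal-tail bootstrap interval: cut α/2 probability from each tail.
# `equaltail_sketch` is an illustrative helper, not MixedModels.jl API.
equaltail_sketch(x; level=0.95) =
    quantile(x, [(1 - level) / 2, 1 - (1 - level) / 2])

equaltail_sketch(collect(1.0:100.0))   # ≈ [3.475, 97.525]
```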
confint(pb1a; method=:equaltail)
DictTable with 2 columns and 14 rows:
par lower upper
────┬───────────────────────
β01 │ 3.22657 3.35984
β02 │ 0.0033087 0.0949412
β03 │ 0.0279016 0.119836
β04 │ 0.0785099 0.19206
β05 │ -0.110169 -0.049009
β06 │ -0.144158 -0.0787607
β07 │ -0.232093 -0.149827
β08 │ -0.212549 -0.125297
β09 │ -0.288305 -0.205441
β10 │ -0.0997842 -0.0457336
σ │ 1.15829 1.18212
σ1 │ 0.251927 0.438777
σ2 │ 0.460668 1.01502
σ3 │ 1.01837 1.63487