Estimator
Estimator exposes mle, map, and mcmc on top of a compiled DSGE model and its Kalman-filter likelihood.
Recommended Entry
Most users should call DSGESolver.estimate or DSGESolver.estimate_and_solve instead of constructing Estimator directly.
Constructor
Estimator(
*,
solver: DSGESolver, # (1)!
compiled: CompiledModel, # (2)!
y: np.ndarray | pd.DataFrame, # (3)!
observables: list[str] | None = None,
estimated_params: Sequence[str] | None = None,
priors: Mapping[str, Prior] | None = None, # (4)!
steady_state: np.ndarray | dict[str, float] | None = None,
log_linear: bool = False,
x0: np.ndarray | None = None,
p0_mode: str | None = None,
p0_scale: float | None = None,
jitter: float | None = None,
symmetrize: bool | None = None,
R: np.ndarray | None = None, # (5)!
)
- Existing solver instance.
- Compiled model from DSGESolver.compile(...).
- Measurement data for the Kalman likelihood.
- Required for map(...) and mcmc(...).
- Optional observation covariance override. If omitted through solver entrypoints, R can be inferred before estimation.
Utility
Estimator.make_prior(
*,
distribution: str, # (1)!
parameters: dict[str, Any],
transform: str, # (2)!
transform_kwargs: dict[str, Any] | None = None,
) -> Prior
- Distribution family string.
- Transform method string.
Equivalent to SymbolicDSGE.bayesian.make_prior(...).
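The exact Prior object is library-specific; as an illustrative sketch only, a prior pairs a log-density with a transform between unconstrained and constrained space. All names below (PriorSketch, constrain, unconstrain) are hypothetical stand-ins, not the real API:

```python
import math
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: the real Prior returned by make_prior is
# library-specific. This only illustrates the two pieces the signature
# names: a distribution (log-density) and a transform between spaces.
@dataclass
class PriorSketch:
    logpdf: Callable[[float], float]       # log-density on constrained space
    constrain: Callable[[float], float]    # unconstrained theta -> parameter
    unconstrain: Callable[[float], float]  # parameter -> unconstrained theta

# e.g. a positive scale parameter with a log transform
sigma_prior = PriorSketch(
    logpdf=lambda s: -s + math.log(s),     # unnormalized Gamma(2, 1)
    constrain=math.exp,
    unconstrain=math.log,
)

theta = 0.0                                # unconstrained draw
sigma = sigma_prior.constrain(theta)       # always maps to a positive value
```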
Likelihood / Posterior Evaluation
Estimator.theta0() -> np.ndarray
Estimator.loglik(theta: np.ndarray) -> float
Estimator.logprior(theta: np.ndarray) -> float
Estimator.logpost(theta: np.ndarray) -> float
Optimization Space
theta is the unconstrained internal parameter vector. Estimator applies prior transforms to map between unconstrained theta and the constrained model parameters.
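These evaluators compose additively: logpost(theta) = loglik(theta) + logprior(theta), all evaluated in unconstrained space. A minimal numerical sketch with stand-in Gaussian terms (a real Estimator computes loglik via the Kalman filter and logprior from the supplied priors):

```python
import numpy as np

# Stand-in objectives for illustration only.
def loglik(theta):
    return float(-0.5 * np.sum((theta - 2.0) ** 2))

def logprior(theta):
    return float(-0.5 * np.sum(theta ** 2))

def logpost(theta):
    # Additive decomposition used by the MAP and MCMC objectives.
    return loglik(theta) + logprior(theta)

theta0 = np.zeros(2)
print(logpost(theta0))  # loglik(theta0) + logprior(theta0) = -4.0
```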
MLE
Estimator.mle(
*,
theta0: np.ndarray | None = None, # (1)!
bounds: Sequence[tuple[float, float]] | None = None,
method: str = "L-BFGS-B",
options: Mapping[str, Any] | None = None,
) -> OptimizationResult
- If None, uses transformed calibration defaults.
MAP
Estimator.map(
*,
theta0: np.ndarray | None = None, # (1)!
bounds: Sequence[tuple[float, float]] | None = None,
method: str = "L-BFGS-B",
options: Mapping[str, Any] | None = None,
) -> OptimizationResult
- Requires non-None priors at estimator construction.
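To see how the prior term shifts the optimum relative to mle, here is a toy 1-D contrast between the two objectives using the same L-BFGS-B default. The quadratic loglik and standard-normal logprior below are stand-ins, not the model's actual objectives:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in 1-D objectives: likelihood peaks at 2, prior peaks at 0.
loglik = lambda th: -0.5 * (th[0] - 2.0) ** 2
logprior = lambda th: -0.5 * th[0] ** 2

# MLE maximizes loglik alone; MAP maximizes loglik + logprior.
mle_res = minimize(lambda th: -loglik(th), x0=[0.0], method="L-BFGS-B")
map_res = minimize(lambda th: -(loglik(th) + logprior(th)), x0=[0.0],
                   method="L-BFGS-B")

# The N(0, 1) prior pulls the MAP estimate from 2.0 toward 0, landing at 1.0.
print(mle_res.x[0], map_res.x[0])
```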
MCMC
Estimator.mcmc(
*,
n_draws: int, # (1)!
burn_in: int = 1000, # (2)!
thin: int = 1, # (3)!
theta0: np.ndarray | None = None,
random_state: int | np.random.Generator | None = None,
adapt: bool = True, # (4)!
adapt_start: int = 100,
adapt_interval: int = 25,
proposal_scale: float = 0.1,
adapt_epsilon: float = 1e-8,
update_R_in_iterations: bool = False, # (5)!
) -> MCMCResult
- Number of retained posterior draws.
- Number of initial iterations discarded.
- Retain every thin-th iteration after burn-in.
- Adaptive covariance updates are performed during burn-in only.
- Rebuild R from the current parameter draw when symbolic R metadata is available and the relevant parameters are being estimated.
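The adaptive Metropolis loop these options describe can be sketched in a self-contained form. Everything below (the 1-D standard-normal target, the 0.44 acceptance target, and the exp-scaling adaptation rule) is an illustrative stand-in, not the library's actual kernel:

```python
import numpy as np

# Minimal adaptive random-walk Metropolis sketch with the semantics above:
# proposal adaptation during burn-in only, then burn-in discarded and the
# chain thinned.
def logpost(theta):
    return -0.5 * theta ** 2  # stand-in for the model's log-posterior

rng = np.random.default_rng(42)
burn_in, thin, n_draws = 500, 2, 1000
adapt_start, adapt_interval = 100, 25
scale, theta, lp = 0.1, 0.0, logpost(0.0)
chain, window_accepts = [], 0

for t in range(burn_in + thin * n_draws):
    prop = theta + scale * rng.standard_normal()
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept step
        theta, lp, window_accepts = prop, lp_prop, window_accepts + 1
    chain.append(theta)
    if (t + 1) % adapt_interval == 0:
        if adapt_start <= t < burn_in:             # adapt during burn-in only
            scale *= np.exp(window_accepts / adapt_interval - 0.44)
        window_accepts = 0

samples = np.array(chain[burn_in::thin])           # (t - burn_in) % thin == 0
```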
Thinning Semantics
Thinning is applied after burn-in using (t - burn_in) % thin == 0.
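For example, with burn_in = 3 and thin = 2, iterations 3, 5, 7, … are the ones retained:

```python
burn_in, thin = 3, 2
# Iterations surviving the retention rule (t - burn_in) % thin == 0.
retained = [t for t in range(10)
            if t >= burn_in and (t - burn_in) % thin == 0]
print(retained)  # [3, 5, 7, 9]
```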
MCMC Sample Space
MCMCResult.samples are returned in constrained parameter space (parameter names), not raw unconstrained theta.
Result Objects
OptimizationResult
| Field | Type | Description |
|---|---|---|
| kind | str | "mle" or "map" |
| x | np.ndarray | Optimized unconstrained vector |
| theta | dict[str, float] | Optimized constrained parameters |
| success | bool | Optimizer convergence flag |
| message | str | Optimizer status message |
| fun | float | Objective value at optimum |
| loglik | float | Log-likelihood at optimum |
| logprior | float | Log-prior at optimum |
| logpost | float | Log-posterior at optimum |
| nfev | int | Objective evaluations |
| nit | int \| None | Iterations |
| raw | scipy.optimize.OptimizeResult | Raw scipy output |
MCMCResult
| Field | Type | Description |
|---|---|---|
| param_names | list[str] | Parameter order for samples |
| samples | np.ndarray | Retained posterior samples |
| logpost_trace | np.ndarray | Log-posterior trace for retained samples |
| accept_rate | float | Acceptance ratio |
| n_draws | int | Retained draw count |
| burn_in | int | Burn-in iterations |
| thin | int | Thinning interval |
Methods:
Compute marginal highest-posterior-density (HPD) intervals for each parameter column.
- Significance level. Must satisfy 0 <= alpha < 1; each interval covers approximately 1 - alpha of the retained marginal draws.
Inputs:
| Name | Description |
|---|---|
| alpha | Significance level used to determine the empirical HPD coverage. |
Returns:
| Type | Description |
|---|---|
| dict[str, tuple[float, float]] | Mapping from parameter name to the shortest empirical marginal interval containing approximately 1 - alpha of the retained posterior draws. |
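The "shortest empirical marginal interval" can be found by sliding a window of ceil((1 - alpha) * n) sorted draws and keeping the narrowest one. A self-contained sketch of that idea (not necessarily the library's exact algorithm):

```python
import numpy as np

def shortest_interval(draws, alpha=0.05):
    # Narrowest contiguous window of sorted draws covering ~1 - alpha mass.
    x = np.sort(np.asarray(draws, dtype=float))
    k = int(np.ceil((1.0 - alpha) * x.size))   # draws the window must contain
    widths = x[k - 1:] - x[: x.size - k + 1]   # width of each candidate window
    lo = int(np.argmin(widths))
    return float(x[lo]), float(x[lo + k - 1])

rng = np.random.default_rng(0)
lo, hi = shortest_interval(rng.normal(size=10_000), alpha=0.05)
# For a standard normal this lands near (-1.96, 1.96)
```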
MCMCResult.joint_hpd_set(
alpha: float = 0.05, # (1)!
) -> tuple[np.ndarray, np.ndarray, float, np.ndarray]
- Significance level. Must satisfy 0 <= alpha < 1; the returned set covers at least 1 - alpha of the retained joint draws.
Compute an empirical joint HPD set for the full parameter vector.
Finite-Sample Joint HPD Approximation
Retained draws are ranked by logpost_trace and all draws at or above the cutoff are included in the set. If multiple draws are tied at the boundary log-posterior, they are all retained, so the realized coverage can be slightly larger than 1 - alpha.
Inputs:
| Name | Description |
|---|---|
| alpha | Significance level used to determine the empirical HPD coverage. |
Returns:
| Type | Description |
|---|---|
| tuple[np.ndarray, np.ndarray, float, np.ndarray] | Tuple (samples, logpost, threshold, indices) where samples are the retained parameter vectors in the joint HPD set, logpost are their posterior values, threshold is the cutoff log-posterior, and indices are positions of the retained draws in the original stored chain. |
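The rank-and-cutoff rule behind the joint HPD set, including the tie handling described in the note above, can be sketched as follows (an illustrative stand-in, not the library's implementation):

```python
import numpy as np

def joint_hpd_indices(logpost, alpha=0.05):
    # Keep the ceil((1 - alpha) * n) highest-log-posterior draws, plus any
    # draws tied with the boundary value, so coverage is at least 1 - alpha.
    lp = np.asarray(logpost, dtype=float)
    k = int(np.ceil((1.0 - alpha) * lp.size))
    threshold = np.sort(lp)[::-1][k - 1]        # k-th highest log-posterior
    return np.flatnonzero(lp >= threshold), float(threshold)

# Two draws are tied at the cutoff value -2.0, so 4 of 5 draws are kept
# even though k = ceil(0.6 * 5) = 3: realized coverage exceeds 1 - alpha.
lp = np.array([-2.0, -1.0, -2.0, -1.0, -5.0])
idx, thr = joint_hpd_indices(lp, alpha=0.4)
print(idx, thr)  # [0 1 2 3] -2.0
```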
Plot marginal posterior kernel-density estimates for each retained parameter column.
This is a quick visual diagnostic for posterior shape. It is useful for checking skewness, heavy tails, and obvious multimodality in the retained draws. A separate subplot is produced for each parameter and displayed immediately with matplotlib.pyplot.show().
Inputs:
| Name | Description |
|---|---|
| None | This method uses the retained samples already stored on the result object. |
Returns:
| Type | Description |
|---|---|
| None | Displays a Matplotlib figure of marginal KDE curves and returns nothing. |
Plot retained posterior draws for each parameter as trace diagnostics.
Trace plots are useful for checking whether the retained chain appears to mix well, whether it still shows drift, and whether particular parameters exhibit unusually persistent autocorrelation or regime changes. A separate subplot is produced for each parameter and displayed immediately with matplotlib.pyplot.show().
Inputs:
| Name | Description |
|---|---|
| None | This method uses the retained samples already stored on the result object. |
Returns:
| Type | Description |
|---|---|
| None | Displays a Matplotlib figure of per-parameter trace plots and returns nothing. |
Plot the retained log-posterior sequence across MCMC iterations.
This diagnostic helps identify abrupt changes in posterior fit, long stretches of poor exploration, or chains that remain unstable even after burn-in and thinning have been applied. The plot is generated from MCMCResult.logpost_trace, which stores one log-posterior value per retained draw, and is displayed immediately with matplotlib.pyplot.show().
Inputs:
| Name | Description |
|---|---|
| None | This method uses the retained log-posterior trace already stored on the result object. |
Returns:
| Type | Description |
|---|---|
| None | Displays a Matplotlib figure of the retained log-posterior trace and returns nothing. |