v1.5.0¶
Release date: 2026-03-26
v1.5.0 prepares the SDK for a broader public release by adding a realism-review layer to the local AI workflow and tightening the public-facing release guidance.
Highlights¶
- new `iints ai review` command for realism-oriented feedback
- prepared run bundles now include `review_payload.json`
- imported CareLink workspaces also support realism review directly
- new public release checklist covering fresh-machine install, demo, AI, and security verification
What changed¶
Local AI can now critique results¶
The local Ministral integration no longer stops at explanation and summary. It can now also:
- judge whether a run or imported dataset looks physiologically plausible
- call out questionable patterns
- produce concrete feedback points to improve the simulation or workflow
- save that critique as a reusable markdown artifact
When you point `iints ai review` at a prepared run directory, it writes:

```
results/<run_id>/ai/realism_review.md
```
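As a minimal sketch, the artifact path above can be assembled programmatically; the helper name and the `results` root are illustrative only, not part of the SDK:

```python
from pathlib import Path

# Illustrative helper: build the path of the realism-review artifact
# that `iints ai review` writes, following the layout shown in these
# release notes. The function itself is not an SDK API.
def realism_review_path(results_root: str, run_id: str) -> Path:
    return Path(results_root) / run_id / "ai" / "realism_review.md"
```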
Run and CareLink preparation¶
Prepared AI bundles now include:
- `report_payload.json`
- `review_payload.json`
- `trends_payload.json`
- `anomalies_payload.json`
- `step_riskiest.json`
That keeps the review flow consistent across:
- synthetic simulation runs
- booth/demo artifacts
- imported personal CareLink data
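A quick consistency check over a prepared bundle could look like the sketch below. The payload file names come from these notes; the flat layout (payloads sitting directly inside the run directory) is an assumption, not documented SDK behavior:

```python
from pathlib import Path

# Expected payload files in a prepared AI bundle, per these release notes.
EXPECTED_PAYLOADS = [
    "report_payload.json",
    "review_payload.json",
    "trends_payload.json",
    "anomalies_payload.json",
    "step_riskiest.json",
]

def missing_payloads(run_dir: str) -> list[str]:
    """Return the expected payload files that are absent from run_dir."""
    root = Path(run_dir)
    return [name for name in EXPECTED_PAYLOADS if not (root / name).is_file()]
```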
Public-release readiness¶
This release also adds:
- `docs/PUBLIC_RELEASE_CHECKLIST.md`
- clearer public guidance for demo, install, AI, and prerelease verification
Example¶
```
iints ai prepare results/<run_id>
iints ai report results/<run_id>
iints ai review results/<run_id>
```
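If you drive the three steps above from Python, a small wrapper can build the argument vectors (e.g. for `subprocess.run`). This wrapper is a hypothetical convenience, not an SDK API:

```python
# Build the prepare/report/review command lines for a given run
# directory, as argument vectors suitable for subprocess.run.
def ai_workflow_commands(run_dir: str) -> list[list[str]]:
    return [["iints", "ai", step, run_dir] for step in ("prepare", "report", "review")]
```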
For imported personal data:
```
iints carelink-workbench \
  --input-csv "/path/to/CareLink export.csv" \
  --output-dir results/personal_carelink

iints ai review results/personal_carelink --model ministral-3:3b
```
Install¶
```
python -m pip install -U "iints-sdk-python35[mdmp]==1.5.0"
```
Then verify:
```
iints doctor --smoke-run
iints ai local-check --model ministral-3:3b
```
Why this release matters¶
Before v1.5.0, the local AI layer could explain and summarize results, but it did not yet produce a dedicated realism review that could serve as a quality-improvement artifact.
With v1.5.0, the SDK can generate, visualize, explain, and critique its own outputs in one workflow.