
Architecting Intelligence
Extending regulated systems thinking into applied AI.
I’ve spent years working inside regulated financial ecosystems — payments infrastructure, scheme alignment, partner integrations, and delivery under strict governance. In those environments, failure modes matter more than features.
Now I’m applying that same discipline to machine learning and generative systems. This isn’t a pivot — it’s an extension.
This page is a structured, public log: fundamentals, experiments, and applied builds — with an emphasis on evaluation, risk, and real user outcomes.
Learning Path
Phase 1 — Foundations
Python fluency, data handling, and the mental models behind supervised learning. Focus: clarity over complexity.
Now: numpy/pandas, data cleaning, train/test splits, baseline thinking
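The "baseline thinking" habit above is worth making concrete: before any model, split the data and see what a trivial predictor scores. A minimal sketch with synthetic data (everything here is illustrative, using numpy only):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: 100 samples, 3 features, deliberately imbalanced binary labels.
X = rng.normal(size=(100, 3))
y = (rng.random(100) < 0.2).astype(int)  # roughly 20% positives

# Shuffle indices, then hold out 25% of rows for testing.
idx = rng.permutation(len(X))
split = int(0.75 * len(X))
train_idx, test_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

# Baseline: always predict the training set's majority class.
majority = int(np.bincount(y_train).argmax())
baseline_acc = float(np.mean(y_test == majority))
print(f"majority class: {majority}, baseline accuracy: {baseline_acc:.2f}")
```

Any real model has to beat that number before it has earned anything — which is exactly why accuracy alone can flatter a useless model on imbalanced data.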
Phase 2 — Classical ML
Regression and classification, feature engineering, and models that survive messy data.
Next: linear/logistic regression, trees, cross-validation, leakage traps
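The leakage trap is easiest to see in preprocessing: if you standardise features using statistics computed on the full dataset, the test set has already influenced the transform applied to it. A small sketch of the difference (synthetic data, numpy only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(200, 4))

split = 150
X_train, X_test = X[:split], X[split:]

# Leaky: mean/std computed on ALL rows, so test rows shape the
# transform that is later applied to them.
mu_leaky, sd_leaky = X.mean(axis=0), X.std(axis=0)
X_test_leaky = (X_test - mu_leaky) / sd_leaky

# Correct: mean/std computed on the training rows only, then reused.
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
X_test_clean = (X_test - mu) / sd

# The two transforms differ; that gap is test-set information
# leaking into preprocessing.
print(np.abs(X_test_leaky - X_test_clean).max())
```

The same rule generalises to cross-validation: fit every preprocessing step inside each fold, never once on the whole dataset.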
Phase 3 — Evaluation & Risk
Metrics, calibration, and decision quality — especially under imbalance and regulation.
Next: precision/recall tradeoffs, ROC-AUC, PR-AUC, thresholds, explainability
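The precision/recall tradeoff is ultimately a threshold choice, and under imbalance it is where accuracy stops being informative. A sketch with invented scores (5% positives; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
y_true = (rng.random(1000) < 0.05).astype(int)
# Invented model scores: positives tend to score higher, with overlap.
scores = rng.random(1000) * 0.6 + y_true * 0.4

def precision_recall(y, s, threshold):
    pred = (s >= threshold).astype(int)
    tp = int(np.sum((pred == 1) & (y == 1)))
    fp = int(np.sum((pred == 1) & (y == 0)))
    fn = int(np.sum((pred == 0) & (y == 1)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Note: predicting all-negative scores high accuracy here while
# catching zero positives — the core failure mode in regulated systems.
for t in (0.3, 0.5, 0.7):
    p, r = precision_recall(y_true, scores, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

Raising the threshold trades recall for precision; which direction is acceptable is a governance decision, not a modelling one.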
Phase 4 — Generative AI
Prompting, retrieval (RAG), grounding, and evaluation — building safe patterns for real users.
Next: retrieval pipelines, citations, hallucination controls, eval harnesses
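The grounding pattern can be shown in miniature: retrieve, cite the source, and refuse when nothing in the corpus supports an answer. A deliberately crude sketch — word-overlap retrieval standing in for a real pipeline, with an invented corpus:

```python
# Toy corpus; document IDs and text are invented for illustration.
DOCS = {
    "doc-1": "Chargebacks must be filed within 120 days of the transaction.",
    "doc-2": "Scheme fees are settled monthly through the acquiring bank.",
}

def retrieve(query, docs, min_overlap=3):
    """Rank documents by crude word overlap with the query."""
    q = set(query.lower().split())
    scored = []
    for doc_id, text in docs.items():
        overlap = len(q & set(text.lower().split()))
        if overlap >= min_overlap:
            scored.append((overlap, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

def answer(query, docs):
    """Answer only when retrieval grounds the response; otherwise refuse."""
    hits = retrieve(query, docs)
    if not hits:
        return "No grounded answer available."  # hallucination control
    top = hits[0]
    return f"{docs[top]} [source: {top}]"

print(answer("When must chargebacks be filed?", DOCS))
print(answer("What is the capital of France?", DOCS))
```

The refusal branch is the point: a system that cites its source and declines off-corpus questions is evaluable, which is what makes it safe to put in front of real users.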
Phase 5 — Applied Builds
Turning the learning into products: FinLens + Questions for My Doctor — with constraints and evaluation built in.
Focus: real workflows, measurable outcomes, and responsible system boundaries
Latest Notes
Updated Mondays and Thursdays. Short, cumulative, and linked back to real builds.
Evaluation Metrics in Regulated Systems
Why accuracy is insufficient in regulated and imbalanced systems.
2026-03-09