What actually happens differently because I'm the one doing this work?
I don't do the same thing on every project. The mode depends on what the collaboration needs.
- Design the learning experience around the tool — the structure, timing, and facilitation that make it work
- Decide what to say no to, when to pivot, and what's ripe enough to invest in
- Turn someone else's research question into a working thing to put in front of learners
- Be the outside mirror — someone who understands both the tech and the pedagogy, with no stake in the outcome
In the age of AI, the building is the easy part. The question is: what constraints still exist that I can remove?
| Mode | Risk | Timeline |
|---|---|---|
| Builder | AI coding tools make prototyping accessible to non-technical people. "I can build you a chatbot" is a shrinking advantage. | 1-2 years |
| Sounding Board | Too vague to be a job description on its own. Only defensible if it leads to concrete outcomes or enables other modes. | Now |
| Thoroughness work | Note-taking, status tracking, check-in meetings — exactly what AI agents are getting good at. | 6-12 months |
| Mode | Why It Holds |
|---|---|
| Format Designer | Workshop structure, facilitation timing, when to let people struggle vs. intervene. Pedagogical design, not code. Hard-won through hundreds of sessions. |
| Strategic Filter | Knowing when 60 minutes isn't enough, when to walk away, when a research question needs scoping before building. Judgment about learning thresholds. |
| System Designer | Designing systems that let OTHER people build and experiment at scale. Not "I build for you" but "here's how you build for yourself." Flash Lab & bot101.app already are this. |
Researchers have hypotheses, but the friction of building and testing them is too high. Right now I'm solving this one-off at a time (Health Coach Bot).
People keep designing around constraints they don't know have disappeared. I see this every day in meetings.
Fan/Chu, Marily, and Forssell are all learning about AI+learning in parallel, with no synthesis across the portfolio.
reference/personal-logic-model.md