Personal Logic Model
What actually happens differently because I’m the one doing this work?
The Question
What happens differently because I’m here?
Not a tagline. A question I’m answering with evidence, quarter by quarter. The rest of this document is the evolving case.
Inputs (What I Bring)
- Technical fluency across AI, chatbots, VR/MR, web dev — can build working things, not just spec them
- Design thinking practice — structured facilitation, not just brainstorming
- Instructional design — designing learning experiences end-to-end, from objectives through assessment, not just content delivery
- Trusted seat at the table — between faculty, researchers, students, and external partners at Stanford
- Replicable formats — Flash Lab (designkit.stanford.edu), Build-a-Bot (bot101.app) as portable toolkits others can run
- Pedagogical judgment — know the learning threshold: when a format works and when it doesn’t
Activities (What I Actually Do)
- Rapid prototyping — turn hypotheses into testable artifacts in days/weeks, not months (Health Coach Bot, Colombia chatbot, AI Comic Studio)
- Format design — design the learning experience around the tool — the structure, timing, and facilitation that make it work. This is the 80% that isn’t code.
- Strategic filtering — say no to low-leverage asks, pivot to higher-leverage versions (e.g., ISTE 60-min → train-the-trainer)
- Workshop facilitation — design and deliver hands-on workshops for researchers, faculty, and staff (15+ since joining Stanford) — Legal 101, AI Flash Labs, Build-a-Bot sessions, and custom formats
- Sounding board — be the outside mirror for collaborators. Understand both the tech and the pedagogy without a stake in the research outcome.
- Relationship triage — match investment to readiness. Build where ripe, hold space where not. The 20-tag system operationalizes this.
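To make the triage mechanics concrete, here is a minimal sketch of tag-driven triage in code. It is an illustration, not the real system: `BUILD` and `CO-CREATE` are hypothetical tag names, and the actual 20-tag taxonomy (tracked in Airtable) is not reproduced here; only `CHECK-IN` appears elsewhere in this document.

```python
# Illustrative sketch only: the real 20-tag taxonomy lives in Airtable and is
# richer than this. Tag names (other than CHECK-IN) and rules are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Readiness(Enum):
    RIPE = "ripe"        # invest: active build or co-creation
    WARMING = "warming"  # recurring check-ins, no build yet
    HOLD = "hold"        # keep the door open, minimal investment

@dataclass
class Relationship:
    name: str
    tags: set[str]

def triage(rel: Relationship) -> Readiness:
    """Map a relationship's tags to an investment level (rules are hypothetical)."""
    if {"BUILD", "CO-CREATE"} & rel.tags:
        return Readiness.RIPE
    if "CHECK-IN" in rel.tags:
        return Readiness.WARMING
    return Readiness.HOLD

print(triage(Relationship("Makery", {"CHECK-IN"})).value)  # warming
```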
Outputs (What Gets Produced)
- Working prototypes — testable artifacts built from research questions (Health Coach Bot, Colombia chatbot, AI Comic Studio, ABCs bots)
- Workshop toolkits with documentation — replicable formats others can run (Flash Lab at designkit.stanford.edu, Build-a-Bot at bot101.app)
- Trained facilitators — people who can run sessions without me
- Clearer thinking — collaborators with refined questions and next steps after talking things through
- Partnership pivots — misfit or shallow relationships redirected to higher-leverage structures
Q1 2026 Evidence
How these outputs map to annual goals this quarter:
Strategic Influence (Goal 2)
| Output | Q1 2026 |
|---|---|
| Workshops designed & delivered | Oman Hackathon (Feb), Legal 101 (Feb), Teaching w/ AI Share-out (Feb) |
| Strategic consultations | Cathy/ABCs bots, Wen/MyBook, Gregory/Tinkery, Dr. Shariffa/special ed, Shane/community college bot, ISTE partnership scoping |
| New strategic connections | Candace Thille, Tamar Perez, Courtney Garza (TCEA proof point) |
| Conceptual framings captured | Broken Proxy (Sahami), AI Driver’s License (Taubman), CASA paradigm |
Scalable Capacity Programs (Goal 3)
| Output | Q1 2026 |
|---|---|
| Facilitators running independently | Courtney Garza (TCEA), Anna-Lena Neurohr (Paris) |
| Self-service toolkits | designkit.stanford.edu live; Gregory Wilson building Tinkery extension |
| Train-the-trainer pipelines | ISTE (3 faculty interested: Winston, Beth, Jeremiah) |
| Working prototypes | ABCs multi-perspective bots, Colombia chatbot (data exported for research) |
| CRM re-engagement | 5 facilitator emails drafted, campaign ready |
Team Communication (Goal 1)
| Output | Q1 2026 |
|---|---|
| Tag roll-up meetings | Jan quarterly (24 CHECK-INs analyzed) |
| Cross-cohort programming | Legal 101 invited SAL grantees + tEquity + Create+AI together |
| Portfolio tracking | 20-tag system operational, Airtable hygiene in progress |
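A minimal sketch of how a quarterly tag roll-up like the one above could be pulled programmatically, using Airtable's standard REST API. The base ID, table name, and field names (`Date`, `Tags`) are placeholder assumptions, not the real schema.

```python
# Hypothetical roll-up against the Airtable REST API. The base ID, table name,
# and field names ("Date", "Tags") are placeholders, not the real schema.
import os
from collections import Counter
import requests

URL = "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Interactions"
HEADERS = {"Authorization": f"Bearer {os.environ['AIRTABLE_TOKEN']}"}

def fetch_all(formula: str) -> list[dict]:
    """Page through every record matching an Airtable filterByFormula."""
    records, params = [], {"filterByFormula": formula}
    while True:
        page = requests.get(URL, headers=HEADERS, params=params).json()
        records += page.get("records", [])
        if "offset" not in page:
            return records
        params["offset"] = page["offset"]

# All interactions dated in Q1 2026 (formula assumes a "Date" field).
q1 = fetch_all("AND(IS_AFTER({Date}, '2025-12-31'), IS_BEFORE({Date}, '2026-04-01'))")
tag_counts = Counter(tag for r in q1 for tag in r["fields"].get("Tags", []))
print(tag_counts.most_common())  # e.g. [("CHECK-IN", 24), ...]
```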
Sustainable Leadership (Goal 4)
| Output | Q1 2026 |
|---|---|
| Reflective practice | Personal Logic Model — built, shared with Josh, updated quarterly |
| Organizational systems | Claude exec assistant (daily prep, EOD, inbox processing, action tracking) |
| Capacity management | 18 → 14 active threads (VFT iOS paused, Health Coach Bot paused) |
Outcomes (What Changes Because of Those Outputs)
- Faster evidence cycles — concept → tested with learners in weeks, not years
- Educators with hands-on experience — they build with AI, not just hear about it
- Internal capacity — partners learn to run their own, not depend on me
- Ideas survive the gap — prototypes and conversations keep momentum between grant cycles
Q1 2026 Evidence
How these outcomes are showing up this quarter:
1. Formats travel without me → capacity building
The test: can someone else run the thing I designed, and does it still work?
| Who | What they ran | What happened |
|---|---|---|
| Courtney Garza | Build-a-Bot at TCEA 2026 | Completed training, then took the initiative to register, submit, and deliver at a major conference on her own. |
| Anna-Lena Neurohr | 2 Flash Labs in Paris | Adapted for research context. 6 research projects emerged from her sessions. |
Outcome indicator: Sessions facilitated by others without me present.
2. New rooms open → cross-pollination
The test: are people or programs connected now that weren’t before?
- Legal 101 put SAL grantees, tEquity grantees, and Create+AI participants in the same room for the first time
- Oman hackathon → Dr. Shariffa special ed whitepaper thread (new research collaboration that didn’t exist before)
- ISTE wants to train their people → external validation that the format has market fit beyond Stanford
- Courtney at TCEA → Build-a-Bot reached a conference audience we had no relationship with
Outcome indicator: Distinct programs, cohorts, or organizations connected through shared offerings.
3. Ideas reach learners quickly → faster evidence cycles
The test: how long from research question to something in front of learners?
- ABCs multi-perspective bots: concept → working tool → oncology faculty using it in ~2 weeks
- Colombia chatbot: Ana’s hypothesis → 4th graders using it → data exported for analysis
- Flash Lab: educators arrive with a question → leave with a working prototype in 3 hours
Outcome indicator: Time from research question to learner-facing prototype.
4. On-ramp feeds the ecosystem → organizational value
The test: does our work generate value for other programs, not just our own?
- Flash Lab positioned as top-of-funnel: every session can refer participants to CSET/PLEX (AI literacy PD), Challenge Success (assessment), youcubed (math), PACE (research connections)
- Legal 101 created visibility for Stanford Startup Law: Sustainability across multiple cohorts
- Build-a-Bot gallery (1000+ bots) and Flash Lab toolkit drive inbound interest to GSE
Outcome indicator: Referrals from our offerings to other GSE/Stanford programs. (New — needs baseline.)
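Since this indicator needs a baseline, one low-friction way to start is an append-only log that any session facilitator can write to. A sketch under assumed field names; nothing here reflects an existing schema.

```python
# Minimal referral log for establishing the baseline flagged above.
# Field names and destination labels are illustrative, not an agreed schema.
import csv
import os
from datetime import date

FIELDS = ["date", "session", "participant_org", "referred_to", "followed_up"]

def log_referral(path: str, **entry) -> None:
    """Append one referral row to a CSV, writing the header on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(), **entry})

log_referral("referrals.csv", session="Flash Lab (Feb)",
             participant_org="visiting district", referred_to="CSET/PLEX",
             followed_up="")
```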
Impact (Why It Matters)
More ideas about learning actually get tested. More researchers have pilot data backing their proposals. A growing network of educators who’ve built, not just watched.
For the organization (Layer 2)
An on-ramp that feeds the ecosystem — not a silo. The metrics that matter: users reached, facilitators trained, sessions delivered by others, inbound interest driven to GSE programs, research studies enabled. A role that multiplies rather than adds.
Operating Modes
I don’t do the same thing on every project. The mode depends on what the collaboration needs.
Format Designer (High defensibility)
Design the learning experience around the tool — the structure, timing, and facilitation that make it work. This is hard-won pedagogical design, not replicable by AI coding tools.
- Examples: Flash Lab workshop structure, Build-a-Bot trainer program
Strategic Filter (High defensibility)
Decide what to say no to, what to pivot, what’s ripe enough to invest in. Requires judgment about learning thresholds and partnership readiness.
- Examples: ISTE withdrawal → train-the-trainer pivot
Builder (Medium defensibility)
Turn someone else’s research question into a working thing to put in front of learners. The building itself is increasingly replicable with AI tools — the knowing-what-to-build is not.
- Examples: Health Coach Bot, Colombia chatbot, AI Comic Studio
Sounding Board (Lower defensibility alone — but it’s the pipeline)
Be the outside mirror. Understand both the tech and the pedagogy, with no stake in the research outcome. This is how trust is built that enables the other modes.
- Examples: Makery check-ins (Karin: “really nice to talk things through with us”), DeVeaux conversations
The Absence Test
What wouldn’t happen if I weren’t here?
- Prototypes wouldn’t exist yet — health coach, Colombia chatbot, AI Comic Studio would still be paragraphs in grant proposals, not things in front of learners
- Flash Lab wouldn’t be replicable — would be “that workshop someone did once” instead of a toolkit at designkit.stanford.edu that others can run
- Format decisions wouldn’t get made — nobody saying “60 minutes isn’t enough for real learning.” The easy ask gets accepted, the high-leverage version never happens.
- Collaborators would lose their outside mirror — researchers and faculty would still talk to their own teams but would miss the perspective of someone who understands tech + learning without a stake in the research outcome
Key Framings
Conceptual frames I use when talking to educators about AI in education.
The Broken Proxy (from Mehran Sahami, AI+Education Summit Feb 2026)
The essay, the problem set, the research paper — these artifacts used to be reliable proxies for the learning process. A strong product meant a strong process: the student researched, synthesized, structured their thinking, and produced something. We assessed the product because we trusted it reflected the process.
GenAI broke that assumption. The artifact is no longer a trustworthy signal of the learning that happened. A strong product may reflect strong prompting, not strong understanding. Worse, AI can create false confidence — people produce less secure code with AI assistants but are more likely to believe it’s secure (Perry et al., 2022).
The shift: The question isn’t “how do we catch cheating” — it’s “how do we make the learning process visible again?” This is a design problem, not a policing problem.
Related: Mehran also argues GenAI should be treated as a curricular topic, not just a tool. Tool decisions are secondary; literacy with AI should grow as students move through a curriculum (basics → recognizing bias/hallucinations → expert prompting).
The AI-Era Argument
AI tools can build chatbots and prototypes. The building part of this job is increasingly replicable. What’s not replicable:
- Format design — the workshop structure, facilitation moves, and timing that make a tool actually work for learning. This is pedagogical design, not code.
- Strategic judgment — knowing when 60 minutes isn’t enough, when a research question needs scoping before building, when to say no.
- The trust layer — collaborators invite me into their projects because of repeated interactions where I showed good judgment. You can’t hand someone a codebase and transfer that trust.
The building is the easy part. The seeing, the timing, and the design are the hard parts.
Evidence Chains
Each chain shows how an operating mode (Layer 1: why me) produces an outcome (Layer 2: what the role produces).
Chain 1: Format Design → Formats Travel → Capacity Building
I design the learning experience (not just the tool) → the format is documented as a toolkit → other people pick it up and run it → it works in contexts I never planned for
Conceptual: The format designer’s value is proven when the format survives without the designer. Q1 evidence: Courtney completed Build-a-Bot training, then took the initiative to register and deliver it at TCEA on her own. Anna-Lena adapted Flash Lab for a Paris research audience — 6 projects emerged.
Chain 2: Strategic Filter → Right Format → Higher Leverage
An opportunity arrives → I recognize the easy version won’t produce real learning → we hold out for a structure that actually works → the slower path creates more durable value
Conceptual: The instinct to hold out for the right format rather than accept the easy ask is the judgment call. Q1 evidence: ISTE proposes 60-min session → Josh and I pause → scope train-the-trainer instead → ISTE now wants to train 3 internal faculty. The top-of-funnel framing (Feb 17) positions Flash Lab to feed PLEX, Challenge Success, youcubed — not compete with them.
Chain 3: Prototype → Evidence → Iteration
Researcher describes a vision → I build a working prototype they can put in front of learners → prototype refines the research question → the thing that exists changes what they ask next
Conceptual: The builder’s value isn’t the code — it’s knowing what to build and when it’s ready for learners. Q1 evidence: ABCs bots went from concept to multi-perspective tool to oncology faculty training in ~2 weeks. Colombia chatbot → 4th graders using it → data exported for Ana’s research.
Chain 4: Sounding Board → Trust → Deeper Modes
Show up to check-ins, ask useful questions → collaborator finds value in outside perspective → trust builds → relationship shifts to co-creation or building when the moment is right
Conceptual: The sounding board is the pipeline, not the impact. Q1 evidence: Karin (Makery): “really nice to talk things through with us.” Gregory Wilson → Tinkery toolkit collaboration emerged from check-in relationship. Dr. Shariffa → special ed whitepaper thread grew from a single meeting.
Chain 5: Cross-Pollination → Ecosystem Feeding → Organizational Value
Our offerings bring people from different programs into the same room → participants discover other GSE resources → our work drives numbers for programs we don’t run
Conceptual: The role multiplies rather than adds. Every session is inbound marketing for the ecosystem. Q1 evidence: Legal 101 brought SAL + tEquity + Create+AI together for the first time. Flash Lab positioned as on-ramp to CSET/PLEX, Challenge Success, youcubed, PACE. (Referral tracking needs baseline — new outcome indicator.)
Defensibility Roadmap
In the age of AI, the building is the easy part. The question is: what constraints still exist that I can remove?
What’s Vulnerable
| Mode | Risk | Timeline |
|---|---|---|
| Builder | AI coding tools make prototyping increasingly accessible to non-technical people. “I can build you a chatbot” is a shrinking advantage. | 1-2 years |
| Sounding Board | Too vague to be a job description on its own. “I’m a good thinking partner” isn’t defensible unless it leads to concrete outcomes. | Now |
| Thoroughness work | Note-taking, status tracking, check-in meetings — this is exactly what AI agents are getting good at. | 6-12 months |
What’s Durable
| Mode | Why It Holds | Example |
|---|---|---|
| Format Designer | Designing the learning experience around a tool is pedagogical design, not code. Workshop structure, facilitation timing, when to let people struggle vs. intervene — this is hard-won and not automatable. | Flash Lab, Build-a-Bot workshop |
| Strategic Filter | Knowing when 60 minutes isn’t enough, when a research question needs scoping before building, when to walk away. Requires judgment about learning thresholds that comes from being in the room hundreds of times. | ISTE withdrawal → train-the-trainer pivot |
| System Designer | Designing the systems that let other people build and experiment at scale. Not “I build for you” but “here’s how you build for yourself.” | Flash Lab toolkit (designkit.stanford.edu), bot101.app |
The Next Move: What Constraints Can I Remove?
The winning strategy isn’t protecting current work — it’s spotting what just became possible and building value around that new capability.
Constraint 1: Researchers can’t go from idea to pilot fast enough. They have hypotheses about AI and learning, but the friction of building, testing, and iterating is too high. I’m partially solving this (Health Coach Bot), but one project at a time. Could I systematize it? A “Prototype Sprint” format — like Flash Lab but for research prototyping. Researchers come in with a question, leave with a testable artifact. Not “Reuben builds it for you” but “here’s the framework to build it yourself in 2 weeks.”
Constraint 2: Faculty don’t know what’s possible. They can’t take advantage of constraints they don’t know have disappeared. Could I build a format for research ideation — faculty bring their research question, leave with a scoped prototype plan and an understanding of what AI makes newly testable?
Constraint 3: No cross-pollination across experiments. Marily is learning about health coaching bots. Forssell is learning about AI vs. human instruction. These experiments run in parallel with no synthesis across the portfolio. The person who spots patterns across 15+ AI+education experiments and surfaces insights no individual project would see — that’s a judgment-layer role that requires being in all the rooms. Which I already am.
The One-Line Strategy
You build things — but you’re also the person who designs the systems that let other people build and experiment at scale.
Flash Lab and Build-a-Bot already are this. The question is: what’s the next one?
Last updated: Feb 17, 2026 — Added Layer 2 (role-based outputs/outcomes/impact tied to annual goals and Q1 evidence)