Personal Logic Model

What actually happens differently because I'm the one doing this work?

Standalone View

What happens differently because I'm here?

The answer is the evidence below — and it's evolving.
Inputs
What I Bring
Technical fluency
AI, chatbots, VR/MR, web dev — can build working things, not just spec them
Instructional design
Designing learning experiences end-to-end — from objectives through assessment, not just content delivery
Design thinking
Structured facilitation, not just brainstorming
Network position
Trusted seat at the table between faculty, researchers, students, and external partners
Replicable formats
Flash Lab & Build-a-Bot as portable toolkits others can run
Pedagogical judgment
Know the learning threshold — when a format works and when it doesn't
Activities
What I Do
Rapid prototyping
Turn hypotheses into testable artifacts in days, not months (Health Coach Bot, Colombia chatbot)
Format design
Design the learning experience around the tool — the 80% that isn't code
Workshop facilitation
Design and deliver hands-on workshops for researchers, faculty, and staff — 15+ since joining Stanford
Strategic filtering
Say no to low-leverage asks, pivot to higher-leverage versions
Sounding board
Be the outside mirror — help collaborators think through ideas without a stake in their outcome
Relationship triage
Match investment to readiness — build where ripe, hold space where not
Outputs
What Gets Produced
Working prototypes
Turn research questions into testable artifacts (Health Coach Bot, Colombia chatbot, ABCs bots)
Workshop toolkits
Replicable formats others can run (designkit.stanford.edu, bot101.app)
Trained facilitators
People who can run sessions without me
Clearer thinking
Collaborators with refined questions and next steps after talking things through
Partnership pivots
Misfit or shallow relationships redirected to higher-leverage structures
— Q1: Workshops
Oman Hackathon, Legal 101, Teaching w/ AI Share-out
— Q1: Facilitators independent
Courtney (TCEA), Anna-Lena (Paris), Cathy (oncology)
— Q1: Pipelines
ISTE train-the-trainer (3 faculty), designkit.stanford.edu live, Tinkery extension
Outcomes
What Changes
Faster evidence cycles
Concept → tested with learners in weeks, not years
Educators with hands-on experience
They build with AI, not just hear about it
Internal capacity
Partners learn to run their own sessions rather than depend on me
Ideas survive the gap
Prototypes and conversations keep momentum between grant cycles
— Q1: Formats travel
Courtney ran Build-a-Bot at TCEA. Anna-Lena adapted Flash Lab for Paris — 6 projects. Cathy trained oncology faculty.
— Q1: New rooms open
Legal 101 connected 3 cohorts. Oman → special ed thread. ISTE = external demand signal.
— Q1: Ideas tested faster
ABCs bots: concept → faculty using it in ~2 weeks. Colombia chatbot → data exported for research.
— Q1: Ecosystem feeding
Flash Lab as top-of-funnel → referrals to CSET/PLEX, Challenge Success, youcubed, PACE.
Impact
Why It Matters
More experimentation
More ideas about learning actually get tested. More pilot data backing proposals. A growing network of people who've built, not just watched.
For the organization
An on-ramp that feeds the ecosystem. Users, reach, facilitators trained, sessions by others, inbound to GSE programs. A role that multiplies rather than adds.

Operating Modes

I don't do the same thing on every project. The mode depends on what the collaboration needs.

Format Designer

Design the learning experience around the tool — the structure, timing, and facilitation that make it work

e.g. Flash Lab, Build-a-Bot workshop
High — hard-won pedagogical design, not replicable by AI coding tools
Strategic Filter

Decide what to say no to, what to pivot, what's ripe enough to invest in

e.g. ISTE — holding out for the right format over the easy ask
High — requires judgment about learning thresholds and partnership readiness
Builder

Turn someone else's research question into a working thing to put in front of learners

e.g. Health Coach Bot, Colombia chatbot, AI Comic Studio
Medium — the building is increasingly replicable; the knowing-what-to-build is not
Sounding Board

Be the outside mirror — understand both the tech and the pedagogy, no stake in the outcome

e.g. Makery check-ins, DeVeaux conversations
Lower on its own, but it builds the trust that enables the other modes

Evidence Chains

Format Design → Formats Travel → Capacity Building
1 I design the learning experience (not just the tool) → documented as a toolkit
2 Other people pick it up and run it independently
3 It works in contexts I never planned for
4 Q1: Courtney ran Build-a-Bot at TCEA. Anna-Lena adapted Flash Lab for Paris — 6 projects emerged. Cathy trained oncology faculty with ABCs bots.
Strategic Filter → Right Format → Higher Leverage
1 An opportunity arrives (ISTE proposes 60-min session)
2 I recognize the easy version won't produce real learning
3 We hold out for a structure that actually works (train-the-trainer)
4 Q1: ISTE now wants to train 3 internal faculty. Top-of-funnel framing positions Flash Lab to feed PLEX, Challenge Success, youcubed — not compete.
Prototype → Evidence → Iteration
1 Researcher describes a vision → I build a working prototype
2 Prototype goes in front of learners → refines the research question
3 Q1: ABCs bots concept → oncology faculty training in ~2 weeks. Colombia chatbot → 4th graders → data exported for research.
Sounding Board → Trust → Deeper Modes
1 Show up to check-ins, ask useful questions, take notes
2 Collaborator finds value in outside perspective → trust builds
3 Relationship shifts to co-creation or building when the moment is right
4 Q1: Gregory Wilson → Tinkery collaboration emerged from check-ins. Dr. Shariffa → special ed whitepaper grew from one meeting.
Cross-Pollination → Ecosystem Feeding → Organizational Value
1 Our offerings bring people from different programs into the same room
2 Participants discover other GSE resources through our sessions
3 Our work drives numbers for programs we don't run
4 Q1: Legal 101 connected SAL + tEquity + Create+AI for the first time. Flash Lab on-ramp → referrals to CSET/PLEX, Challenge Success, youcubed, PACE.
The Absence Test
What wouldn't happen if I weren't here?
Prototypes wouldn't exist yet
Health coach, Colombia chatbot — still paragraphs in grant proposals, not things in front of learners
Flash Lab wouldn't be replicable
Would be "that workshop someone did once" instead of a toolkit at designkit.stanford.edu that others can run
Format decisions wouldn't get made
Nobody says "60 minutes isn't enough for real learning," so the easy ask gets accepted and the high-leverage version never happens
Collaborators lose their outside mirror
Researchers and faculty talk to their own teams but miss the perspective of someone who understands tech + learning without a stake in the research outcome

Defensibility Roadmap

In the age of AI, the building is the easy part. The question is: what constraints still exist that I can remove?

What's Vulnerable
Mode | Risk | Timeline
Builder | AI coding tools make prototyping accessible to non-technical people. "I can build you a chatbot" is a shrinking advantage. | 1-2 years
Sounding Board | Too vague to be a job description on its own. Only defensible if it leads to concrete outcomes or enables other modes. | Now
Thoroughness work | Note-taking, status tracking, check-in meetings — exactly what AI agents are getting good at. | 6-12 months
What's Durable
Mode | Why It Holds
Format Designer | Workshop structure, facilitation timing, when to let people struggle vs. intervene. Pedagogical design, not code. Hard-won through hundreds of sessions.
Strategic Filter | Knowing when 60 minutes isn't enough, when to walk away, when a research question needs scoping before building. Judgment about learning thresholds.
System Designer | Designing systems that let OTHER people build and experiment at scale. Not "I build for you" but "here's how you build for yourself." Flash Lab and bot101.app already embody this.
Next Move: What Constraints Can I Remove?
The winning strategy: spot what just became possible, build value around that new capability
Idea → Pilot is too slow

Researchers have hypotheses, but the friction of building and testing is too high. I'm currently solving this one project at a time (Health Coach Bot).

The move: Systematize it. A "Prototype Sprint" format — researchers come in with a question, leave with a testable artifact. Flash Lab but for research prototyping.
Faculty don't know what's possible

They can't remove constraints they don't know have disappeared. I see this every day in meetings.

The move: Build a format for research ideation — faculty bring their question, leave with a scoped prototype plan and understanding of what AI makes newly testable.
No cross-pollination across experiments

Fan/Chu, Marily, Forssell all learning about AI+learning in parallel. No synthesis across the portfolio.

The move: Be the person who spots patterns across 15+ experiments and surfaces insights no individual project would see. Pure judgment-layer work. Requires being in all the rooms — which I already am.
Source: reference/personal-logic-model.md