Tag Analysis Plan — 3-Month Rollout

For: AS Quarterly Planning | April 8, 2026 | Lead: Reuben Thiessen


The Problem

Our 20-tag system captures what kind of value we create — but the data isn’t reliable yet. After an AI-assisted analysis of all 140 Airtable records:

  • STUDIO-BUILD appears on 70% of records — that may accurately reflect our portfolio (building is what we do), but it’s worth confirming: is every use precise, or has it become a reflexive default?
  • 3 official tags have never been used: SPARK, EQUITY-WIN, SYSTEM-SHIFT
  • 4 unofficial tags are actively in use: ADVISE (33x), BOOST (14x), CONFIDENCE-SHIFT, TRUST-MARKER — never formalized
  • 7 records are completely untagged
  • Capacity-building and impact tags are underrepresented — are we doing that work and not tagging it, or is it genuinely a smaller slice?

We agreed in March that we want to be “dead confident in the data by July.” Here’s how.


Current Tag Landscape (140 records)

| Tier | Tags | Count Range |
|------|------|-------------|
| Core | STUDIO-BUILD (98), CO-CREATE (64) | 60-100 |
| Active | BRIDGE-BUILD (36), ADVISE* (33), VALUE-VISIBLE (32), MULTIPLIER (27), TOOLBOX (24), THOUGHT-PARTNER (21) | 20-40 |
| Moderate | BOOST* (14), LEVEL-UP (12), RIPPLE (11) | 10-15 |
| Rare | ALIGN (7), SCALE-MOMENT (7), SOUNDING-BOARD (7), NUGGET (6), ANCHOR (3), REFRAME (3) | 3-7 |
| Ghost | VULNERABLE-SHARE (2), BREAKTHROUGH (1), CHECK-IN (1) | 1-2 |
| Never used | SPARK, EQUITY-WIN, SYSTEM-SHIFT | 0 |

* Not on the official 20-tag list but actively used in data
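The tier groupings above can be reproduced mechanically. Here is a minimal sketch that buckets the observed counts into the tier labels used in this doc; the threshold cutoffs are assumptions inferred from the Count Range column, not an official definition:

```python
# Tag usage counts copied from the tier table above.
TAG_COUNTS = {
    "STUDIO-BUILD": 98, "CO-CREATE": 64, "BRIDGE-BUILD": 36, "ADVISE": 33,
    "VALUE-VISIBLE": 32, "MULTIPLIER": 27, "TOOLBOX": 24, "THOUGHT-PARTNER": 21,
    "BOOST": 14, "LEVEL-UP": 12, "RIPPLE": 11, "ALIGN": 7, "SCALE-MOMENT": 7,
    "SOUNDING-BOARD": 7, "NUGGET": 6, "ANCHOR": 3, "REFRAME": 3,
    "VULNERABLE-SHARE": 2, "BREAKTHROUGH": 1, "CHECK-IN": 1,
    "SPARK": 0, "EQUITY-WIN": 0, "SYSTEM-SHIFT": 0,
}

def tier(count: int) -> str:
    """Map a usage count to a tier label (cutoffs are assumptions)."""
    if count >= 60:
        return "Core"
    if count >= 20:
        return "Active"
    if count >= 10:
        return "Moderate"
    if count >= 3:
        return "Rare"
    if count >= 1:
        return "Ghost"
    return "Never used"

# Group tags by tier, most-used first.
tiers: dict[str, list[str]] = {}
for tag, n in sorted(TAG_COUNTS.items(), key=lambda kv: -kv[1]):
    tiers.setdefault(tier(n), []).append(tag)
```

Rerunning this after the Phase 2 retag pass would show how the tiers shift once the data is clean.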


How Tags Map to the Handbook

The handbook organizes our 20 tags into three OKR categories. Some tags appear in multiple categories — they’re the “connective tissue” tags. Here’s what the Airtable data looks like through that lens:

| OKR Category | Handbook Tags | What the Data Shows |
|--------------|---------------|---------------------|
| R&D | NUGGET, TOOLBOX, STUDIO-BUILD, VALUE-VISIBLE, REFRAME, ANCHOR, SPARK, BRIDGE-BUILD, BREAKTHROUGH, EQUITY-WIN, SYSTEM-SHIFT, CO-CREATE | Dominant. STUDIO-BUILD (98) and CO-CREATE (64) drive this. But 3 R&D tags are never used (SPARK, EQUITY-WIN, SYSTEM-SHIFT) and BREAKTHROUGH has only 1 use. |
| Capacity-building | LEVEL-UP, MULTIPLIER, VALUE-VISIBLE, TOOLBOX, SPARK, BRIDGE-BUILD, SCALE-MOMENT, BREAKTHROUGH, RIPPLE | Underrepresented relative to R&D. MULTIPLIER (27) is healthy; Gregory, Courtney, and ISTE TTT all evidence this. But LEVEL-UP (12) feels low given how much teaching/training we do. |
| Engagement | VALUE-VISIBLE, REFRAME, BRIDGE-BUILD, THOUGHT-PARTNER, CHECK-IN, ANCHOR, SOUNDING-BOARD, VULNERABLE-SHARE, ALIGN, BREAKTHROUGH | Weakest category. CHECK-IN (1), SOUNDING-BOARD (7), VULNERABLE-SHARE (2). Yet we do this work (seed grant check-ins, 1:1 thought-partnering); we're just not tagging it. |

The unofficial tags fill real gaps:

  • ADVISE (33 uses) — not in the handbook, but describes something distinct from THOUGHT-PARTNER or NUGGET. When we provide expert guidance within an ongoing relationship, that’s ADVISE. The handbook’s Design cycle (“advising on roadmaps to impact”) even uses this word.
  • BOOST (14 uses) — the handbook’s purpose statement literally says “We provide specialized boosts that mobilize promising research at critical moments.” The word is in our DNA but not in our codebook.

The handbook’s Implementation Protocol says we should:

  • Weekly: share 1-2 moments, assign tags collectively
  • Monthly: count tag frequency, identify patterns (e.g., BREAKTHROUGH often follows VULNERABLE-SHARE)
  • Quarterly: select 3-5 strongest examples per category

We’re not doing the weekly or monthly cadence yet. The quarterly step is what we’re starting tomorrow.

A March 2026 amendment added a three-part focus to monthly meetings: 1) the “what,” 2) the “how,” 3) the “so what.” Tags are the system that makes the “so what” answerable.


3-Month Rollout

Phase 1: Baseline & Calibration (April)

Goal: Understand where we are and align on what tags mean.

  • Top 3 / Bottom 3 exercise — Each team member picks the 3 tags they use most and 3 they never use. Compare. Are we interpreting them the same way?
  • Resolve unofficial tags — ADVISE and BOOST have 47 combined uses. Decide: add to official list, or map to existing tags?
  • Review ghost/unused tags — Do SPARK, EQUITY-WIN, SYSTEM-SHIFT describe work we’re doing but not tagging? Or are they aspirational?
  • Publish updated tag definitions — One shared reference with examples from our own work, not hypotheticals.

Output: Revised tag list + shared definitions doc

Phase 2: Retag & Test (May–June)

Goal: Clean the historical data and test the AI-assisted methodology.

  • AI-assisted retag pass — Run all 140 records through AI analysis with the updated definitions. Generate suggested tags for each.
  • Manual review — Each team member reviews AI suggestions for their projects. Accept, reject, or adjust.
  • Methodology check — Is AI tagging saving time? Is it accurate enough? Document what’s working.
  • Tag the 7 untagged records — Quick cleanup.
  • Monthly discipline — Tag new entries at creation, not retroactively.

Output: Clean dataset + methodology assessment
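The retag pass above is a two-step loop: AI proposes, a human disposes. A minimal sketch of that workflow follows; the record shape, `suggest_tags` stub, and abbreviated tag list are all placeholders for whatever we actually settle on in Phase 2, not a real Airtable or model API:

```python
from dataclasses import dataclass, field

# Abbreviated stand-in for the full official tag list.
OFFICIAL_TAGS = {"STUDIO-BUILD", "CO-CREATE", "ADVISE", "BOOST", "CHECK-IN"}

@dataclass
class Record:
    id: str
    summary: str
    tags: list[str] = field(default_factory=list)       # current tags
    suggested: list[str] = field(default_factory=list)  # AI proposals
    status: str = "pending"                             # pending -> reviewed

def suggest_tags(record: Record) -> list[str]:
    """Placeholder for the AI-assist step.

    Whatever the model returns, we only keep proposals that appear on
    the official list, so hallucinated tags never reach the reviewer.
    """
    raw = ["STUDIO-BUILD", "NOT-A-TAG"]  # pretend model output
    return [t for t in raw if t in OFFICIAL_TAGS]

def review(record: Record, accept: set[str], add: set[str] = frozenset()) -> Record:
    """Human step: keep accepted suggestions, optionally add others."""
    record.tags = sorted(set(record.tags) | (set(record.suggested) & accept) | add)
    record.status = "reviewed"
    return record
```

The design point worth keeping regardless of tooling: the AI step only ever writes to `suggested`, never to `tags`; only the human `review` step commits.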

Phase 3: Confident Data (July)

Goal: Trust the data enough to report on it.

  • Final validation — Spot-check 20 records across all three team members.
  • Lock methodology — Document the tagging workflow (AI-suggest → human-verify) for ongoing use.
  • First real analysis — Tag frequency trends, patterns across projects, evidence chains for each OKR type.
  • Team alignment check — Are we all confident? If not, what’s still fuzzy?

Output: Reliable data + repeatable process + first analytical report
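For the Phase 3 spot-check, one repeatable way to draw the 20 records and split them across the three reviewers is a seeded sample dealt round-robin. A sketch, with reviewer names and the fixed seed as assumptions:

```python
import random

def assign_spot_checks(record_ids: list[str], reviewers: list[str],
                       n: int = 20, seed: int = 42) -> dict[str, list[str]]:
    """Sample n records and deal them round-robin to the reviewers."""
    rng = random.Random(seed)  # fixed seed so the draw is repeatable
    picks = rng.sample(record_ids, min(n, len(record_ids)))
    return {rev: picks[i::len(reviewers)] for i, rev in enumerate(reviewers)}
```

Seeding means anyone can regenerate the same 20-record sample later to audit the audit.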


Discussion Questions for Tomorrow

  1. ADVISE and BOOST — Should we add them to the official list? ADVISE has 33 uses — it clearly describes something we do. BOOST has 14.
  2. STUDIO-BUILD at 70% — Does this accurately reflect our portfolio mix, or are some of those entries better described by a more specific tag? Worth a spot-check.
  3. Never-used tags — Are SPARK, EQUITY-WIN, and SYSTEM-SHIFT describing work we’re not doing or work we’re not recognizing?
  4. Who reviews what? — For Phase 2, should each person review only their own projects, or cross-review for calibration?
  5. Cadence — Monthly check-in on tag health, or just a July checkpoint?

Connection to Goal 1

Team Communication Architecture & Values: 75%+ seed grantees with 3+ tags

Currently hard to measure because the data isn’t reliable. This plan gets us there. Once we trust the tags, we can:

  • Track engagement depth per project
  • Spot cooling relationships early
  • Build evidence chains (tag sequences that tell a story)
  • Report with confidence to Isabelle and stakeholders
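Once the data is trustworthy, the Goal 1 metric itself is a one-liner to compute. A sketch, where the record shape and sample data are assumptions (real records would come from the Airtable export):

```python
def pct_with_min_tags(records: list[dict], min_tags: int = 3) -> float:
    """Percent of records carrying at least `min_tags` tags (0 if empty)."""
    if not records:
        return 0.0
    hits = sum(1 for r in records if len(r.get("tags", [])) >= min_tags)
    return 100.0 * hits / len(records)

# Hypothetical seed-grantee records for illustration only.
sample = [
    {"id": "g1", "tags": ["STUDIO-BUILD", "CO-CREATE", "ADVISE"]},
    {"id": "g2", "tags": ["BOOST"]},
    {"id": "g3", "tags": []},
    {"id": "g4", "tags": ["CO-CREATE", "TOOLBOX", "RIPPLE", "LEVEL-UP"]},
]
```

Here `pct_with_min_tags(sample)` is 50.0, short of the 75% target; running the same check on the cleaned dataset in July gives the actual number to report.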

Prepared with AI-assisted analysis of 140 AC:DE Macro Manager Layer records (Airtable, April 7, 2026)

Source: prep/2026-04-08-tag-analysis-plan.md