30 Days of AI Collaboration

April 2026 · Tractor & Silo + PKM Vault + Command Center + Lens · 4 AI tools
Sessions: 466
Turns: 55,224
Decisions: 1,772
Topics tracked: 4,447
Avg sessions / day: 15.5
Avg turns / session: 118.5
Avg decisions / session: 3.8
Friction-heavy rate: 45.9%
Topics matured: 53.0%
With tech metadata: 74.5%

The shape of April

Daily session volume across the month. The shape reflects real work rhythm: April 28's 30 sessions (1.9× the daily average) align with launch-week density on Silo iOS. The April 30 spike is a Gemini ingestion batch with shorter sessions, not a real day of work; the April 22 dip is a known travel day.

Sessions per day

Date        Sessions   Turns  Decisions  Friction-heavy
2026-04-01         1     100          4               1
2026-04-02         7   1,534         36               4
2026-04-03        24   3,669         62               8
2026-04-04         8   1,043         42               8
2026-04-05         5   1,021         27               5
2026-04-06         9     496         37               4
2026-04-07         6   1,786         34               5
2026-04-08        11     882         40               7
2026-04-09        25   3,862        109              16
2026-04-10        21     919         53               6
2026-04-11        13   2,582         59               7
2026-04-12        19   2,731         81              12
2026-04-13        24   3,776         86              13
2026-04-14         4     818         12               2
2026-04-15        11   1,178         37               5
2026-04-16        20   3,335         76              11
2026-04-17        12   1,045         42               4
2026-04-18        24   4,113        120              17
2026-04-19         5     891         31               5
2026-04-20         7   2,792         39               5
2026-04-21         4     491         17               2
2026-04-22         1       2          2               0
2026-04-23        10   1,458         42               5
2026-04-24         8   1,220         22               3
2026-04-25         6   2,125         30               4
2026-04-26         2      68         11               0
2026-04-27        13     533         31               4
2026-04-28        30   7,599        161              22
2026-04-29        12   1,696         52               9
2026-04-30       124   1,459        377              20
Bar height = session count. Color: orange = 60+ sessions (batch artifact), amber = 25+, cream = 6-24, gray = ≤5.

Tools and projects · friction signature

Friction-heavy rate by tool and by project. Claude Code is the dominant client; Gemini CLI runs at much lower friction (shorter sessions, lighter scope). The project view shows where the heat lived: Silo absorbed the most friction-heavy sessions (81 of 135), driven by iOS launch-week complexity.

Claude Code · 59.3% friction-heavy
Sessions: 324 · Turns: 53,302 · Decisions: 1,361

Gemini CLI · 15.6% friction-heavy
Sessions: 141 · Turns: 1,918 · Decisions: 410

Codex · 0.0% friction-heavy
Sessions: 1 · Turns: 4 · Decisions: 1
Per-project breakdown: faded bar = total sessions; solid bar = friction-heavy subset. Number = friction-heavy / total.
PKM Vault: 23/136
Silo: 81/135
Command Center: 55/97
Lens: 25/36
Slideshow Gen: 8/19
Almanac: 8/12
thedetech: 5/8
Finance: 4/6
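The per-tool and per-project figures above are simple ratios over per-session records. A minimal sketch of that aggregation, assuming each session is a dict with hypothetical `tool`, `project`, and `friction_heavy` fields (the real schema may differ):

```python
from collections import defaultdict

def friction_rates(sessions, key):
    """Group sessions by `key` and return (heavy, total, rate) per group."""
    heavy = defaultdict(int)
    total = defaultdict(int)
    for s in sessions:
        total[s[key]] += 1
        if s["friction_heavy"]:
            heavy[s[key]] += 1
    return {k: (heavy[k], total[k], heavy[k] / total[k]) for k in total}

# Toy data: two Silo sessions (one friction-heavy), one Lens session.
sessions = [
    {"tool": "Claude Code", "project": "Silo", "friction_heavy": True},
    {"tool": "Claude Code", "project": "Silo", "friction_heavy": False},
    {"tool": "Gemini CLI", "project": "Lens", "friction_heavy": False},
]
print(friction_rates(sessions, "project")["Silo"])  # (1, 2, 0.5)
```

The same function serves both views: `key="tool"` yields the tool signature, `key="project"` the project one.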

Session length · the long tail absorbs the time

Distribution of sessions by turn count. Most sessions are short and tractable. The rose-colored long tail is the friction zone: 140 sessions (30.0%) ran past 100 turns and absorbed 86.3% of total dialogue. The audit's correlation between high turn count and architectural drift lives in those bars.

Sessions by turn-count bucket

Bucket    Sessions  Total turns
1-5            166          412
6-10            18          144
11-25           35          588
26-50           50        1,806
51-100          56        4,388
101-250         69       10,780
251+            71       35,365
Rose = friction zone (over 100 turns).
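The histogram above can be reproduced by mapping each session's turn count onto these bucket boundaries; a small sketch using binary search over the bucket upper bounds:

```python
from bisect import bisect_left
from collections import Counter

# Upper bounds of the report's buckets; anything past 250 lands in "251+".
BOUNDS = [5, 10, 25, 50, 100, 250]
LABELS = ["1-5", "6-10", "11-25", "26-50", "51-100", "101-250", "251+"]

def bucket(turns):
    """Map a session's turn count to its histogram bucket label."""
    return LABELS[bisect_left(BOUNDS, turns)]

# Tally a few hypothetical sessions into buckets.
hist = Counter(bucket(t) for t in [3, 7, 120, 300, 45])
print(hist["101-250"])  # 1
```

`bisect_left` keeps the boundary cases on the right side: a 100-turn session falls in "51-100", a 101-turn session in "101-250", matching the friction-zone cutoff of over 100 turns.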

Token economics · partial coverage

Token accounting was wired into Claude Code mid-month, so this view is partial: 225 of 466 sessions have token data, and the patterns from those don't generalize to the whole corpus. Useful as an order-of-magnitude baseline.

Tokens in: 6,430,831
Tokens out: 230,548
Tokens / turn (avg): 300
Tokens / decision (avg): 8,327
Subset of 225 sessions; total turns in subset: 22,179; total decisions: 800.
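The two averages divide tokens by the subset's turn and decision counts; a quick check of the arithmetic, assuming input and output tokens are pooled (which these figures are consistent with):

```python
# Figures from the cards above; subset of 225 sessions.
tokens_in, tokens_out = 6_430_831, 230_548
subset_turns, subset_decisions = 22_179, 800

total = tokens_in + tokens_out          # 6,661,379 combined tokens
print(round(total / subset_turns))      # 300 tokens per turn
print(round(total / subset_decisions))  # 8327 tokens per decision
```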

The compounding curve

Cumulative decisions (orange) and sessions (cyan) across April. The curve is roughly linear but accelerates after April 9, the inflection point where launch-readiness work began stacking on top of normal daily output. Decisions accumulated faster than sessions did, which is the compounding effect: the Vault as input made each successive session more decision-dense.

[Chart: cumulative decisions and cumulative sessions, 04-01 through 04-30]
Solid orange = cumulative decisions (left axis). Dashed cyan = cumulative sessions (right axis).

Recommendation → decision conversion rate

When the AI made a recommendation, how often did it become a recorded decision in the same session? thedetech ran at 128.1% while Finance sat at 36.0%. Conversion can be misleading: a rate over 100% means more decisions were recorded than recommendations issued, i.e. the human was leading, reaching decisions faster than the AI could propose them; a low rate means recommendations stacked up without commitment.

Command Center: 49.0%
Silo: 85.9%
PKM Vault: 72.4%
Lens: 115.1%
Slideshow Gen: 67.0%
Finance: 36.0%
Almanac: 80.4%
thedetech: 128.1%
Conversion rate = decisions / recommendations across all sessions per project. Bars colored by project semantic palette and capped visually at 100%.
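The caption's formula is a straight ratio per project; a minimal sketch, with hypothetical counts chosen to reproduce the 128.1% figure (the real decision and recommendation totals are not shown above):

```python
def conversion_rate(decisions, recommendations):
    """Decisions recorded per recommendation issued; > 1.0 means the human led."""
    if recommendations == 0:
        return float("nan")
    return decisions / recommendations

# Hypothetical counts: 41 decisions against 32 recommendations.
print(f"{conversion_rate(41, 32):.1%}")  # 128.1%
```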

Open questions atlas · the knowledge boundary

868 open questions across April, classified by theme. The largest categories are where AI consistently fails to close the loop: strategy/roadmap calls that need taste, architectural decisions that need codebase context, and debugging questions that require live system access the model doesn't have. These are the human's job, still.

Other / Unclassified: 535 · 61.6%
Strategy/Roadmap: 129 · 14.9%
UX/Product: 50 · 5.8%
Business/Pricing: 47 · 5.4%
Architecture: 41 · 4.7%
Performance/Scale: 28 · 3.2%
Security/Auth: 23 · 2.6%
Debugging/Root cause: 15 · 1.7%
Keyword classification of 868 open questions captured by the Librarian. Questions with no matching keyword fall into "Other / Unclassified."
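A keyword classifier like this can be sketched as a first-match lookup over per-theme keyword lists. The themes and keywords below are hypothetical stand-ins; the Librarian's real lists are not shown above:

```python
# Hypothetical theme → keyword map; the real lists are larger.
THEMES = {
    "Strategy/Roadmap": ("roadmap", "priorit", "launch"),
    "Architecture": ("architecture", "schema", "refactor"),
    "Debugging/Root cause": ("bug", "crash", "root cause"),
}

def classify(question):
    """Return the first theme whose keywords match; else the catch-all bucket."""
    q = question.lower()
    for theme, keywords in THEMES.items():
        if any(k in q for k in keywords):
            return theme
    return "Other / Unclassified"

print(classify("Should we refactor the sync schema?"))  # Architecture
print(classify("Is the onboarding copy right?"))        # Other / Unclassified
```

A design note implicit in the 61.6% catch-all share: substring matching only classifies what the keyword lists anticipate, so the "Other / Unclassified" bucket inevitably dominates.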

Top 12 technologies by session-mention

Granular tool mentions, deduplicated within each session. Git tops the list at 48.7% of sessions with tech metadata: nearly half the work touched version control in some way. The Rails-Swift dual gravity is real and almost evenly split. The local-AI stack (Gemini CLI, LM Studio, Ollama, Claude Code) collectively shows up across more sessions than any single language.

Git: 169 · 48.7%
Ruby on Rails: 103 · 29.7%
Swift: 100 · 28.8%
Bash: 77 · 22.2%
GitHub: 72 · 20.7%
Python: 70 · 20.2%
RevenueCat: 69 · 19.9%
GitHub CLI: 66 · 19.0%
Markdown: 56 · 16.1%
Gemini CLI: 55 · 15.9%
Heroku: 55 · 15.9%
LM Studio: 54 · 15.6%
Color: workflow (orange), languages and frameworks (amber), build tools and formats (cream), infrastructure (teal), AI tooling (indigo). Counted on the 74.5% of sessions where the Librarian captured a technologies_discussed field.
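"Deduplicated within each session" means a technology counts once per session no matter how often it comes up. A minimal sketch, assuming each session carries a hypothetical technologies_discussed list:

```python
from collections import Counter

def mention_counts(sessions):
    """Count each technology once per session (set-dedup), then tally."""
    counts = Counter()
    for techs in sessions:
        counts.update(set(techs))  # set() collapses repeats within a session
    return counts

sessions = [
    ["Git", "Git", "Swift"],   # duplicate Git mention counts once
    ["Git", "Ruby on Rails"],
]
print(mention_counts(sessions)["Git"])  # 2, not 3
```

Dividing each count by the number of metadata-bearing sessions (the 74.5% subset) gives the percentages shown.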