April 2026 · Tractor & Silo + PKM Vault + Command Center + Lens · 4 AI tools
466
Sessions
55,224
Turns
1,772
Decisions
4,447
Topics tracked
15.5
Avg sessions / day
118.5
Avg turns / session
3.8
Avg decisions / session
45.9%
Friction-heavy rate
53.0%
Topics matured
74.5%
With tech metadata
The shape of April
Daily session volume across the month. The shape reflects a real work rhythm: April 28's 30 sessions (1.9× the daily average) align with launch-week density on Silo iOS. The April 30 spike is a Gemini ingestion batch of shorter sessions, not a real day of work; the April 22 dip is a known travel day.
Bar height = session count. Color: orange = 60+ (batch artifact), amber = 25+, cream = 6-24, gray = ≤5. Hover any bar for the day's full numbers.
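The color bucketing in the legend can be sketched as a simple threshold function; the thresholds match the legend above, and the function name is illustrative:

```python
def bar_color(session_count: int) -> str:
    """Map a day's session count to its chart color bucket."""
    if session_count >= 60:
        return "orange"   # 60+: batch artifact
    if session_count >= 25:
        return "amber"    # 25-59: heavy day
    if session_count >= 6:
        return "cream"    # 6-24: typical day
    return "gray"         # <= 5: light day

print(bar_color(30))  # launch-week day -> amber
```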
Tools and projects · friction signature
Friction-heavy rate by tool and by project. Claude Code is the dominant client; Gemini CLI runs at much lower friction (shorter sessions, lighter scope). The project view shows where the heat lived: Silo absorbed the most friction-heavy sessions (81 of 135), driven by iOS launch-week complexity.
Claude Code
59.3%
Sessions 324
Turns 53,302
Decisions 1,361
Gemini CLI
15.6%
Sessions 141
Turns 1,918
Decisions 410
Codex
0.0%
Sessions 1
Turns 4
Decisions 1
Per-project breakdown: faded bar = total sessions; solid bar = friction-heavy subset. Number = friction-heavy / total.
PKM Vault
23/136
Silo
81/135
Command Center
55/97
Lens
25/36
Slideshow Gen
8/19
Almanac
8/12
thedetech
5/8
Finance
4/6
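The per-project breakdown above is a friction-heavy count over a total count. A minimal sketch, assuming each session record carries a project name and a friction-heavy flag (the schema is illustrative, not the actual Vault format):

```python
from collections import defaultdict

def friction_by_project(sessions):
    """Return {project: (friction_heavy, total)} for a list of session records."""
    totals = defaultdict(lambda: [0, 0])  # project -> [friction_heavy, total]
    for s in sessions:
        totals[s["project"]][1] += 1
        if s["friction_heavy"]:
            totals[s["project"]][0] += 1
    return {p: (f, t) for p, (f, t) in totals.items()}

# Toy data matching the Silo figure above: 81 friction-heavy of 135 total.
sessions = (
    [{"project": "Silo", "friction_heavy": True}] * 81
    + [{"project": "Silo", "friction_heavy": False}] * 54
)
print(friction_by_project(sessions))  # {'Silo': (81, 135)}
```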
Session length · the long tail absorbs the time
Distribution of sessions by turn count. Most sessions are short and tractable. The rose-colored long tail is the friction zone: 140 sessions (30.0%) ran past 100 turns and absorbed 86.3% of total dialogue. The audit's correlation between high turn count and architectural drift lives in those bars.
Rose = friction zone (over 100 turns). Hover for exact totals.
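The long-tail claim is two ratios over the same list of per-session turn counts: the share of sessions past a threshold, and the share of total dialogue those sessions absorb. A sketch with toy data:

```python
def long_tail(turn_counts, threshold=100):
    """Return (session_share, turn_share) for sessions past the threshold."""
    tail = [t for t in turn_counts if t > threshold]
    session_share = len(tail) / len(turn_counts)
    turn_share = sum(tail) / sum(turn_counts)
    return session_share, turn_share

# Toy distribution: a few long sessions absorb most of the dialogue.
session_share, turn_share = long_tail([10] * 7 + [300] * 3)
print(f"{session_share:.0%} of sessions hold {turn_share:.0%} of turns")
```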
Token economics · partial coverage
Token accounting was wired into Claude Code mid-month, so this view is partial: 225 of 466 sessions have token data, and patterns from that subset may not generalize to the whole corpus. Useful as an order-of-magnitude baseline.
6,430,831
Tokens in
230,548
Tokens out
300
Tokens / turn (avg)
8,327
Tokens / decision (avg)
Subset of 225 sessions; total turns in subset: 22,179; total decisions: 800.
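The two averages are corpus-wide ratios over the subset totals quoted above, not per-session means. The arithmetic checks out:

```python
# Subset totals from this section.
tokens_in, tokens_out = 6_430_831, 230_548
turns, decisions = 22_179, 800

total_tokens = tokens_in + tokens_out
tokens_per_turn = total_tokens / turns
tokens_per_decision = total_tokens / decisions

print(round(tokens_per_turn))      # 300
print(round(tokens_per_decision))  # 8327
```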
The compounding curve
Cumulative decisions (orange) and sessions (cyan) across April. The curve is roughly linear overall, with a visible acceleration after April 9, the inflection point where launch-readiness work began stacking on top of normal daily output. Decisions accumulated faster than sessions did, which is the compounding effect: the Vault as input made each successive session more decision-dense.
When the AI made a recommendation, how often did it become a recorded decision in the same session? thedetech ran at 128.1% while Finance sat at 36.0%. A rate over 100% means more decisions were recorded than recommendations issued: the human was leading, reaching decisions faster than the AI proposed them. A low rate means recommendations stacked up without commitment. Either way, conversion can be misleading and should be read with care.
Command Center
49.0%
Silo
85.9%
PKM Vault
72.4%
Lens
115.1%
Slideshow Gen
67.0%
Finance
36.0%
Almanac
80.4%
thedetech
128.1%
Conversion rate = decisions / recommendations across all sessions per project. Bars colored by project semantic palette and capped visually at 100%.
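The metric itself is a plain ratio; a minimal sketch, with the zero-recommendation edge case handled explicitly (the function name and example counts are illustrative):

```python
def conversion_rate(decisions: int, recommendations: int) -> float:
    """Decisions recorded per AI recommendation issued. Can exceed 1.0."""
    if recommendations == 0:
        return float("nan")  # undefined without any recommendations
    return decisions / recommendations

print(f"{conversion_rate(36, 100):.1%}")  # Finance-like rate -> 36.0%
```

A rate above 1.0 is not an error: it simply means decisions were recorded that no recommendation preceded.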
Open questions atlas · the knowledge boundary
868 open questions across April, classified by theme. The largest categories are where AI consistently fails to close the loop: strategy/roadmap calls that need taste, architectural decisions that need codebase context, and debugging questions that require live system access the model doesn't have. These are the human's job, still.
Other / Unclassified
535 · 61.6%
Strategy/Roadmap
129 · 14.9%
UX/Product
50 · 5.8%
Business/Pricing
47 · 5.4%
Architecture
41 · 4.7%
Performance/Scale
28 · 3.2%
Security/Auth
23 · 2.6%
Debugging/Root cause
15 · 1.7%
Keyword classification of 868 open questions captured by the Librarian. Questions with no matching keyword fall into "Other / Unclassified."
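The classification described in the caption can be sketched as a first-match keyword lookup; the theme names come from the chart above, but the keyword lists here are illustrative, not the Librarian's actual dictionary:

```python
THEMES = {
    "Strategy/Roadmap": ["roadmap", "prioritize", "strategy"],
    "Architecture": ["architecture", "schema", "refactor"],
    "Debugging/Root cause": ["crash", "stack trace", "root cause"],
}

def classify(question: str) -> str:
    """Assign an open question to the first theme with a matching keyword."""
    q = question.lower()
    for theme, keywords in THEMES.items():
        if any(k in q for k in keywords):
            return theme
    return "Other / Unclassified"

print(classify("Should we refactor the sync schema?"))  # Architecture
print(classify("What color should the icon be?"))       # Other / Unclassified
```

The 61.6% "Other / Unclassified" bucket is the expected cost of this approach: any question without a matching keyword falls through.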
Top 12 technologies by session-mention
Granular tool mentions, deduplicated within each session. Git tops the list at 48.7% of sessions with tech metadata: nearly half the work touched version control in some way. The Rails-Swift dual gravity is real and almost evenly split. The local-AI stack (Gemini CLI, LM Studio, Ollama, Claude Code) collectively shows up across more sessions than any single language.
Git
169 · 48.7%
Ruby on Rails
103 · 29.7%
Swift
100 · 28.8%
Bash
77 · 22.2%
GitHub
72 · 20.7%
Python
70 · 20.2%
RevenueCat
69 · 19.9%
GitHub CLI
66 · 19.0%
Markdown
56 · 16.1%
Gemini CLI
55 · 15.9%
Heroku
55 · 15.9%
LM Studio
54 · 15.6%
Color: workflow (orange), languages and frameworks (amber), build tools and formats (cream), infrastructure (teal), AI tooling (indigo). Counted on the 74.5% of sessions where the Librarian captured a technologies_discussed field.
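The dedup-within-session counting can be sketched as follows, assuming each session record optionally carries a `technologies_discussed` list (the record shape is illustrative); only sessions with that field enter the denominator, matching the 74.5% coverage note:

```python
from collections import Counter

def mention_share(sessions):
    """Return {tech: (session_count, share)} over sessions with tech metadata."""
    counts = Counter()
    covered = 0
    for s in sessions:
        techs = s.get("technologies_discussed")
        if techs is None:
            continue
        covered += 1
        counts.update(set(techs))  # set() dedupes mentions within one session
    return {t: (n, n / covered) for t, n in counts.items()}

sessions = [
    {"technologies_discussed": ["Git", "Git", "Swift"]},  # Git counted once
    {"technologies_discussed": ["Git"]},
    {},  # no metadata: excluded from the denominator
]
print(mention_share(sessions))  # {'Git': (2, 1.0), 'Swift': (1, 0.5)}
```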