<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Debugging on Thede Technologies</title><link>https://thedetech.com/tags/debugging/</link><description>Recent content in Debugging on Thede Technologies</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 29 Apr 2026 16:53:36 -0500</lastBuildDate><atom:link href="https://thedetech.com/tags/debugging/index.xml" rel="self" type="application/rss+xml"/><item><title>The move was upstream</title><link>https://thedetech.com/blog/2026-04-28-the-move-was-upstream/</link><pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate><guid>https://thedetech.com/blog/2026-04-28-the-move-was-upstream/</guid><description>&lt;p>I spent most of a day pointing at broken cases. Each time I pointed at one, the model patched it. The next regeneration would surface a different broken case in a different file. We&amp;rsquo;d patch that. Then a third. By late afternoon I was the one suggesting we slow down and return to first principles — the model wasn&amp;rsquo;t pulling me up a level on its own.&lt;/p>
&lt;p>The bug itself was small. I&amp;rsquo;ve been wiring up a pipeline that turns transcripts of my conversations with AI tools into clean Markdown for an Obsidian vault. The reasoning part — extracting topics, summarizing, tagging — was working. The &lt;em>rendering&lt;/em> kept breaking. Stray characters, broken callouts, content that ended up harder to read than the original conversation had been. I was working on it inside Gemini CLI, mostly on their fast tier with occasional escalation to their reasoning model. Twelve back-and-forth turns across roughly six hours of intermittent work, and we still hadn&amp;rsquo;t landed it.&lt;/p></description></item></channel></rss>