<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Benchmarks on Thede Technologies</title><link>https://thedetech.com/tags/benchmarks/</link><description>Recent content in Benchmarks on Thede Technologies</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 23 Apr 2026 22:42:53 -0500</lastBuildDate><atom:link href="https://thedetech.com/tags/benchmarks/index.xml" rel="self" type="application/rss+xml"/><item><title>A general-purpose MoE multimodal beat every dedicated vision model on my father's handwriting</title><link>https://thedetech.com/blog/2026-04-23-moe-beats-dedicated-vision/</link><pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate><guid>https://thedetech.com/blog/2026-04-23-moe-beats-dedicated-vision/</guid><description>&lt;p>For context: I&amp;rsquo;ve been transcribing a multi-generational archive of handwritten family letters on my own hardware. The first two posts covered &lt;a href="https://thedetech.com/blog/2026-04-23-family-archives-ai-can-read/">why I&amp;rsquo;m doing this at all&lt;/a>
 and &lt;a href="https://thedetech.com/blog/2026-04-23-local-models-aging-mac/">how to set it up on your own machine&lt;/a>
. This post is the surprising finding — the one I didn&amp;rsquo;t expect going in.&lt;/p>
<p>I assumed the right tool for a vision task was a vision model. That’s the obvious move: if you’re reading handwriting, you reach for something labeled “VL.” If you can find one fine-tuned specifically for OCR and handwriting, even better.</p>