<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Apple Silicon on Thede Technologies</title><link>https://thedetech.com/tags/apple-silicon/</link><description>Recent content in Apple Silicon on Thede Technologies</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 23 Apr 2026 22:42:53 -0500</lastBuildDate><atom:link href="https://thedetech.com/tags/apple-silicon/index.xml" rel="self" type="application/rss+xml"/><item><title>Running real models locally on a Mac Studio that isn't new anymore</title><link>https://thedetech.com/blog/2026-04-23-local-models-aging-mac/</link><pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate><guid>https://thedetech.com/blog/2026-04-23-local-models-aging-mac/</guid><description>&lt;p>I have a multi-generational archive of handwritten family letters sitting in my house, and I wanted to read it without sending the private content to a cloud provider. The &lt;a href="https://thedetech.com/blog/2026-04-23-family-archives-ai-can-read/">first post in this series&lt;/a>
 is the &lt;em>why&lt;/em>. This one is the &lt;em>how&lt;/em> — on the specific hardware I already own, with the specific software I landed on.&lt;/p>
&lt;p>My Mac Studio is almost four years old now, but with an M1 Max and 64 GB of unified memory it has proven quite capable of running large language models locally. That keeps the family letters private, and it costs far less than sending the work to a cloud provider.&lt;/p></description></item></channel></rss>