It’s nice to be excited about a computer company’s products again.

Framework held an event today, showing off some new updates for existing products and a couple of new ones as well. I’ve been daily driving their Framework Laptop 16 running NixOS as my non-work laptop for the last year, and a few nits (and some WAY too loud fans) aside, it’s been a great machine. While I was hoping for some updates to that model, it’s still cool to see where they’re going.

One theme prevalent throughout the event was the use of Ryzen-based APUs that include machine learning processing silicon (which AMD calls XDNA). If generative AI must exist, I much prefer running it locally, and I really don’t trust OpenAI/Meta/Google/Microsoft/Apple with any data beyond simple Q&A-type things. That’s still a tech space that requires professional-class hardware today, but it will likely become more accessible over the next few years. I assume the AI emphasis was partly a condition of a relatively small startup getting this much access to AMD’s hardware engineers, but it did give the presentation a bit too much “we’re trying to impress investors more than fans” energy.

The Framework Desktop was the most interesting reveal. It falls into the growing niche of small-but-mighty desktops like the Mac Studio, built on chips that combine fast compute, graphics, and machine-learning acceleration. The biggest downside is the soldered-on RAM, which AMD apparently couldn’t make socketable with this chip. However, that tradeoff buys high-bandwidth memory with enough capacity for high-parameter LLMs. You can also network the machines together; they showed a cluster of four desktop boards running in tandem, which could theoretically provide 384 GB of memory, enough to run something like DeepSeek-R1 with 4-bit quantization. To be clear, a cluster like that is a high-end tool for developers and hobbyists, but the desktop itself looks like a viable pre-built machine or desktop replacement for most people.
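As a rough sanity check (my own back-of-envelope math, not Framework’s): DeepSeek-R1 has about 671 billion parameters, so at 4-bit quantization the weights alone come to roughly 335 GB, which fits under that 384 GB ceiling before accounting for KV cache and runtime overhead:

```python
# Back-of-envelope: does a 4-bit quantized DeepSeek-R1 fit in 384 GB?
# Assumes the full 671B-parameter model at ~4 bits (0.5 bytes) per weight,
# ignoring KV cache, activations, and runtime overhead.
params = 671e9          # total parameters in DeepSeek-R1
bytes_per_param = 0.5   # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights vs. 384 GB available")  # ~336 GB
```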

Source: Framework

Their other laptop announcements were interesting. They teased a new 12-inch convertible laptop/tablet hybrid, which has their first touch screen and stylus support. This is definitely aimed at a different market than their other products, more akin to machines for kids and schools that want repairable hardware. I assume it won’t be a viable replacement for an art tablet or a competitor to the Apple Pencil beyond basic writing and diagramming, but that would be a great market to start targeting as their stylus tech improves. They also talked about some proof-of-concept ideas for more configurable keyboards on the Framework 16, though those appear to be some way off.

But of course, their truly most exciting reveal…

Source: Framework

Translucent plastics for the screen bezel and the expansion cards. These look gorgeous and will make a fantastic accent to the bottom and sides of a laptop. I am getting some of these as soon as I can. I hope they come to the Framework Laptop 16 very quickly.

While the event itself was pretty great, there were some pieces that were either disappointing or just… off. The lack of updates to the Framework 16 was the most glaring, though given that they’re a small startup and it’s their newest mainline product, it’s excusable. It would’ve been nice to get some upgrades this year, maybe another GPU model or quieter fans (dear Framework, please fix the fans, oh my god). The other piece that was a little unsettling was how some of their changes either targeted investor types or ran contrary to their past efforts. While the Desktop has a plausible technical reason for its soldered memory (seemingly driven by AMD requirements), it does mean the device is that much less upgradeable and repairable. The frequent nods to AI and being a Copilot+ PC seemed aimed at a less technical audience. They did have some upgrades for the Framework 13, so hope is not lost, but we’ll have to keep an eye on this.

But overall, it’s good to see Framework still pushing out interesting products that focus on modularity, upgradability, and repairability. The rest of the industry is trying to turn the PC into another appliance you have no control over, and I choose to support those fighting against that trend.

Large language models (LLMs) are a polarizing technology. It seems that no matter who you talk to, they are either magical tools that will bring about machine consciousness and the end of scarcity and human labor, or worthless autocorrect engines that steal the work of the world’s creators while boiling the oceans to fill the internet with regurgitated content slop. In many social spaces online, there isn’t much room for nuance between those extremes.

To my mind, both of these views are misguided attempts to influence human behavior. Those on the positive side are either uninformed about the technology’s limits or are trying to convince investors to part with their money. On the negative side are people who have associated “AI” with tech companies that have been forcing anti-features, privacy-violating spyware, and needlessly confusing redesigns into their products. In both cases, there is a nugget of truth that has been buried under propaganda, and when that propaganda becomes ideological, the truth becomes whatever fits the ideology.

In practice, the truth is somewhere in the middle. The evidence that LLMs are capable of some tasks is overwhelming and incontrovertible. For many people, they have value in brainstorming, summarizing, coding, authoring, roleplaying, natural language processing, and other jobs. But they are not a panacea; there are clear limits to what they can do, and questions about accuracy are not unfounded. The machines that generate and power these models consume significant energy, and tracking that consumption and its sources is vital.

However, despite the plundering that led to the production of these models, the fact remains that they have been produced. There is no putting this toothpaste back into the tube; we have models, you can download them, and they cannot be eliminated. There is no value in breathlessly proselytizing about them, and there is no merit in sticking one’s head in the sand. There is real utility here. An LLM edited this very post, finding three typos and a grammatical error. We would be better served by treating LLMs skeptically and honestly, understanding how to use them, building intuitive mental models about how they work, and knowing how they might be manipulated to produce misleading or incorrect results.

That’s what I seek to do here: learn about LLMs as they are, and share my findings without prejudging them as either evil or snake oil. For those who belong to one of the extreme camps, you are welcome to stay there if you like, but I am not really interested in talking anyone into or out of an ideology. For those who do want to think critically about this new technology, I invite you to contribute to this effort and to critique how I approach these experiments.