Just Ralph it, baby!

Published:

This year started with a lot of beef. I gave a talk last year titled ‘The Claude Code Wars’, and it seems that we have now entered the second episode.

It all began roughly six months ago, when Geoffrey Huntley wrote a blog post explaining how Ralphing works. It’s based on a simple idea: pick a prompt, then run coding agents on that prompt in a loop, for as long as necessary or until you are happy with the result. The approach became a meme, sidestepping the “human-in-the-loop” problem that agents have today by brute-forcing the solution and burning tokens.
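As a minimal sketch of the idea (the helper names and the toy stop condition are mine, not Geoffrey’s actual setup), the core of Ralphing is just a loop:

```python
def ralph(run_agent, is_happy, max_iterations=100):
    """Run the same prompt through a coding agent over and over.

    run_agent: callable that invokes the agent once with the fixed prompt
    is_happy:  callable returning True when the result is good enough
    Returns the number of iterations it took, or None if the budget ran out.
    """
    for iteration in range(1, max_iterations + 1):
        run_agent()
        if is_happy():
            return iteration
    return None


# Toy simulation: pretend each agent pass fixes one of three remaining bugs.
bugs = ["bug-a", "bug-b", "bug-c"]
print(ralph(run_agent=bugs.pop, is_happy=lambda: not bugs))  # prints 3
```

In practice, `run_agent` would shell out to an agent CLI with your fixed prompt, and `is_happy` might be “the test suite passes”; the loop itself is the whole trick.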

Fast forward to January, when Steve Yegge released Gas Town, a multi-agent orchestration system with the following architecture diagram:

Gas Town Architecture

If that sounds complicated, that’s because it is. You have one main agent to communicate with: the mayor. The mayor then commands a team of different roles to build your project. Apparently, people are using multiple Claude Max $200 plans simultaneously with this approach. It instantly became very popular.

Gas Town was probably built using a similar approach. The commit graph on GitHub shows that an absurd number of commits were created in a short amount of time.

Gas Town Commit Graph

Although Gas Town is more elaborate than Ralphing, the promise is the same: building software doesn’t require any skill anymore. It can be completely replaced by brute-forcing it with average agents. Give them enough time and your project will surpass any project written by humans.

On the other side of the arena are people with a deep commitment to software engineering who are trying to combine the power of their trained brains with that of agents. Peter’s post, Shipping at Inference-Speed, is an excellent summary and illustrates the extent to which standard software engineering practices have diverged in the last year.

Armin’s post delves deeply into the topic and is well researched. For example, it points out that Beads, the issue tracker created by the maker of Gas Town, uses 240k lines of code to track issues in a simple format.

Highly entertaining popcorn time 🍿, best watched from the sidelines? I don’t think so.

We’re seeing very different ways of using agents to assist with software project development. Unless you take the totally agnostic stance of not using agents at all, it’s impossible not to position yourself on some side here.

I’m investing my time and money in enhancing my existing software development skills with agentic tools like Claude Code or Codex. I don’t want to be completely replaced by brute force approaches. Fingers crossed! 🤞

The Case for TreeOS, Part 1

Published:

In December, I released the first beta version of my small operating system, TreeOS. It is freely available on GitHub and is accompanied by a B2B solution called OnTree.co that lets you run internal or private AI-powered apps and skills 100% locally.

I am not happy with the software quality yet, but looking back, this is not surprising. When I started working with agents, it made sense to begin with the most extreme approach: writing the entire code base with agents. Unsurprisingly, agentic engineering is very different from classical software engineering. I have learned a lot, as you can see in the result. I summarized my learnings in a conference talk I gave yesterday in Berlin, and will probably do so again at one of our upcoming Agentic Coding Meetups in Hamburg. I also have a much clearer picture of how to improve the code base.

This is the first in a series about the reasons that led me to build TreeOS.

The LLM Gap

There is a gap between what current computers and smartphones can do locally and what is possible in the cloud with massive GPUs. Here is a rough classification:

Mobile models (2B-8B)

There are some excellent models in this size class, often highly specialized for one use case. For example, I use these models for speech-to-text or text-to-speech locally. Modern smartphones have up to 16 GB of memory, so the operating system and these models can run in parallel. From a battery perspective, it is important that these models only run for short periods of time, or your battery drains too fast.

Local models (8B-32B)

This is the classic local model class. There are a huge number of optimized models. Users of these models typically have a powerful gaming graphics card with 16-32 GB of VRAM. This lets them use their CPU normally and run long-running tasks on the graphics card.

Cloud models (200B and up)

The exact sizes of the major cloud models are not disclosed. One of the leading open-source models is Kimi K2 Thinking, which has 1 trillion parameters. Heavily quantized, this equates to a requirement of roughly 250 GB of RAM. Besides a maxed-out Mac Studio with 512 GB of unified memory, I am not aware of any off-the-shelf hardware solution that can run these models locally.
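A back-of-the-envelope way to arrive at such numbers (my own rule of thumb, not an exact sizing): memory for the weights is parameter count times bits per weight. The ~250 GB figure for a 1-trillion-parameter model works out to roughly 2 bits per weight; the KV cache for long contexts comes on top.

```python
def weight_memory_gb(params_billions, bits_per_weight):
    """Memory needed just to hold the weights,
    ignoring KV cache and runtime overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(weight_memory_gb(1000, 2))  # 1T params at 2-bit  -> 250.0 GB
print(weight_memory_gb(120, 4))   # a 120B model at 4-bit -> 60.0 GB
print(weight_memory_gb(32, 4))    # a 32B model at 4-bit  -> 16.0 GB
```

This also shows why the 32B class fits on a 16-32 GB gaming card, while anything in the gap above it does not.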

The gap (32B-200B)

This gap is not receiving much attention at the moment, but I am optimistic that this will change in 2026. We have already seen that mixture-of-experts models, such as GPT-OSS 120B, are highly capable and can run at reasonable speeds. Computers in this class can also run a 32B model while holding a lot of context. This matters for agentic workflows, where the agent must build a mental map of the problem, whether it is code-related or not. However, since this context also has to live in GPU memory, consumer graphics cards are not suitable for these workloads.

With the AMD AI 300 and 400 series, affordable machines are now available that are ideally suited to long-running asynchronous tasks.

The first bet of TreeOS is that this niche will grow in 2026.

Two Book Recommendations: Sorry, No Happy Endings!

Published:

In 2026, I will cover a wider range of topics beyond AI and 3D printing. To kick off the new year, I want to share two books that have reshaped my thinking over the past year. Unfortunately, one is only available in German. The other is available in both English and German.

Deutsche Militärgeschichte by Stig Förster

I read a review of this book in Süddeutsche Zeitung. As I’m not at all a fan of the military, it took me a while to download the sample to my Kindle app and start reading. At school, I never had a good history teacher and always hated the subject. I mostly remember it as a boring game of memorising dates with no connection between them, which my brain refused to retain. Like most brains, mine works better when there’s a story, connection, and reasoning involved.

This book also mentions years, but it’s not at all about exact dates. It’s much more about the context of society at the time and the dynamics that often inevitably led to military conflicts. Stig Förster argues that the military is part of human organisation and that violent conflict recurs, whether we want it or not. Having read it, I tend to agree, which unfortunately makes the outlook for the next decade grim.

Interestingly, the book covers the period from 1525 to 2025, up to and including the current conflict with Russia. Although I found the final chapters the most interesting, the entire 1,294-page book was a surprisingly easy read. And even though I studied World War I and World War II extensively at school, the book gave me many new insights into these periods. For example, I learnt that the Nazi regime knew as early as 1941 that it would ultimately lose the war and adjusted its goals accordingly.

Buy it as an ebook at Thalia and you get an EPUB without any DRM.

How Countries Go Broke, The Big Cycle by Ray Dalio

Another book I initially resisted picking up. Ray Dalio is a major hedge fund manager. What could I possibly learn from a greedy finance guy whose only purpose in life is to maximise his own profits?

Quite a lot, as it turns out. It’s also very much a history book, albeit a financial history book. The central claim is that economies go through a major cycle every 50 to 150 years. Our cycle began in 1950, after the Second World War, and is now in its final phase. The author doesn’t just make this claim, but also presents extensive data from previous cycles, arguing that this knowledge is largely hidden because most people experience it only once in their lifetime.

He also offers a fascinating perspective on China, presenting it as more than just the evil state it is often portrayed as. It was a much harder read than Stig Förster’s book, because it contains a lot of information about the financial system that I hadn’t come across before. If you’ve ever been curious about macroeconomics, whether because of Bitcoin, national debt, or simply what bonds are, you’re in for a treat.

Buy it here at Thalia or here at Amazon, both with DRM.

I wish you a great start to 2026!

A Personal Note: And No, It’s Not The End!

Published:

It’s the week before Christmas. You’re probably feeling the pressure to live up to all the expectations that Christmas brings each year. I hope you can let a few of them slide and choose to have a good time for a few hours in between.

From the outset, I have treated my newsletter The Liquid Engineer as an experimental playground. I started out with lots of motivation for 3D printing, and I am still convinced it’s a hugely powerful technology that will be adopted much more widely in the coming years. Then, even more interesting things happened: I switched my focus to agentic coding. In recent weeks, I’ve also incorporated a lot of content on local LLMs and the necessary software and hardware for this. This is because I’m trying to build a business in this area with OnTree.

Thank you for following me on this journey so far!

I realized that I write my best posts when I write about topics that interest me. I will continue to write this newsletter, but I won’t be bound to any particular topics in the future. It might be about something that has just happened or something that I am currently working on. If you joined me for the AI content, I invite you to stick around over the next few weeks to see if you can still find value in the content. Since AI is keeping me busy, I expect there will still be plenty of AI-related content. If there’s a topic you’d like to read about, just hit reply and let me know!

For the past few months, I have also been publishing the newsletter posts right here, on my personal blog. If you prefer this method, feel free to dust off your old RSS reader and subscribe to my RSS feed!

Have a great Christmas with your loved ones!

The Future Of Computing Will Be Hyper-personalized: Why I Signed The Resonant Computing Manifesto

Published:

Last week, an exciting website appeared in my feeds: The Resonant Computing Manifesto.

The manifesto revolves around the idea that we are at a crossroads with AI. We can either double down on the direction our digital lives have already taken, a race to the bottom, or we can do something different. The goal of that race is to build the most appealing digital junk food to maximize ad consumption and profits, turning the internet into a few big platforms that are noisy, attention-seeking, and indifferent to customer needs. Big tech companies like TikTok, Facebook, and Instagram are pushing in this direction with full force, using AI for hyper-personalization on their platforms.

The manifesto coined the term “resonance,” borrowed from the field of architecture (the architecture of real buildings, not software). It describes how certain environments make us feel more at home and more human: a quality without a name, not strictly measurable, but intuitively graspable.

The manifesto suggests that AI can advance the current state of the internet and open up new possibilities. The technology enables hyper-personalized experiences off the major platforms, because the technical constraints that once forced one-size-fits-all solutions have disappeared.

The manifesto centers on five principles that resonate with my vision for OnTree:

  1. A private experience on the Internet,
  2. that is dedicated exclusively to each customer,
  3. with no single entity controlling it (Plural),
  4. adaptable to the specific needs of the customer,
  5. and prosocial, making our offline lives better, too.

I love the whole piece. Of course, it’s idealistic and probably sounds naive at first. However, I believe this world could use much more idealistic and naive believers in a new internet. Without these dreamers, nothing will change.

The only shortcoming I see is that the website doesn’t address the consequences of people being “primary stewards of their own context.” To me, this is impossible without a mindset shift away from passively being monetized and toward actively funding the software we want to succeed. Without making it clear that we must put our money where our mouth is, I feel this manifesto is incomplete.

Kagi.com is the perfect example here. Google’s primary interest in search is always monetization. Therefore, it is logically impossible for them to want you to find what you’re searching for on page one, spot one. Kagi.com has a far superior, ad-free search engine, and their main slogan is “Humanize the Web.” With attractive family and duo plans, I find Kagi excellent value for the money: we pay less than four euros per family member per month.

To get a resonant internet, we have to pay the right companies the right amount of money.

(Source of the banner this time is resonantcomputing.org)