The Case for TreeOS, Part 1

In December, I released the first beta version of my small operating system, TreeOS. It is freely available on GitHub and is accompanied by a B2B solution called OnTree.co that lets you run internal or private AI-powered apps and skills 100% locally.

I am not happy with the software quality yet, but looking back, that is not surprising. When I started working with agents, it made sense to begin with the most extreme approach: writing the entire code base with agents. Unsurprisingly, agentic engineering is very different from classical software engineering. I have learned a lot along the way, as the result shows. I summarized my learnings in a conference talk I gave yesterday in Berlin, and I will probably present them at one of our upcoming Agentic Coding Meetups in Hamburg as well. I also have a much clearer picture of how to improve the code base.

This is the first in a series about the reasons that led me to build TreeOS.

The LLM Gap

There is a gap between what current computers and smartphones can do locally and what is possible in the cloud with massive GPUs. Here is a rough classification:

Mobile models (2B-8B)

There are some excellent models in this size class, often highly specialized for one use case. For example, I use models of this size for speech-to-text and text-to-speech locally. Modern smartphones have up to 16 GB of memory, so the operating system and these models can run in parallel. From a battery perspective, it is important that these models run only for short periods; otherwise, they drain the battery too fast.
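
As an illustration (my addition, not from the post), here is a minimal local speech-to-text sketch. It assumes the openai-whisper package, ffmpeg on the PATH, and an audio file named sample.wav, none of which the post specifies:

```python
# Minimal local speech-to-text sketch.
# Assumptions: `pip install openai-whisper`, ffmpeg on the PATH,
# and an audio file named sample.wav next to this script.
import whisper

# Small checkpoints like "tiny" or "base" fit comfortably in memory
# alongside a running OS; larger variants trade memory for accuracy.
model = whisper.load_model("tiny")

# Transcription is a short, bursty workload, which matches the
# battery constraint described above.
result = model.transcribe("sample.wav")
print(result["text"])
```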

Local models (8B-32B)

This is the classic local model class, and there is a huge number of optimized models to choose from. Users of these models typically have a powerful gaming graphics card with 16-32 GB of VRAM, which lets them keep using the CPU normally while long-running tasks execute on the graphics card.
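
For a feel of the workflow, a model in this class can be driven with a few lines of Python via Ollama. This is a sketch of mine, not something the post prescribes; it assumes the ollama package is installed, a local Ollama server is running, and the llama3.1:8b model has been pulled:

```python
# Minimal local-inference sketch.
# Assumptions: an Ollama server running locally and
# `ollama pull llama3.1:8b` executed beforehand.
import ollama

# The model weights sit in VRAM on the gaming GPU; this Python
# process stays lightweight on the CPU, so the machine remains usable.
response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Explain the difference between RAM and VRAM in two sentences."}],
)
print(response["message"]["content"])
```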

Cloud models (200B and up)

The exact sizes of the major cloud models are not disclosed. One of the leading open-weight models is Kimi K2 Thinking, which has 1 trillion parameters. Even with aggressive quantization, that translates into a requirement of roughly 250 GB of RAM. Besides a maxed-out Mac Studio with 512 GB of unified memory, I am not aware of any off-the-shelf hardware that can run these models locally.
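
The arithmetic behind that figure is straightforward. This back-of-the-envelope sketch is my addition, and the quantization widths are illustrative assumptions:

```python
# Back-of-the-envelope weight-memory estimate: bytes = params * bits / 8.
# The quantization widths below are illustrative assumptions on my part.
PARAMS = 1_000_000_000_000  # 1 trillion parameters

for bits in (16, 8, 4, 2):
    gb = PARAMS * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: ~{gb:,.0f} GB")

# Output: 2,000, 1,000, 500, and 250 GB. Only the most aggressive
# quantization squeezes a 1T-parameter model into ~250 GB of RAM,
# and this ignores the KV cache and runtime overhead entirely.
```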

The gap (32B-200B)

This gap is not receiving much attention at the moment, but I am optimistic that this will change in 2026. We have already seen that mixture-of-experts models, such as GPT-OSS 120B, are highly capable and can run at reasonable speeds. Computers in this class can also run a 32B model and hold a lot of context. This matters for agentic workflows, where the agent must build a mental map of the problem, whether it is code-related or not. However, since that context (the KV cache) must live in graphics memory alongside the weights, consumer graphics cards are not suitable for these workloads.
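
To make the context cost concrete, here is a rough KV-cache estimate. The formula is the standard one for grouped-query attention, but the model dimensions are hypothetical placeholders for a 32B-class model, not figures from the post:

```python
# Rough KV-cache size: 2 (K and V) * layers * kv_heads * head_dim
#                      * context_length * bytes_per_element.
# All model dimensions below are hypothetical GQA-style placeholders,
# chosen only to illustrate how the cache scales with context.
layers, kv_heads, head_dim = 64, 8, 128
bytes_per_elem = 2  # fp16

for ctx in (8_192, 32_768, 131_072):
    kv_bytes = 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem
    print(f"{ctx:>7} tokens -> ~{kv_bytes / 1e9:.1f} GB of KV cache")

# At long contexts, the cache alone outgrows a 16-24 GB consumer GPU,
# which is the point made above.
```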

With the AMD Ryzen AI 300 and 400 series, affordable machines are now available that are ideally suited to long-running asynchronous tasks.

The first bet of TreeOS is that this niche will grow in 2026.

Two Book Recommendations: Sorry, No Happy Endings!

In 2026, I will cover a wider range of topics beyond AI and 3D printing. To kick off the new year, I want to share two books that have reshaped my thinking over the past year. Unfortunately, one is only available in German. The other is available in both English and German.

Deutsche Militärgeschichte by Stig Förster

I read a review of this book in the Süddeutsche Zeitung. As I’m not at all a fan of the military, it took me a while to download the sample to my Kindle app and start reading. At school, I never had a good history teacher and always hated the subject. I mostly remember it as a boring game of memorising disconnected years, which my brain simply refused to retain. Like most brains, mine works better when there’s a story, a connection and some reasoning involved.

This book also mentions years, but it’s not at all about exact dates. It’s much more about the social context of each era and the dynamics that often led, almost inevitably, to military conflict. Stig Förster believes that the military is part of how humans organise themselves and that violent conflict recurs, whether we want it or not. Having read the book, I tend to agree, which unfortunately makes the outlook for the next decade grim.

Interestingly, the book covers the period from 1525 right up to 2025, including the current conflict with Russia. I found the final chapters the most interesting, but the entire 1,294-page book was a surprisingly easy read. Even though I studied World War I and World War II extensively at school, it provided me with many new insights into these periods. For example, I learnt that the Nazi regime knew as early as 1941 that it would ultimately lose the war and adjusted its goals accordingly.

Buy the ebook at Thalia and you get a DRM-free EPUB.

How Countries Go Broke: The Big Cycle by Ray Dalio

This is another book I initially resisted reading. Ray Dalio is a major hedge fund manager. What could I possibly learn from a greedy finance guy whose only purpose in life is to maximise his own profits?

Quite a lot, as it turns out. It’s also very much a history book, albeit a financial history book. The central claim is that economies go through a major cycle every 50 to 150 years. Our cycle began in 1950, after the Second World War, and is now in its final phase. The author doesn’t just make this claim, but also presents extensive data from previous cycles, arguing that this knowledge is largely hidden because most people experience it only once in their lifetime.

He also offers a fascinating perspective on China, presenting it as more than just the evil state it is often portrayed as. It was a much harder read than Förster’s book because it contains a lot of information about the financial system that I hadn’t come across before. If you’re at all curious about macroeconomics, whether because of Bitcoin, government debt, or the question of what bonds actually are, you’re in for a treat.

Buy it here at Thalia or here at Amazon, both with DRM.

I wish you a great start to 2026!

A Personal Note: And No, It’s Not The End!

It’s the week before Christmas. You’re probably feeling the pressure to live up to all the expectations that Christmas brings each year. I hope you can let a few of them slide and choose to have a good time for a few hours in between.

From the outset, I have treated my newsletter The Liquid Engineer as an experimental playground. I started out with lots of motivation for 3D printing, and I am still convinced it’s a hugely powerful technology that will be adopted much more widely in the coming years. Then, even more interesting things happened: I switched my focus to agentic coding. In recent weeks, I’ve also incorporated a lot of content on local LLMs and the necessary software and hardware for this. This is because I’m trying to build a business in this area with OnTree.

Thank you for following me on this journey so far!

I realized that I write my best posts when I cover topics that genuinely interest me. I will continue to write this newsletter, but I won’t be bound to any particular topics in the future. It might be about something that has just happened or something that I am currently working on. If you joined me for the AI content, I invite you to stick around over the next few weeks to see if you can still find value in the content. Since AI is keeping me busy, I expect there will still be plenty of AI-related content. If there’s a topic you’d like to read about, just hit reply and let me know!

For the past few months, I have also been publishing the newsletter posts right here, on my personal blog. If you prefer this method, feel free to dust off your old RSS reader and subscribe to my RSS feed!

Have a great Christmas with your loved ones!

The Future Of Computing Will Be Hyper-personalized: Why I Signed The Resonant Computing Manifesto

Last week, an exciting website appeared in my feeds: The Resonant Computing Manifesto.

The manifesto revolves around the idea that we are at a crossroads with AI. We can either double down on the direction our digital lives have already taken, a race to the bottom, or we can do something different. In the race-to-the-bottom scenario, the goal is to build the most appealing digital junk food to maximize ad consumption and profits. This turns the internet into a handful of big platforms that are noisy and attention-seeking and don’t serve customer needs. Platforms like TikTok, Facebook, and Instagram are pushing in this direction with full force, trying to use AI for hyper-personalization.

The manifesto coined the term “resonance,” borrowing it from architecture (the architecture of real buildings, not software). It describes how certain environments make us feel more at home and more human: a quality without a name that is not strictly measurable, but intuitively graspable.

The manifesto suggests that AI can advance the current state of the internet and open up new possibilities. The technology enables hyper-personalized experiences off the major platforms, because the technical constraints that once made one-size-fits-all solutions necessary have disappeared.

The manifesto centers on five principles that resonate with my vision for OnTree:

  1. A private experience on the Internet,
  2. that is dedicated exclusively to each customer,
  3. with no single entity controlling it (Plural),
  4. adaptable to the specific needs of the customer,
  5. and prosocial, making our offline lives better, too.

I love the whole piece. Of course, it’s idealistic and probably sounds naive at first. However, I believe this world could use much more idealistic and naive believers in a new internet. Without these dreamers, nothing will change.

The only shortcoming I see is that the website doesn’t address the consequences of people being “primary stewards of their own context.” To me, this is impossible without a mindset shift away from passively being monetized and toward actively funding the software we want to succeed. Without making it clear that we must put our money where our mouth is, I feel this manifesto is incomplete.

Kagi.com is the perfect example here. Google’s primary interest in search is monetization, so it is logically impossible for them to want you to find what you’re searching for on page one, spot one. Kagi.com has a far superior, ad-free search engine, and their main slogan is “Humanize the Web.” With attractive family and duo plans, Kagi is excellent value for the money: we pay less than four euros per family member per month.

To get a resonant internet, we have to pay the right companies the right amount of money.

(Source of the banner this time is resonantcomputing.org)

What Folding Laundry Taught Me About Working With AI

Yesterday evening I was folding laundry. It was one of those pesky loads: the basket was filled with socks. We’re a four-person household; in theory, that should make it easier to tell all the socks apart.

I made some space on the table to accommodate the individual socks, since laying them out flat helps with finding pairs. After folding one-third of the basket, I realized that the space I had assigned was way too small and already overflowing. The rest of the table was full, so there was no room left to allocate for more socks. A seemingly simple and mundane task suddenly induced stress in me. Where would I put all these socks now?

Granted, the solution was quite easy in this case: I created some space by stowing away some folded laundry, and then I had enough room for the socks. What’s the connection to working with AI, you ask?

When AI became publicly available with the launch of ChatGPT, many people immediately recognized the technology’s potential. Knowing it was new and full of unknowns, they founded companies and planned generously, giving those companies ample time to find product-market fit and generate revenue.

Stress occurs when plans and reality diverge. It’s the same mechanism, whether the question is how much space you have for your socks or how much runway your company has. Right now, we see many companies entering a stressful phase, especially the big ones. OpenAI, for example, issued a Code Red in an internal memo, and Apple abruptly fired its AI chief, John Giannandrea.

Delivering value with AI is a lot harder than everyone thought; we underestimated its complexity. This has led investors to attempt crazy things. This TechCrunch article provides an absurd example: pumping $90 million into a business with annual recurring revenue of around $400,000, at a valuation of $415 million, roughly a 1,000x revenue multiple. This strategy is called kingmaking: declaring a winner in a market and hoping to convince customers to choose the “market leader.” It’s another symptom of the stress we’re seeing in the system right now.

This great article by Paul Ford brings it all together. He wishes for the bubble to burst, because then the frenzy for return on investment ends and we can focus on letting nerds do their best work.

Happy hacking!