
My SSH Setup: A Scalable Multi-Machine Configuration

Published:

The “Too many authentication failures” error is a common SSH problem that signals a misconfigured setup. After setting up multiple home servers recently, I’ve developed a clean solution that eliminates this issue entirely.

The Core Principle: Unique Keys Per Machine

The fundamental rule: every machine gets its own SSH key pair. Using a single key across multiple machines creates unnecessary security risks. If one machine is compromised, you must revoke that key across all services—GitHub, GitLab, every server. With unique keys, you revoke only the compromised machine’s key while maintaining access from other devices.

Key Generation and Management

I use 1Password for secure passphrase management, though any password manager works. The process for each new machine:

First, create a passphrase in 1Password named clearly, such as “SSH Passphrase - Mac Mini M4”. This protects the key if the disk is accessed.

Generate the key with a descriptive filename:

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_macmini -C "stefan@macmini"

The -f flag specifies the filename, avoiding generic names like id_rsa. When prompted, paste the passphrase from 1Password (characters won’t display during paste—this is normal).
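
Optionally, the key can be loaded into the SSH agent so the passphrase isn’t retyped on every connection. On current macOS versions the agent can remember it in the Keychain (older releases use -K instead of --apple-use-keychain), and pbcopy puts the public key on the clipboard for pasting into GitHub or a server:

ssh-add --apple-use-keychain ~/.ssh/id_ed25519_macmini
pbcopy < ~/.ssh/id_ed25519_macmini.pub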

For disaster recovery, backup the private key in 1Password. Run cat ~/.ssh/id_ed25519_macmini, copy the entire output including BEGIN/END lines, and store it as an SSH Key item in 1Password. The key remains encrypted with your passphrase.

Repeat this process for each machine: id_ed25519_mac for Mac, id_ed25519_linux for Linux desktop, and so on.

SSH Config: The Solution to Authentication Failures

The “too many authentication failures” error occurs when SSH offers every available key to the server, one after another. Once the attempts exceed the server’s limit (MaxAuthTries, six by default in OpenSSH), the server treats it like a brute-force attempt and drops the connection.
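
You can watch this happen with verbose output; every key the client offers counts toward that limit. A quick check from the client side looks roughly like this:

ssh -v homeserver exit 2>&1 | grep -i offering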

The solution is explicit key mapping in ~/.ssh/config. On my Mac:

Host github.com
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_mac
  IdentitiesOnly yes

Host homeserver
  HostName 192.168.1.100
  User stefan
  IdentityFile ~/.ssh/id_ed25519_mac
  IdentitiesOnly yes

The critical directive is IdentitiesOnly yes, which instructs SSH to offer only the specified key instead of every key the agent holds, eliminating the authentication failures.

On my Linux desktop, the configuration uses the Linux-specific key:

Host github.com
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_linux
  IdentitiesOnly yes

Host homeserver
  HostName 192.168.1.100
  User stefan
  IdentityFile ~/.ssh/id_ed25519_linux
  IdentitiesOnly yes

Now ssh homeserver connects immediately from any configured machine.
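
If you ever want to verify which key a host will use without actually connecting, ssh -G prints the effective configuration after all Host blocks are applied:

ssh -G homeserver | grep -i identityfile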

Automated Server Provisioning

Modern Linux installers offer an elegant solution for initial SSH setup. During Ubuntu Server installation, select “Import SSH identity from GitHub” and enter your GitHub username.

The installer fetches all public keys from github.com/yourusername.keys and adds them to ~/.ssh/authorized_keys. Your server is immediately accessible from all your machines upon first boot—no manual key distribution required.
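
The same trick works on a server that is already running. Either the ssh-import-id tool shipped with Ubuntu or a plain curl of the public key URL achieves the same result (replace yourusername accordingly):

ssh-import-id gh:yourusername
curl -s https://github.com/yourusername.keys >> ~/.ssh/authorized_keys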

The Current Relevance

With AMD’s efficient processors and widespread fiber internet, home servers have become practical again. The infrastructure improvements—symmetric high-speed connections and power-efficient hardware—make self-hosting viable.

This SSH setup scales elegantly. New machines simply need their key generated and added to GitHub. New servers import from GitHub during installation. If a machine is compromised, revoke one key without affecting other access.

The configuration takes minutes to implement but saves hours of troubleshooting. Each server provisioning requires only entering a GitHub username. The local config files handle all connection details automatically.

This approach provides security through key isolation, convenience through automated provisioning, and reliability through explicit configuration. It’s a professional setup that works consistently across any number of machines and servers.

AllowedTools vs YOLO mode: Secure But Powerful Agentic Engineering

Published:


Recently, I’ve defaulted to using my coding agents in YOLO mode. I found a better way to balance security and ease of use.

Once you get the hang of agentic coding, it can feel like babysitting. Can I read this file? Can I search these directories? By default, everything has to be allowed individually. The easiest fix is to switch to YOLO mode: instead of starting claude in the terminal, start claude --dangerously-skip-permissions. This allows your agent to do everything: read all the files, delete all the files, commit to every repository on your hard disk, even connect to production servers and databases using your SSH keys. YOLO mode is aptly named; real accidents have happened.

But YOLO mode has limitations too. I started installing Claude on my managed servers; it’s helpful for boring server administration tasks. Unfortunately, Claude doesn’t work in YOLO mode when you’re the root user, which is typical on cloud machines. I’m not sure I agree with Anthropic’s limitation, since this can be less dangerous than running Claude in YOLO mode on my private machine with all my private data.

Fortunately, better options are emerging. One I like is allowedTools. It gives the agent fine-grained control over what it can do on its own and what it can’t. Together with the slash commands I wrote about last week, this is a powerful combination. Similar to the dotfiles many developers use to get a familiar environment on new machines, I can imagine checking out a claude-tools repository with custom slash commands for repeating tasks, including allowedTools for uninterrupted execution.
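
As a rough idea of what such a checked-in configuration could look like, a project-level .claude/settings.json can pre-approve harmless commands and block dangerous ones. This is a sketch based on my current understanding of the settings format, not a reference:

{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(rm:*)"
    ]
  }
}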

Disclaimer: I haven’t built this yet. Hopefully, I’ll have a demo for you in the next weeks!

Custom Slash Commands: A Field Trip With Claude Code

Published:


Today I’m taking you on a field trip: how I built my first two custom slash commands for Claude Code.

I’ve apparently been living under a rock regarding custom slash commands, since they’ve been available for a long time. Boris Cherny mentioned them in a video I linked earlier, which made me aware of them.

The first custom command I wrote last week is named /make-ci-happy. It’s a simple prompt that tells the agent how to run all the CI checks locally. It also sets guardrails on what to fix itself and what to escalate back to me. Because this isn’t Claude’s standard behavior, it had become a ritual I had to repeat carefully before every commit. It’s of course highly tailored to this repository, so it’s a custom slash command only available there. It’s an elegant system: slash commands can live on your whole machine or only inside a single repository.
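
For reference, a project-level command is just a Markdown file inside the repository, e.g. .claude/commands/make-ci-happy.md (personal, system-wide commands go in ~/.claude/commands/ instead). The contents below are a made-up sketch, not my actual prompt:

Run the same checks our CI runs: lint, unit tests, and the build.
Fix trivial failures (formatting, imports, obvious typos) yourself.
For anything that changes behavior, stop and report back to me
instead of fixing it.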

So this is nice and helps me a little bit every day. But I wanted to see how far I can take this. I’m getting nearer to the public release of my TreeOS open source software. It’s written in Go, compiles to a binary for macOS and Linux, and has a self-update mechanism built in. Most such mechanisms query a JSON file on a server; it’s better to control this JSON yourself than to rely on the GitHub releases page. My repository’s source code isn’t ready, and I haven’t settled on a license, but I still want to test the update mechanism first. This is possible via a second public repository and some GitHub Actions magic: the release is built in one repository, but the artifacts are pushed to another. At the same time the JSON, which lives in yet another repository, needs to be updated. For new stable releases I also want to adapt the JSON and add a blog post. If this sounds complicated, it is. The perfect problem to automate via custom slash commands.
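
To give an idea of the moving parts: the binary periodically fetches a small manifest and compares versions. The field names and URL here are invented for illustration; TreeOS’s actual format may look different:

{
  "version": "0.1.0",
  "channel": "beta",
  "assets": {
    "darwin-arm64": { "url": "https://example.com/treeos/0.1.0/treeos-darwin-arm64.tar.gz", "sha256": "..." },
    "linux-amd64":  { "url": "https://example.com/treeos/0.1.0/treeos-linux-amd64.tar.gz", "sha256": "..." }
  }
}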

The best way to build custom slash commands is to ask Claude Code to build them. Ask it to read the documentation first, because the knowledge about slash commands isn’t in the model. I named this command /treeos-release to create a namespace for my custom commands and made it available on my whole system. The paths to the needed repositories are hardcoded inside the script.

You might think that hardwiring it to one system isn’t proper engineering. You’re probably right. But since I don’t see a need to run the script anywhere other than my computer, it’s fine for now. One of the main advantages of working with coding agents is that everything can be more fluid and brittle in the beginning. I can make it more stable later, if needed.

The result? Within a few minutes, I had a script. It didn’t work correctly, so I reported this to Claude, who fixed it. On the second try, I shipped the first beta to the public repository. Yay!

TreeOS v0.1.0 Release

Upon closer inspection, it fell apart again. For further testing, I continued creating a stable release on top of the beta release. This failed, and it turned out the first invocation of the slash command hadn’t used the created script at all. Claude Code had done it all by itself! We modified it together and added the following warning:

Claude Code Guardrails Warning

In short, it needed a few more repetitions until I was happy with the script. I ended up splitting it into multiple scripts, because making Claude Code patiently wait for CI is hard. Overall, it’s an interesting work style, because Claude can finish the work of a half-baked script if needed. This allows iterative improvement of the script while continuing with the main work.

I highly recommend custom slash commands. It’s a well-thought-out system that integrates nicely into Claude Code, and creating and debugging the commands is easy. Start with your most repetitive tasks, and make sure every command runs a main script to keep the results consistent.

You could argue that these scripts lock you into Claude Code versus other coding agents. While that is true, I don’t think it will be challenging for any other agent to copy my existing code/commands to their system, as long as the systems are similar.

Ultimately, Claude Code is a motivated but average developer. Like most average developers I’ve worked with, including myself, they usually need a few attempts to get it right.

Oh, and regarding the first binary of TreeOS visible above: It would probably work on your machine, but I haven’t created concise documentation for Mac or Linux, so I can’t recommend it. If you’re interested, reply to this email and I’ll add you to the early alpha testers. 👊

DHH is into Home Servers, too

Published:


Home servers are back and many cloud computing offerings are a complete rip-off: DHH discovered the same seismic changes this year, and he’s a genius marketer.

David Heinemeier Hansson, DHH for short, must live in the same social media bubble as I do; our most important topics overlap this year: home servers are on the cusp of becoming a serious alternative to cloud offerings, and the cloud is turning into an expensive joke. Also, like me, he holds a serious grudge against Apple. #fuckapple

It used to be that home servers were kind of a joke. That’s because all home computers were a joke. Intel dominated the scene with no real competition. SSDs were tiny or nonexistent. This made computers and notebooks power-hungry, hot, and slooow. The M-series CPUs from Apple are not even 5 years old, and only in the last 5 years has AMD gotten its shit together and started shipping serious consumer CPUs.

So you could have a home server, but it was a slow machine. Plus, your internet connection was also slow. Most people had asymmetric DSL connections with maybe okayish download speeds and totally underpowered upload speeds. Accessing your server from the outside was a pain in the ass. I remember running Plex on my home server 10 years ago and watching a video on my mobile phone, in bad resolution. I don’t remember which was the bigger bottleneck: the slow CPU transcoding or my slow upload speed.

Fast-forward to 2025, and this has changed dramatically. Many homes have upgraded to fiber connections, providing fast up- and download speeds. Well-built mini PCs are available cheaply. While Mac minis can be a valid option for fast compute, AMD has gotten serious about this niche with its Ryzen AI Max+ 395 flagship CPU with integrated graphics and 128GB of shared RAM. These machines are not a joke anymore. If your use cases require a lot of RAM, like local LLM inference, going back to the edge, a.k.a. your home, becomes an interesting alternative.

And we haven’t even started talking about sovereignty and independence from unpredictable vendors or countries…

I wholeheartedly recommend DHH’s full talk; it’s a very energetic fuck you to the “modern stack” in general and Apple in particular.

The Liquid Engineer - My New Newsletter

Published:

I’ll start a weekly newsletter called “The Liquid Engineer” next week. What does it mean and what will it cover? Read on or subscribe here if you’re sold already… 😂

❓What’s the meaning behind the name?

🫠 Liquid. Why is software called software? The term was coined in the mid-1900s in contrast to hardware, which is quite hard to change. So softness is related to the changeability and updateability of the code. Fun fact: This also explains the term firmware, the low-level software layer that controls hardware. It’s a special kind of software that’s a bit harder: firm.

Today, software covers a huge spectrum of softness. You can still buy software on CDs and DVDs, especially games; this code never changes. You have apps on your computer or phone updated weekly or monthly. And you’re interacting with server-side software via your browser or app, updated 10s, 100s, or 1000s of times a day, e.g. on Google’s, Meta’s, or Netflix’s servers.

So software is becoming more and more fluid; it is becoming liquid. But software is only one of many interesting fields of engineering, and the same applies to almost all the others. Zara, the fashion brand, produces clothes in Europe to shorten production cycles and react more quickly to demand and trends. That’s just one example; more to come in the newsletter.

⚙️ Engineer: Engineering is a huge part of my life. I’ve always liked to create things. Engineering sounds like a very lonely job in front of a computer; actually, it’s a team activity involving a lot of communication with teammates and customers. And there’s always the same process behind it, from building Lego to engineering huge software projects. It can be broken down into three steps:

💡 It starts with thinking about a possible solution to a hopefully real problem. This involves seeing the problem, identifying it as solvable with current technologies, and imagining a solution.

💪 Then comes the act of building, be it with tools and hands or a keyboard and computer. For most of us, it’s more rewarding to build with our hands. This is the addictive fun part, because you can get absurd amounts of flow state out of it.

🧐 Next is the testing phase, where we assess our solution. It’s never perfect, triggering the next thinking phase. And so the cycle continues.

Repeating this cycle leads to more sophisticated solutions. The time it takes to go through one full cycle has a huge impact on the quality of the solution.

❓What’s the newsletter about?

Now comes the confusing part: It will be about 3D printing! It’s a magical technology that makes mechanical engineering much more like software. It’s also a great visualization of how software principles spread into our everyday world.

Subscribe here, if you haven’t already!