Michael Vorburger.ch Blog

2026

Calculating Gemini CLI Token Costs for Agentic Vibe Coding

March 31, 2026

While parallelizing AI workflows with background agents is a massive productivity booster, this “fire and forget” vibe coding introduces a new challenge: keeping track of your LLM API costs. If you want to quickly convert your terminal token usage into actual dollars, I highly recommend using this Gemini CLI Cost Calculator.

Using the Gemini CLI, you get a transparent summary of your token usage at the end of every session:
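The arithmetic behind such a conversion is trivial; here is a minimal Python sketch, where the per-million-token prices are made-up placeholders, NOT official Gemini rates (always check the current Gemini API pricing page):

```python
# Convert a session's token counts into an approximate dollar cost.
# The prices below are illustrative placeholders, NOT official Gemini rates.
PRICE_PER_MILLION = {
    "input": 0.30,   # $ per 1M input (prompt) tokens -- placeholder
    "output": 2.50,  # $ per 1M output (completion) tokens -- placeholder
}

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one session."""
    return (
        input_tokens / 1_000_000 * PRICE_PER_MILLION["input"]
        + output_tokens / 1_000_000 * PRICE_PER_MILLION["output"]
    )

print(f"${session_cost(1_250_000, 40_000):.4f}")
```

Note that real pricing is per-model, and that cached or "thinking" tokens may be billed differently.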

Read more →

Parallelizing Agentic Coding: Supercharging AI Workflows with Terminal Notifications

March 30, 2026

The real power of AI-assisted development isn’t just having an agent write code for you; it’s the ability to parallelize your workflow. When you assign a complex, multi-step refactoring task or a deep codebase investigation to a tool like the Gemini CLI, you shouldn’t just sit there watching the terminal output scroll by. You should be switching to another pane to write documentation, review PRs, or tackle another problem entirely while the agent grinds away in the background.
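To make that pane-switching practical, you want a nudge when the agent finishes. Here is a minimal Python sketch of one way to do that; the wrapped command is a placeholder, substitute whatever agent invocation you actually run:

```python
import subprocess
import sys

def run_with_bell(cmd: list[str]) -> int:
    """Run a (long-running) command, then ring the terminal bell
    so you notice completion from another pane or workspace."""
    result = subprocess.run(cmd)
    status = "done" if result.returncode == 0 else f"failed ({result.returncode})"
    # "\a" is the ASCII BEL character; most terminals beep or flash the tab on it.
    print(f"\a{cmd[0]}: {status}", file=sys.stderr)
    return result.returncode

# Placeholder invocation; substitute e.g. your actual Gemini CLI command.
run_with_bell(["echo", "agent finished"])
```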

Read more →

How I am prompting LLMs: Should you say Thank You? Please?

March 9, 2026

https://huggingface.co/blog/jdelavande/thank-you-energy is an interesting article.

What it doesn’t mention is the “exponential” cost of saying “Thank You” at the end of a long conversation… as each follow-up prompt must send the entire conversation, the real-world energy consumption is likely much higher than the “synthetic” Thank You on an empty context.
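A quick back-of-the-envelope illustration in Python, with made-up token counts, assuming no context caching:

```python
# Made-up numbers, assuming no context caching: each follow-up prompt
# resends the entire conversation history as input tokens.
history = 50_000   # tokens already in the conversation (hypothetical)
thank_you = 3      # tokens for a final "Thank You!" (hypothetical)

print(f"empty context:    {thank_you} input tokens")
print(f"end of long chat: {history + thank_you} input tokens")

ratio = (history + thank_you) / thank_you
print(f"the polite closing costs ~{ratio:.0f}x more input tokens")
```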

Personally I’m currently typically prompting LLMs like this:

  1. I use “imperative” language (“do”, not “could you” nor “please”)
  2. I frequently create new sessions instead of never-ending long conversations (/clear in Gemini CLI)
  3. I don’t send any follow-up prompt when the task at hand is completed to my satisfaction
  4. On (pretty rare) occasions I still can’t quite avoid an “oh wow, you’re awesome” 😀

Sending a “Thank You” to an LLM as the last prompt to end a conversation does not seem like a good idea energy-wise.

Read more →

How to log to Google Cloud Logging as JSON from Java with SLF4j

March 6, 2026

Add https://github.com/logfellow/logstash-logback-encoder and put this into your logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Use JSON format for scalable logging suitable for Google Cloud Logging -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- logstash-logback-encoder writes timestamps in the default TimeZone of the JVM, but GCP wants UTC -->
            <timeZone>UTC</timeZone>

            <!-- Align field names with Google Cloud Structured Logging requirements;
                 see https://docs.cloud.google.com/logging/docs/structured-logging,
                 and https://docs.cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry. -->
            <fieldNames>
                <!-- GCP expects 'timestamp', logstash-logback-encoder defaults to '@timestamp' -->
                <timestamp>timestamp</timestamp>

                <!-- GCP expects 'severity', logstash-logback-encoder defaults to 'level' -->
                <level>severity</level>

                <!-- Ignore levelValue as severity is sufficient for GCP -->
                <levelValue>[ignore]</levelValue>

                <!-- Disable logstash-logback-encoder's '@version' field as GCP doesn't use it -->
                <version>[ignore]</version>

                <!-- GCP expects 'message' which is the default for logstash-logback-encoder -->
            </fieldNames>
        </encoder>
    </appender>

    <!-- Wrap STDOUT in AsyncDisruptorAppender for better performance, decoupling logging from I/O -->
    <appender name="ASYNC_STDOUT" class="net.logstash.logback.appender.LoggingEventAsyncDisruptorAppender">
        <appender-ref ref="STDOUT" />
    </appender>

    <!-- Suppress verbose internal logging from certain libraries if needed -->
    <!-- <logger name="org.apache" level="WARN" /> -->

    <root level="INFO">
        <appender-ref ref="ASYNC_STDOUT" />
    </root>
</configuration>
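For illustration only, here is a small Python sketch (not Java) of the JSON shape this configuration should produce on stdout; the field values are made up, and logger_name / thread_name are logstash-logback-encoder's default extra fields:

```python
import json
from datetime import datetime, timezone

# Illustrative only: the JSON shape the logback config above aims for --
# 'timestamp' in UTC, 'severity' instead of 'level', plus 'message'.
entry = {
    "timestamp": datetime(2026, 3, 6, 12, 0, 0, tzinfo=timezone.utc).isoformat(),
    "severity": "INFO",
    "message": "Application started",
    "logger_name": "com.example.Main",  # default logstash-logback-encoder field
    "thread_name": "main",              # default logstash-logback-encoder field
}
print(json.dumps(entry))  # Cloud Logging parses one such JSON object per line
```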

Alternatives:

Read more →

From Prompt to Production: AI Vibe Coding Web Frontends by Chaining Google's Stitch, AI Studio, and Antigravity

February 24, 2026

I recently sat down to try out hands-on for myself just how easy it is in February 2026 to have an AI generate a well-designed, full-fledged, working HTML/CSS/JS front-end UI.

The Design Phase: Stitch

Starting with Google’s Stitch, I iterated on a few high-level graphical design ideas.

This feels similar to what you would have done with your human graphical designer, using tools like Figma, back in the pre-AI era.

Read more →

Gemini Fixed Audio Bug

February 5, 2026

I’m experimenting with using the Gemini Live API, and have (obviously) “vibe coded” (parts of) what I’m doing.

It worked surprisingly well right away, but there was this annoying audio bug. (Signal Processing is not my forte.)

Then I had an idea for something which I didn’t really think would actually work, but hey, try it anyway, right?

I recorded a short audio clip of the problem, and uploaded it to Gemini, asking it for help. And oh boy, is this impressive or what:

Read more →

2025

NixOS Testing

October 12, 2025

I’ve continued to dabble with NixOS since attending NixCon 2025.

Today I finally got around to learning how to do testing of NixOS configurations; and it’s actually pretty cool!!

First install nix, if you haven’t already. Then, as explored in my LearningLinux repo here, put this into a hello.nix file:

{ pkgs, ... }:
{
  environment.systemPackages = with pkgs; [
    hello
  ];
}

and then this into a test.nix file:

{
  nodes = {
    machine1 = { pkgs, ... }: { };
    machine2 = { pkgs, ... }: { };
    machine3 = import ./hello.nix;
  };

  testScript = ''
    start_all()
    for m in [machine1, machine2, machine3]:
      m.systemctl("start network-online.target")
      m.wait_for_unit("network-online.target")

    machine1.succeed("ping -c 1 machine2")
    machine2.succeed("ping -c 1 machine1")

    machine3.succeed("hello")
    machine2.fail("hello")
  '';
}

and then put this into a flake.nix file:

Read more →

Vorburger.ch AI Git Memory `aifiles`

October 11, 2025

Like everyone else, I am increasingly using AI tools.

Today I finally got around to start setting up what will be the (public) “memory” of my personal future AI agents that will work for me.

Being a developer, I don’t want this to be hidden away in some proprietary black box, but instead want to be able to inspect, edit, and version control it.

I also don’t want it to be specific to any one AI tool or provider, but instead be portable and reusable across all of them; whether that’s the (awesome!) Gemini CLI, Anthropic’s Claude Code, OpenAI’s, or my very own Enola.dev.

Read more →

NixCon 2025

September 7, 2025

I’ve attended https://2025.nixcon.org in Rapperswil-Jona near Zürich in Switzerland, and here are some thoughts about it, written up on my way home.

Nix is several things:

  • an interesting but at least initially to me a bit scary functional configuration language
  • a package 📦 manager (like apt or dnf), which can be used on any Linux distro, as well as on 🍎 Macs (like Homebrew 🍺 brew)
  • a declarative Linux distro called NixOS (like Fedora, Ubuntu or Debian)

This may be one part of what makes it a little hard to grasp in the beginning. Other parts could be the at least initially somewhat confusing old (nix-* commands) vs. new (Flakes; and nix … commands) usage styles, the several docs sites (and Wiki), and different ways of doing many things. On the other hand, perhaps some of this “I need to figure out things for myself like I used to on my first 8bit home computer” (if you’re old enough to remember?) is also what appeals to people interested in Nix? ✅ Check for myself personally! 😆

Read more →

SSH with private keys sealed in TPM on Fedora Linux

June 19, 2025

If you don’t have a Yubikey to safely store SSH private keys on, you might want to keep them sealed in the TPM instead.

Here is how to do this on Fedora Linux using https://github.com/Foxboron/ssh-tpm-agent:

$ sudo dnf install openssl-devel
$ go install github.com/foxboron/ssh-tpm-agent/cmd/...@latest

$ ~/go/bin/ssh-tpm-keygen --supported
ecdsa bit lengths: 256 384
rsa bit lengths: 2048

This TPM supports ECDSA keys with 384 (but not 521) bits, so:

Read more →