<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title>HanLHo. - Fractional Architect &amp; Software Product Engineer - ratatui</title>
    <link rel="self" type="application/atom+xml" href="https://hanlho.com/tags/ratatui/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://hanlho.com"/>
    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-04-16T00:00:00+00:00</updated>
    <id>https://hanlho.com/tags/ratatui/atom.xml</id>
    <entry xml:lang="en">
        <title>And so I ended up with a TUI</title>
        <published>2026-04-16T00:00:00+00:00</published>
        <updated>2026-04-16T00:00:00+00:00</updated>
        
        <author>
          <name>Unknown</name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/and-so-i-ended-up-with-a-tui/"/>
        <id>https://hanlho.com/p/and-so-i-ended-up-with-a-tui/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/and-so-i-ended-up-with-a-tui/">&lt;p&gt;This is a short experience report about building a tiny TUI&#x2F;CLI to inspect configured agents and model assignments in Opencode.&lt;&#x2F;p&gt;
&lt;p&gt;I wanted a simple way to see which models were configured on which (sub)agents and what their relative costs were. I also did not want to spend much time building it, so it was an opportunity to try one-shotting it from a PRD.&lt;&#x2F;p&gt;
&lt;p&gt;Basically:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;which agents exist&lt;&#x2F;li&gt;
&lt;li&gt;which models they use&lt;&#x2F;li&gt;
&lt;li&gt;how much those models cost per 1M tokens&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The result is a small CLI program called &lt;a href=&quot;https:&#x2F;&#x2F;codeberg.org&#x2F;hanlho&#x2F;tiny-cli&#x2F;src&#x2F;branch&#x2F;main&#x2F;opencode-config-lens&quot;&gt;&lt;code&gt;Opencode Config Lens&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;In short, it scans &lt;code&gt;~&#x2F;.config&#x2F;opencode&#x2F;&lt;&#x2F;code&gt; for the main config and can optionally include Weave config, which I am currently trialling. It then combines that with current pricing data from &lt;a href=&quot;https:&#x2F;&#x2F;models.dev&quot;&gt;models.dev&lt;&#x2F;a&gt; and renders a table that makes the configuration easier to inspect at a glance.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;vibe-coded&quot;&gt;Vibe coded&lt;&#x2F;h2&gt;
&lt;p&gt;The project was vibe-coded from a single &lt;a href=&quot;https:&#x2F;&#x2F;codeberg.org&#x2F;hanlho&#x2F;tiny-cli&#x2F;src&#x2F;branch&#x2F;main&#x2F;opencode-config-lens&#x2F;PRD.md&quot;&gt;PRD&lt;&#x2F;a&gt; that I created in a back-and-forth with GPT-5.4. Asking the coding agent to implement the PRD produced a working program.&lt;&#x2F;p&gt;
&lt;p&gt;After the initial generation, I made a few more &lt;a href=&quot;https:&#x2F;&#x2F;codeberg.org&#x2F;hanlho&#x2F;tiny-cli&#x2F;commits&#x2F;branch&#x2F;main&#x2F;opencode-config-lens&quot;&gt;functional changes and refactors&lt;&#x2F;a&gt;. I could not resist iterating, which is to be expected: building something tends to reveal new ideas. I also cleaned up a few rough edges. I tell myself I am learning how LLMs work, or just trying not to release something too sloppy, but perhaps I should have stopped earlier.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;so-i-ended-up-with-a-tui&quot;&gt;So I ended up with a TUI&lt;&#x2F;h2&gt;
&lt;p&gt;This is also my first experiment with building an actual TUI application (using &lt;a href=&quot;https:&#x2F;&#x2F;ratatui.rs&quot;&gt;Ratatui&lt;&#x2F;a&gt;). It is probably overkill compared with a simple CLI table, but since I was instructing an LLM, the bar was low. Below is what it looks like.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;img&#x2F;ocl-screen.png&quot; alt=&quot;Opencode Config Lens&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;What is interesting is that all I had to change in my original spec was to ask for a TUI instead of a text-based CLI. I literally changed it to: “build me a TUI application.” Asking an LLM to make those changes did not feel much different from asking for changes to a text-based CLI.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;reflections&quot;&gt;Reflections&lt;&#x2F;h2&gt;
&lt;p&gt;Despite the one-shot being fully functional, I kept iterating for a few more hours. I enjoyed it, mostly because once you see the product, you get more ideas for improvement. Could I have built the CLI by hand in the same total amount of time? Probably, but not much faster, and the result would have been less polished and less responsive. And it definitely would not have been a TUI.&lt;&#x2F;p&gt;
&lt;p&gt;I did not learn much about building a TUI, nor did I get much Rust practice, so there is that. But that was not really the goal: I wanted some basic insight into my configuration, and I got it. I should probably get better at knowing when to stop.&lt;&#x2F;p&gt;
&lt;p&gt;This is also a program that probably would not have been written without LLMs. (I do appreciate the irony of using LLMs to build LLM tooling.)&lt;&#x2F;p&gt;
&lt;h2 id=&quot;closing-thought&quot;&gt;Closing thought&lt;&#x2F;h2&gt;
&lt;p&gt;The program does its job well. It looks good, and I still find it surprising how easily you can get to this result with an LLM and a coding agent. Although the TUI worked from the start, I kept iterating, not to fix bugs but to make it clearer and better. That feels a lot like professional development in general: only when you see the product, or build it hands-on, do you learn, get ideas, and improve.&lt;&#x2F;p&gt;
</content>
        
    </entry>
</feed>
