<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title>HanLHo. - Fractional Architect &amp; Software Product Engineer - architecture</title>
    <link rel="self" type="application/atom+xml" href="https://hanlho.com/tags/architecture/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://hanlho.com"/>
    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-03-29T00:00:00+00:00</updated>
    <id>https://hanlho.com/tags/architecture/atom.xml</id>
    <entry xml:lang="en">
        <title>Notes on Why AI is the Third Coming of Domain-Driven Design</title>
        <published>2026-03-29T00:00:00+00:00</published>
        <updated>2026-03-29T00:00:00+00:00</updated>
        
        <author>
          <name>Unknown</name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/notes-on-why-ai-is-the-third-coming-of-domain-driven-design/"/>
        <id>https://hanlho.com/p/notes-on-why-ai-is-the-third-coming-of-domain-driven-design/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/notes-on-why-ai-is-the-third-coming-of-domain-driven-design/">&lt;p&gt;Notes on &quot;Dear Architects&quot; podcast episode: &lt;a href=&quot;https:&#x2F;&#x2F;pod.link&#x2F;1877884226&#x2F;episode&#x2F;YjhhNTA2MTItM2I3ZC00ZTUwLTkzNWEtMTI5ODVhOWVjMzMy&quot;&gt;&quot;Why AI is the Third Coming of Domain-Driven Design&quot;&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;The title is only a small part of the episode. My main takeaway is that, because AI changes the medium of communication to natural language, precise ubiquitous language will matter even more.&lt;&#x2F;p&gt;
&lt;p&gt;Condensed takeaways:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Modularity should make future change easier.&lt;&#x2F;li&gt;
&lt;li&gt;Coupling should be chosen deliberately.&lt;&#x2F;li&gt;
&lt;li&gt;Boundaries should reflect how teams actually work.&lt;&#x2F;li&gt;
&lt;li&gt;Modeling should optimize for usefulness, not completeness.&lt;&#x2F;li&gt;
&lt;li&gt;Architecture should record its reasoning and assumptions.&lt;&#x2F;li&gt;
&lt;li&gt;Architecture is ongoing judgment and trade-offs, not a one-time design step.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Creating architecture diagrams with C4 and coding agents</title>
        <published>2026-03-11T00:00:00+00:00</published>
        <updated>2026-03-11T00:00:00+00:00</updated>
        
        <author>
          <name>Unknown</name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/creating-architecture-diagrams-with-c4-and-coding-agents/"/>
        <id>https://hanlho.com/p/creating-architecture-diagrams-with-c4-and-coding-agents/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/creating-architecture-diagrams-with-c4-and-coding-agents/">&lt;p&gt;LLMs can draw diagrams, but you get better results with a conceptual model, a validation loop, and a lightweight verification pass against the codebase than with free-form diagramming.&lt;&#x2F;p&gt;
&lt;p&gt;I have used the &lt;a href=&quot;https:&#x2F;&#x2F;c4model.com&quot;&gt;C4 model&lt;&#x2F;a&gt; extensively to map architecture landscapes. Last week I saw an opportunity to revisit it and try it out with coding agents. I found that modelling architecture in a text-based format with guardrails (a DSL with rules) is easier and more consistent for a coding agent than free-form diagramming. I experimented on a small Rust project I know well. This post is a field note of my findings.&lt;&#x2F;p&gt;
&lt;aside class=&quot;sidebar-note sidebar-note-right&quot;&gt;
  &lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;c4model.com&quot;&gt;C4&lt;&#x2F;a&gt; is a zoom-in model for software architecture.&lt;&#x2F;p&gt;
&lt;p&gt;This post discusses only the levels we actually need:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;System context: people and software systems.&lt;&#x2F;li&gt;
&lt;li&gt;Container: deployable&#x2F;runnable things inside a software system.&lt;&#x2F;li&gt;
&lt;li&gt;Component: the main building blocks inside a container.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;A key element for coding agents: C4 can be expressed as a text model (a DSL), so the architecture model can be edited like code and validated&#x2F;exported via a CLI.&lt;&#x2F;p&gt;
&lt;p&gt;C4 is model-as-code: one model, many views&#x2F;diagrams.&lt;&#x2F;p&gt;

&lt;&#x2F;aside&gt;
&lt;h2 id=&quot;the-test-project&quot;&gt;The test project&lt;&#x2F;h2&gt;
&lt;p&gt;To try this out, I used one of my personal projects: a text-based &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lhohan&#x2F;simple-time-tracker&quot;&gt;time-tracking application&lt;&#x2F;a&gt; with two runtime modes (a CLI and a web dashboard). Both operate on the same domain and the same Markdown time-entry files.&lt;&#x2F;p&gt;
&lt;p&gt;The functionality does not matter much for this post, except for two things. First, the codebase is relatively small and easy to analyse. Second, it is well-structured: ports and adapters, plus behaviour-driven, DSL-based acceptance tests.&lt;&#x2F;p&gt;
&lt;p&gt;I&#x27;ve used C4 on larger landscapes too. I expect the workflow to translate, but the experience will differ on larger (or less structured) codebases.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;building-the-model&quot;&gt;Building the model&lt;&#x2F;h2&gt;
&lt;p&gt;I started the coding agent session with a direct request to build C4 diagrams for the project at system and container level, with the DSL written first.&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#eff1f5;color:#4f5b66;&quot;&gt;&lt;code&gt;&lt;span&gt;build me a c4 model at system level and container level (as defined by the C4 model).
&lt;&#x2F;span&gt;&lt;span&gt;Please create the DSL first
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;reference: c4model.com
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Below, I&#x27;ll go through the process using the diagrams, but keep in mind these are all generated from a text-based DSL. From the start, the agent produced a working model in the Structurizr DSL. I then gave it a &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lhohan&#x2F;simple-time-tracker&#x2F;blob&#x2F;790b4799892330714977e7092682a9c9fec72499&#x2F;justfile#L109&quot;&gt;command to run Structurizr CLI&lt;&#x2F;a&gt; as a check at each step.&lt;&#x2F;p&gt;
&lt;p&gt;To start with, the agent inspected the Rust codebase to work out the system boundary. It established one &lt;code&gt;Time Tracker&lt;&#x2F;code&gt; software system with two runtime modes: a CLI and a web dashboard. Both use the same Markdown time-entry files.&lt;&#x2F;p&gt;
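&lt;p&gt;To give a feel for the shape of such a model (a simplified sketch, not the project&#x27;s actual file; identifiers and descriptions are illustrative), the system boundary looks roughly like this in the Structurizr DSL:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#eff1f5;color:#4f5b66;&quot;&gt;&lt;code&gt;workspace {
    model {
        user = person &quot;User&quot;
        timeTracker = softwareSystem &quot;Time Tracker&quot; {
            cli = container &quot;CLI&quot;
            web = container &quot;Web Dashboard&quot;
        }
        entries = softwareSystem &quot;Markdown time-entry files&quot; &quot;External file store&quot;

        # relationships after the element definitions
        user -&gt; timeTracker &quot;Tracks time with&quot;
        timeTracker -&gt; entries &quot;Reads and parses&quot;
    }
    views {
        systemContext timeTracker {
            include *
            autolayout
        }
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;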
&lt;p&gt;(Apologies for the dark diagrams; dark mode was enabled when I took these screenshots. To enlarge them, open the images in a new tab.)&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;img&#x2F;c4-llm-system-1.png&quot; alt=&quot;C4 system context diagram (first pass)&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Here is a summary of the session:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;One of the first decisions was scope: whether to model only the web path or both runtime paths. The choice was to represent them as separate containers.&lt;&#x2F;li&gt;
&lt;li&gt;We modelled a software system, two application containers, an internal datastore for run statistics, and the Markdown time-entry files as an external dependency. That first version was structurally correct.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;img&#x2F;c4-llm-container-1.png&quot; alt=&quot;C4 container diagram (first pass)&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;(I had completely forgotten about the runtime statistics feature ...)&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;After the first version, something felt missing between the Markdown files and the CLI&#x2F;web containers: the shared core logic. In C4 terms, that isn&#x27;t another container; it belongs at component level.&lt;&#x2F;p&gt;
&lt;p&gt;I initially asked the agent to model the shared core logic as a container, but it pushed back, and the model improved because of it. Instead of inventing a fake &lt;code&gt;core&lt;&#x2F;code&gt; container, I asked it to add component views for both runtime containers. That kept the container model strict while making the architecture more insightful.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Naming discussions helped sharpen the model. The agent came up with names I was not sure about, but on a first pass it probably did a better job than I would have. One direction I set explicitly was to name things as close as possible to the codebase. The names were not bad, but this is not where I want to leave room for interpretation.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;To support those component views, we introduced a shared component fragment that both CLI and web could include. That shared layer covered parsing, domain types, reporting, and execution statistics. The result was a more accurate picture of how the code is actually organised.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;img&#x2F;c4-llm-component-cli-1.png&quot; alt=&quot;C4 component diagram for the CLI (first pass)&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
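&lt;p&gt;The shared fragment idea maps naturally onto the DSL. As a sketch (assuming hierarchical identifiers so the same component names can be included in both containers; the component names here are illustrative):&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#eff1f5;color:#4f5b66;&quot;&gt;&lt;code&gt;!identifiers hierarchical

model {
    timeTracker = softwareSystem &quot;Time Tracker&quot; {
        cli = container &quot;CLI&quot; {
            # parsing, domain types, reporting, execution statistics
            !include shared-tracking-core.dsl
        }
        web = container &quot;Web Dashboard&quot; {
            !include shared-tracking-core.dsl
        }
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;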
&lt;ul&gt;
&lt;li&gt;Once the model structure felt right, I shifted to presentation. I asked the agent to style it so different roles were easier to distinguish: CLI and web containers, shared components, adapters, renderers, and datastores. Then I asked for rounded boxes and a more explicit person-style user element.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The final result:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;img&#x2F;c4-llm-system-dark-1.png&quot; alt=&quot;C4 system context diagram (styled)&quot; &#x2F;&gt;
&lt;img src=&quot;&#x2F;img&#x2F;c4-llm-container-dark-1.png&quot; alt=&quot;C4 container diagram (styled)&quot; &#x2F;&gt;
&lt;img src=&quot;&#x2F;img&#x2F;c4-llm-component-cli-3.png&quot; alt=&quot;C4 component diagram for the CLI (styled)&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;I have also made &lt;a href=&quot;https:&#x2F;&#x2F;lhohan.github.io&#x2F;simple-time-tracker&#x2F;site&#x2F;&quot;&gt;the generated static site with the diagrams&lt;&#x2F;a&gt; available as it was straightforward to do with help from the agent. You can click the small magnifying glass icons to zoom into the next level.&lt;&#x2F;p&gt;
&lt;p&gt;In summary, this result took several passes: boundaries first; then the component layer; then names aligned with the code; and finally presentation.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-dsl-in-practice&quot;&gt;The DSL in practice&lt;&#x2F;h2&gt;
&lt;p&gt;One important artefact we have not discussed yet: the DSL itself. &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lhohan&#x2F;simple-time-tracker&#x2F;blob&#x2F;790b4799892330714977e7092682a9c9fec72499&#x2F;docs&#x2F;c4&#x2F;time-tracker.dsl&quot;&gt;Here is the full model&lt;&#x2F;a&gt; with the diagrams defined in the Structurizr DSL. All the edits were done by the agent, including the initial creation from scratch. I reviewed, asked questions, and iterated.&lt;&#x2F;p&gt;
&lt;p&gt;Before this, I typed every box and relationship by hand (scrolling up and down the file, or keeping two windows open), added tech stacks (taking care not to confuse the order of strings), and so on. Using the agent was a major documentation speed boost, and the DSL came out clean and organised the way I prefer: relationships after the element definitions, not inside them.&lt;&#x2F;p&gt;
&lt;p&gt;While I see the risk of not thinking things through when relieved of painstaking manual element&#x2F;relationship editing, working with the agent also gave me:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Iteration close to the code&lt;&#x2F;li&gt;
&lt;li&gt;Meaningful discussions on abstraction levels and naming&lt;&#x2F;li&gt;
&lt;li&gt;A knowledgeable architecture assistant at hand&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Why I think it worked: the model is defined in text, so the agent can edit it like code. C4 provides guardrails through a small number of nested abstraction levels, and the DSL keeps names, descriptions, and styles consistent across views. A CLI tool to validate the model closes the loop, so the agent can check its work as it goes.&lt;&#x2F;p&gt;
&lt;p&gt;In addition, you can ask the LLM to review the model, with or without the actual codebase as context.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;operationalising-the-workflow&quot;&gt;Operationalising the workflow&lt;&#x2F;h2&gt;
&lt;aside class=&quot;sidebar-note sidebar-note-right&quot;&gt;
  &lt;p&gt;I used Codex CLI with Codex 5.3; any other recent coding agent and model will probably work as well.&lt;&#x2F;p&gt;

&lt;&#x2F;aside&gt;
&lt;p&gt;Going forward, here is how I will instruct LLMs to work with C4 and keep the architecture diagrams up to date.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;agent-skill&quot;&gt;Agent Skill&lt;&#x2F;h3&gt;
&lt;p&gt;First, after completing this experiment, I turned my learning into &lt;strong&gt;a reusable agent skill called &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lhohan&#x2F;agent-chisels&#x2F;blob&#x2F;c198e00546f1274e3afcdda58dfd74423fcaa29c&#x2F;agentfiles&#x2F;shared&#x2F;skills&#x2F;modelling-c4-diagrams&#x2F;SKILL.md&quot;&gt;&lt;code&gt;modelling-c4-diagrams&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;, which I can now use from any project.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;agents-md-instructions&quot;&gt;AGENTS.md instructions&lt;&#x2F;h3&gt;
&lt;p&gt;In the project&#x27;s &lt;code&gt;AGENTS.md&lt;&#x2F;code&gt; I added &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lhohan&#x2F;simple-time-tracker&#x2F;blob&#x2F;790b4799892330714977e7092682a9c9fec72499&#x2F;AGENTS.md?plain=1#L12&quot;&gt;a short reference&lt;&#x2F;a&gt; so future agents can discover the DSL files and know how to validate&#x2F;export. This avoids repeating the discovery work in each new session.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;markdown&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-markdown &quot;&gt;&lt;code class=&quot;language-markdown&quot; data-lang=&quot;markdown&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;- &lt;&#x2F;span&gt;&lt;span style=&quot;font-weight:bold;color:#d08770;&quot;&gt;**Architecture docs (C4)**&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;: source DSL at &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;`docs&#x2F;c4&#x2F;time-tracker.dsl`&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; (shared components in &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;`docs&#x2F;c4&#x2F;shared-tracking-core.dsl`&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;); validate with &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;`just architecture-docs-validate`&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;; export static site with &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;`just architecture-docs-export`
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;verification&quot;&gt;Verification&lt;&#x2F;h3&gt;
&lt;p&gt;In this project, the LLM and I used the following commands to verify the output:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Validate C4 Structurizr DSL:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;structurizr-cli validate -workspace docs&#x2F;c4&#x2F;time-tracker.dsl&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;Export C4 diagrams to docs&#x2F;site for inspection (and GitHub Pages publishing):
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;structurizr-cli export -workspace docs&#x2F;c4&#x2F;time-tracker.dsl -format static -output docs&#x2F;site&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;View the architecture documentation:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;open docs&#x2F;site&#x2F;index.html&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;structurizr-interface-to-llm&quot;&gt;Structurizr interface to LLM&lt;&#x2F;h3&gt;
&lt;p&gt;I found the validation loop with the CLI to work well. If the export succeeds, the DSL is valid and the views conform to the tool&#x27;s rules.&lt;&#x2F;p&gt;
&lt;p&gt;That still does not tell you whether the model is accurate, or whether the diagrams communicate well. The &lt;a href=&quot;https:&#x2F;&#x2F;c4model.com&#x2F;diagrams&#x2F;checklist&quot;&gt;C4 diagram review checklist&lt;&#x2F;a&gt; is a good yardstick.&lt;&#x2F;p&gt;
&lt;p&gt;The LLM did not seem to require much extra instruction to create a proper model and views. I pointed it to c4model.com at the beginning of the session, and that may have been enough context. (Hard to tell what it knows or does under the hood.)&lt;&#x2F;p&gt;
&lt;p&gt;The skill I created and referenced above now serves as the main interface.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;The architecture model and diagrams are insightful artefacts (&quot;pictures can say more than words&quot;). But most of the thinking and modelling usually happens visually, while recording it often becomes a chore. This experiment showed me that LLMs can help keep a model up to date without turning it into a separate manual process.&lt;&#x2F;p&gt;
&lt;p&gt;When the model is constrained (C4) and expressed as text (a DSL), you can version it like code, review it like code, and validate&#x2F;export it through a CLI. Constrained text models plus validation give coding agents a better architecture-diagram workflow than free-form diagramming.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;addendum-next-experiments&quot;&gt;Addendum: Next experiments&lt;&#x2F;h2&gt;
&lt;p&gt;Some follow-ups I might try if I run this workflow again.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;keeping-the-model-in-sync&quot;&gt;Keeping the model in sync&lt;&#x2F;h3&gt;
&lt;p&gt;Work with the LLM to design how to encode parts of the architecture model directly in the codebase. Use the C4 views as shared context, then define a way to keep the model in sync with the implementation. Unless you are using a very principled framework (maybe Spring in Java?), I expect this to be quite custom per project anyway.&lt;&#x2F;p&gt;
&lt;p&gt;Coding agents may lower the barrier to getting started with this kind of non-obvious quality-improvement work.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;verification-beyond-the-cli&quot;&gt;Verification beyond the CLI&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use an MCP like Chrome DevTools to inspect exported diagrams as a second verification step.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;One concrete use case: manual editing is often required to position boxes and, especially, dependencies. A visual inspection could double-check that no text boxes overlap and that lines do not cross boxes.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Coding agents could be used to evaluate the shape of the architecture outside of the code.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;publishing-and-representation&quot;&gt;Publishing and representation&lt;&#x2F;h3&gt;
&lt;p&gt;Export to Mermaid (or PlantUML) for embedding in the agent&#x27;s instructions, but keep the Structurizr DSL as the source of truth. Split the DSL so documentation for each container or component lives closer to the code.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;a-better-llm-interface-in-tooling&quot;&gt;A better LLM interface in tooling&lt;&#x2F;h3&gt;
&lt;p&gt;This requires changes to Structurizr. It could provide build&#x2F;run instructions for LLMs via an extensive &lt;code&gt;--help&lt;&#x2F;code&gt; output, or ship &lt;a href=&quot;&#x2F;p&#x2F;add-a-cli-subcommand-to-keep-llm-instructions-in-sync&#x2F;&quot;&gt;a dedicated subcommand that prints LLM instructions&lt;&#x2F;a&gt; (similar to &lt;code&gt;bd prime&lt;&#x2F;code&gt;).&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>AI’s Opportunity: Pacing Control Loops with Development</title>
        <published>2026-02-04T00:00:00+00:00</published>
        <updated>2026-02-04T00:00:00+00:00</updated>
        
        <author>
          <name>Unknown</name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/"/>
        <id>https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/">&lt;p&gt;What caught my attention in the book &lt;em&gt;Vibe Coding&lt;&#x2F;em&gt; by Gene Kim and Steve Yegge is the idea that, as LLMs and coding agents change how we build software, control loops—tests, reviews, and other signals that tell you whether a change behaves as expected—should be faster and more integrated into development feedback loops than before. My intuition says this makes perfect sense.&lt;&#x2F;p&gt;
&lt;p&gt;For example, when there is a dedicated test stage or a QA role that tests after the fact, that role inevitably struggles to keep up with the speed of development. Over time, this makes it increasingly difficult to sustain a &#x27;testing after the fact&#x27; approach to quality.&lt;&#x2F;p&gt;
&lt;p&gt;So how do we solve this?&lt;&#x2F;p&gt;
&lt;p&gt;Some may think the solution is to introduce AI at the test level, still after the fact. However, at the rate of development I see and read about, that approach will struggle to keep up. One either has to accept not taking full advantage of what AI can bring to development, or rethink how testing is integrated into the development process.&lt;&#x2F;p&gt;
&lt;p&gt;Put bluntly: if AI lets you produce a feature in hours but the first meaningful acceptance signal only arrives days later in a separate stage, quality assurance becomes the bottleneck.&lt;&#x2F;p&gt;
&lt;p&gt;To me, the logical consequence is a stronger shift towards automated quality controls, including acceptance tests and code reviews at the least. I refer to acceptance tests here as writing executable specifications of expected behaviour (in domain language) before or alongside the code. This implies that testing &lt;em&gt;has&lt;&#x2F;em&gt; to move earlier in the development chain &lt;em&gt;because&lt;&#x2F;em&gt; of AI.&lt;&#x2F;p&gt;
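&lt;p&gt;To make that concrete, an executable specification in domain language can be as simple as this (an invented, tool-agnostic example in the Gherkin style):&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#eff1f5;color:#4f5b66;&quot;&gt;&lt;code&gt;Scenario: refund within the cooling-off period
  Given a customer bought a subscription 5 days ago
  When the customer requests a refund
  Then the refund is approved in full
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Nothing here mentions implementation details; the specification stays stable while the code underneath changes.&lt;&#x2F;p&gt;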
&lt;p&gt;AI is an opportunity to start writing acceptance tests if you have not yet. It pushes us to invest time in strategic test design, testing against stable contracts, testing from a behavioural point of view, and isolating test descriptions from the actual implementation.&lt;&#x2F;p&gt;
&lt;p&gt;Put differently, the shift in development practices that LLMs are causing should inspire &lt;em&gt;more&lt;&#x2F;em&gt; adherence to testing best practices, not less. That is, if you want to keep on adding new features, fix and prevent bugs, and keep up the pace of development.&lt;&#x2F;p&gt;
&lt;p&gt;More broadly, to keep benefiting from AI over time, we should shift towards tight feedback loops embedded in everyday development. This is not limited to testing but also applies to, for example, reviews. In that sense, AI doesn’t remove quality practices; it raises the stakes if you don’t have them.&lt;&#x2F;p&gt;
&lt;p&gt;If testing &#x27;shifts left&#x27;, team structures must change as well. This evolution points towards smaller, more autonomous teams where testing, development, and feedback are inseparable rather than sequential.&lt;&#x2F;p&gt;
&lt;p&gt;AI presents us with an opportunity: not faster quality control after the fact, but to design systems, processes, and teams that make quality the fastest path forward, so control loops can keep up with the increasing pace of development loops.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>On Building Reliable Software with LLMs</title>
        <published>2026-01-27T00:00:00+00:00</published>
        <updated>2026-01-27T00:00:00+00:00</updated>
        
        <author>
          <name>Unknown</name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/on-building-reliable-software-with-llms/"/>
        <id>https://hanlho.com/p/on-building-reliable-software-with-llms/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/on-building-reliable-software-with-llms/">&lt;p&gt;This post captures my current thinking on how LLMs are impacting software development, particularly around software quality and engineering discipline.&lt;&#x2F;p&gt;
&lt;p&gt;My main observation: most of the best practices we&#x27;ve relied on for years are just as important—maybe even more so—in an LLM-assisted development environment. Working with LLMs requires &lt;em&gt;more&lt;&#x2F;em&gt; discipline and attention to fundamentals, not less.&lt;&#x2F;p&gt;
&lt;p&gt;When using LLMs, there is a heightened risk of losing understanding: of the problem domain, the underlying technology, and the implementation details. Code can become messy quickly without careful attention, review, and guidance. While this is certainly true, we didn&#x27;t need LLMs for this to happen. Why else have so many projects failed historically? Why is technical debt a topic in most projects?&lt;&#x2F;p&gt;
&lt;p&gt;The critical difference with LLMs is the &lt;em&gt;increased risk and temptation of velocity&lt;&#x2F;em&gt;. We move too fast and skip the practices that help us maintain and change software in the future. Discipline and &lt;a href=&quot;https:&#x2F;&#x2F;aicoding.leaflet.pub&#x2F;3mbrvhyye4k2e&quot;&gt;rigour&lt;&#x2F;a&gt; have become more important than ever.&lt;&#x2F;p&gt;
&lt;p&gt;These practices are becoming MORE crucial in an LLM-assisted workflow:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Codebase quality.&lt;&#x2F;strong&gt; This matters for LLM agents too, because they learn from existing code. A clean, well-organised codebase helps agents perform better; inconsistencies lead to poorer results. An LLM will mimic what&#x27;s already there.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Feedback loops and testing.&lt;&#x2F;strong&gt; If an LLM is helping you write code, you need reliable ways to verify it hasn&#x27;t broken anything. A well-designed, automated test suite that&#x27;s easy to extend and interpret helps maintain understanding and control of implemented functionality.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Well-designed boundaries and contracts&lt;&#x2F;strong&gt; both within and outside your application. These allow you to constrain, shape, isolate, and test the work an LLM produces.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Managing risk and technical debt.&lt;&#x2F;strong&gt; Be intentional and explicit about where you rely on LLMs and where you don&#x27;t. Document these decisions. Maintain a technical debt log with risk assessments and timelines.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Documentation of past decisions.&lt;&#x2F;strong&gt; Keep a history of architectural decisions through decision logs and ADRs, and ensure the LLM you&#x27;re working with is aware of them. I&#x27;ve had LLMs point out inconsistencies in the codebase or flag how new change requests conflict with past decisions.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
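&lt;p&gt;As one concrete example of the above, a technical debt log entry does not need to be heavyweight; an invented sketch:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#eff1f5;color:#4f5b66;&quot;&gt;&lt;code&gt;## Technical debt log

- 2026-01: retry logic in the sync job was LLM-generated and merged
  without backoff tests. Risk: medium (duplicate syncs under load).
  Revisit by: next quarter.
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;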
&lt;p&gt;Taken together, these practices are what make LLM-assisted development sustainable rather than brittle.&lt;&#x2F;p&gt;
&lt;p&gt;I don&#x27;t think the skill gap in building and delivering software will ultimately be about prompt cleverness. LLM agents will be genuinely helpful tools, and working effectively with them will be an accelerator. However, as we rely on them more to write code—even when we review that code carefully—the most important work increasingly becomes the disciplined practice of boxing them in with testing, architecture, and contracts.&lt;&#x2F;p&gt;
&lt;p&gt;Avoid painting yourself into a corner. In an LLM-assisted workflow, that means being deliberate about where you let agents move fast, and where you slow them down with guardrails. LLMs make it easier to move fast, and easier to get stuck.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Ground Your ADRs with a Verification Section</title>
        <published>2024-12-11T00:00:00+00:00</published>
        <updated>2024-12-11T00:00:00+00:00</updated>
        
        <author>
          <name>Unknown</name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/ground-your-adrs-with-a-verification-section/"/>
        <id>https://hanlho.com/p/ground-your-adrs-with-a-verification-section/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/ground-your-adrs-with-a-verification-section/">&lt;p&gt;Making architectural decisions is one thing, but have you ever wondered how to make them more effective? Adding a Verification section to Architecture Decision Records (ADRs) can make the difference. This simple addition bridges the gap between theory and practice, making decisions actionable and measurable.&lt;&#x2F;p&gt;
&lt;p&gt;If you&#x27;re new to ADRs, check out &lt;a href=&quot;&#x2F;p&#x2F;less-mentioned-benefits-of-architecture-decision-records&#x2F;&quot;&gt;my post on their benefits&lt;&#x2F;a&gt; first.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;common-adr-challenges&quot;&gt;Common ADR Challenges&lt;&#x2F;h2&gt;
&lt;p&gt;Architecture Decision Records (ADRs) are a valuable tool for capturing architectural choices. However, they can suffer from the following weaknesses:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Decisions can be made, or become, detached from implementation&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Intent and success criteria remain vague or unmeasurable&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;These challenges can lead to:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Architectural decisions that exist only on paper&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Difficulty in assessing whether decisions are being followed&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Missed opportunities for feedback and improvement&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;introducing-verification&quot;&gt;Introducing Verification&lt;&#x2F;h2&gt;
&lt;p&gt;Adding a &#x27;Verification&#x27; section to ADRs helps ground these decisions in reality and improve their quality. This section details how decisions will be evaluated or implemented, creating a clear path from decision to implementation.&lt;&#x2F;p&gt;
&lt;p&gt;Writing an ADR is often only the beginning, not the end. Establishing feedback loops helps to crystallise our decisions in actual implementation.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;grounding-architectural-decisions-in-reality&quot;&gt;Grounding Architectural Decisions in Reality&lt;&#x2F;h3&gt;
&lt;p&gt;It is a good idea to state goals in measurable and verifiable ways, and this applies equally to architecture and engineering. This is similar to writing requirements in parallel with the tests for these requirements. Or how writing code can put an analysis to the test. Each of these practices grounds our work in reality and provides feedback on our intentions.&lt;&#x2F;p&gt;
&lt;p&gt;Architecture decisions benefit from a similar grounding. Just as business analysts and developers can collaborate through behavioural tests, architects and developers can collaborate through verification criteria. More broadly, regardless of roles, the architecture function benefits from verification criteria. This creates feedback loops that improve quality.&lt;&#x2F;p&gt;
&lt;p&gt;Importantly, verification works in both directions. Because architecture and implementation are connected, developers engage with the architecture, and in return the architect receives feedback on what is working and what is not. When decisions aren&#x27;t being acted upon, it becomes visible, which is a sign to investigate why and adapt.&lt;&#x2F;p&gt;
&lt;p&gt;A key benefit of keeping verification in mind: ADRs become more actionable and effective.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;verification-approaches&quot;&gt;Verification Approaches&lt;&#x2F;h2&gt;
&lt;p&gt;You might think this adds substantial work to your process. Not necessarily. Verification methods can range from simple to sophisticated, and different architectural decisions require different verification approaches. Or, to get started, you may opt for a very lightweight approach.&lt;&#x2F;p&gt;
&lt;p&gt;Choose an approach that matches your context and needs. Here are some options:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Ask questions: the simplest approach uses straightforward questions for self-assessment.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Automated tests &#x2F; fitness functions: integrate checks into your development pipeline and look for ways to assess architectural characteristics objectively.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Define metrics: what can you measure to determine if decisions are effective?&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Architecture Hoisting: the most comprehensive and strict approach, as described by George Fairbanks:&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Architecture hoisting is a stricter kind of architecture-focused design. When following an architecture hoisting approach, developers design the architecture with the intent of guaranteeing a goal or property of the system. Guarantees are difficult to come by in any kind of software design, but architecture hoisting strives to guarantee a goal or property through architecture choices. The idea is that once a goal or property has been hoisted into the architecture, developers should not need to write additional code to achieve it.&quot;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Let&#x27;s see some examples.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;simple&quot;&gt;Simple&lt;&#x2F;h3&gt;
&lt;p&gt;Suppose the recorded decision is to put all code in a mono-repo. Then the verification step could be as simple as:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;json&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-json &quot;&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span&gt;## Verification
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Is your service in our mono-repo &amp;lt;include name&amp;gt;?
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This looks too easy, right? In some cases it simply can be. Especially when your ADRs apply to multiple teams and verification is not handled centrally, verification tends to be more question- or checklist-based than actual steering towards an implementation.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;self-assessment&quot;&gt;Self-Assessment&lt;&#x2F;h3&gt;
&lt;p&gt;For another example, suppose your cross-team decision is to adopt Continuous Delivery:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;json&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-json &quot;&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span&gt;## Verification
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;- Is the CI pipeline automated?
&lt;&#x2F;span&gt;&lt;span&gt;- Can you deploy to production without manual steps?
&lt;&#x2F;span&gt;&lt;span&gt;- If not, what manual gates are in place?
&lt;&#x2F;span&gt;&lt;span&gt;- Are feature flags used for in-progress work?
&lt;&#x2F;span&gt;&lt;span&gt;- Are branches short-lived?
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Note the self-assessment tone. This style can be adopted when architecture is an enabling and supporting function.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;automated&quot;&gt;Automated&lt;&#x2F;h3&gt;
&lt;p&gt;An example that requires more effort but provides stronger compliance guarantees:&lt;&#x2F;p&gt;
&lt;p&gt;Suppose we are all working in the previously mentioned mono-repo and discover that teams are not respecting service boundaries. We decide to establish naming conventions so we can enforce that no inappropriate, accidental dependencies are created. We could again simply ask in the Verification section whether naming conventions are followed, but automating this check is relatively easy, so we decide to do so.&lt;&#x2F;p&gt;
&lt;p&gt;Here we make compliance with our ADR automatic by hoisting the decision into our way of working.&lt;&#x2F;p&gt;
&lt;p&gt;The verification section in the initial ADR may then look like this:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;json&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-json &quot;&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span&gt;## Verification
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;A dependency checker will be written and integrated in our build tooling for each service.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;- Are all of your services&amp;#39; build tooling integrated with the dependency checker?
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
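To make this concrete, here is a minimal sketch of such a dependency checker in Python. The service name (`billing`), the naming convention (a service may only import from its own package or from packages ending in `_shared`), and the function itself are all hypothetical, invented for illustration; a real checker would be wired into your build tooling.

```python
import re

# Hypothetical convention: the "billing" service may only import from its
# own package or from shared packages whose name ends in "_shared".
SERVICE = "billing"
ALLOWED = re.compile(rf"^(?:{SERVICE}|\w+_shared)(?:\.|$)")

def check_imports(source: str) -> list[str]:
    """Return the imported module paths that violate the naming convention."""
    violations = []
    for line in source.splitlines():
        match = re.match(r"\s*(?:from|import)\s+([\w.]+)", line)
        if match and not ALLOWED.match(match.group(1)):
            violations.append(match.group(1))
    return violations
```

Run against a source file of the hypothetical `billing` service, this would allow `billing.models` and `audit_shared.log` but flag a direct dependency such as `payments.api`.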
&lt;p&gt;Roles and organizational structures can heavily influence the Verification section. If you have, say, a Developer Experience team, they will likely work with the teams and make sure this gets integrated into their CI pipelines. Other times, seniors in the teams may take this upon themselves. Ideally, these responsibilities are made clear in the Verification section as well.&lt;&#x2F;p&gt;
&lt;p&gt;More examples of self-assessment questions can be found at &lt;a href=&quot;https:&#x2F;&#x2F;engineering-principles.jlp.engineering&#x2F;self-assessment&#x2F;&quot;&gt;John Lewis&#x27; Software Engineering Principles Self-Assessment&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;implementing-the-verification-section&quot;&gt;Implementing the Verification section&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;code&gt;Verification&lt;&#x2F;code&gt; is my recommended term because of this definition:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;VERIFY implies the establishing of correspondence of actual facts or details with those proposed or guessed at.&quot; (Merriam-Webster)&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;However, depending on your environment, a different name could be a pragmatic choice. &lt;code&gt;Compliance&lt;&#x2F;code&gt; often resonates better in regulated environments. I have used &lt;code&gt;Validation&lt;&#x2F;code&gt; before; while regularly validating or evaluating our decisions is good practice, here the aim is to put a system in place that checks whether our decisions are actually implemented.&lt;&#x2F;p&gt;
&lt;p&gt;Here is an example of a &lt;code&gt;Verification&lt;&#x2F;code&gt; section you could include in your ADR template:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;json&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-json &quot;&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span&gt;## Verification
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;How will we ensure compliance with this decision?
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Consider:
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;- Questions for self-assessment
&lt;&#x2F;span&gt;&lt;span&gt;- Specific metrics
&lt;&#x2F;span&gt;&lt;span&gt;- Verifications in the form of tests or fitness functions
&lt;&#x2F;span&gt;&lt;span&gt;- Implementation guidance
&lt;&#x2F;span&gt;&lt;span&gt;- Making it easy to adopt
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Based on this, a more extended example for a Continuous Delivery ADR:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;json&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-json &quot;&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span&gt;## Verification
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Questions:
&lt;&#x2F;span&gt;&lt;span&gt;- Is the CI pipeline automated?
&lt;&#x2F;span&gt;&lt;span&gt;- Can you deploy to production without manual steps?
&lt;&#x2F;span&gt;&lt;span&gt;- Are feature flags used for in-progress work?
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Metrics:
&lt;&#x2F;span&gt;&lt;span&gt;- Deployment frequency (target: daily)
&lt;&#x2F;span&gt;&lt;span&gt;- Lead time for changes (target: &amp;lt; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;1&lt;&#x2F;span&gt;&lt;span&gt; day)
&lt;&#x2F;span&gt;&lt;span&gt;- Change failure rate (target: &amp;lt; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;15&lt;&#x2F;span&gt;&lt;span&gt;%)
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Implementation:
&lt;&#x2F;span&gt;&lt;span&gt;- Teams must implement automated deployment, Delivery Platform Team will assist
&lt;&#x2F;span&gt;&lt;span&gt;- Regular metrics reporting (data can be collected manually)
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
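Metrics like these lend themselves to simple automation. Here is a hedged Python sketch checking deployments against the lead-time target; the deployment log of (commit time, deploy time) pairs is invented for illustration, as a real version would pull this data from your delivery pipeline.

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (commit_time, deploy_time) per deployment.
deployments = [
    (datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 6, 15, 0)),   # 6 hours
    (datetime(2025, 1, 7, 11, 0), datetime(2025, 1, 9, 12, 0)),  # 49 hours
    (datetime(2025, 1, 8, 14, 0), datetime(2025, 1, 8, 16, 0)),  # 2 hours
]

def lead_times(pairs):
    """Lead time for changes: time from commit to production, per deployment."""
    return [deploy - commit for commit, deploy in pairs]

def share_within_target(pairs, target=timedelta(days=1)):
    """Fraction of deployments meeting the ADR's lead-time target (< 1 day)."""
    times = lead_times(pairs)
    return sum(t <= target for t in times) / len(times)
```

Reporting this fraction regularly, even from manually collected data, gives the ADR's "Lead time for changes" target a concrete, trackable number.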
&lt;h2 id=&quot;considerations&quot;&gt;Considerations&lt;&#x2F;h2&gt;
&lt;p&gt;Won&#x27;t adding this section complicate writing ADRs? Well, yes: there is another section to write, and you need to think about how to make your ADRs more effective. But making them more effective and actionable is something you want regardless, so that thinking should not be skipped. Incorporate verification thinking while writing the ADR: state decisions in measurable and verifiable ways.&lt;&#x2F;p&gt;
&lt;p&gt;Is it better to have a record of &lt;em&gt;a&lt;&#x2F;em&gt; decision without this section than no record at all? My answer is yes. Don&#x27;t let perfection stop you from adopting ADRs or recording your decisions - use verification to improve them over time.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Architecture benefits from being grounded in implementation. Adding a Verification section to your ADRs creates feedback loops and improves their quality.&lt;&#x2F;p&gt;
&lt;p&gt;The verification section serves as more than a checklist. It:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Connects architecture with implementation&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Provides a mechanism for feedback in both directions&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Reveals when decisions aren&#x27;t being acted upon&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Makes ADRs more actionable and effective&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;This grounding helps prevent architectural decisions from remaining theoretical or becoming shelfware.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Less Mentioned Benefits of Architecture Decision Records</title>
        <published>2024-11-27T00:00:00+00:00</published>
        <updated>2024-11-27T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/less-mentioned-benefits-of-architecture-decision-records/"/>
        <id>https://hanlho.com/p/less-mentioned-benefits-of-architecture-decision-records/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/less-mentioned-benefits-of-architecture-decision-records/">&lt;p&gt;Teams adopt &lt;a href=&quot;https:&#x2F;&#x2F;adr.github.io&#x2F;adr-templates&#x2F;&quot;&gt;Architecture Decision Records&lt;&#x2F;a&gt; (ADRs) to document decisions, avoid revisiting settled matters, onboard newcomers, adapt to changing circumstances, and sharpen their thinking. However, there are several other valuable benefits that often go unnoticed.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;a-clear-decision-process&quot;&gt;A clear decision process&lt;&#x2F;h4&gt;
&lt;p&gt;The most basic of these: having a clear decision process beats having none at all.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;objective-evaluation&quot;&gt;Objective evaluation&lt;&#x2F;h4&gt;
&lt;p&gt;ADRs provide a framework to evaluate design choices objectively. Instead of getting caught in personal debates, we focus on documenting and analysing technical trade-offs together.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Developers on the same project will often disagree about design options. The disagreements can often be resolved, or at least reduced from a boil to a simmer, by exposing the decision making process. If the disagreement is simply that one developer thinks option A is better, and another thinks option B is better, it is hard to choose. When the rationale for each is expressed using the template, it may be clear that A helps usability, while B helps testability. This does not immediately resolve the disagreement, since both usability and testability are desirable, but now the question is which quality is a higher priority for the project. It casts the problem as an engineering or requirements decision, not as a judgment about who is the better designer, and can help take egos out of the dispute.&quot; (George Fairbanks, Just Enough Software Architecture)&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h4 id=&quot;grounding-in-reality&quot;&gt;Grounding in reality&lt;&#x2F;h4&gt;
&lt;p&gt;Through ADRs, engineering roles could become more invested in architectural decisions, while architect roles stay connected with practical implementation realities. This grounds architectural decisions in practical reality, bridging vision and practice.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;preventing-paralysis&quot;&gt;Preventing paralysis&lt;&#x2F;h4&gt;
&lt;p&gt;When teams can&#x27;t trace why and by whom past decisions were made, decision paralysis can set in. ADRs counter this by providing the documented context teams need to evolve their systems more confidently.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;efficiency&quot;&gt;Efficiency&lt;&#x2F;h4&gt;
&lt;p&gt;Capturing architectural decisions through ADRs requires minimal effort compared to the time and resources teams spend rediscovering or reconstructing the reasoning behind past choices. This investment in documentation prevents costly archaeological expeditions through old emails, meetings, and code.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;The cost of recording knowledge is small compared to the cost of acquiring knowledge.&quot; (Software Development Pearls, Karl Wiegers)&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h4 id=&quot;further-reading&quot;&gt;Further Reading&lt;&#x2F;h4&gt;
&lt;p&gt;If you&#x27;d like to learn more about ADRs, here are some of my favorite online resources:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;adr.github.io&quot;&gt;The GitHub adr organization&lt;&#x2F;a&gt; provides additional motivation, tooling, and pointers to public documentation.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Olaf Zimmermann&#x27;s &lt;a href=&quot;https:&#x2F;&#x2F;ozimmer.ch&#x2F;index&#x2F;2020&#x2F;04&#x2F;15&#x2F;BlogHighlightsAndOutlook.html&quot;&gt;Architectural Decisions (ADs) and Software Architecture&lt;&#x2F;a&gt; offers insights into the practice.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Software Architecture Note: On Negotiation and Limiting Accidental Complexity</title>
        <published>2023-11-28T00:00:00+00:00</published>
        <updated>2023-11-28T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/software-architecture-note-on-negotiation-and-limiting-accidental-complexity/"/>
        <id>https://hanlho.com/p/software-architecture-note-on-negotiation-and-limiting-accidental-complexity/</id>
        
<content type="html" xml:base="https://hanlho.com/p/software-architecture-note-on-negotiation-and-limiting-accidental-complexity/">&lt;p&gt;In this brief article, I will discuss two insights from the book &lt;a href=&quot;https:&#x2F;&#x2F;www.goodreads.com&#x2F;book&#x2F;show&#x2F;44144493-fundamentals-of-software-architecture&quot;&gt;&lt;strong&gt;Fundamentals of Software Architecture: An Engineering Approach&lt;&#x2F;strong&gt;&lt;&#x2F;a&gt; by Mark Richards and Neal Ford: the importance of negotiation in an architect&#x27;s job and an effective communication approach. I will then offer a way to incorporate these aspects into architectural processes.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;negotiation-and-complexity-in-the-architect-role&quot;&gt;Negotiation and complexity in the architect role&lt;&#x2F;h3&gt;
&lt;p&gt;Negotiation is an integral part of a software architect&#x27;s role, as they will be challenged by developers who may possess greater technical knowledge of the proposed solution, by other architects within the organisation who may question their ideas or approach to the problem, and by business stakeholders who will demand justification for the cost and time invested. While this may seem logical when read, seeing it explicitly stated provides comfort in knowing that disagreements are normal and that working together often requires adjusting ideas along the way.&lt;&#x2F;p&gt;
&lt;p&gt;Another challenge for an architect, and for a developer as well, is to avoid accidental complexity (&quot;we made a problem hard&quot;) and focus on the essential complexity (&quot;we have a hard problem&quot;). An effective way to limit accidental complexity is what the book calls the 4 C&#x27;s: &lt;strong&gt;C&lt;&#x2F;strong&gt;lear and &lt;strong&gt;C&lt;&#x2F;strong&gt;oncise &lt;strong&gt;C&lt;&#x2F;strong&gt;ommunication, and &lt;strong&gt;C&lt;&#x2F;strong&gt;ollaboration with all stakeholders. These are essential for a software architect to limit the chances of accidentally making things harder.&lt;&#x2F;p&gt;
&lt;p&gt;While these leadership traits are important for the &lt;em&gt;role&lt;&#x2F;em&gt; of an architect, they can also be incorporated into the architectural &lt;em&gt;process&lt;&#x2F;em&gt;.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;negotiation-and-complexity-in-the-architecture-process&quot;&gt;Negotiation and complexity in the architecture process&lt;&#x2F;h3&gt;
&lt;p&gt;One way to achieve the 4 C&#x27;s and facilitate negotiation is to introduce an &lt;em&gt;Architecture Forum&lt;&#x2F;em&gt;. This approach is not mentioned in the book, but it is extensively described in the article &lt;a href=&quot;https:&#x2F;&#x2F;martinfowler.com&#x2F;articles&#x2F;scaling-architecture-conversationally.html&quot;&gt;&quot;Scaling the Practice of Architecture, Conversationally&quot;&lt;&#x2F;a&gt;, and I have implemented it as an improvement over other processes (e.g. an &#x27;Architecture Board&#x27;). Such a forum leads to artefacts in the form of principles, lightweight Architecture Decision Records (ADRs), and a &#x27;Tech Radar&#x27;. The Forum helps share new ideas and decide which partners to collaborate with on different topics. The people involved then write down their choices clearly and concisely, aligning with the intent of ADRs.&lt;&#x2F;p&gt;
&lt;p&gt;Negotiation happens while documenting the decisions. Collaborating on written text leaves less room for interpretation than a meeting does, and you are directly working on what will become the final result. (Of course, meetings can still support converging on the final decision.) The records of these decisions will help others, and our future selves, remember the often-forgotten aspect of how things are: the why.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;p&gt;A lot more can be written on the topic of an Architecture Forum and its artefacts, especially ADRs, but I leave it at this for now. If you would like to know more, do not hesitate to show interest and I may go deeper in the next post.&lt;&#x2F;p&gt;
</content>
        
    </entry>
</feed>
