<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title>HanLHo. - Fractional Architect &amp; Software Product Engineer - testing</title>
    <link rel="self" type="application/atom+xml" href="https://hanlho.com/tags/testing/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://hanlho.com"/>
    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-02-27T00:00:00+00:00</updated>
    <id>https://hanlho.com/tags/testing/atom.xml</id>
    <entry xml:lang="en">
        <title>Harness Engineering</title>
        <published>2026-02-27T00:00:00+00:00</published>
        <updated>2026-02-27T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/harness-engineering/"/>
        <id>https://hanlho.com/p/harness-engineering/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/harness-engineering/">&lt;p&gt;Today I heard the term “harness engineering” for the first time:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Harness engineering is the practice of building tooling, tests, and automation that let coding agents execute tasks safely and reliably.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;If code is written more and more by LLMs, the focus seems to be shifting to creating guardrails so agents can validate their own work.&lt;&#x2F;p&gt;
&lt;p&gt;Heard in: &lt;a href=&quot;https:&#x2F;&#x2F;share.snipd.com&#x2F;episode&#x2F;d7924de8-b7e1-41fc-8f37-6edee96f12f6&quot;&gt;The Pragmatic Engineer - Mitchell Hashimoto’s new way of writing code&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>AI’s Opportunity: Pacing Control Loops with Development</title>
        <published>2026-02-04T00:00:00+00:00</published>
        <updated>2026-02-04T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/"/>
        <id>https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/">&lt;p&gt;What caught my attention in the book &lt;em&gt;Vibe Coding&lt;&#x2F;em&gt; by Gene Kim and Steve Yegge is the idea that, as LLMs and coding agents change how we build software, control loops—tests, reviews, and other signals that tell you whether a change behaves as expected—should be faster and more integrated into development feedback loops than before. My intuition says this makes perfect sense.&lt;&#x2F;p&gt;
&lt;p&gt;For example, when there is a dedicated test stage or a QA role that tests after the fact, that role inevitably struggles to keep up with the speed of development. Over time, this makes it increasingly difficult to sustain quality through &#x27;testing after the fact&#x27;.&lt;&#x2F;p&gt;
&lt;p&gt;So how do we solve this?&lt;&#x2F;p&gt;
&lt;p&gt;Some may think that applying AI to the after-the-fact test stage could be the solution. However, at the rate of development I see and read about, this approach will struggle to keep up. One either has to accept not fully taking advantage of what AI can bring to development, or rethink how testing is integrated into the development process.&lt;&#x2F;p&gt;
&lt;p&gt;Put bluntly: if AI lets you produce a feature in hours but the first meaningful acceptance signal only arrives days later in a separate stage, quality assurance becomes the bottleneck.&lt;&#x2F;p&gt;
&lt;p&gt;To me, the logical consequence is a stronger shift towards automated quality controls, including acceptance tests and code reviews at the least. I refer to acceptance tests here as writing executable specifications of expected behaviour (in domain language) before or alongside the code. This implies that testing &lt;em&gt;has&lt;&#x2F;em&gt; to move earlier in the development chain &lt;em&gt;because&lt;&#x2F;em&gt; of AI.&lt;&#x2F;p&gt;
&lt;p&gt;AI is an opportunity to start writing acceptance tests if you have not yet. It pushes us to invest time in strategic test design, testing against stable contracts, testing from a behavioural point of view, and isolating test descriptions from the actual implementation.&lt;&#x2F;p&gt;
&lt;p&gt;Put differently, the shift in development practices that LLMs are causing should inspire &lt;em&gt;more&lt;&#x2F;em&gt; adherence to testing best practices, not less. That is, if you want to keep on adding new features, fix and prevent bugs, and keep up the pace of development.&lt;&#x2F;p&gt;
&lt;p&gt;More broadly, to keep benefiting from AI over time, we should shift towards tightly coupled feedback loops embedded in everyday development. This is not limited to testing but also applies to, for example, reviews. In that sense, AI doesn’t remove quality practices; it raises the stakes if you don’t have them.&lt;&#x2F;p&gt;
&lt;p&gt;If testing &#x27;shifts left&#x27;, team structures must change as well. This evolution points towards smaller, more autonomous teams where testing, development, and feedback are inseparable rather than sequential.&lt;&#x2F;p&gt;
&lt;p&gt;AI presents us with an opportunity: not faster quality control after the fact, but to design systems, processes, and teams that make quality the fastest path forward, so control loops can keep up with the increasing pace of development loops.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>On Building Reliable Software with LLMs</title>
        <published>2026-01-27T00:00:00+00:00</published>
        <updated>2026-01-27T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/on-building-reliable-software-with-llms/"/>
        <id>https://hanlho.com/p/on-building-reliable-software-with-llms/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/on-building-reliable-software-with-llms/">&lt;p&gt;This post captures my current thinking on how LLMs are impacting software development, particularly around software quality and engineering discipline.&lt;&#x2F;p&gt;
&lt;p&gt;My main observation: most of the best practices we&#x27;ve relied on for years are just as important—maybe even more so—in an LLM-assisted development environment. Working with LLMs requires &lt;em&gt;more&lt;&#x2F;em&gt; discipline and attention to fundamentals, not less.&lt;&#x2F;p&gt;
&lt;p&gt;When using LLMs, there is a heightened risk of losing understanding: of the problem domain, the underlying technology, and the implementation details. Code can become messy quickly without careful attention, review, and guidance. While this is certainly true, we didn&#x27;t need LLMs for this to happen. Why else have so many projects failed historically? Why is technical debt a topic in most projects?&lt;&#x2F;p&gt;
&lt;p&gt;The critical difference with LLMs is the &lt;em&gt;increased risk and temptation of velocity&lt;&#x2F;em&gt;. We move too fast and skip the practices that help us maintain and change software in the future. Discipline and &lt;a href=&quot;https:&#x2F;&#x2F;aicoding.leaflet.pub&#x2F;3mbrvhyye4k2e&quot;&gt;rigour&lt;&#x2F;a&gt; have become more important than ever.&lt;&#x2F;p&gt;
&lt;p&gt;These practices are becoming &lt;em&gt;more&lt;&#x2F;em&gt; crucial in an LLM-assisted workflow:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Codebase quality.&lt;&#x2F;strong&gt; This matters for LLM agents too, because they learn from existing code. A clean, well-organised codebase helps agents perform better; inconsistencies lead to poorer results. An LLM will mimic what&#x27;s already there.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Feedback loops and testing.&lt;&#x2F;strong&gt; If an LLM is helping you write code, you need reliable ways to verify it hasn&#x27;t broken anything. A well-designed, automated test suite that&#x27;s easy to extend and interpret helps maintain understanding and control of implemented functionality.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Well-designed boundaries and contracts&lt;&#x2F;strong&gt; both within and outside your application. These allow you to constrain, shape, isolate, and test the work an LLM produces.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Managing risk and technical debt.&lt;&#x2F;strong&gt; Be intentional and explicit about where you rely on LLMs and where you don&#x27;t. Document these decisions. Maintain a technical debt log with risk assessments and timelines.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Documentation of past decisions.&lt;&#x2F;strong&gt; Keep a history of architectural decisions through decision logs and ADRs, and ensure the LLM you&#x27;re working with is aware of them. I&#x27;ve had LLMs point out inconsistencies in the codebase or flag how new change requests conflict with past decisions.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Taken together, these practices are what make LLM-assisted development sustainable rather than brittle.&lt;&#x2F;p&gt;
&lt;p&gt;I don&#x27;t think the skill gap in building and delivering software will ultimately be about prompt cleverness. LLM agents will be genuinely helpful tools, and working effectively with them will be an accelerator. However, as we rely on them more to write code—even when we review that code carefully—the most important work increasingly becomes the disciplined practice of boxing them in with testing, architecture, and contracts.&lt;&#x2F;p&gt;
&lt;p&gt;Avoid painting yourself into a corner. In an LLM-assisted workflow, that means being deliberate about where you let agents move fast, and where you slow them down with guardrails. LLMs make it easier to move fast, and easier to get stuck.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Using Code Coverage as a Check for Test Refactoring</title>
        <published>2025-07-03T00:00:00+00:00</published>
        <updated>2025-07-03T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/using-code-coverage-as-a-check-for-test-refactoring/"/>
        <id>https://hanlho.com/p/using-code-coverage-as-a-check-for-test-refactoring/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/using-code-coverage-as-a-check-for-test-refactoring/">&lt;p&gt;&lt;strong&gt;TLDR:&lt;&#x2F;strong&gt; Use code coverage reports to verify that test refactorings haven&#x27;t accidentally changed what functionality you&#x27;re testing. Same coverage percentage before and after refactoring gives confidence your tests still cover the same code paths.&lt;&#x2F;p&gt;
&lt;p&gt;If we refactor code, tests should cover us from breaking things. But what about refactoring tests? How do we know our tests are still testing the same functionality as before?&lt;&#x2F;p&gt;
&lt;p&gt;While test refactoring isn&#x27;t widely discussed, I found myself refactoring the tests of my applications whilst reviewing and researching testing approaches.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-am-i-refactoring-tests&quot;&gt;Why am I refactoring tests?&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;experimentation-with-executable-specifications&quot;&gt;Experimentation with executable specifications&lt;&#x2F;h3&gt;
&lt;p&gt;I often use Fluent Test DSLs to develop executable specifications. The DSL may change if it expresses the functionality more clearly or concisely. This might involve renaming a method, changing an assertion, or refactoring the DSL implementation. These changes are usually low risk, but it&#x27;s helpful to verify we haven&#x27;t accidentally missed any behaviour.&lt;&#x2F;p&gt;
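&lt;p&gt;As a minimal sketch of what such a fluent test DSL can look like (the domain and all names below are invented for illustration, not taken from a real project):&lt;&#x2F;p&gt;

```python
# Minimal sketch of a fluent test DSL as an executable specification.
# The domain (a time report) and all names are invented for illustration.

class TimeReportSpec:
    """Executable specification: each method reads like domain language."""

    def __init__(self):
        self.hours = []

    def given_an_empty_time_report(self):
        return self

    def when_logging_hours(self, day, amount):
        self.hours.append((day, amount))
        return self

    def then_total_hours_should_be(self, expected):
        total = sum(amount for _, amount in self.hours)
        assert total == expected, f"expected {expected} hours, got {total}"
        return self

# Reads as a specification of expected behaviour:
(TimeReportSpec()
    .given_an_empty_time_report()
    .when_logging_hours("monday", 6)
    .when_logging_hours("tuesday", 2)
    .then_total_hours_should_be(8))
```

&lt;p&gt;Renaming a DSL method rewords the specification without touching the behaviour under test, which is what makes these refactorings low risk.&lt;&#x2F;p&gt;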
&lt;h3 id=&quot;optimize-assertions&quot;&gt;Optimize assertions&lt;&#x2F;h3&gt;
&lt;p&gt;Sometimes a single test ends up testing too much and you start trimming the test to only validate the exact behaviour you intended to test, possibly aiming for a single clean &lt;code&gt;assert&lt;&#x2F;code&gt;. This sometimes means creating multiple tests to cover the same functionality.&lt;&#x2F;p&gt;
&lt;p&gt;The opposite may also happen: you have so many tests that feedback time suffers, and you want to combine tests to improve performance. Not ideal, but it may be a pragmatic approach at times.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;consistent-test-naming&quot;&gt;Consistent test naming&lt;&#x2F;h3&gt;
&lt;p&gt;Test naming can grow organically and become inconsistent over time. This happens especially when tests use sentence descriptions. Small changes pile up until your tests all follow slightly different naming styles.&lt;&#x2F;p&gt;
&lt;p&gt;You might adopt &lt;a href=&quot;&#x2F;p&#x2F;test-naming-guidelines&#x2F;&quot;&gt;test naming guidelines&lt;&#x2F;a&gt; for consistency. When you start applying them, you realise you haven&#x27;t reviewed all tests together recently because you&#x27;ve been adding features one by one instead.&lt;&#x2F;p&gt;
&lt;p&gt;During this naming cleanup, you often find other small improvements you want to make. Some tests may no longer seem relevant in the wider context. This leads to more small refactorings.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;ai-supported-test-refactoring&quot;&gt;AI-supported test refactoring&lt;&#x2F;h2&gt;
&lt;p&gt;You might want to use AI to help with these refactorings.&lt;&#x2F;p&gt;
&lt;p&gt;Take the consistent test naming example above. I use AI agents for this task, asking them to review my current tests and align with &lt;a href=&quot;&#x2F;p&#x2F;test-naming-guidelines&#x2F;&quot;&gt;testing guidelines&lt;&#x2F;a&gt;. I also ask the AI to check for small cleanup opportunities.&lt;&#x2F;p&gt;
&lt;p&gt;An AI assistant can make a plan and list all the changes I need to implement. I then work through them one by one, often by copy-pasting. This isn&#x27;t the most exciting work, so it would be easier if the AI could make these changes directly after I approve them.&lt;&#x2F;p&gt;
&lt;p&gt;This raises a question: how do I know I&#x27;m still testing the same functionality without checking every line of code?&lt;&#x2F;p&gt;
&lt;p&gt;Test renaming shouldn&#x27;t change the test logic, but splitting or trimming tests might. You might even let an AI agent change code directly, which could lead to unintended changes.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;verifying-test-refactorings-with-code-coverage&quot;&gt;Verifying test refactorings with code coverage&lt;&#x2F;h2&gt;
&lt;p&gt;The first check after any refactoring is obvious: do the tests still compile and run?&lt;&#x2F;p&gt;
&lt;p&gt;But how do we know the tests still cover the same functionality after refactoring?&lt;&#x2F;p&gt;
&lt;p&gt;I use code coverage as a basic check for this. Code coverage shows which code lines your tests execute, and it can catch obvious mistakes during test refactoring. If I have exactly the same coverage percentage before and after refactoring (like &lt;code&gt;94.614%&lt;&#x2F;code&gt;), it suggests I haven&#x27;t accidentally deleted entire tests or changed which code paths are exercised.&lt;&#x2F;p&gt;
&lt;p&gt;This isn&#x27;t code coverage&#x27;s intended purpose, and it has clear limitations: it won&#x27;t catch changes in test logic or assertions that still execute the same code lines. But it&#x27;s a useful check for catching the most obvious refactoring mistakes.&lt;&#x2F;p&gt;
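&lt;p&gt;As a sketch of what this check can look like (assuming coverage.py&#x27;s JSON report format; the function name and data are illustrative, not from a real project):&lt;&#x2F;p&gt;

```python
# Hypothetical sketch: compare two coverage.py JSON reports taken before
# and after a test refactoring. The report shape ("totals" and "files")
# follows coverage.py's JSON output; the function name is illustrative.

def coverage_unchanged(before_report, after_report):
    """Return True when the total and per-file percentages match exactly."""
    if before_report["totals"]["percent_covered"] != after_report["totals"]["percent_covered"]:
        return False
    # Per-file comparison is stricter than the total: two opposite
    # changes could otherwise cancel out in the overall percentage.
    before_files = {
        name: data["summary"]["percent_covered"]
        for name, data in before_report.get("files", {}).items()
    }
    after_files = {
        name: data["summary"]["percent_covered"]
        for name, data in after_report.get("files", {}).items()
    }
    return before_files == after_files

before = {"totals": {"percent_covered": 94.614},
          "files": {"src/app.py": {"summary": {"percent_covered": 94.614}}}}
after = {"totals": {"percent_covered": 94.614},
         "files": {"src/app.py": {"summary": {"percent_covered": 94.614}}}}
print(coverage_unchanged(before, after))  # prints True
```

&lt;p&gt;Exact equality is deliberate here: even a small difference is a signal to review what changed.&lt;&#x2F;p&gt;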
&lt;h2 id=&quot;when-to-refactor-tests&quot;&gt;When to refactor tests&lt;&#x2F;h2&gt;
&lt;p&gt;Finally, here are the types of test refactorings that code coverage can help verify. They fall into two categories: refactorings of individual tests, which are usually easy to manage, and refactorings of entire test suites, which we sometimes need, or ought, to undertake.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Refactor individual tests when:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The test is too complex or difficult to understand&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test asserts too many behaviours at once&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test is too slow or resource-intensive&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test doesn&#x27;t match the rest of the test suite style&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test is outdated or no longer relevant&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Refactor entire test suites when:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The test suite fails with almost every production code change&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test suite lacks consistency across tests&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test suite doesn&#x27;t align with current testing guidelines&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test suite has become too large or complex to manage&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;The test framework or design needs upgrading&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Using code coverage as a basic check for test refactorings isn&#x27;t comprehensive, but it&#x27;s a simple technique that catches obvious mistakes.&lt;&#x2F;p&gt;
&lt;p&gt;I&#x27;m curious how others approach this challenge. Do you have other methods for ensuring your tests haven&#x27;t been unintentionally modified during refactoring?&lt;&#x2F;p&gt;
&lt;p&gt;Thank you for reading, Hans&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Test Naming Guidelines</title>
        <published>2025-06-26T00:00:00+00:00</published>
        <updated>2025-06-26T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/test-naming-guidelines/"/>
        <id>https://hanlho.com/p/test-naming-guidelines/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/test-naming-guidelines/">&lt;p&gt;&lt;strong&gt;TLDR&lt;&#x2F;strong&gt;: Below are test naming guidelines that help me write consistent, clear test names.&lt;&#x2F;p&gt;
&lt;p&gt;This is a quick post with the Test Naming Guidelines I have been using to make my test names consistent across multiple projects. I also use these with LLMs when writing tests or refactoring multiple tests to this style.&lt;&#x2F;p&gt;
&lt;p&gt;I used one of my personal projects as a basis to develop, refine and structure these guidelines with help from an LLM.&lt;&#x2F;p&gt;
&lt;p&gt;Below are the guidelines.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;format&quot;&gt;Format&lt;&#x2F;h2&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;[subject]_should_[expected_behavior]_[optional_when_condition]
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;components&quot;&gt;Components&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;Subject&lt;&#x2F;strong&gt;: The component, feature, or system under test&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Examples: &lt;code&gt;user&lt;&#x2F;code&gt;, &lt;code&gt;api&lt;&#x2F;code&gt;, &lt;code&gt;filter&lt;&#x2F;code&gt;, &lt;code&gt;time_report&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Expected Behaviour&lt;&#x2F;strong&gt;: What should happen, described as an action or outcome&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Examples: &lt;code&gt;return_success&lt;&#x2F;code&gt;, &lt;code&gt;validate_input&lt;&#x2F;code&gt;, &lt;code&gt;fail_with_error&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Optional When Condition&lt;&#x2F;strong&gt;: Include only when necessary for clarity or disambiguation&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Format: &lt;code&gt;when_[condition]&lt;&#x2F;code&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Examples: &lt;code&gt;when_input_valid&lt;&#x2F;code&gt;, &lt;code&gt;when_user_authenticated&lt;&#x2F;code&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;guidelines-for-when-conditions&quot;&gt;Guidelines for When Conditions&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;include-when-condition-if&quot;&gt;Include &lt;code&gt;when_[condition]&lt;&#x2F;code&gt; if:&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Essential for understanding&lt;&#x2F;strong&gt;: The condition is crucial to know what&#x27;s being tested&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multiple variants exist&lt;&#x2F;strong&gt;: Similar tests with different conditions need distinction&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Specific circumstances&lt;&#x2F;strong&gt;: The behaviour only occurs under particular conditions&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;omit-when-condition-for&quot;&gt;Omit &lt;code&gt;when_[condition]&lt;&#x2F;code&gt; for:&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Basic&#x2F;default behaviour&lt;&#x2F;strong&gt;: Standard functionality that doesn&#x27;t require special conditions&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Self-evident scenarios&lt;&#x2F;strong&gt;: Cases where the expected behaviour already implies the context&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Overly obvious conditions&lt;&#x2F;strong&gt;: When the condition adds no meaningful information&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;examples&quot;&gt;Examples&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;Good (concise when appropriate):&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;health_check_should_return_200
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_be_created
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Avoid (unnecessarily verbose):&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;health_check_should_return_200_when_request_valid&lt;&#x2F;span&gt;&lt;span&gt;  &#x2F;&#x2F; &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;valid&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; is implied by 200 response
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_be_created_when_data_provided&lt;&#x2F;span&gt;&lt;span&gt;  &#x2F;&#x2F; data is obviously needed
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Good (meaningful distinctions):&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_login_successfully_when_credentials_valid
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_be_rejected_when_credentials_invalid
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;principle&quot;&gt;Principle&lt;&#x2F;h2&gt;
&lt;p&gt;Keep test names &lt;strong&gt;as short as possible while maintaining clarity&lt;&#x2F;strong&gt;. The &lt;code&gt;when_&lt;&#x2F;code&gt; clause is a tool for disambiguation, not a mandatory requirement.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;key-takeaways&quot;&gt;Key Takeaways&lt;&#x2F;h2&gt;
&lt;p&gt;Consistent test names come from a simple format: a subject, an expected behaviour, and an optional &lt;code&gt;when_&lt;&#x2F;code&gt; condition.&lt;&#x2F;p&gt;
&lt;p&gt;In short:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Keep test names as short as possible while maintaining clarity&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Use the &lt;code&gt;when_&lt;&#x2F;code&gt; clause for disambiguation, not as a mandatory element&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Apply the guidelines consistently, including when LLMs write or refactor tests for you&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Notes on TDD &amp; DDD From the Ground Up Live Coding</title>
        <published>2024-11-14T00:00:00+00:00</published>
        <updated>2024-11-14T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/notes-on-tdd-ddd-from-the-ground-up-live-coding/"/>
        <id>https://hanlho.com/p/notes-on-tdd-ddd-from-the-ground-up-live-coding/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/notes-on-tdd-ddd-from-the-ground-up-live-coding/">&lt;p&gt;Some short notes from Chris Simon&#x27;s talk &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=1WBIUYJVnok&amp;amp;si=ZiyE8Hvx3U5LsNHR&quot;&gt;TDD &amp;amp; DDD From the Ground Up Live Coding&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;When choosing the right testing level, developers face an important trade-off. Higher-level tests provide better coverage for refactoring, but make it harder to pinpoint the exact location of failures. Finding the right balance is crucial for maintainable tests.&lt;&#x2F;p&gt;
&lt;p&gt;Domain modelling through traditional entity diagrams can lead to hidden assumptions. Event storming, by contrast, helps create more explicit models that better reflect how systems change over time. This approach brings us closer to the actual domain and helps reveal important details we might otherwise miss.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Notes on 🚀 TDD, Where Did It All Go Wrong</title>
        <published>2024-11-14T00:00:00+00:00</published>
        <updated>2024-11-14T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/"/>
        <id>https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/">&lt;p&gt;My short notes on &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=EZ05e7EMOLM&amp;amp;si=z4CYVriIZOOKwQNV&quot;&gt;🚀 TDD, Where Did It All Go Wrong&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Still a lot of testing wisdom in this talk (re-watched 6 years after publication).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;focus-on-behaviour&quot;&gt;Focus on behaviour&lt;&#x2F;h2&gt;
&lt;p&gt;Behaviour should be your primary focus when writing tests. The need for a new test should arise from new behaviour or requirements. While TDD is often contrasted with BDD, Kent Beck&#x27;s early work already emphasised the importance of focusing on behaviour.&lt;&#x2F;p&gt;
&lt;p&gt;When writing tests, be mindful of coupling. Your tests should not all fail when you change your code. Write tests against public contracts where possible. In my experience, you can deviate from this rule if your unit under test has a stable internal contract.&lt;&#x2F;p&gt;
&lt;p&gt;This approach leads to better &#x27;refactorability&#x27;. For a practical demonstration, I recommend watching &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=1WBIUYJVnok&amp;amp;si=ZiyE8Hvx3U5LsNHR&quot;&gt;TDD &amp;amp; DDD From the Ground Up Live Coding by Chris Simon&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;atdd-tools-are-not-worth-it-if-business-is-not-actively-participating&quot;&gt;ATDD tools are not worth it if business is not actively participating&lt;&#x2F;h2&gt;
&lt;p&gt;While I have promoted the use of ATDD (Acceptance Test Driven Development) tools like Gherkin, I must agree that the burden of translating between natural language and code can be &#x27;horrible&#x27;, to the point where internal frameworks are implemented to manage this complexity.&lt;&#x2F;p&gt;
&lt;p&gt;More importantly, the effort may not be worthwhile without an engaged business analyst or product owner. In my experience, business stakeholders rarely show interest in participating. While I&#x27;ve questioned whether I could have done more to encourage engagement, this talk confirms this is a common challenge. Though disappointing to acknowledge, as the practice appears promising on paper, this seems to be the reality we face.&lt;&#x2F;p&gt;
&lt;p&gt;That said, the consistent style these tools promote can still be valuable in test writing.&lt;&#x2F;p&gt;
&lt;p&gt;If you have had success using Gherkin or a similar tool, I would be interested to learn how.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;do-not-forget-about-refactor&quot;&gt;Do not forget about Refactor&lt;&#x2F;h2&gt;
&lt;p&gt;The TDD-cycle is Red-Green-&lt;strong&gt;Refactor&lt;&#x2F;strong&gt;. First fix the problem, then improve the design. The central idea is to decouple thinking about the problem from thinking about the design. The Refactor step is where the design is improved and is an integral part of the cycle.&lt;&#x2F;p&gt;
&lt;p&gt;This methodical approach leads to more maintainable code, contrasting with the approach of the &#x27;Duct-tape Programmer&#x27; (this talk) or &#x27;Tactical Tornado&#x27; (&lt;a href=&quot;https:&#x2F;&#x2F;www.goodreads.com&#x2F;book&#x2F;show&#x2F;39996759-a-philosophy-of-software-design?from_search=true&amp;amp;from_srp=true&amp;amp;qid=VdXFhGejkS&amp;amp;rank=1&quot;&gt;Ousterhout&lt;&#x2F;a&gt;) approach.&lt;&#x2F;p&gt;
&lt;p&gt;When you can improve the design without changing tests, you have achieved good decoupling.&lt;&#x2F;p&gt;
</content>
        
    </entry>
</feed>
