<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title>HanLHo. - Fractional Architect &amp; Software Product Engineer - bdd</title>
    <link rel="self" type="application/atom+xml" href="https://hanlho.com/tags/bdd/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://hanlho.com"/>
    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-02-04T00:00:00+00:00</updated>
    <id>https://hanlho.com/tags/bdd/atom.xml</id>
    <entry xml:lang="en">
        <title>AI’s Opportunity: Pacing Control Loops with Development</title>
        <published>2026-02-04T00:00:00+00:00</published>
        <updated>2026-02-04T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/"/>
        <id>https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/">&lt;p&gt;What caught my attention in the book &lt;em&gt;Vibe Coding&lt;&#x2F;em&gt; by Gene Kim and Steve Yegge is the idea that, as LLMs and coding agents change how we build software, control loops—tests, reviews, and other signals that tell you whether a change behaves as expected—should be faster and more integrated into development feedback loops than before. My intuition says this makes perfect sense.&lt;&#x2F;p&gt;
&lt;p&gt;For example, when there is a dedicated test stage, or a QA role that tests after the fact, that stage or role inevitably struggles to keep up with the speed of development. Over time, this makes it increasingly difficult to sustain an organisation of quality built on &#x27;testing after the fact&#x27;.&lt;&#x2F;p&gt;
&lt;p&gt;So how do we solve this?&lt;&#x2F;p&gt;
&lt;p&gt;Some may think that introducing AI at the after-the-fact test stage could be the solution. However, at the pace of development I see and read about, this approach will struggle to keep up. One either has to accept not taking full advantage of what AI can contribute to development, or rethink how testing is integrated into the development process.&lt;&#x2F;p&gt;
&lt;p&gt;Put bluntly: if AI lets you produce a feature in hours but the first meaningful acceptance signal only arrives days later in a separate stage, quality assurance becomes the bottleneck.&lt;&#x2F;p&gt;
&lt;p&gt;To me, the logical consequence is a stronger shift towards automated quality controls, including at the least acceptance tests and code reviews. By acceptance tests I mean executable specifications of expected behaviour, written in domain language, before or alongside the code. This implies that testing &lt;em&gt;has&lt;&#x2F;em&gt; to move earlier in the development chain &lt;em&gt;because&lt;&#x2F;em&gt; of AI.&lt;&#x2F;p&gt;
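&lt;p&gt;To make this concrete, here is a minimal, hypothetical sketch of such an executable specification in plain Python. The &lt;code&gt;Cart&lt;&#x2F;code&gt; class and its rules are illustrative assumptions, not taken from any real project; the point is that each function states an expected behaviour in domain language and can run as part of the development loop.&lt;&#x2F;p&gt;

```python
# Hypothetical sketch: executable specifications of expected behaviour,
# named in domain language. The Cart class is an illustrative assumption.

class Cart:
    """Minimal shopping cart used only to illustrate behaviour-level specs."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def cart_should_start_empty():
    cart = Cart()
    assert cart.total() == 0


def cart_should_sum_item_prices():
    cart = Cart()
    cart.add("book", 12)
    cart.add("pen", 3)
    assert cart.total() == 15


# Running the specs alongside the code keeps the control loop fast.
cart_should_start_empty()
cart_should_sum_item_prices()
```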
&lt;p&gt;AI is an opportunity to start writing acceptance tests if you have not yet. It pushes us to invest time in strategic test design, testing against stable contracts, testing from a behavioural point of view, and isolating test descriptions from the actual implementation.&lt;&#x2F;p&gt;
&lt;p&gt;Put differently, the shift in development practices that LLMs are causing should inspire &lt;em&gt;more&lt;&#x2F;em&gt; adherence to testing best practices, not less. That is, if you want to keep on adding new features, fix and prevent bugs, and keep up the pace of development.&lt;&#x2F;p&gt;
&lt;p&gt;More broadly, to keep benefiting from AI over time, we should shift towards tight feedback loops embedded in everyday development. This is not limited to testing but also applies to, for example, reviews. In that sense, AI doesn’t remove quality practices; it raises the stakes if you don’t have them.&lt;&#x2F;p&gt;
&lt;p&gt;If testing &#x27;shifts left&#x27;, team structures must change as well. This evolution points towards smaller, more autonomous teams where testing, development, and feedback are inseparable rather than sequential.&lt;&#x2F;p&gt;
&lt;p&gt;AI presents us with an opportunity: not faster quality control after the fact, but to design systems, processes, and teams that make quality the fastest path forward, so control loops can keep up with the increasing pace of development loops.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Test Naming Guidelines</title>
        <published>2025-06-26T00:00:00+00:00</published>
        <updated>2025-06-26T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/test-naming-guidelines/"/>
        <id>https://hanlho.com/p/test-naming-guidelines/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/test-naming-guidelines/">&lt;p&gt;&lt;strong&gt;TLDR&lt;&#x2F;strong&gt;: Below are test naming guidelines that help me write consistent, clear test names.&lt;&#x2F;p&gt;
&lt;p&gt;This is a quick post with the Test Naming Guidelines I have been using to keep my test names consistent across multiple projects. I also use these with LLMs when writing tests or when refactoring multiple tests to this style.&lt;&#x2F;p&gt;
&lt;p&gt;I used one of my personal projects as a basis to develop, refine and structure these guidelines with help from an LLM.&lt;&#x2F;p&gt;
&lt;p&gt;Below are the guidelines.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;format&quot;&gt;Format&lt;&#x2F;h2&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;[subject]_should_[expected_behavior]_[optional_when_condition]
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;components&quot;&gt;Components&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;Subject&lt;&#x2F;strong&gt;: The component, feature, or system under test&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Examples: &lt;code&gt;user&lt;&#x2F;code&gt;, &lt;code&gt;api&lt;&#x2F;code&gt;, &lt;code&gt;filter&lt;&#x2F;code&gt;, &lt;code&gt;time_report&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Expected Behaviour&lt;&#x2F;strong&gt;: What should happen, described as an action or outcome&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Examples: &lt;code&gt;return_success&lt;&#x2F;code&gt;, &lt;code&gt;validate_input&lt;&#x2F;code&gt;, &lt;code&gt;fail_with_error&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Optional When Condition&lt;&#x2F;strong&gt;: Include only when necessary for clarity or disambiguation&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Format: &lt;code&gt;when_[condition]&lt;&#x2F;code&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Examples: &lt;code&gt;when_input_valid&lt;&#x2F;code&gt;, &lt;code&gt;when_user_authenticated&lt;&#x2F;code&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;guidelines-for-when-conditions&quot;&gt;Guidelines for When Conditions&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;include-when-condition-if&quot;&gt;Include &lt;code&gt;when_[condition]&lt;&#x2F;code&gt; if:&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Essential for understanding&lt;&#x2F;strong&gt;: The condition is crucial to know what&#x27;s being tested&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multiple variants exist&lt;&#x2F;strong&gt;: Similar tests with different conditions need distinction&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Specific circumstances&lt;&#x2F;strong&gt;: The behavior only occurs under particular conditions&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;omit-when-condition-for&quot;&gt;Omit &lt;code&gt;when_[condition]&lt;&#x2F;code&gt; for:&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Basic&#x2F;default behavior&lt;&#x2F;strong&gt;: Standard functionality that doesn&#x27;t require special conditions&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Self-evident scenarios&lt;&#x2F;strong&gt;: Cases where the expected behavior already implies the context&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Overly obvious conditions&lt;&#x2F;strong&gt;: When the condition adds no meaningful information&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;examples&quot;&gt;Examples&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;Good (concise when appropriate):&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;health_check_should_return_200
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_be_created
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Avoid (unnecessarily verbose):&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;health_check_should_return_200_when_request_valid&lt;&#x2F;span&gt;&lt;span&gt;  # &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;valid&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; is implied by 200 response
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_be_created_when_data_provided&lt;&#x2F;span&gt;&lt;span&gt;  # data is obviously needed
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Good (meaningful distinctions):&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#eff1f5;color:#4f5b66;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_login_successfully_when_credentials_valid
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;user_should_be_rejected_when_credentials_invalid
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
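&lt;p&gt;Applied to actual test code, the format might look like the following sketch. The &lt;code&gt;login&lt;&#x2F;code&gt; helper is a hypothetical stand-in for a real authentication function; only the naming convention is the point here.&lt;&#x2F;p&gt;

```python
# Hypothetical sketch of the naming format applied to runnable tests.
# login() is an illustrative stand-in, not a real API.

def login(username, password):
    """Toy authentication: accepts exactly one known credential pair."""
    return username == "alice" and password == "secret"


# [subject]_should_[expected_behavior]_[when_condition]
def user_should_login_successfully_when_credentials_valid():
    assert login("alice", "secret") is True


def user_should_be_rejected_when_credentials_invalid():
    assert login("alice", "wrong") is False


user_should_login_successfully_when_credentials_valid()
user_should_be_rejected_when_credentials_invalid()
```

&lt;p&gt;The two names differ only in their &lt;code&gt;when_&lt;&#x2F;code&gt; clause, which is exactly what distinguishes the variants.&lt;&#x2F;p&gt;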
&lt;h2 id=&quot;principle&quot;&gt;Principle&lt;&#x2F;h2&gt;
&lt;p&gt;Keep test names &lt;strong&gt;as short as possible while maintaining clarity&lt;&#x2F;strong&gt;. The &lt;code&gt;when_&lt;&#x2F;code&gt; clause is a tool for disambiguation, not a mandatory requirement.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Notes on 🚀 TDD, Where Did It All Go Wrong</title>
        <published>2024-11-14T00:00:00+00:00</published>
        <updated>2024-11-14T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/"/>
        <id>https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/">&lt;p&gt;My short notes on &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=EZ05e7EMOLM&amp;amp;si=z4CYVriIZOOKwQNV&quot;&gt;🚀 TDD, Where Did It All Go Wrong&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;There is still a lot of testing wisdom in this talk (I re-watched it 6 years after publication).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;focus-on-behaviour&quot;&gt;Focus on behaviour&lt;&#x2F;h2&gt;
&lt;p&gt;Behaviour should be your primary focus when writing tests. The need for a new test should arise from new behaviour or requirements. While TDD is often contrasted with BDD, Kent Beck&#x27;s early work already emphasised the importance of focusing on behaviour.&lt;&#x2F;p&gt;
&lt;p&gt;When writing tests, be mindful of coupling. Your tests should not all fail when you change your code. Write tests against public contracts where possible. In my experience, you can deviate from this rule if your unit under test has a stable internal contract.&lt;&#x2F;p&gt;
&lt;p&gt;This approach leads to better &#x27;refactorability&#x27;. For a practical demonstration, I recommend watching &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=1WBIUYJVnok&amp;amp;si=ZiyE8Hvx3U5LsNHR&quot;&gt;TDD &amp;amp; DDD From the Ground Up Live Coding by Chris Simon&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
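&lt;p&gt;As a minimal, hypothetical illustration of testing against a public contract: the test below only exercises &lt;code&gt;increment&lt;&#x2F;code&gt; and &lt;code&gt;value&lt;&#x2F;code&gt;, so the internal representation of &lt;code&gt;Counter&lt;&#x2F;code&gt; (here deliberately a list) could be refactored to a plain integer without changing the test at all.&lt;&#x2F;p&gt;

```python
# Hypothetical illustration: the test is coupled only to the public
# contract (increment/value), not to the internal representation.

class Counter:
    def __init__(self):
        # Internal detail; could be refactored to a plain int without
        # any test needing to change.
        self._events = []

    def increment(self):
        self._events.append(1)

    def value(self):
        return len(self._events)


def counter_should_report_number_of_increments():
    counter = Counter()
    counter.increment()
    counter.increment()
    assert counter.value() == 2


counter_should_report_number_of_increments()
```

&lt;p&gt;When such a refactoring leaves every test green, that is the &#x27;refactorability&#x27; described above.&lt;&#x2F;p&gt;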
&lt;h2 id=&quot;atdd-tools-are-not-worth-it-if-business-is-not-actively-participating&quot;&gt;ATDD tools are not worth it if business is not actively participating&lt;&#x2F;h2&gt;
&lt;p&gt;While I have promoted the use of ATDD (Acceptance Test Driven Development) tools like Gherkin, I must agree that the burden of translating between natural language and code can be &#x27;horrible&#x27;, to the point where teams implement internal frameworks to manage this complexity.&lt;&#x2F;p&gt;
&lt;p&gt;More importantly, the effort may not be worthwhile without an engaged business analyst or product owner. In my experience, business stakeholders rarely show interest in participating. While I&#x27;ve questioned whether I could have done more to encourage engagement, this talk confirms this is a common challenge. Though disappointing to acknowledge, as the practice appears promising on paper, this seems to be the reality we face.&lt;&#x2F;p&gt;
&lt;p&gt;That said, the consistent style these tools promote can still be valuable in test writing.&lt;&#x2F;p&gt;
&lt;p&gt;If you have had success using Gherkin or a similar tool, I would be interested to learn how.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;do-not-forget-about-refactor&quot;&gt;Do not forget about Refactor&lt;&#x2F;h2&gt;
&lt;p&gt;The TDD-cycle is Red-Green-&lt;strong&gt;Refactor&lt;&#x2F;strong&gt;. First fix the problem, then improve the design. The central idea is to decouple thinking about the problem from thinking about the design. The Refactor step is where the design is improved and is an integral part of the cycle.&lt;&#x2F;p&gt;
&lt;p&gt;This methodical approach leads to more maintainable code, contrasting with the approach of the &#x27;Duct-tape Programmer&#x27; (this talk) or &#x27;Tactical Tornado&#x27; (&lt;a href=&quot;https:&#x2F;&#x2F;www.goodreads.com&#x2F;book&#x2F;show&#x2F;39996759-a-philosophy-of-software-design?from_search=true&amp;amp;from_srp=true&amp;amp;qid=VdXFhGejkS&amp;amp;rank=1&quot;&gt;Ousterhout&lt;&#x2F;a&gt;) approach.&lt;&#x2F;p&gt;
&lt;p&gt;When you can improve the design without changing tests, you have achieved good decoupling.&lt;&#x2F;p&gt;
</content>
        
    </entry>
</feed>
