<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title>HanLHo. - Fractional Architect &amp; Software Product Engineer - tdd</title>
    <link rel="self" type="application/atom+xml" href="https://hanlho.com/tags/tdd/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://hanlho.com"/>
    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-02-04T00:00:00+00:00</updated>
    <id>https://hanlho.com/tags/tdd/atom.xml</id>
    <entry xml:lang="en">
        <title>AI’s Opportunity: Pacing Control Loops with Development</title>
        <published>2026-02-04T00:00:00+00:00</published>
        <updated>2026-02-04T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/"/>
        <id>https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/ais-opportunity-pacing-control-loops-with-development/">&lt;p&gt;What caught my attention in the book &lt;em&gt;Vibe Coding&lt;&#x2F;em&gt; by Gene Kim and Steve Yegge is the idea that, as LLMs and coding agents change how we build software, control loops—tests, reviews, and other signals that tell you whether a change behaves as expected—should be faster and more integrated into development feedback loops than before. My intuition says this makes perfect sense.&lt;&#x2F;p&gt;
&lt;p&gt;For example, when there is a dedicated test stage or a QA role that tests after the fact, that stage inevitably struggles to keep up with the speed of development. Over time, this makes it increasingly difficult to sustain quality through after-the-fact testing.&lt;&#x2F;p&gt;
&lt;p&gt;So how do we solve this?&lt;&#x2F;p&gt;
&lt;p&gt;Some may think that introducing AI at the after-the-fact test stage could be the solution. However, at the rate of development I see and read about, this approach will be hard to sustain. One either has to accept not taking full advantage of what AI can contribute to development, or rethink how testing is integrated into the development process.&lt;&#x2F;p&gt;
&lt;p&gt;Put bluntly: if AI lets you produce a feature in hours but the first meaningful acceptance signal only arrives days later in a separate stage, quality assurance becomes the bottleneck.&lt;&#x2F;p&gt;
&lt;p&gt;To me, the logical consequence is a stronger shift towards automated quality controls, including acceptance tests and code reviews at the least. I refer to acceptance tests here as writing executable specifications of expected behaviour (in domain language) before or alongside the code. This implies that testing &lt;em&gt;has&lt;&#x2F;em&gt; to move earlier in the development chain &lt;em&gt;because&lt;&#x2F;em&gt; of AI.&lt;&#x2F;p&gt;
&lt;p&gt;AI is an opportunity to start writing acceptance tests if you have not yet. It pushes us to invest time in strategic test design, testing against stable contracts, testing from a behavioural point of view, and isolating test descriptions from the actual implementation.&lt;&#x2F;p&gt;
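&lt;p&gt;A minimal sketch of such an executable specification, written as a pytest-style test; the domain names (&lt;code&gt;Order&lt;&#x2F;code&gt;, &lt;code&gt;checkout&lt;&#x2F;code&gt;) are hypothetical stand-ins for your own domain language:&lt;&#x2F;p&gt;

```python
# Hypothetical domain code, sketched only so the specification below can run.
class Order:
    def __init__(self):
        self.lines = []
        self.status = "open"
        self.total = 0

    def add_line(self, sku, quantity, unit_price):
        self.lines.append((sku, quantity, unit_price))

    def checkout(self):
        self.total = sum(q * p for _, q, p in self.lines)
        self.status = "confirmed"

# Executable specification in domain language: it describes expected
# behaviour and never touches implementation details.
def test_checking_out_an_order_confirms_it_and_totals_its_lines():
    order = Order()
    order.add_line(sku="TEA-001", quantity=2, unit_price=3)
    order.checkout()
    assert order.status == "confirmed"
    assert order.total == 6
```

&lt;p&gt;The test name and body read as a specification of behaviour; nothing in them refers to how &lt;code&gt;Order&lt;&#x2F;code&gt; stores its lines or computes its total.&lt;&#x2F;p&gt;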
&lt;p&gt;Put differently, the shift in development practices that LLMs are causing should inspire &lt;em&gt;more&lt;&#x2F;em&gt; adherence to testing best practices, not less. That is, if you want to keep on adding new features, fix and prevent bugs, and keep up the pace of development.&lt;&#x2F;p&gt;
&lt;p&gt;More broadly, to keep benefiting from AI over time, we should shift towards tightly coupled feedback loops embedded in everyday development. This is not limited to testing but also applies to, for example, reviews. In that sense, AI doesn’t remove quality practices; it raises the stakes if you don’t have them.&lt;&#x2F;p&gt;
&lt;p&gt;If testing &#x27;shifts left&#x27;, team structures must change as well. This evolution points towards smaller, more autonomous teams where testing, development, and feedback are inseparable rather than sequential.&lt;&#x2F;p&gt;
&lt;p&gt;AI presents us with an opportunity: not faster quality control after the fact, but to design systems, processes, and teams that make quality the fastest path forward, so control loops can keep up with the increasing pace of development loops.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Notes on Preparatory Refactoring</title>
        <published>2024-12-06T00:00:00+00:00</published>
        <updated>2024-12-06T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/notes-on-preparatory-refactoring/"/>
        <id>https://hanlho.com/p/notes-on-preparatory-refactoring/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/notes-on-preparatory-refactoring/">&lt;p&gt;Notes inspired by Emily Bache&#x27;s short YouTube video &lt;a href=&quot;https:&#x2F;&#x2F;youtu.be&#x2F;kHwzVlXTOw8?si=4q5GpbGYioN1VE_4&quot;&gt;Design Better Code with Preparatory Refactoring in TDD | Demo&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;A preparatory refactoring is a refactoring that makes an upcoming change for a new requirement easy. By definition, current behavior should not be changed. Instead of using tests to drive changes, you use them to verify you are not breaking existing behavior.&lt;&#x2F;p&gt;
&lt;p&gt;A practical tip: add a pending test as a mental note to remember the goal of your refactoring. Say a new requirement comes in. TDD&#x27;s mantra Red-Green-Refactor would have you write a failing test first. However, it may turn out that the current design is not well suited to accommodate the change. If you keep the test in, you may be facing a red build for a while. Worse, when you start refactoring to change the design, you may accidentally break existing functionality, leaving you with tests that fail both expectedly &lt;em&gt;and&lt;&#x2F;em&gt; unexpectedly. While refactoring, you ideally want a test to fail only when you have made a mistake. An in-between approach: add the test for the new behavior, see it fail, set it to &#x27;pending&#x27;, and then proceed with the refactoring.&lt;&#x2F;p&gt;
&lt;p&gt;Green-Refactor-Green instead of Red-Green-Refactor. Preparatory refactoring does not mean we cannot use small steps to do the refactoring. Make a small change to the code, see the test still pass, continue; if a test fails roll back or fix. Green-Refactor-Green, keep an eye on your pending test.&lt;&#x2F;p&gt;
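&lt;p&gt;The workflow above can be sketched with Python&#x27;s built-in &lt;code&gt;unittest&lt;&#x2F;code&gt;; &lt;code&gt;discount&lt;&#x2F;code&gt; and the loyalty rule are hypothetical:&lt;&#x2F;p&gt;

```python
import unittest

def discount(price):
    # Current behaviour: a flat 10% discount. Kept green throughout
    # the preparatory refactoring.
    return price * 0.9

class DiscountTests(unittest.TestCase):
    def test_existing_behaviour_standard_discount(self):
        # The safety net: must stay green after every small refactoring step.
        self.assertEqual(discount(100), 90)

    @unittest.skip("pending: the goal of the preparatory refactoring")
    def test_new_requirement_loyalty_discount(self):
        # Written first, seen failing, then parked, so that any red during
        # the refactoring signals a genuine mistake.
        self.assertEqual(discount(100, loyal=True), 85)
```

&lt;p&gt;During the refactoring, the only active tests are the safety net, so any red signals a genuine mistake. Once the design accommodates the new requirement, remove the skip marker and drive the change Red-Green-Refactor as usual.&lt;&#x2F;p&gt;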
&lt;p&gt;Two further resources which include an example of how one could approach such a refactoring:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Video: &lt;a href=&quot;https:&#x2F;&#x2F;youtu.be&#x2F;kHwzVlXTOw8?si=4q5GpbGYioN1VE_4&quot;&gt;Design Better Code with Preparatory Refactoring in TDD | Demo&lt;&#x2F;a&gt; by Emily Bache.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Blog post: &lt;a href=&quot;https:&#x2F;&#x2F;martinfowler.com&#x2F;articles&#x2F;preparatory-refactoring-example.html&quot;&gt;An example of preparatory refactoring&lt;&#x2F;a&gt; by Martin Fowler.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Notes on TDD &amp; DDD From the Ground Up Live Coding</title>
        <published>2024-11-14T00:00:00+00:00</published>
        <updated>2024-11-14T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/notes-on-tdd-ddd-from-the-ground-up-live-coding/"/>
        <id>https://hanlho.com/p/notes-on-tdd-ddd-from-the-ground-up-live-coding/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/notes-on-tdd-ddd-from-the-ground-up-live-coding/">&lt;p&gt;Some short notes from Chris Simon&#x27;s Talk &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=1WBIUYJVnok&amp;amp;si=ZiyE8Hvx3U5LsNHR&quot;&gt;TDD &amp;amp; DDD From the Ground Up Live Coding&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;When choosing the right testing level, developers face an important trade-off. Higher-level tests provide better coverage for refactoring, but make it harder to pinpoint the exact location of failures. Finding the right balance is crucial for maintainable tests.&lt;&#x2F;p&gt;
&lt;p&gt;Domain modelling through traditional entity diagrams can lead to hidden assumptions. Event storming, by contrast, helps create more explicit models that better reflect how systems change over time. This approach brings us closer to the actual domain and helps reveal important details we might otherwise miss.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Notes on 🚀 TDD, Where Did It All Go Wrong</title>
        <published>2024-11-14T00:00:00+00:00</published>
        <updated>2024-11-14T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/"/>
        <id>https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/</id>
        
        <content type="html" xml:base="https://hanlho.com/p/notes-on-tdd-where-did-it-all-go-wrong/">&lt;p&gt;My short notes on &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=EZ05e7EMOLM&amp;amp;si=z4CYVriIZOOKwQNV&quot;&gt;🚀 TDD, Where Did It All Go Wrong&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Still a lot of testing wisdom in this talk (re-watched 6 years after publication).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;focus-on-behaviour&quot;&gt;Focus on behaviour&lt;&#x2F;h2&gt;
&lt;p&gt;Behaviour should be your primary focus when writing tests. The need for a new test should arise from new behaviour or requirements. While TDD is often contrasted with BDD, Kent Beck&#x27;s early work already emphasised the importance of focusing on behaviour.&lt;&#x2F;p&gt;
&lt;p&gt;When writing tests, be mindful of coupling. Your tests should not all fail when you change your code. Write tests against public contracts where possible. In my experience, you can deviate from this rule if your unit under test has a stable internal contract.&lt;&#x2F;p&gt;
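&lt;p&gt;To make the coupling point concrete, a small sketch (the &lt;code&gt;Stack&lt;&#x2F;code&gt; is hypothetical): the first test reaches into internals and would break on a refactoring; the second exercises only the public contract.&lt;&#x2F;p&gt;

```python
# A stack with a public contract (push, pop) and a private storage
# detail (_items) that tests should not reach into.
class Stack:
    def __init__(self):
        self._items = []  # implementation detail, free to change

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()

# Coupled to internals: fails if the storage switches to, say, a linked
# list, even though the observable behaviour is unchanged.
def fragile_test():
    s = Stack()
    s.push(1)
    assert s._items == [1]

# Behavioural: exercises only the public contract, survives refactoring.
def test_pop_returns_the_most_recently_pushed_value():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
```

&lt;p&gt;If &lt;code&gt;Stack&lt;&#x2F;code&gt; later changes its internal storage, only the fragile test fails; the behavioural test keeps verifying what actually matters.&lt;&#x2F;p&gt;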
&lt;p&gt;This approach leads to better &#x27;refactorability&#x27;. For a practical demonstration, I recommend watching &lt;a href=&quot;https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=1WBIUYJVnok&amp;amp;si=ZiyE8Hvx3U5LsNHR&quot;&gt;TDD &amp;amp; DDD From the Ground Up Live Coding by Chris Simon&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;atdd-tools-are-not-worth-it-if-business-is-not-actively-participating&quot;&gt;ATDD tools are not worth it if business is not actively participating&lt;&#x2F;h2&gt;
&lt;p&gt;While I have promoted the use of ATDD (Acceptance Test Driven Development) tools like Gherkin, I must agree that the burden of translating between natural language and code can be &#x27;horrible&#x27;, to the point where internal frameworks are implemented to manage this complexity.&lt;&#x2F;p&gt;
&lt;p&gt;More importantly, the effort may not be worthwhile without an engaged business analyst or product owner. In my experience, business stakeholders rarely show interest in participating. While I&#x27;ve questioned whether I could have done more to encourage engagement, this talk confirms this is a common challenge. Though disappointing to acknowledge, as the practice appears promising on paper, this seems to be the reality we face.&lt;&#x2F;p&gt;
&lt;p&gt;That said, the consistent style these tools promote can still be valuable in test writing.&lt;&#x2F;p&gt;
&lt;p&gt;If you have had success using Gherkin or a similar tool, I would be interested to learn how.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;do-not-forget-about-refactor&quot;&gt;Do not forget about Refactor&lt;&#x2F;h2&gt;
&lt;p&gt;The TDD-cycle is Red-Green-&lt;strong&gt;Refactor&lt;&#x2F;strong&gt;. First fix the problem, then improve the design. The central idea is to decouple thinking about the problem from thinking about the design. The Refactor step is where the design is improved and is an integral part of the cycle.&lt;&#x2F;p&gt;
&lt;p&gt;This methodical approach leads to more maintainable code, contrasting with the approach of the &#x27;Duct-tape Programmer&#x27; (this talk) or &#x27;Tactical Tornado&#x27; (&lt;a href=&quot;https:&#x2F;&#x2F;www.goodreads.com&#x2F;book&#x2F;show&#x2F;39996759-a-philosophy-of-software-design?from_search=true&amp;amp;from_srp=true&amp;amp;qid=VdXFhGejkS&amp;amp;rank=1&quot;&gt;Ousterhout&lt;&#x2F;a&gt;) approach.&lt;&#x2F;p&gt;
&lt;p&gt;When you can improve the design without changing tests, you have achieved good decoupling.&lt;&#x2F;p&gt;
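&lt;p&gt;One way to check that property, sketched with hypothetical names: run the same behavioural test against the design before and after the change. If it passes unchanged against both, the test is coupled to behaviour, not to the design.&lt;&#x2F;p&gt;

```python
# Two designs of the same behaviour. A well-decoupled test passes
# unchanged against either, which is the signal described above.
def total_before(prices):
    result = 0
    for p in prices:
        result = result + p
    return result

def total_after(prices):
    # Improved design; observable behaviour is identical.
    return sum(prices)

def test_total_sums_all_prices(total=total_after):
    assert total([1, 2, 3]) == 6
    assert total([]) == 0
```

&lt;p&gt;The test mentions only inputs and expected results, so swapping the implementation requires no test changes at all.&lt;&#x2F;p&gt;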
</content>
        
    </entry>
</feed>
