What caught my attention in the book Vibe Coding by Gene Kim and Steve Yegge is the idea that, as LLMs and coding agents change how we build software, control loops—tests, reviews, and other signals that tell you whether a change behaves as expected—should be faster and more integrated into development feedback loops than before. My intuition says this makes perfect sense.
For example, when there is a dedicated test stage or a QA role that tests after the fact, that role inevitably struggles to keep up with the speed of development. Over time, organising quality around after-the-fact testing becomes increasingly difficult to sustain.
So how do we solve this?
Some may think the solution is to introduce AI at that after-the-fact test stage. However, at the rate of development I see and read about, such an approach will struggle to keep up. One either has to accept not taking full advantage of what AI can contribute to development, or rethink how testing is integrated into the development process.
Put bluntly: if AI lets you produce a feature in hours but the first meaningful acceptance signal only arrives days later in a separate stage, quality assurance becomes the bottleneck.
To me, the logical consequence is a stronger shift towards automated quality controls, including, at the very least, acceptance tests and code reviews. By acceptance tests I mean executable specifications of expected behaviour, written in domain language before or alongside the code. This implies that, because of AI, testing has to move earlier in the development chain.
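To make that concrete, here is a minimal sketch of an acceptance test written as an executable specification, assuming pytest as the runner; Cart and its discount rule are hypothetical stand-ins for a real domain:

```python
# A minimal sketch of an executable specification, assuming pytest as the
# runner. Cart and its pricing rule are hypothetical stand-ins for a real
# domain; the point is that the test reads as domain language.
from dataclasses import dataclass, field

import pytest


@dataclass
class Cart:
    prices: list[float] = field(default_factory=list)

    def add_item(self, price: float) -> None:
        self.prices.append(price)

    def total(self) -> float:
        subtotal = sum(self.prices)
        # Illustrative business rule: orders over 100 receive a 10% discount.
        return subtotal * 0.9 if subtotal > 100 else subtotal


def test_orders_over_100_receive_a_ten_percent_discount():
    # Given a cart whose subtotal exceeds 100
    cart = Cart()
    cart.add_item(60.0)
    cart.add_item(50.0)
    # When the total is calculated, then a 10% discount has been applied
    assert cart.total() == pytest.approx(99.0)
```

Note that the test name and its given/when/then steps describe behaviour in the language of the domain, not in terms of implementation details.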
AI is an opportunity to start writing acceptance tests if you have not done so already. It pushes us to invest time in strategic test design, testing against stable contracts, testing from a behavioural point of view, and isolating test descriptions from the actual implementation.
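As a sketch of what testing against a stable contract can look like, the specification below talks only to a small interface, so the implementation behind it can be refactored, or regenerated by a coding agent, without rewriting the test. OrderPlacement and InMemoryOrderService are hypothetical names, and Python's typing.Protocol is assumed as the contract mechanism:

```python
# A minimal sketch of isolating a test description from the implementation.
# The test depends on a stable contract (a Protocol); the names here are
# hypothetical, not taken from any particular codebase.
from typing import Protocol


class OrderPlacement(Protocol):
    """The stable contract the specification targets."""

    def place_order(self, sku: str, quantity: int) -> str: ...
    def status_of(self, order_id: str) -> str: ...


class InMemoryOrderService:
    """One possible implementation; swappable without rewriting the test."""

    def __init__(self) -> None:
        self._statuses: dict[str, str] = {}

    def place_order(self, sku: str, quantity: int) -> str:
        order_id = f"order-{len(self._statuses) + 1}"
        self._statuses[order_id] = "confirmed"
        return order_id

    def status_of(self, order_id: str) -> str:
        return self._statuses[order_id]


def make_service() -> OrderPlacement:
    # The only place that names a concrete implementation; swap it here
    # (or in a test fixture) and the specification below stays untouched.
    return InMemoryOrderService()


def test_a_placed_order_is_confirmed():
    service = make_service()
    order_id = service.place_order(sku="SKU-123", quantity=2)
    assert service.status_of(order_id) == "confirmed"
```

The design choice that matters is the direction of dependency: the specification depends on the contract, never on the concrete class, so the implementation is free to change as fast as AI can rewrite it.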
Put differently, the shift in development practices that LLMs are causing should inspire more adherence to testing best practices, not less. That is, if you want to keep on adding new features, fix and prevent bugs, and keep up the pace of development.
More broadly, to keep benefiting from AI over time, we should shift towards tightly coupled feedback loops embedded in everyday development. This is not limited to testing but also applies to, for example, reviews. In that sense, AI doesn’t remove quality practices; it raises the stakes if you don’t have them.
If testing 'shifts left', team structures must change as well. This evolution points towards smaller, more autonomous teams where testing, development, and feedback are inseparable rather than sequential.
AI presents us with an opportunity: not to speed up quality control after the fact, but to design systems, processes, and teams that make quality the fastest path forward, so that control loops can keep up with the increasing pace of development loops.