This post captures my current thinking on how LLMs are impacting software development, particularly around software quality and engineering discipline.

My main observation: most of the best practices we've relied on for years are just as important—maybe even more so—in an LLM-assisted development environment. Working with LLMs requires more discipline and attention to fundamentals, not less.

When using LLMs, there is a heightened risk of losing understanding: of the problem domain, the underlying technology, and the implementation details. Without careful attention, review, and guidance, code can become messy quickly. All of this is true, but we didn't need LLMs for it to happen. Why else have so many projects failed historically? Why is technical debt a topic in most projects?

The critical difference with LLMs is the temptation of velocity, and the risk that comes with it: we move too fast and skip the practices that help us maintain and change software in the future. Discipline and rigour have become more important than ever.

These practices are becoming MORE crucial in an LLM-assisted workflow:

  • Codebase quality. This matters for LLM agents too, because they learn from existing code. A clean, well-organised codebase helps agents perform better; inconsistencies lead to poorer results. An LLM will mimic what's already there.

  • Feedback loops and testing. If an LLM is helping you write code, you need reliable ways to verify it hasn't broken anything. A well-designed, automated test suite that's easy to extend and interpret helps you maintain understanding and control of the implemented functionality (a sketch of such a test follows this list).

  • Well-designed boundaries and contracts, both within and outside your application. These allow you to constrain, shape, isolate, and test the work an LLM produces (see the second sketch after this list).

  • Managing risk and technical debt. Be intentional and explicit about where you rely on LLMs and where you don't. Document these decisions. Maintain a technical debt log with risk assessments and timelines.

  • Documentation of past decisions. Keep a history of architectural decisions through decision logs and ADRs (architecture decision records), and ensure the LLM you're working with is aware of them. I've had LLMs point out inconsistencies in the codebase or flag how new change requests conflict with past decisions.
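
To make the testing point concrete, here is a minimal sketch in Python of the kind of fast, deterministic test that keeps an LLM-assisted change verifiable. The invoicing example, calculate_total, and the expected values are all hypothetical; the point is that behaviour is pinned down before an agent is allowed to touch it.

```python
# A hypothetical, self-contained example (pytest style): the expected
# behaviour of calculate_total is fixed by tests, so if an agent rewrites
# the function, the suite says immediately whether that behaviour survived.
from decimal import Decimal


def calculate_total(line_items: list[Decimal], vat_rate: Decimal) -> Decimal:
    """Sum the line items and apply VAT. (Stand-in implementation.)"""
    net = sum(line_items, Decimal("0"))
    return (net * (1 + vat_rate)).quantize(Decimal("0.01"))


def test_total_includes_vat():
    # Known input, known output: any behavioural drift shows up here first.
    assert calculate_total([Decimal("10.00"), Decimal("5.50")], Decimal("0.20")) == Decimal("18.60")


def test_empty_invoice_is_zero():
    assert calculate_total([], Decimal("0.20")) == Decimal("0.00")
```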

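The same goes for boundaries: here is a minimal sketch of a contract expressed as a Python Protocol. PaymentGateway, settle_invoice, and FakeGateway are hypothetical names; the point is that whatever an LLM generates behind the boundary can be swapped, faked, and tested against the same interface.

```python
# A hypothetical boundary expressed as an explicit contract.
from decimal import Decimal
from typing import Protocol


class PaymentGateway(Protocol):
    def charge(self, customer_id: str, amount: Decimal) -> str:
        """Charge the customer and return a transaction id."""
        ...


def settle_invoice(gateway: PaymentGateway, customer_id: str, amount: Decimal) -> str:
    # Application code depends only on the contract, never on a concrete
    # implementation, so LLM-written gateway code stays isolated and testable.
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(customer_id, amount)


class FakeGateway:
    """In-memory stand-in used in tests; no network, fully deterministic."""

    def __init__(self) -> None:
        self.charges: list[tuple[str, Decimal]] = []

    def charge(self, customer_id: str, amount: Decimal) -> str:
        self.charges.append((customer_id, amount))
        return f"txn-{len(self.charges)}"


def test_settle_invoice_charges_the_customer():
    gateway = FakeGateway()
    assert settle_invoice(gateway, "cust-42", Decimal("18.60")) == "txn-1"
    assert gateway.charges == [("cust-42", Decimal("18.60"))]
```
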
Taken together, these practices are what make LLM-assisted development sustainable rather than brittle.

I don't think the skill gap in building and delivering software will ultimately be about prompt cleverness. LLM agents will be genuinely helpful tools, and working effectively with them will be an accelerator. However, as we rely on them more to write code—even when we review that code carefully—the most important work increasingly becomes the disciplined practice of boxing them in with testing, architecture, and contracts.

Avoid painting yourself into a corner. In an LLM-assisted workflow, that means being deliberate about where you let agents move fast, and where you slow them down with guardrails. LLMs make it easier to move fast, and easier to get stuck.