Introduction: The Complexity Trap
I’ve had a front-row seat to how AI programs are being rolled out across large enterprises. From early proofs of concept to ambitious enterprise-wide platforms, I keep noticing the same pattern emerge: teams reaching for complexity far too early.
This isn’t a tooling problem; it’s a systems thinking problem. There’s a fundamental truth about systems that John Gall captured, and it has been quoted widely since 1981:
“A complex system that works is invariably found to have evolved from a simple system that worked.”
But in the current AI gold rush, that wisdom is being ignored.
Over-Engineering Epidemic
I’ve been privileged to see this up close at LAB3, one of Microsoft’s original 70 Global AI Cloud Partners and the only Australian partner in the program at launch! Across industries and programs, we’ve supported some of the earliest and most exciting enterprise-grade AI initiatives on Azure, and a few patterns show up again and again, yet are rarely mentioned:
- Over-complicated architecture: Solutions assume complexity before value is proven
- Using the latest frameworks: Simply because they’re new or interesting
- Missing functional and non-functional requirements: No clear articulation of needs, constraints, or success metrics
- Forgetting cloud-native architectural lessons: Ignoring everything we’ve learned about simplicity, scalability, and modularity over the past decade
Time and again in recent conversations I see engineers, developers and architects reaching straight for sophisticated tooling: agentic frameworks, complicated multi-agent workflows, vectors of vectors, before there is a clear, validated picture of the underlying use case and how it needs to be realised.
I understand how this keeps happening: the tooling is inherently exciting, interesting and new. The possibilities feel endless, and as engineers it’s easy for us to be pulled towards what’s impressive instead of what’s impactful.
This isn’t just about picking the wrong tools or frameworks, though; it’s about a fundamental misunderstanding of how successful systems are built. The allure of ‘solving complexity’ or ‘building for scale’ becomes a justification for capability that may never be needed. Teams spend weeks wiring up complex frameworks to enable elaborate architectures that look impressive in PowerPoint but struggle to demonstrate tangible outcomes.
At the heart of this epidemic lies a deeper root cause, which I believe to be Hyper-Specialisation, but that is a rant for another time.
Simple Wins
The most successful projects I’ve been part of, whether introducing cloud platforms, implementing microservices, or building AI systems, all have one thing in common:
They started almost embarrassingly simple.
One of the earliest AI examples: a single function, one Azure OpenAI API call, and a basic Teams bot to summarise meeting notes in a specific style and format.
No vector database. No multi-agent anything. No complex orchestration.
Not because these aren’t valuable, but because value could be realised with a focused, lightweight solution that solves a problem felt every day.
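Concretely, the whole thing was roughly the shape of the sketch below. This is a minimal illustration rather than the actual code: the deployment name, prompt and environment variable names are assumptions, and the Teams bot plumbing that calls the function is omitted.

```python
# Minimal sketch of the "simple system": one function, one Azure OpenAI call.
# Deployment name, prompt and environment variable names are illustrative
# assumptions; the Teams bot wiring that invokes this function is omitted.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",
)

SYSTEM_PROMPT = (
    "Summarise the meeting notes you are given as concise bullet points, "
    "grouped under 'Decisions' and 'Actions', in plain business English."
)


def summarise_notes(raw_notes: str) -> str:
    """One LLM call: raw meeting notes in, formatted summary out."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your Azure OpenAI deployment name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_notes},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```

That single function, fronted by the Teams bot, was essentially the entire system on day one.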
And it worked. It gained traction quickly because it solved a real pain point, and users adopted it immediately because it worked reliably. Here’s the key: once we had proven value and understood the usage patterns, then we evolved the complexity. We added better context handling, integrated with SharePoint, and eventually built out more sophisticated workflows, but only after the simple system had demonstrated its worth.
This is Gall’s Law in action. The slightly more complex, valuable system we have today evolved from that basic Azure Function that worked from day one.
Microsoft Ecosystem Advantage
Microsoft’s AI ecosystem is perfectly positioned to support every level of complexity, offering a seamless path from simple to sophisticated with compatible steps in between.
- M365 Copilot: Out-of-the-box entry point that exposes AI to users inside tooling such as Word, Excel and Teams.
- Copilot Studio: A low-code co-creation platform that enables power users to create their own copilots.
- Azure AI Agent Service: Connects the core pieces of Azure AI Foundry in a single managed runtime, simplifying the onboarding of more complicated AI-driven workflows.
- Azure AI Foundry: An enterprise-grade hub for AI initiatives, covering deployment patterns, safety controls, evaluation, runtime governance and more.
- Semantic Kernel: A full-control framework giving developers complete access, enabling complex, domain-specific copilots that leverage skill chaining, memory management, agentic flows and more (see the sketch after this list).
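Even at the Semantic Kernel end of that spectrum, the first step can be just as small. The sketch below is a minimal illustration, assuming a recent semantic-kernel 1.x Python release; the service id, deployment name, endpoint and prompt are placeholders, and plugins, memory and agentic flows are deliberately left out until they are earned.

```python
# Minimal Semantic Kernel sketch: one chat service, one prompt, nothing else.
# Assumes semantic-kernel 1.x; names, endpoint and prompt are illustrative.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.functions import KernelArguments


async def main() -> None:
    kernel = Kernel()

    # Register a single Azure OpenAI chat service and nothing else.
    kernel.add_service(
        AzureChatCompletion(
            service_id="chat",
            deployment_name="gpt-4o-mini",  # your deployment name
            endpoint="https://<your-resource>.openai.azure.com/",
            api_key="<your-key>",
        )
    )

    # One prompt function. Skills, memory and agentic flows can be layered on
    # later, after this simple version has proven its value.
    result = await kernel.invoke_prompt(
        prompt="Summarise these meeting notes as bullet points:\n{{$notes}}",
        arguments=KernelArguments(notes="...raw meeting notes..."),
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

The point is that the “full control” layer doesn’t force complexity on day one; it simply leaves the door open to it.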
Microsoft held Build at the end of May and released several major updates that reinforce this layered ecosystem:
- Azure AI Agent Service (GA) now integrates Semantic Kernel and AutoGen into a unified SDK, complete with multi-agent orchestration, Model Context Protocol (MCP) support, and enhanced observability like cost, performance, and safety tracing.
- Copilot Studio now supports multi-agent orchestration, MCP standards, and cross-channel publishing (including SharePoint and WhatsApp), along with richer analytics dashboards.
- Copilot Tuning in Microsoft 365 enables makers to fine-tune models with enterprise data, without requiring data science teams.
- New identity and governance capabilities include Microsoft Entra Agent IDs, Purview integration, and pay-as-you-go billing for careful cost management.
- The Semantic Kernel v1.0 roadmap now supports seamless integration with Azure AI Agent Service and AutoGen, includes workflow orchestration, and enhances the VS Code experience.
Microsoft’s ecosystem answers Gall’s Law directly: build working simplicity first—then evolve into sophisticated value. The new Build announcements accelerate that evolutionary path.
Embracing Strategic Simplicity
Simplicity isn’t naïve; it’s strategic. In an environment where AI capabilities are evolving rapidly, starting simple provides the agility to adapt as technology, business requirements and stakeholder expectations change. Complex systems are harder to modify, harder to debug, and harder to explain to stakeholders who need to understand the value being delivered.
The companies winning with AI aren’t the ones who started with the most sophisticated architectures; they’re the ones delivering consistent value to end users. They start with focused solutions and defined value goals, gather real-world feedback, and let that usage data drive the system’s evolution.
This doesn’t mean avoiding complexity; it means earning the right to complexity through proven value. When your simple system is serving hundreds or thousands of users and you’ve identified specific bottlenecks or features to add, that’s when sophisticated tooling becomes a strategic advantage rather than speculative over-engineering.
Final Thoughts
AI is moving fast, the pressure to build comprehensive solutions is real, and with every lacklustre ROI from an AI PoC that pressure grows. But Gall’s Law reminds us that sustainable success comes from evolution, not revolution.
Microsoft’s ecosystem gives us the tools to start lean and scale smart; we just need the discipline to use them that way.
The next time you’re designing any system, but especially an AI capability, ask yourself:
What’s the simplest version that could deliver value?
Start there, win early. Then let complexity evolve from working and valued simplicity.