Remember the early pitfalls of microservices? Point-to-point integrations, tight coupling disguised as modularity, cascading failures. Sound familiar? Technology has a tendency to repeat the same architectural missteps across generations of systems. We saw this clearly during the initial phases of the microservices era, and we’re witnessing it again today as organizations rush to integrate Agentic AI into business use cases. Ali Pourshahid, Chief Engineering Officer, Solace, explains that for agentic AI to deliver business benefits, agents need to hang loose in an event-driven architecture that lets them evolve independently, so different teams can build and deploy specialized agents without the baggage of complex dependencies.


Early microservices architectures relied heavily on synchronous, point-to-point communication. Service A called Service B, which called Service C, creating intricate webs of dependencies. What appeared to be a distributed system was actually a “distributed monolith” – technically separate services that were functionally inseparable. When one service experienced latency or failure, cascading effects rippled throughout the entire system. Teams found themselves coordinating deployments across dozens of services, and debugging became a nightmare spanning multiple systems.

Then came the turning point. We started to decouple the services using event-driven architecture (EDA). EDA maximizes the agility of microservices because it liberates data from being ‘at rest’ (for example, stuck in a database behind an API) to being ‘in motion’ (consumable in real-time as events happen). Instead of services calling each other directly, they started communicating through event brokers. This shift transformed rigid, fragile systems into resilient, scalable platforms. Microservices could evolve independently, teams gained autonomy, and systems became more fault-tolerant.

Overengineering of Agentic AI frameworks

But history seems to be repeating itself. The same ‘distributed monolith’ patterns that plagued microservices are starting to emerge in agentic AI implementations.

Organizations are building AI systems with multiple agents, but they’re connecting them through point-to-point integrations and client-server architecture patterns. Just as with early microservices, this approach creates the illusion of modularity while maintaining tight coupling under the hood.

Consider a typical enterprise AI assistant that needs to handle customer inquiries. It might involve a sentiment analysis agent, a knowledge retrieval agent, a decision-making agent, and a response generation agent. If these agents are orchestrated through synchronous calls or shared state, they create the same fragility we experienced with early microservices and quickly result in many point-to-point connections that need to be configured, maintained, and managed.

McKinsey sums up the situation perfectly. It assesses that realising the full potential of Agentic AI “will require a new paradigm for AI architecture—the agentic AI mesh—capable of integrating both custom-built and off-the-shelf agents.”

Agentic AI needs to hang loose!

EDA provides a robust solution by decoupling agents from one another using an event broker, or a network of brokers we call an event mesh, avoiding the tight coupling that plagued early microservices deployments.

By using an event broker, agents can communicate asynchronously without knowing who they’re talking to or when a response will arrive. This loose coupling enables independent evolution of agents, allowing different teams to build and deploy their specialised agents without coordinating complex dependencies. This approach makes the entire system more resilient—if one agent fails, its events simply queue up, and the rest of the system remains operational.

Instead of making direct calls, agents publish events (such as a change in state, like “customer inquiry received” or “sentiment analysis complete”) and subscribe to the events they need to process.
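As a minimal sketch of this publish/subscribe pattern (an in-memory stand-in for a real event broker; the topic names and payload fields are illustrative, not a specific product’s API):

```python
from collections import defaultdict

class EventBroker:
    """Tiny in-memory stand-in for an event broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(payload)

broker = EventBroker()

# The sentiment agent reacts to new inquiries and publishes its result
# as a new event -- it never calls another agent directly.
def sentiment_agent(event):
    sentiment = "negative" if "refund" in event["text"] else "neutral"
    broker.publish("sentiment.analysis.complete",
                   {**event, "sentiment": sentiment})

results = []
broker.subscribe("customer.inquiry.received", sentiment_agent)
broker.subscribe("sentiment.analysis.complete", results.append)

broker.publish("customer.inquiry.received",
               {"id": 1, "text": "I want a refund"})
```

Note that the publisher of “customer inquiry received” has no idea which agents, if any, are listening, which is exactly the decoupling the pattern buys you.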

This event-driven architectural backbone is essential to enabling uninhibited Agentic AI deployments:

Production AI systems must be bulletproof

When a customer-facing AI assistant processes thousands of requests per hour, individual agent failures cannot bring down the entire system. EDA provides natural fault isolation – if a specialized analysis agent crashes, its events queue up while other agents continue processing. The system degrades gracefully rather than failing catastrophically.
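The queuing behaviour described above can be sketched as follows (a simplified model in which the broker buffers events per topic; real brokers add persistence, acknowledgements, and redelivery):

```python
from collections import defaultdict, deque

class QueueingBroker:
    """Broker that buffers events so a down agent loses nothing --
    its events simply wait in the queue until it recovers."""
    def __init__(self):
        self._queues = defaultdict(deque)

    def publish(self, topic, event):
        self._queues[topic].append(event)

    def drain(self, topic, handler):
        # Called once the subscribing agent is up and ready again.
        while self._queues[topic]:
            handler(self._queues[topic].popleft())

broker = QueueingBroker()

# The analysis agent is "down" for a while; events keep arriving
# and accumulate instead of causing upstream failures.
for doc_id in range(3):
    broker.publish("analysis.requested", {"doc": doc_id})

# When the agent recovers, it drains its backlog in order.
processed = []
broker.drain("analysis.requested", processed.append)
```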

Horizontal scaling becomes trivial. Need more capacity for document processing? Simply add more instances of document processing agents that consume from the same event stream. No reconfiguration, no service discovery complexity – just elastic scaling based on demand.
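The scaling model here is the classic competing-consumers pattern: multiple instances pull from the same stream, so adding capacity means adding instances. A rough sketch with threads standing in for agent instances:

```python
import queue
import threading

# A shared event stream of pending documents.
work = queue.Queue()
for i in range(20):
    work.put(f"document-{i}")

handled = {0: [], 1: [], 2: []}

def doc_agent(instance_id):
    # Each instance competes for events from the same stream;
    # adding instances adds capacity with no reconfiguration.
    while True:
        try:
            doc = work.get_nowait()
        except queue.Empty:
            return
        handled[instance_id].append(doc)

threads = [threading.Thread(target=doc_agent, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every document is processed exactly once across the three instances, however the load happens to split.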

The need to embrace asynchronous realities – humans and agents react on unpredictable timelines

Agentic AI systems are inherently asynchronous. LLM responses can vary from milliseconds to minutes depending on model load, query complexity, and model type. Agent tasks have vastly different execution times – a simple data lookup might complete instantly, while a complex analysis could take several minutes. Human interactions operate on entirely unpredictable timelines.

EDA embraces this reality. Instead of blocking while waiting for responses, agents publish events when they complete tasks and subscribe to events they can process. This pattern makes sequential workflows more robust and opens up parallel execution paths. A customer service AI system, for example, can simultaneously have one agent analysing sentiment, another retrieving customer history, and a third generating response options – all working in parallel and coordinating through events.
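The customer-service fan-out above can be sketched like this (the three agent functions are illustrative stubs; in practice each would consume and publish broker events rather than return values directly):

```python
from concurrent.futures import ThreadPoolExecutor

def sentiment_agent(inquiry):
    return ("sentiment", "negative" if "late" in inquiry else "neutral")

def history_agent(inquiry):
    # Stand-in for a customer-history lookup.
    return ("history", ["order #1001", "order #1002"])

def options_agent(inquiry):
    return ("options", ["apologize", "offer credit"])

inquiry = "my delivery is late"

with ThreadPoolExecutor() as pool:
    # All three agents work on the inquiry at the same time;
    # none of them waits on, or even knows about, the others.
    futures = [pool.submit(agent, inquiry)
               for agent in (sentiment_agent, history_agent, options_agent)]
    context = dict(f.result() for f in futures)
```

A downstream response-generation agent would then be triggered once the events it subscribes to have all arrived, rather than by a blocking orchestrator call.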

Loose coupling

Just as with microservices, loose coupling is critical for Agentic AI systems. Different teams often develop specialized agents using different frameworks, languages, and deployment strategies. Event-driven communication allows these diverse agents to collaborate without tight dependencies.

Consider an enterprise with agents built using different frameworks – e.g. some using Solace Agent Mesh, others using LangChain, CrewAI, and custom-built agents for proprietary systems. In an EDA, each agent simply publishes its capabilities and subscribes to relevant events, regardless of its underlying implementation.

Dynamic workflows & agent registry

One of the most powerful aspects of event-driven agentic AI is the ability to support dynamic workflows. Unlike systems with hardcoded process flows, agents can register their capabilities at runtime, and orchestration can adapt based on available agents and changing requirements.

Imagine a document analysis system where new specialized agents are being added – perhaps a new agent for analyzing financial documents or another for processing legal contracts. In an event-driven system, these agents simply announce their capabilities, and the orchestrator agent can immediately incorporate them into relevant workflows without system changes or redeployments. In other words, the system incrementally and instantly becomes smarter.
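In miniature, runtime capability registration might look like this (the registry, capability names, and handlers are all hypothetical; a production system would announce capabilities as events over the mesh):

```python
# Capability -> handler, populated at runtime as agents come online.
registry = {}

def announce(capability, handler):
    # An agent announces what it can do; no redeployment required.
    registry[capability] = handler

def orchestrate(doc_type, document):
    # The orchestrator routes to whichever agent has registered
    # the matching capability -- or reports the gap.
    handler = registry.get(doc_type)
    if handler is None:
        return f"no agent registered for {doc_type!r}"
    return handler(document)

announce("financial", lambda doc: f"financial analysis of {doc}")

before = orchestrate("legal", "nda.pdf")   # no legal agent exists yet

# A new legal-contract agent comes online later and is
# incorporated into workflows immediately.
announce("legal", lambda doc: f"contract review of {doc}")
after = orchestrate("legal", "nda.pdf")
```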

Complete observability

Debugging distributed AI systems is notoriously difficult. Where did a request get stuck? Which agent made a particular decision? Why did a workflow take an unexpected path?

Event-driven systems provide complete visibility because every interaction is captured as an event with full context, timestamps, and traceability.

This observability is crucial for compliance and auditability in enterprise AI systems. Every decision, every data access, and every agent interaction is traceable, enabling organisations to understand and verify AI behaviour in production. Picture a visualizer that shows all these interactions and flows and allows you to fully understand how the system works, the lineage of the output, in order to make it all more explainable and trustable.
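Because every interaction is an event, lineage falls out almost for free. A toy sketch (topic names and fields are illustrative; real systems would use distributed-tracing infrastructure):

```python
import time
import uuid

event_log = []

def publish(topic, payload, trace_id):
    # Every interaction is captured with full context for later audit.
    event_log.append({
        "topic": topic,
        "payload": payload,
        "trace_id": trace_id,
        "ts": time.time(),
    })

trace = str(uuid.uuid4())
publish("customer.inquiry.received", {"text": "where is my order?"}, trace)
publish("sentiment.analysis.complete", {"sentiment": "neutral"}, trace)
publish("response.generated", {"reply": "it ships today"}, trace)

def lineage(trace_id):
    # Reconstruct exactly which steps touched this request, in order.
    return [e["topic"] for e in event_log if e["trace_id"] == trace_id]
```

Filtering the log by trace ID reconstructs the full path a single request took through the agents.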

Seamless integration

Enterprise AI systems must integrate with existing infrastructure, data sources, and business processes. EDA lets any system publish or subscribe to events, regardless of its technology stack or deployment model.

For example, a legacy CRM system can trigger AI workflows by publishing customer events. A modern data lake can then feed real-time information to agents through event streams. External APIs can be wrapped with simple event adapters, making them available to the entire AI ecosystem without complex integration code.
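An event adapter of the kind described can be very thin. A sketch (the CRM record fields and topic name are invented for illustration):

```python
def crm_webhook_adapter(crm_record, publish):
    # Translate a legacy CRM record into a plain event so any agent
    # can subscribe -- the CRM needs no knowledge of the AI system.
    publish("crm.customer.updated", {
        "customer_id": crm_record["CUST_ID"],
        "status": crm_record["STATUS"].lower(),
    })

events = []
crm_webhook_adapter(
    {"CUST_ID": "C-42", "STATUS": "CHURN_RISK"},
    lambda topic, payload: events.append((topic, payload)),
)
```

The adapter owns the translation from the legacy schema to the event schema, so agents never couple to the CRM’s internals.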

Taking the road less travelled for Agentic AI success

EDA for agentic AI systems is crucial because it promotes loose coupling, allowing agents to operate independently and communicate asynchronously. This approach prevents the same architectural pitfalls that plagued early microservices, which were characterized by brittle, tightly-coupled, and fragile point-to-point connections.

Organizations that embrace event-driven architecture early will build AI systems with the resilience, scalability, and observability that robust, enterprise-ready Agentic AI demands.


Ali Pourshahid is Solace’s Chief Engineering Officer, leading the engineering teams at Solace. Ali is responsible for the delivery and operation of Software and Cloud services at Solace. He leads a team of incredibly talented engineers, architects, and User Experience designers in this endeavor. Since joining, he’s been a significant force behind the PS+ Cloud Platform, Event Portal, and Insights products. He also played an essential role in evolving Solace’s engineering methods, processes, and technology direction.

Before Solace, Ali worked at IBM and Klipfolio, building engineering teams and bringing several enterprise and Cloud-native SaaS products to the market. He enjoys system design, building teams, refining processes, and focusing on great developer and product experiences. He has extensive experience in building agile product-led teams.

Ali earned his Ph.D. in Computer Science from the University of Ottawa, where he researched and developed ways to improve processes automatically. He has several cited publications and patents and was recognized as a Master Inventor at IBM.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Steve Johnson on Unsplash
