Helping AI Systems Talk to the Outside World

As AI systems move beyond answering questions and start doing things (booking flights, checking inventory, querying databases), they run into a practical limitation. Large language models are text-based, and most real-world systems are not. Something needs to sit in between and translate intent into action.
There isn’t a single best answer here, and the landscape is still evolving. What exists today reflects different assumptions about how AI agents should interact with software, and it’s likely that tomorrow will bring more options or refinements.

Why Tools Exist at All

Even with large context windows, models can’t hold everything. Entire databases, live systems, and constantly changing data simply don’t fit. Rather than forcing everything into context, most production setups give models the ability to ask for what they need, when they need it.
In that setup, the model becomes more of an orchestrator. It reasons about a task, decides what information or action is required, and then reaches out to an external system to get it. The question isn't whether this is needed; it's how that connection should work.
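To make that loop concrete, here is a minimal sketch in Python. The message format and the call_model / run_tool helpers are hypothetical placeholders, not any particular vendor's API; the point is only the shape of the loop: reason, call out, feed the result back in.

```python
# A minimal sketch of the orchestration loop described above, with
# placeholder helpers standing in for a real model API and real tools.

def call_model(messages):
    """Placeholder for a chat-style model call that may return a tool request."""
    raise NotImplementedError

def run_tool(name, arguments):
    """Placeholder that dispatches to an external system (database, API, etc.)."""
    raise NotImplementedError

def answer(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:
        reply = call_model(messages)
        if reply.get("tool_call") is None:
            # The model has what it needs; return its final answer.
            return reply["content"]
        # The model asked for outside help: run the tool, feed the result back.
        call = reply["tool_call"]
        result = run_tool(call["name"], call["arguments"])
        messages.append({"role": "tool", "name": call["name"], "content": result})
```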

An AI-Native Way to Describe Capabilities

One approach that’s emerged recently is to describe tools in a way that language models can understand directly. Instead of exposing only technical method signatures, the system also provides natural language descriptions of what a tool does and when it might be useful.
This makes it easier for an AI agent to discover capabilities at runtime. The agent doesn’t need to be hard-coded with knowledge of every service in advance. It can ask what’s available, read the descriptions, and adapt. That flexibility can be helpful in environments where tools change frequently or where agents are expected to operate more autonomously.
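As a rough illustration, a tool described this way might look like the following. The field names (name, description, JSON-schema style parameters) follow common practice but are illustrative rather than tied to any specific protocol, and the tool itself is made up.

```python
# A sketch of a "model-readable" tool description: the natural-language
# description tells the agent when the tool is useful, not just how to call it.

inventory_tool = {
    "name": "check_inventory",
    "description": (
        "Look up current stock for a product. Use this when the user asks "
        "whether an item is available or how many units are on hand."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "sku": {"type": "string", "description": "Product SKU to look up"},
            "warehouse": {"type": "string", "description": "Optional warehouse code"},
        },
        "required": ["sku"],
    },
}

# An agent can list descriptions like this at runtime, read them, and decide
# which tool fits the task, without being hard-coded against the service.
```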
The trade-off is that this approach tends to favor readability and adaptability over raw performance. It’s designed first for understanding, not throughput.

A More Traditional, High-Performance Path

Long before AI agents entered the picture, distributed systems already had a solution for fast, reliable service communication. Remote procedure call frameworks were built to move structured data efficiently between services, often at very high scale.
These systems excel at speed and reliability. They’re well understood, widely deployed, and optimized for predictable interactions. What they don’t provide out of the box is semantic guidance. An AI agent can see what methods exist, but not why or when it should call them. Bridging that gap usually requires an additional translation layer that maps natural language intent to specific calls. This isn’t a flaw so much as a reflection of different design goals.
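For contrast, here is a sketch of what a typed RPC-style client exposes to its caller. InventoryStub is an imagined stand-in for generated client code, not a real library: the method name and argument types are visible, but nothing in the interface says when or why an agent should call it.

```python
# A sketch of the information a typed RPC-style interface carries on its own.

class InventoryStub:
    """Hypothetical generated client: fast, strongly typed, and predictable,
    but silent about intent. The agent can see that GetStockLevel exists and
    what arguments it takes, yet nothing here explains when to use it."""

    def GetStockLevel(self, sku: str, warehouse: str = "") -> int:
        # A real stub would serialize the request and send it over the wire.
        raise NotImplementedError
```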

Different Assumptions, Different Strengths

The contrast between these approaches is less about which is better and more about what each assumes. Some protocols assume the caller already knows exactly what it wants to do. Others assume the caller needs help figuring that out.
In practice, many systems already mix these ideas. A language-friendly interface might help an agent discover what’s possible, while a more traditional protocol handles the heavy lifting once the decision is made. As workloads grow, performance characteristics start to matter more. As systems become more dynamic, discoverability starts to matter more.
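One way to picture that mix, reusing the hypothetical InventoryStub from the sketch above: natural-language descriptions handle discovery, while the typed stub handles execution once the agent has made its choice. The names here are illustrative only.

```python
# A minimal sketch of the mixed approach: descriptions for discovery,
# a typed stub for the heavy lifting.

TOOLS = {
    "check_inventory": {
        "description": (
            "Look up current stock for a product. Use this when the user "
            "asks whether an item is available."
        ),
        "invoke": lambda stub, args: stub.GetStockLevel(**args),
    },
}

def list_tools():
    # Discovery side: readable descriptions the agent can reason over.
    return {name: spec["description"] for name, spec in TOOLS.items()}

def execute(stub, name, args):
    # Execution side: once the decision is made, the call goes over the fast, typed path.
    return TOOLS[name]["invoke"](stub, args)
```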

A Moving Target

It’s worth being cautious about drawing hard conclusions here. Agent architectures are still young, model capabilities are changing quickly, and infrastructure patterns are adjusting in response. What feels like the right abstraction today may look different a year from now as models get better at reasoning, context windows grow, or new standards emerge.
For now, the most useful mindset seems to be pragmatic rather than prescriptive. These protocols are tools, not ideologies. Each solves a slightly different problem, and many real systems will likely end up using more than one.
The important part isn’t picking the “winning” approach. It’s understanding what assumptions you’re making, and being ready to revisit them as both the technology and the needs around it continue to change.