Outcome-Based Tools: The Right Way to Build an MCP Server

Compare a naive endpoint-wrapping MCP server with an outcome-based design, and see why bundling tool calls into purposeful functions produces better answers with fewer tokens.

When you start building an MCP server, it is tempting to expose every API endpoint as its own tool, but that approach quietly burns tokens and forces the model to guess the right sequence of calls.

  • A tool per endpoint floods the context window with duplicate descriptions and raw JSON payloads.
  • Outcome-based tools bundle the required calls into a single, purposeful function that returns a clean result.
  • Parallel-friendly design lets independent calls run side by side instead of forcing the model to chain them manually.

This lesson is a preview from our Building Your First MCP Server and Client Course Online. Enroll in a course for detailed lessons, live instructor support, and project-based training.

The difference between a frustrating MCP server and a useful one usually comes down to how the tool surface is shaped. A bad server treats the protocol as a thin wrapper over HTTP. A good server treats it as an opportunity to design outcomes that an LLM can actually reason about, with the work of fetching, filtering, and combining data hidden inside the server where it belongs.

What the Bad Version Looks Like

Imagine an MCP server for a SpaceX-style API with nine different tools, one per endpoint: list launches, get launch details, list rockets, get rocket details, list crew, and so on. Each tool ships with its own description that must be loaded into context. Each endpoint returns a sprawling JSON document full of internal IDs, duplicate Wikipedia links, image URLs, and unit conversions the model does not need.

Calling a single endpoint like list launches can return more than 180,000 tokens of raw data. On a model with a 200,000-token window, that one call is enough to fill the entire working memory before the question is even answered. On top of that, the model now has to pick the right sequence of tool calls on its own: list launches, then pick the latest one, then grab the launch ID, then look up the rocket, then the launchpad, then the crew. Without explicit sequencing logic, it is gambling every time.
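To make the cost concrete, here is a rough sketch with made-up data: a naive list-launches tool that returns verbose records verbatim, measured with a crude four-characters-per-token estimate. Both the record shape and the heuristic are illustrative, not the real API or a real tokenizer.

```python
import json

# Hypothetical verbose launch record; real payloads are far larger.
RAW_LAUNCH = {
    "id": "5eb87d46ffd86e000604b388",
    "name": "Demo Flight",
    "rocket": "5e9d0d95eda69973a809d1ec",
    "launchpad": "5e9e4502f509094188566f88",
    "links": {"wikipedia": "https://example.com/wiki",
              "patch": "https://example.com/patch.png"},
    "fairings": {"reused": True, "recovery_attempt": False},
}

def list_launches() -> list:
    """Naive endpoint-wrapping tool: every record, every field, verbatim."""
    return [RAW_LAUNCH] * 200  # the real endpoint returns hundreds of launches

def rough_token_count(payload) -> int:
    """Crude heuristic: roughly four characters of JSON per token."""
    return len(json.dumps(payload)) // 4

print(rough_token_count(list_launches()))
```

Even this toy payload lands in the tens of thousands of tokens for a single call, and every internal ID and duplicate link in it is context the model pays for but never uses.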

What the Good Version Looks Like

The outcome-based approach flips the design around. Instead of asking "What endpoints do I have?", ask "What outcomes do my users want?" A mission briefing tool can internally fetch the launch, rocket, launchpad, and crew data, combine them into a tidy markdown summary, and return that as a single payload. A next launch tool can surface just the upcoming flight with the context a user would actually read. A rocket profile tool can describe a given vehicle and the missions that have used it.
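A minimal sketch of that mission briefing shape, using hypothetical `fetch_*` helpers backed by canned sample data (a real server would make HTTP calls to the upstream API in their place):

```python
# Hypothetical fetchers with canned sample data; a real server
# would call the upstream API here.
def fetch_latest_launch() -> dict:
    return {"name": "Demo Mission", "date": "2024-09-28",
            "rocket_id": "f9", "crew_ids": ["c1", "c2"]}

def fetch_rocket(rocket_id: str) -> dict:
    return {"f9": {"name": "Falcon 9"}}[rocket_id]

def fetch_crew_member(crew_id: str) -> dict:
    return {"c1": {"name": "Jane Doe"}, "c2": {"name": "John Roe"}}[crew_id]

def mission_briefing() -> str:
    """Outcome-based tool: several fetches in, one clean markdown payload out."""
    launch = fetch_latest_launch()
    rocket = fetch_rocket(launch["rocket_id"])
    crew = [fetch_crew_member(c)["name"] for c in launch["crew_ids"]]
    return "\n".join([
        f"# Mission Briefing: {launch['name']}",
        f"- Date: {launch['date']}",
        f"- Rocket: {rocket['name']}",
        f"- Crew: {', '.join(crew)}",
    ])

print(mission_briefing())
```

The model sees one tool, one call, and a few hundred characters of markdown instead of four raw JSON documents and an ID-chasing puzzle.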

Because the server handles the orchestration, the LLM does not have to reason about order of operations, and the number of tokens that flow into the context window collapses dramatically. When calls do not depend on each other, the server can run them in parallel, further improving latency and reliability.
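When the rocket, launchpad, and crew lookups do not depend on one another, the server can issue them concurrently. A minimal `asyncio` sketch, with simulated latency standing in for real HTTP calls:

```python
import asyncio
import time

async def fetch(resource: str, delay: float = 0.1) -> str:
    """Stand-in for an HTTP call; sleeps to simulate network latency."""
    await asyncio.sleep(delay)
    return f"{resource}-data"

async def gather_briefing_parts() -> list:
    # Independent lookups run side by side rather than one after another.
    return await asyncio.gather(
        fetch("rocket"), fetch("launchpad"), fetch("crew"))

start = time.perf_counter()
parts = asyncio.run(gather_briefing_parts())
elapsed = time.perf_counter() - start
print(parts, f"{elapsed:.2f}s")
```

Three 0.1-second calls complete in roughly 0.1 seconds total instead of 0.3, and the model never has to know the fan-out happened.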

Designing for Outcomes

When you sit down to plan your tool surface, it helps to list the questions your users are actually going to ask. Who is on the crew of the last crewed mission? When is the next launch? What rockets has this astronaut flown on? Each of those questions maps cleanly to an outcome-based tool, and each of those tools becomes a small pipeline inside your server. That pipeline can pull from multiple endpoints, strip out noise, and return only the fields that matter.
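The noise-stripping step of such a pipeline can be sketched like this; the field names are illustrative, not the real API schema:

```python
# Illustrative raw record; real API responses carry far more fields.
RAW_RECORD = {
    "id": "5eb87d46ffd86e000604b388",   # internal ID the model never needs
    "name": "Demo Flight",
    "date_utc": "2024-01-01T00:00:00Z",
    "success": True,
    "links": {"wikipedia": "https://example.com/wiki",
              "patch": "https://example.com/patch.png"},
    "fairings": {"reused": True},
}

KEEP = {"name", "date_utc", "success"}

def strip_noise(record: dict, keep: set) -> dict:
    """Return only the fields a user-facing answer actually needs."""
    return {k: v for k, v in record.items() if k in keep}

print(strip_noise(RAW_RECORD, KEEP))
```

Each outcome-based tool defines its own `keep` set, so the same upstream record can feed a mission briefing, a next launch summary, or a rocket profile without any of them dragging unused fields into context.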

The result is a server that respects the model's attention, keeps costs down, and produces consistent answers. Think in terms of what the user wants to know, not in terms of what the upstream API happens to expose.

The biggest lesson for new MCP builders is to design in outcomes, not endpoints. Wrap the complexity inside your server, hand the model clean payloads, and sequence or parallelize calls where it makes sense. Do that, and the MCP layer becomes a genuine upgrade to the data it wraps, rather than a liability sitting between a language model and its context budget.

How to Learn AI

Build practical, career-focused skills in AI through hands-on training designed for beginners and professionals alike. Learn fundamental tools and workflows that prepare you for real-world projects or industry certification.