Frequently Asked Questions

Find answers to common questions about MCPLambda.

General

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard that enables AI models (such as Claude or GPT-4) to securely interact with external tools, APIs, and data sources. Think of it as a universal adapter for AI agents.

Why should I use MCPLambda instead of running my own server?

Deploying MCP servers manually involves managing infrastructure, setting up secure networking, handling stateful connections, and configuring secrets. MCPLambda automates all of this, providing a “Vercel-like” experience for backend agent infrastructure.

Deployment & State

What is the difference between stateful and stateless deployments?

  • Stateless: The server doesn’t remember anything between requests. This is ideal for simple data fetching or time-conversion servers.
  • Stateful: The server needs to maintain data across restarts (e.g., a local database or a session history). On MCPLambda, you can request persistent storage for stateful servers with a single click.

How do I connect my agent to my newly deployed MCP server?

Copy the unique Deployment URL from your dashboard and add it to your MCP client (such as Cursor, VS Code, or Claude Desktop). MCPLambda exposes a single endpoint per server, serving either Streamable HTTP or SSE depending on your transport configuration. Most modern AI agents support these protocols natively via their configuration files or settings menus.
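As a sketch, a remote-server entry in an MCP client config typically looks like the following. The server name "my-server" and the placeholder URL are hypothetical, and the exact schema varies by client, so check your client's documentation:

```json
{
  "mcpServers": {
    "my-server": {
      "url": "https://<your-deployment-url>"
    }
  }
}
```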

Pricing & Resources

What are “Compute Units”?

Compute Units are our way of providing a flexible resource pool. Instead of buying a fixed number of servers, you get a pool of units (e.g., 10 units on the Team plan). You can use these units to deploy any combination of Small (1 unit), Medium (2 units), or Large (4 units) servers.
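The accounting above can be sketched in a few lines. The unit costs match the FAQ (Small = 1, Medium = 2, Large = 4); the helper names and the 10-unit default are illustrative, not part of any MCPLambda API:

```python
# Unit costs per server size, as described in the FAQ.
UNIT_COST = {"small": 1, "medium": 2, "large": 4}

def units_used(deployments):
    """Total Compute Units consumed by a list of server sizes."""
    return sum(UNIT_COST[size] for size in deployments)

def fits_in_plan(deployments, pool=10):
    """True if the deployments fit within the plan's unit pool (10 on Team)."""
    return units_used(deployments) <= pool

# Two Large + one Medium = 4 + 4 + 2 = 10 units, exactly the Team pool.
print(units_used(["large", "large", "medium"]))                   # 10
print(fits_in_plan(["large", "large", "medium", "small"]))        # False: 11 > 10
```

So a Team plan could run, for example, ten Small servers, five Medium servers, or any mix that sums to 10 units or fewer.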

Can I upgrade my plan later?

Yes, you can upgrade or downgrade your plan at any time. Changes are prorated.

What happens if I go over my unit limit?

We will notify you when you approach your limit. If you exceed it, we provide flexible overage pricing based on the size of the extra server profiles you deploy.

Security

What runs my MCP server under the hood?

MCPLambda runs every deployment on ToolHive, an open-source runtime purpose-built for MCP servers. ToolHive wraps each server in an isolated container with minimal permissions, network access filtering, and an SSE proxy so container ports are never exposed directly to the network.

How are my API keys stored?

Your API keys are stored as Secrets in our encrypted database. They are decrypted only on demand, when displayed in the dashboard UI, and are securely injected into your running deployment as environment variables. ToolHive handles the final step of making them available inside the sandboxed container without exposing them to the host.
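From your server's point of view, an injected Secret is just an ordinary environment variable. A minimal sketch, assuming a secret named MY_API_KEY was configured in the dashboard (the name and helper below are hypothetical; the setdefault line only simulates the injection for this example):

```python
import os

def load_secret(name: str) -> str:
    """Read a secret that MCPLambda injected as an environment variable."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; configure it as a Secret in the dashboard.")
    return value

# Simulate the platform's injection so this sketch runs standalone.
os.environ.setdefault("MY_API_KEY", "demo-value")
print(load_secret("MY_API_KEY"))
```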

Is my code private?

Yes. MCPLambda provides isolated environments for every deployment. For the GitOps flow, we use temporary build environments that are wiped after the container image is created. Every running server lives in its own ToolHive-managed container, so workloads cannot observe or reach each other.


Still have questions? Reach out to our team at hello@mcplambda.io or join our community Discord.