Introduction
Welcome to the MCPLambda documentation.
What is MCPLambda?
The artificial intelligence landscape is shifting towards autonomous agents. The Model Context Protocol (MCP) is the open standard that connects these AI models to external tools, databases, and APIs.
However, building agents is easy; managing the infrastructure to power them is hard. MCP servers are often stateful, requiring persistent connections and complex orchestration.
MCPLambda is a first-of-its-kind Platform-as-a-Service (PaaS) engineered to bridge this gap. It provides a “Vercel-like” experience for AI infrastructure, allowing you to deploy, manage, and scale MCP servers in seconds.
Why MCPLambda?
- Infrastructure-as-Code to Production in Seconds: Deploy directly from package managers (npx, uvx), Git repositories, or Docker images.
- Stateful by Design: Seamlessly handle long-lived connections and persistent storage requirements without manual Kubernetes configuration.
- Security First: Fine-grained tool management and secure secret injection.
- Developer Velocity: Abstract away the complexities of Kubernetes, networking, and storage so you can focus on building agents.
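The deployment sources listed above (package managers, Git, Docker) could be captured in a declarative manifest along these lines. This is an illustrative sketch only — the file layout and field names (`source`, `compute`, `transport`, `secrets`) are assumptions for the sake of example, not a documented MCPLambda schema:

```yaml
# Hypothetical deployment manifest (illustrative; field names are
# assumptions, not a documented MCPLambda schema).
name: weather-tools
source:
  type: npx                  # or: uvx | git | docker
  package: "@example/mcp-weather-server"
compute: small               # Compute Unit profile: Small | Medium | Large
transport: streamable-http   # or: stdio | sse
secrets:
  - WEATHER_API_KEY          # injected securely at runtime, never stored in code
```

A manifest like this pairs naturally with the "Infrastructure-as-Code to Production in Seconds" promise: the same file that describes the server can drive every redeploy.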
Core Concepts
To get the most out of MCPLambda, it’s helpful to understand a few key concepts:
- Deployments: A running instance of your MCP server.
- Projects: Logical groupings of deployments, secrets, and team members.
- Compute Units: A flexible resource model that scales with your needs (Small, Medium, and Large profiles).
- Transport: How your agent communicates with the server (Standard I/O, Streamable HTTP, or SSE).
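To make the Transport concept concrete: under the hood, MCP messages are JSON-RPC 2.0, and over Standard I/O each message is sent as a single line of JSON. The sketch below builds the `initialize` request an MCP client sends as its first message; it follows the public Model Context Protocol specification rather than anything MCPLambda-specific, and the client name and protocol version shown are illustrative values.

```python
import json

def make_initialize_request(request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 `initialize` request that an MCP client
    sends as its first message. Over the stdio transport, this string
    is written to the server's stdin as one newline-terminated line."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # Illustrative protocol version; clients advertise the
            # spec revision they support during the handshake.
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(msg)

# The same JSON body travels over Streamable HTTP or SSE; only the
# framing (HTTP request vs. stdin line) differs between transports.
print(make_initialize_request())
```

The key point for choosing a transport is that the payloads are identical across all three options — Standard I/O suits locally spawned servers, while Streamable HTTP and SSE suit remotely hosted deployments.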