# Introduction

> PromptRails is an AI agent orchestration platform for building, deploying, and monitoring LLM-powered applications.


# What is PromptRails?

PromptRails is a platform for building, deploying, and monitoring AI agents. It provides a complete infrastructure layer between your application and LLM providers, giving you full control over prompt engineering, agent orchestration, observability, and cost management.

Whether you are prototyping a single chatbot or running a fleet of production agents across multiple LLM providers, PromptRails provides the tools to manage the entire lifecycle.

## Key Features

- **Agent Orchestration** -- Build agents using five distinct execution strategies: simple, chain, multi-agent, workflow, and composite. Compose complex AI pipelines from reusable building blocks.

- **Prompt Management** -- Version-controlled prompts with Jinja2 templating, input/output schemas, model assignment, and caching. Promote and roll back versions without code changes.

- **Multi-Provider LLM Support** -- Connect to OpenAI, Anthropic, Google Gemini, DeepSeek, Fireworks, xAI, and OpenRouter through a unified credential system.

- **Tracing and Observability** -- OpenTelemetry-style distributed tracing with 14 span kinds. Track every LLM call, tool invocation, guardrail evaluation, and data source query with full cost and latency breakdowns.

- **Guardrails** -- 14 built-in scanner types for input and output validation, including toxicity detection, PII filtering, and prompt injection prevention. Configure block, redact, or log actions per scanner.

- **MCP Tools** -- First-class Model Context Protocol support. Connect external APIs, data sources, built-in functions, and remote MCP servers as tools for your agents.

- **Data Sources** -- Query PostgreSQL, MySQL, BigQuery, Snowflake, Redshift, MSSQL, ClickHouse, or static files directly from your agents using versioned, parameterized query templates.

- **Memory System** -- Five memory types (conversation, fact, procedure, episodic, semantic) with vector embedding support and semantic search for context-aware agents.

- **Scoring and Evaluation** -- Score executions and spans with numeric, categorical, or boolean metrics. Support for manual scoring, API-based scoring, and LLM judge automated evaluation.

- **Human-in-the-Loop Approvals** -- Pause agent execution at configurable checkpoints and require human approval before continuing. Integrate approval workflows via webhooks.

- **Cost Tracking** -- Automatic per-execution and per-span cost calculation across all LLM providers. Workspace-wide cost summaries and per-agent cost analysis.

- **Agent UI Deployments** -- Build and deploy interactive dashboards backed by your agents, prompts, and data sources. Multi-page layouts with a 12-column grid system and optional PIN protection.

- **A2A Protocol** -- Agent-to-Agent communication via Google's A2A protocol with agent cards, JSON-RPC messaging, and task lifecycle management.

- **SDKs and CLI** -- Official Python and JavaScript/TypeScript SDKs, a full-featured CLI, and an MCP server for IDE integration with Claude Desktop, Cursor, and Windsurf.
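To make the Prompt Management bullet above more concrete: a versioned prompt is, at its core, a Jinja2 template plus named inputs. The template text and variable names below are invented for illustration (they are not PromptRails defaults), but the rendering itself uses the standard `jinja2` library:

```python
from jinja2 import Environment, StrictUndefined

# Hypothetical prompt template -- the wording and variables are illustrative,
# not actual PromptRails defaults.
TEMPLATE = (
    "You are a support agent for {{ product }}.\n"
    "Answer the user's question in a {{ tone }} tone.\n\n"
    "Question: {{ question }}"
)

# StrictUndefined makes a missing variable raise an error instead of
# rendering as blank -- roughly what an input schema enforces platform-side.
env = Environment(undefined=StrictUndefined)
prompt = env.from_string(TEMPLATE).render(
    product="PromptRails",
    tone="friendly",
    question="How do I roll back a prompt version?",
)
print(prompt)
```

Because the template is data rather than code, promoting or rolling back a prompt version means swapping the template text, with no application redeploy.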

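The Guardrails bullet above mentions per-scanner block, redact, or log actions. As a rough sketch of what those actions amount to (a generic regex illustration, not the platform's actual scanner implementation), consider a single email-detecting scanner:

```python
import re

# Illustrative email pattern -- real PII scanners cover many more entity types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scan(text: str, action: str = "redact") -> str:
    """Apply a per-scanner action: 'block' rejects, 'redact' masks, 'log' passes through."""
    if not EMAIL_RE.search(text):
        return text
    if action == "block":
        raise ValueError("input rejected: PII detected")
    if action == "redact":
        return EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    return text  # "log": record the finding elsewhere and pass the text through

print(scan("Contact me at jane@example.com", action="redact"))
# -> Contact me at [REDACTED_EMAIL]
```

The same pattern generalizes across scanner types: detection is scanner-specific, while the action decides whether the input is rejected, rewritten, or merely recorded.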
## How It Works

PromptRails handles the full lifecycle of AI agents, from development to production. The platform provides:

- **A web dashboard** for building and managing agents, prompts, and data sources
- **REST APIs** for programmatic access to all platform features
- **Real-time tracing** with detailed execution breakdowns
- **Secure credential storage** with encrypted secrets

All credentials are encrypted at rest and never exposed in API responses.
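To give a feel for programmatic access, here is a sketch of what an agent-execution request over the REST API might look like. The endpoint path, payload field, host, and key format are all assumptions for illustration; the actual API reference is documented elsewhere. The request is built but not sent:

```python
import json
import urllib.request

BASE_URL = "https://promptrails.example.com"  # hypothetical host
API_KEY = "pr_live_example"                   # hypothetical key format

def build_execute_request(agent_id: str, user_input: str) -> urllib.request.Request:
    """Build (but do not send) a hypothetical agent-execution request."""
    payload = {"input": user_input}  # field name is an assumption
    return urllib.request.Request(
        url=f"{BASE_URL}/api/v1/agents/{agent_id}/execute",  # hypothetical path
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_execute_request("agent_123", "Summarize yesterday's tickets.")
print(req.full_url)
```

Whatever the exact routes, the key is already the shape shown here: a bearer credential in a header rather than in the payload, consistent with the note below that secrets are never exposed in API responses.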

## Getting Started

The fastest way to start using PromptRails is to install one of the SDKs and execute your first agent:

- [Quickstart Guide](/docs/quickstart) -- Get up and running in under 5 minutes
- [Python SDK](/docs/python-sdk) -- Full Python SDK reference
- [JavaScript SDK](/docs/javascript-sdk) -- Full JavaScript/TypeScript SDK reference
