AI Governance
Policy Mesh
Deterministic AI control plane for auditable model routing across local and cloud providers.
What it is
Policy Mesh is a lightweight AI control plane that routes LLM requests between local and cloud providers based on explicit, deterministic policy.
It acts as a governed gateway for AI usage, enabling teams to control where prompts are processed, enforce decision logic, and generate audit evidence without exposing raw data.
Problem
Teams adopting LLMs lack control over:
- where prompts are processed
- how routing decisions are made
- how cost, security, and compliance constraints are enforced
- what audit trail exists after a request is executed
Most integrations rely on implicit logic or application-level decisions, which makes behavior difficult to reason about and hard to audit.
Approach
Policy Mesh centralizes AI routing decisions behind a single service that:
- evaluates each request against policy rules
- deterministically selects a provider
- executes the request
- records a structured audit event
- exposes metrics for observability
This separates application logic from governance and control, allowing consistent enforcement across systems.
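The five-step flow above can be sketched as a single gateway function. This is a minimal illustration under assumed names; the function signatures, request shape, and in-memory stores are inventions for the sketch, not Policy Mesh's actual internals.

```python
# Sketch of the gateway lifecycle: evaluate -> select/execute -> audit -> metrics.
# All names and data shapes here are illustrative assumptions.

AUDIT_LOG: list[dict] = []
METRICS: dict[str, int] = {}

def evaluate(request: dict) -> dict:
    # 1. Evaluate policy rules deterministically: sensitive requests stay local.
    provider = "ollama" if request.get("sensitive") else "openai"
    return {"provider": provider}

def execute(decision: dict, request: dict) -> dict:
    # 2-3. Provider selection is already decided; the actual call is elided.
    return {"provider": decision["provider"], "status": "ok"}

def record_audit(request: dict, decision: dict, response: dict) -> None:
    # 4. Persist a structured audit event (in-memory stand-in here).
    AUDIT_LOG.append({"provider": decision["provider"], "status": response["status"]})

def update_metrics(decision: dict, response: dict) -> None:
    # 5. Increment an observability counter keyed by provider.
    key = f'requests_total{{provider="{decision["provider"]}"}}'
    METRICS[key] = METRICS.get(key, 0) + 1

def handle(request: dict) -> dict:
    decision = evaluate(request)
    response = execute(decision, request)
    record_audit(request, decision, response)
    update_metrics(decision, response)
    return response
```

Because the application only calls the gateway, governance logic changes in one place rather than in every integrating service.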
Key Capabilities
Deterministic Policy Engine
Routes between Ollama and OpenAI using sensitivity signals, prompt characteristics, and configurable defaults, with explicit reason codes for every decision.
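A decision function of this kind might look as follows. The reason codes, the length threshold, and the local-first default are invented for illustration; the point is that identical inputs always yield the same (provider, reason) pair.

```python
# Hypothetical deterministic routing rule set; reason codes and the
# 8000-character threshold are illustrative, not the project's real values.

def route(prompt: str, sensitive: bool,
          default_provider: str = "ollama") -> tuple[str, str]:
    if sensitive:
        # Sensitivity signal wins: keep the prompt on the local provider.
        return "ollama", "SENSITIVITY_FLAG"
    if len(prompt) > 8000:
        # Prompt characteristic: too large for the local model's context.
        return "openai", "PROMPT_TOO_LARGE_FOR_LOCAL"
    # Configurable default applies when no rule matches.
    return default_provider, "DEFAULT_PROVIDER"
```

Returning the reason code alongside the provider means every audit event can explain *why* a request went where it did.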
Audit-First Design
Persists structured audit events including prompt hash, metadata, decision, latency, and status without storing raw prompts by default.
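An audit record along these lines could be built as below; the field names are assumptions based on the description, and the key property is that only a hash and metadata of the prompt are kept, never its text.

```python
import hashlib
import time

# Illustrative audit event; field names are assumed, not the real schema.
def audit_event(prompt: str, decision: str, reason: str,
                latency_ms: float, status: str) -> dict:
    return {
        # SHA-256 of the prompt allows correlation without storing content.
        "prompt_hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt_chars": len(prompt),  # metadata only
        "decision": decision,
        "reason": reason,
        "latency_ms": latency_ms,
        "status": status,
        "ts": time.time(),
    }
```

Hashing rather than storing the prompt keeps audit evidence useful for compliance while never persisting raw data by default.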
Operational Observability
Exposes Prometheus-compatible metrics and structured logs for request volume, provider usage, latency, and failures.
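As a rough idea of what Prometheus-compatible exposition means, here is a stdlib-only sketch that renders a request counter in the Prometheus text format; a real service would more likely use the prometheus_client library, and the metric name here is invented.

```python
from collections import Counter

# (provider, status) -> request count; a stand-in for a real metrics registry.
REQUESTS: Counter = Counter()

def observe(provider: str, status: str) -> None:
    REQUESTS[(provider, status)] += 1

def render_metrics() -> str:
    # Emit the Prometheus text exposition format for a single counter.
    lines = ["# TYPE policymesh_requests_total counter"]
    for (provider, status), n in sorted(REQUESTS.items()):
        lines.append(
            f'policymesh_requests_total{{provider="{provider}",status="{status}"}} {n}'
        )
    return "\n".join(lines)
```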
API-Driven Control Plane
FastAPI service with endpoints for chat, health, audit retrieval, and metrics, with OpenAPI documentation.
Configurable Development Model
Supports configurable routing across local and cloud providers, with Docker-based dependencies for repeatable development.
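A development setup of this shape could be expressed with a Docker Compose fragment like the one below; the service names, image tags, ports, and environment variables are illustrative assumptions, not the project's actual compose file.

```yaml
# Hypothetical docker-compose fragment for local development dependencies.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: policymesh
    ports:
      - "5432:5432"
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
```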
Current State
- End-to-end request flow implemented (/v1/chat), including decision, execution, audit persistence, and response handling.
- Integration tests validate the full request lifecycle.
- CI pipeline builds, tests, and runs the containerized service with health validation.
- Metrics and audit endpoints are operational.
Why it matters
Policy Mesh demonstrates how AI usage can be treated as a platform capability, not an application concern.
It provides a foundation for governance and compliance controls, cost-aware model routing, configurable AI provider strategies, and standardized AI access patterns across teams.
Technology
- Python
- FastAPI
- Pydantic
- Postgres
- Alembic
- Ollama
- OpenAI
- Prometheus metrics
- Docker Compose
- GitHub Actions
- pytest
Links