Architecture Overview

Praxis has a distributed architecture designed for monitoring and controlling AI agents across multiple systems. Let's walk through how the pieces fit together.

The Big Picture

                              ┌─────────────────┐
                              │   Web Browser   │
                              │  (React SPA)    │
                              └────────┬────────┘
                                       │ HTTP/WebSocket
                              ┌────────▼────────┐
                              │      Web        │
                              │ (HTTP Server)   │
                              └────────┬────────┘
                                       │ Internal
                              ┌────────▼────────┐
                              │    Service      │
                              │  (Backend)      │
                              └────────┬────────┘
                                       │ RabbitMQ (AMQP)
              ┌────────────────────────┼────────────────────────┐
              │                        │                        │
       ┌──────▼──────┐          ┌──────▼──────┐          ┌──────▼──────┐
       │    Node     │          │    Node     │          │    Node     │
       │ (Target A)  │          │ (Target B)  │          │ (Target C)  │
       └─────────────┘          └─────────────┘          └─────────────┘

Components

Node

The node runs on target systems where AI agents are installed. It's the "eyes and hands" of Praxis on each endpoint.

What it does:

  • Fingerprints installed agents
  • Performs reconnaissance on agent configurations and sessions
  • Intercepts traffic between agents and LLM backends
  • Creates and manages sessions with agents
  • Provides PTY terminal access to the system

Key characteristics:

  • Stateless - all persistent data lives on the service
  • Single binary, no dependencies
  • Communicates with service over RabbitMQ

See Node Architecture for details.

Service

The service is the central backend that coordinates everything.

What it does:

  • Tracks all connected nodes and their agents
  • Stores configuration, operation definitions, and chain workflows
  • Manages the semantic operations queue
  • Executes chains by orchestrating multi-step workflows
  • Persists intercepted traffic and recon results
  • Handles LLM provider integrations

Key characteristics:

  • Persistent storage (SQLite default, PostgreSQL for production)
  • Stateful - knows about all nodes and their state
  • Runs the operation manager and chain executor

See Service Architecture for details.

Web

The web component serves the frontend and provides the API.

What it does:

  • Serves the React single-page application
  • Provides WebSocket endpoint for real-time communication
  • Handles HTTP requests for static assets
  • Bridges between browser clients and the service

Key characteristics:

  • React/TypeScript frontend with Tailwind CSS
  • WebSocket for bidirectional communication
  • Frontend assets are embedded into the binary at build time

See Web Architecture for details.

Communication

No direct client↔node traffic

The service is the only component that talks to nodes. Clients (CLI, web, external ACP tools) speak to the service; the service forwards to the relevant node over RabbitMQ. This keeps access control, session routing, and request correlation in one place and means node failure modes never leak into clients.

  CLI                 ─▶ RabbitMQ ─▶ Service ─▶ RabbitMQ ─▶ Node
  Web SPA             ─▶
  External ACP client ─▶

ACP (Agent Client Protocol)

Each node exposes a single ACP server (node/src/acp_server/) over RabbitMQ. That one endpoint drives every local agent on the node; the connector to use is selected per session via _meta.praxis.connector on the session/new request. A node supports multiple concurrent sessions, each with its own freshly built Lua VM.
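
As an illustration, a session/new frame selecting a connector could look roughly like the following serde_json sketch; only the _meta.praxis.connector path comes from the protocol description above, and every other field (the id, the connector name, any other params) is an assumption:

  use serde_json::{json, Value};

  // Sketch of a session/new request that picks a connector for this session.
  // Only the _meta.praxis.connector path comes from the text above; the id,
  // connector name, and remaining params are illustrative.
  fn new_session_frame() -> Value {
      json!({
          "jsonrpc": "2.0",
          "id": 1,
          "method": "session/new",
          "params": {
              "_meta": {
                  "praxis": { "connector": "example-connector" }
              }
          }
      })
  }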

The service-side proxy (service/src/acp_node_proxy.rs) routes frames:

  • External client → service → target node (selected by _meta.praxis.nodeId).
  • Node → service → originating client (by correlated client_id).
  • Service's internal orchestrator → node, using a svc_* pseudo-client-id so responses are consumed in-process instead of being forwarded.
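
A minimal sketch of that three-way routing decision, using hypothetical types and a simplified signature standing in for the real logic in service/src/acp_node_proxy.rs:

  // Hypothetical types; the actual proxy logic is more involved.
  enum Destination {
      Node(String),    // publish to Node_{id}
      Client(String),  // publish to Client_{id}
      InProcess,       // consumed by the service's own orchestrator
  }

  fn route(client_id: &str, node_id: Option<&str>) -> Destination {
      match (node_id, client_id) {
          // Inbound frame carrying _meta.praxis.nodeId: forward to that node.
          (Some(node), _) => Destination::Node(node.to_string()),
          // Outbound frame addressed to a svc_* pseudo-client: keep in-process.
          (None, id) if id.starts_with("svc_") => Destination::InProcess,
          // Everything else goes back to the originating client's queue.
          (None, id) => Destination::Client(id.to_string()),
      }
  }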

Recon is a custom ACP extension (_praxis/recon) plus four file-op extensions (_praxis/read_file, _praxis/write_file, _praxis/grep_files, _praxis/write_session_content). The node advertises them in InitializeResponse._meta.extensions along with the connector catalog.
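
The advertisement might look roughly like this; only the extension names come from the paragraph above, and the surrounding field layout is an assumption:

  use serde_json::{json, Value};

  // Illustrative shape of InitializeResponse._meta; only the extension names
  // come from the text above, the field layout is assumed.
  fn init_response_meta() -> Value {
      json!({
          "extensions": [
              "_praxis/recon",
              "_praxis/read_file",
              "_praxis/write_file",
              "_praxis/grep_files",
              "_praxis/write_session_content"
          ],
          "connectors": []  // connector catalog, contents elided
      })
  }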

RabbitMQ

All communication between nodes, service, and web clients flows through RabbitMQ:

  Queue              Direction               Purpose
  NodeSignal         Node → Service          Registration, traffic, recon results, outbound ACP frames
  Node_{id}          Service → Node          Commands, parser responses, inbound ACP frames
  NodeBroadcast      Service → All Nodes     Refresh requests (fanout exchange)
  ClientSignal       Client → Service        UI requests, inbound ACP frames
  Client_{id}        Service → Client        Direct responses, outbound ACP frames
  ClientBroadcast    Service → All Clients   State updates (fanout exchange)
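
For a feel of the plumbing, here is a minimal publish to the NodeSignal queue, using the lapin AMQP crate as an example client; the broker URL, queue options, and payload shape are all assumptions, not Praxis's actual wire format:

  use lapin::{options::*, types::FieldTable, BasicProperties, Connection, ConnectionProperties};

  // Publish a single message to the shared node → service queue.
  // Broker URL and payload shape are assumptions for local development.
  async fn publish_registration() -> lapin::Result<()> {
      let conn = Connection::connect(
          "amqp://guest:guest@localhost:5672/%2f",
          ConnectionProperties::default(),
      )
      .await?;
      let channel = conn.create_channel().await?;

      // Make sure the queue exists before publishing to it.
      channel
          .queue_declare(
              "NodeSignal",
              QueueDeclareOptions::default(),
              FieldTable::default(),
          )
          .await?;

      // Publish on the default exchange, routed directly to NodeSignal.
      channel
          .basic_publish(
              "",
              "NodeSignal",
              BasicPublishOptions::default(),
              br#"{"type":"registration","node_id":"node-a"}"#,
              BasicProperties::default(),
          )
          .await? // returns a publisher-confirm future
          .await?; // resolves once the broker acknowledges (or immediately if confirms are off)

      Ok(())
  }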

RabbitMQ provides:

  • Reliable message delivery
  • Decoupling between components
  • Easy scaling (nodes can come and go)
  • Persistence for messages in flight

Message Flow Example

Here's what happens when a CLI driver runs a prompt over ACP:

  1. CLI (ACP proxy) → ClientSignal → Service
  2. Service (AcpNodeProxy) sees _meta.praxis.nodeId, forwards the raw JSON-RPC frame via Node_{id} → Node
  3. Node (NodeAcpServer) processes session/new / session/prompt / etc., running on a per-session Lua VM
  4. Node emits response + session/update notifications on NodeSignal
  5. Service (AcpNodeProxy::forward_to_client) routes them to the originating Client_{id} queue
  6. CLI reads responses from its client queue and emits them on stdout
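
To make step 2 concrete, the field the proxy keys on might sit in a frame like this; apart from the _meta.praxis.nodeId path, everything in the sketch is illustrative:

  use serde_json::{json, Value};

  // Step 2: AcpNodeProxy reads _meta.praxis.nodeId and forwards the raw frame
  // to that node's Node_{id} queue. Apart from that path, every field here
  // (session id, prompt shape, node id) is illustrative.
  fn prompt_frame() -> Value {
      json!({
          "jsonrpc": "2.0",
          "id": 7,
          "method": "session/prompt",
          "params": {
              "sessionId": "sess-123",
              "prompt": [{ "type": "text", "text": "..." }],
              "_meta": { "praxis": { "nodeId": "node-a" } }
          }
      })
  }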

Data Flow

Intercepted Traffic

Agent ─HTTPS─▶ Proxy ─▶ Node ─RabbitMQ─▶ Service ─▶ Database
                                           │
                                           └─▶ Web ─WebSocket─▶ Browser

Operations

Browser ─▶ Web ─▶ Service ─▶ LLM (planning)
                     │
                     └─▶ Node ─▶ Agent (execution)
                           │
                           └─▶ Output ─▶ Service ─▶ Browser

Database Schema

The service stores everything in a relational database:

  • config - key-value settings (LLM configs, etc.)
  • operation_definitions - saved operation templates
  • semantic_operations - operation execution history
  • chain_definitions - workflow definitions
  • chain_executions - workflow execution history
  • traffic_log - intercepted HTTP traffic
  • intercept_rules - traffic matching rules
  • recon_results - cached reconnaissance data
  • application_logs - centralized logging (controlled by application_logs_enabled)
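
As an illustration of querying this schema from Rust with sqlx, assuming hypothetical column names (the actual columns are defined by the service and not shown here):

  use sqlx::sqlite::SqlitePool;

  // Column names are assumptions for illustration only.
  #[derive(sqlx::FromRow)]
  struct TrafficRow {
      id: i64,
      method: String,
      url: String,
  }

  // Fetch the most recent intercepted requests from traffic_log.
  async fn recent_traffic(pool: &SqlitePool) -> Result<Vec<TrafficRow>, sqlx::Error> {
      sqlx::query_as::<_, TrafficRow>(
          "SELECT id, method, url FROM traffic_log ORDER BY id DESC LIMIT 50",
      )
      .fetch_all(pool)
      .await
  }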

Deployment Patterns

Development

Single machine running everything:

  • Docker Compose with service, web, and RabbitMQ
  • Node running locally for testing

Production

Separate concerns:

  • Service/Web on central server
  • RabbitMQ (possibly managed service)
  • Nodes deployed to target systems
  • PostgreSQL for the database

Cloud (Azure)

See Azure Deployment:

  • Container Apps for service/web
  • Managed RabbitMQ or Container Instance
  • Azure Database for PostgreSQL