Introduction
Praxis is an open-source research and experimentation platform for discovering, controlling, and orchestrating computer-use AI agents across endpoints.
As AI coding agents become more prevalent - tools that can read files, execute commands, and interact directly with systems - understanding their security properties becomes critical. Praxis helps enrich our understanding of what's possible when you have legitimate access to systems where these agents run, and what that means for endpoint security.
Built by Origin for security research and red team operations.
Why Does This Exist?
AI coding assistants are everywhere now - Claude Code, Codex CLI, Gemini CLI, Microsoft 365 Copilot. These tools can read your files, execute commands, browse the web, and interact with APIs. From a security perspective, they're incredibly interesting.
Praxis started as a question: what can you do if you have access to a system running one of these agents? Not by exploiting vulnerabilities in the agents themselves, but by using the access you already have to see what they're doing and repurpose their capabilities.
This matters for:
- Red teams exploring post-compromise scenarios where AI agents are present
- Security researchers understanding the attack surface these tools create
- Blue teams wanting to know what visibility they have (or don't have) into agent activity
What Can Praxis Do?
| Feature | Description |
|---|---|
| Agent Discovery | Fingerprint and detect computer-use agents on endpoints |
| Reconnaissance | Enumerate tools (MCP servers, skills), configurations, and session histories |
| Config Visibility | View and edit agent configuration files directly |
| Traffic Interception | MITM proxy for agent-to-LLM traffic |
| Agent Dialog | Create interactive sessions with agents |
| Semantic Operations | Define and chain natural language tasks for multi-step automation |
| Chain Automation | Trigger chains automatically on schedules, intercept matches, or new node events |
| Toolkit | Library of built-in offensive operations with chain integration |
| Terminal Access | PTY terminal on remote nodes |
The Three Components
Praxis has three main pieces:
┌───────────────────────────────────────────────────────────┐
│ │
│ Your Browser │
│ (Web UI @ :8080) │
│ │
└─────────────────────────────┬─────────────────────────────┘
│
│
┌─────────────────────────────▼─────────────────────────────┐
│ │
│ Service │
│ (Backend + Database + Operation Manager) │
│ │
└─────────────────────────────┬─────────────────────────────┘
│
│ RabbitMQ
│
┌─────────────────────┴─────────────────────┐
│ │
│ │
┌───────▼───────┐ ┌─────────▼─────────┐
│ │ │ │
│ Node │ │ Node │
│ (Target #1) │ │ (Target #2) │
│ │ │ │
└───────────────┘ └───────────────────┘
Node runs on target systems. It discovers agents, intercepts traffic, handles sessions, and reports back to the service. Nodes are stateless - all the interesting data lives on the service.
Service is the central backend. It stores operation definitions, chain workflows, intercepted traffic, and recon results. It also runs the semantic operations manager that orchestrates agent tasks.
Web is the React frontend that talks to the service over WebSocket. It provides the UI for everything - selecting nodes, viewing agents, running operations, building chains.
Early Release Notice
This is an early release to showcase initial capabilities. It is not yet ready for full-scale red teaming or production use - although you can certainly experiment to your heart's content.
The platform is under active development:
- Some features are incomplete or experimental
- The codebase is evolving rapidly
- This is not designed to be stealthy - it installs root certificates, modifies system settings, and is generally quite noisy
We're releasing early to get feedback and contributions from the community.
Getting Started
Ready to try it out? Head to the Installation guide.
Installation
There are a few ways to get Praxis running. The one-liner scripts are the easiest for getting started; building from source gives you more control.
Quick Install (One-Liner)
These scripts automatically fetch the latest release and set everything up.
Docker (Recommended)
# Linux/macOS
curl -fsSL https://praxis.originhq.com/docker.sh | bash
# Windows
irm https://praxis.originhq.com/docker.ps1 | iex
This clones the latest release, builds with Docker Compose, and starts everything.
Prerequisites
RabbitMQ must be running before starting Praxis. If you're not using Docker (which includes RabbitMQ), install and start it separately:
# Linux
sudo systemctl start rabbitmq-server
# macOS (Homebrew)
brew services start rabbitmq
Arch Linux (AUR)
yay -S praxis
Or with makepkg:
git clone https://aur.archlinux.org/praxis.git
cd praxis
makepkg -si
This installs:
- `/usr/bin/praxis_service`, `/usr/bin/praxis_web`, `/usr/bin/praxis_cli` - binaries
- `/usr/share/praxis/nodes/praxis_node_linux` - node agent for deployment to targets
- Systemd system services (run as a dedicated `praxis` user)
- `/etc/praxis/env` - configuration
After installing:
sudo systemctl enable --now rabbitmq
sudo systemctl enable --now praxis
Native Install (Linux/macOS)
curl -fsSL https://praxis.originhq.com/install.sh | bash
This installs Rust if needed, builds from source, and sets up:
- `~/.praxis/bin/praxis_service` - backend service
- `~/.praxis/bin/praxis_web` - web server + frontend
- `~/.praxis/bin/praxis_cli` - command-line interface
- `~/.praxis/bin/nodes/<platform>/praxis_node` - node agent
- Systemd user services (Linux) for automatic startup
- PATH is configured automatically
Native Install (Windows)
irm https://praxis.originhq.com/install.ps1 | iex
Removing
To uninstall Praxis (stops services, removes binaries, config, and PATH entries):
# Linux/macOS
curl -fsSL https://praxis.originhq.com/install.sh | bash -s -- --remove
# Windows
& ([scriptblock]::Create((irm https://praxis.originhq.com/install.ps1))) --remove
Pinning a Specific Version
To install a specific version instead of latest:
# Docker (Linux/macOS)
PRAXIS_VERSION=v0.1.0 curl -fsSL https://praxis.originhq.com/docker.sh | bash
# Native (Linux/macOS)
PRAXIS_VERSION=v0.1.0 curl -fsSL https://praxis.originhq.com/install.sh | bash
# Docker (Windows)
$env:PRAXIS_VERSION = "v0.1.0"; irm https://praxis.originhq.com/docker.ps1 | iex
# Native (Windows)
$env:PRAXIS_VERSION = "v0.1.0"; irm https://praxis.originhq.com/install.ps1 | iex
Manual Docker Setup
If you prefer to clone and run Docker manually:
git clone https://github.com/originsec/praxis.git
cd praxis
docker compose up --build
This starts:
- Praxis (service + web) on port 8080
- RabbitMQ on ports 5672 (AMQP) and 15672 (management UI)
- MCP server on port 8585 (when enabled in Settings > MCP Server)
- Claude Bridge CCRv1 on port 8586 (when enabled in Settings > Claude Bridge)
- Claude Bridge CCRv2 on port 8587 (when enabled in Settings > Claude Bridge)
Open http://localhost:8080 and you're in.
To run without the web UI (headless mode for CLI-only usage):
PRAXIS_HEADLESS=1 docker compose up --build
Getting the CLI from Docker
The CLI binary is built into the Docker image and copied to the data volume on startup. Extract it with:
docker cp $(docker compose ps -q praxis):/app/praxis_cli ./praxis_cli
chmod +x ./praxis_cli
./praxis_cli
Note: Run this from the directory containing your
docker-compose.yml. The container name varies by project directory.
To add a macOS node binary to Docker downloads, provide it explicitly (optional):
# Build macOS node binary on macOS
cargo build --release -p praxis_node
# Put it in a local directory
mkdir -p ~/.praxis/bin/nodes/macos-arm64
cp target/release/praxis_node ~/.praxis/bin/nodes/macos-arm64/praxis_node
Then mount it and enable multi-directory lookup:
# docker-compose.override.yml
services:
praxis:
environment:
PRAXIS_NODES_DIRS: /app/nodes,/app/nodes-host
volumes:
- ~/.praxis/bin/nodes:/app/nodes-host:ro
praxis-postgres:
environment:
PRAXIS_NODES_DIRS: /app/nodes,/app/nodes-host
volumes:
- ~/.praxis/bin/nodes:/app/nodes-host:ro
This keeps Linux/Windows defaults unchanged while adding macOS as an opt-in download.
The RabbitMQ management UI at http://localhost:15672 uses credentials praxis/praxis.
Useful Docker Commands
# Run in background
docker compose up -d
# View logs
docker compose logs -f
# Stop everything
docker compose down
# Rebuild after code changes
docker compose up --build
Building from Source
If you want to build natively or contribute to development:
Prerequisites
- Rust 1.75+ (install via rustup)
- Node.js 18+ (for the web frontend)
- Docker (for RabbitMQ, or install it separately)
Build Steps
# Clone the repo
git clone https://github.com/originsec/praxis.git
cd praxis
# Build everything
cargo build --release
This produces four binaries in target/release/:
- `praxis_service` - the backend service
- `praxis_web` - the HTTP/WebSocket server + frontend
- `praxis_node` - the node agent
- `praxis_cli` - the command-line interface
Running
You'll need RabbitMQ running first:
docker run -d --name rabbitmq \
-p 5672:5672 -p 15672:15672 \
-e RABBITMQ_DEFAULT_USER=praxis \
-e RABBITMQ_DEFAULT_PASS=praxis \
rabbitmq:3-management
Then start the service and web components:
./target/release/praxis_service &
./target/release/praxis_web &
If you used the install script on Linux, the service and web components are managed via systemd user services:
# Start/stop
systemctl --user start praxis
systemctl --user stop praxis
# Check status
systemctl --user status praxis
# View logs
journalctl --user -u praxis-service
journalctl --user -u praxis-web
Praxis starts automatically on login. Edit ~/.config/praxis/env to configure the RabbitMQ URL and other environment variables.
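As a starting point, a minimal env file might contain only the RabbitMQ URL (the variable name comes from the Environment Variables section; the value shown is the documented default):

```shell
# ~/.config/praxis/env — example; adjust credentials and host for your deployment
PRAXIS_RABBITMQ_URL=amqp://praxis:praxis@localhost:5672
```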
Getting Node Binaries
Nodes need to run on target systems. You have a few options:
From the Web UI
If you're using Docker, precompiled node binaries are bundled with the image. Go to Settings → Service and download the Linux or Windows binary.
From GitHub Releases
Each tagged release publishes node binaries for Linux and Windows:
- Latest Release
- `praxis_node-linux-x86_64` - Linux binary
- `praxis_node-windows-x86_64.exe` - Windows binary
- `praxis_node-macos-arm64` - macOS (Apple Silicon) binary
Building Yourself
# Linux (native)
cargo build --release -p praxis_node
# macOS (Apple Silicon, native)
cargo build --release -p praxis_node
# Windows (cross-compile from Linux)
# Requires: rustup target add x86_64-pc-windows-gnu
# Requires: mingw-w64 toolchain
cargo build --release -p praxis_node --target x86_64-pc-windows-gnu
Running Nodes
Once you have a node binary, run it on the target system:
# Linux
chmod +x praxis_node
./praxis_node
# Windows
praxis_node.exe
By default, nodes connect to RabbitMQ at localhost:5672. To connect to a remote service:
# Linux
PRAXIS_RABBITMQ_URL=amqp://praxis:praxis@your-server:5672 ./praxis_node
# Windows (PowerShell)
$env:PRAXIS_RABBITMQ_URL = "amqp://praxis:praxis@your-server:5672"
.\praxis_node.exe
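The URL follows the standard AMQP scheme. A small sketch of how the pieces fit together (credentials are the documented defaults; the host is a placeholder):

```shell
# Compose an AMQP URL from its parts: amqp://<user>:<pass>@<host>:<port>
user=praxis
pass=praxis
host=your-server
port=5672
url="amqp://${user}:${pass}@${host}:${port}"
echo "$url"   # amqp://praxis:praxis@your-server:5672
```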
Version Compatibility
Nodes must match the service version. The RabbitMQ message format can change between versions, so a v0.2 node talking to a v0.1 service might not work correctly.
If you're getting strange errors or nodes aren't showing up, check that versions match.
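A quick sanity check before deploying is to compare the two version strings directly (the values here are placeholders; in practice read them from your actual builds):

```shell
# Placeholder versions — substitute the real service and node build versions
service_ver="v0.1.0"
node_ver="v0.1.0"
if [ "$service_ver" = "$node_ver" ]; then
  echo "versions match"
else
  echo "version mismatch: service=$service_ver node=$node_ver"
fi
```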
Next Steps
Once you have the service running and at least one node connected:
- Configure LLM providers - needed for semantic features
- Walk through the Quick Start - see the basic workflow
Configuration
Praxis uses LLMs for several features: semantic operations, tool discovery during recon, and traffic summarization. You'll need to configure at least one provider to use these capabilities.
LLM Providers
Go to Settings → LLM Providers in the web UI.
Adding a Model
- Click Add Model
- Select a Provider
- Enter your API Key (optional for local providers — Ollama and Custom)
- For Custom, and optionally for Ollama, set a Base URL
- Click the refresh button to pull available models from the provider (not supported by all providers), or enter the model name manually
- Click Save
Supported Providers
Anthropic, OpenAI, Google (Gemini), Groq, Cerebras, Mistral, xAI, NVIDIA, MiniMax, Moonshot, Fireworks AI, OpenRouter, Ollama (local), Custom (OpenAI-compatible).
Local Model Providers
Two providers are designed for local or self-hosted inference:
Ollama — defaults to http://localhost:11434/v1, so a stock Ollama install needs no extra setup. The API key is optional. Model discovery uses Ollama's native /api/tags endpoint, so the refresh button works even though Ollama only exposes an OpenAI-compatible API for inference. Override the base URL on the model definition if Ollama is listening elsewhere.
Custom (OpenAI-Compatible) — for vLLM, llama.cpp, LM Studio,
Text-Generation-Inference, or any endpoint that implements
/v1/chat/completions. You must set a base URL on the model definition;
API key is optional. Model discovery probes /models on the configured
base URL.
Feature Assignment
Once you've added models, assign them to features:
Semantic Operations - Used when executing operations through agents. This is the "brain" that orchestrates what the agent should do. Pick something capable.
Semantic Parser - Used during semantic recon to extract tool definitions from config files. Speed matters here since it runs multiple times; a fast model like Haiku or GPT-4o-mini works well.
Traffic Parser - Summarizes intercepted traffic. Again, speed is valuable; you don't need the most powerful model.
Speed vs. Capability
For parser features (Semantic Parser, Traffic Parser), we recommend providers with fast inference:
- Cerebras and Groq have very fast time-to-first-token and overall throughput
- This matters when you're running recon across multiple agents or parsing lots of traffic
For Semantic Operations, capability matters more than raw speed. Use a model that's good at reasoning and tool use.
Environment Variables
Most configuration is done through the web UI, but some things are set via environment variables:
Service
| Variable | Default | Description |
|---|---|---|
| `PRAXIS_DATABASE_URL` | SQLite in home dir | Database connection string |
| `PRAXIS_RABBITMQ_URL` | `amqp://praxis:praxis@localhost:5672` | RabbitMQ URL |
Node
| Variable | Default | Description |
|---|---|---|
| `PRAXIS_RABBITMQ_URL` | `amqp://praxis:praxis@localhost:5672` | RabbitMQ URL |
Database
By default, Praxis uses SQLite stored at ~/.praxis_operations.db. For PostgreSQL and production deployments, see Database Configuration.
Model Reference Format
When specifying models in operations or chains, use the format:
provider::model
For example:
- `anthropic::claude-sonnet-4-20250514`
- `openai::gpt-4o`
- `groq::llama-3.3-70b-versatile`
This lets you override the default model for specific operations that might need more (or less) capability.
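The reference splits cleanly on the first `::`, which is easy to verify in shell (the reference value below is one of the examples above):

```shell
# Split a provider::model reference into its two halves
ref="groq::llama-3.3-70b-versatile"
provider="${ref%%::*}"   # everything before the first '::'
model="${ref#*::}"       # everything after the first '::'
echo "provider=$provider model=$model"
```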
Next Steps
With LLMs configured, you're ready to:
- Run through the quick start
- Enable semantic recon for deeper tool discovery
- Execute semantic operations
Quick Start
Let's walk through the basic workflow: connecting a node, discovering an agent, running recon, and executing an operation.
Prerequisites
You should have:
- Praxis service running (via Docker or native build)
- At least one LLM configured (see Configuration)
- A node running on a system with an AI agent installed
Step 1: Check Your Node
Open the web UI at http://localhost:8080. You should see your node in the left sidebar under the node list.
Click on it to select it. The main panel shows:
- Machine name and OS details
- Detected agents - which AI assistants were found
- Status of interception, sessions, etc.
If no agents show up, make sure the target system actually has Claude Code, Codex CLI, Gemini CLI, or another supported agent installed and configured.
Step 2: Select an Agent
From the agent list, click on one to select it. This focuses all operations on that specific agent.
The agent panel shows:
- Name and type
- Session status - whether there's an active session
- Recon data - if you've run reconnaissance
Step 3: Run Reconnaissance
Click the Recon button (or go to the Recon tab). This performs static reconnaissance:
- Discovers MCP servers and other tool integrations
- Lists configuration files and their contents
- Shows session history - past conversations and their locations
- Enumerates project paths where the agent has been used
The results appear in the Recon panel, organized by category.
Semantic Recon
For deeper discovery, click the Discover button to run semantic recon (requires an LLM configured for "Semantic Parser"). This uses the LLM to parse configuration files and extract tool definitions that might not be obvious from static analysis. It also creates sessions and communicates directly with the agent to discover its full capabilities, so it takes longer than static recon.
Step 4: Look Around
With recon data, you can:
View configuration files - Click on any config file to see its contents. Some files can be edited directly (like Claude's config.json or MCP server definitions).
Browse sessions - See what conversations the agent has had, which projects it's worked on.
Check tools - See what MCP servers, skills, or plugins are available to the agent.
Step 5: Create a Session
Click Create Session to start an interactive session with the agent. This spawns the agent process in a controlled context where you can send prompts and receive responses.
Working Directory - You can specify where the agent should operate. This affects what files it can see and work with.
YOLO Mode - When enabled, the agent auto-approves all tool calls without asking for confirmation. Use this for automation, but be careful: it will execute whatever the agent decides to run.
Once the session is created, you can send prompts directly from the Sessions panel.
Step 6: Run an Operation
Operations are predefined tasks you can execute through agents. The library starts empty, so let's create a simple one first.
Create Your First Operation
- Go to Operations → Library
- Click New Operation
- Fill in:
  - Name: `hello-world`
  - Category: `test`
  - Description: `A simple test operation`
  - Prompt: `Say hello and tell me what directory you're currently in.`
  - Mode: `one-shot`
  - Timeout: `60`
- Click Save
Run It
- Go to Operations → Runs
- Click Run Operation
- Select your node and agent
- Choose `test::hello-world` from the dropdown
- Click Run
The operation executes through your agent. Watch the output in real-time in the Runs tab - you'll see the agent's response appear as it completes.
Operation Modes
- One-shot - sends the prompt directly to the agent and returns the response
- Agent - uses an orchestrating LLM to run multi-turn interactions with the target agent (useful for complex tasks)
For more complex workflows, you can chain multiple operations together with the visual chain builder. See Semantic Operations for details.
Step 7: Enable Interception (Optional)
To see the traffic between the agent and its LLM backend:
- Go to Intercept
- Select your node
- Choose a method:
- Proxy - configures system proxy settings
- VPN - uses a TUN adapter for packet-level routing
- Hosts - modifies the hosts file
- Click Enable
Traffic appears in the Traffic tab. You can see:
- Full request/response bodies
- Prompts and completions
- Tool calls and results
See Interception for details on each method.
What's Next?
- Configure LLM providers for semantic features
- Learn about agent connectors and their capabilities
- Set up traffic interception in detail
- Build operation chains for automation
Nodes & Agents
Understanding how Praxis organizes nodes and agents is key to using the platform effectively.
Nodes
A node represents a system running the Praxis node binary. When you deploy a node to a target machine, it:
- Connects to RabbitMQ
- Registers with the service
- Fingerprints installed AI agents
- Begins listening for commands
Node Identity
Each node gets a unique ID generated on first run. This ID persists across restarts, so the service recognizes when a node reconnects.
The node also reports:
- Machine name - hostname of the system
- OS details - operating system and version
- Agent list - discovered AI agents
- Privileged status - whether the node is running as root/admin
Superuser Mode
When the node runs as root, it can operate as different users based on the selected working directory. Selecting a working directory owned by another user will cause agent sessions to run as that user (with the appropriate HOME environment variable set).
Note: Full superuser support is still under development. Users may notice unexpected behaviour when running sessions as different users from a root node. If you encounter issues, try running the node as the target user directly instead.
Privileged Status
Each node reports whether it is running with elevated privileges. On Linux/macOS this means running as root (UID 0); on Windows this means running as an elevated administrator.
Privileged nodes display a ROOT badge in the web UI and CLI. Some features — particularly interception methods that modify system-level configuration (VPN, Hosts, TPROXY) — require elevated privileges. The web UI will disable the intercept Enable button on non-privileged nodes.
Node List
In the web UI, the left sidebar shows all connected nodes. Click a node to select it. The main panel then shows that node's details and agents.
Bridge Nodes
In addition to deployed nodes, Praxis supports bridge nodes -- virtual nodes created when Claude Code connects directly to the service using the Claude Bridge. Bridge nodes appear in the UI alongside regular nodes but have some differences:
- They only support sessions (no interception, recon, or terminal)
- They are ephemeral -- they disappear when Claude disconnects
- Sessions are automatically active in YOLO mode
- The node type shows as `claude-ccrv1` or `claude-ccrv2`
Bridge nodes are created by enabling the Claude Bridge in Settings and launching Claude Code with the appropriate environment variables. See Claude Bridge for setup details.
Removing Nodes
If a node disconnects and you want to remove it from the list, click the remove button. This clears the node from the service's tracking. If the node reconnects, it will appear again.
Resetting Nodes
You can reset a node to cancel all in-flight operations and return it to a clean state. Reset will:
- Cancel all running transactions (prompts, recon, etc.)
- Drop every live ACP session and its per-session Lua VM
- Close any terminal session
- Disable interception and restore system settings
- Re-register the node with the service
Use the reset button (↻) in the node card header, the CLI command node reset <id>, or the MCP tool node_reset. The node briefly goes offline during reset and comes back with fresh state. Clients drop their local entries for the reset node immediately and re-pull session/list after a short grace period so the Active Sessions overlay reflects reality.
Agents
Agents are the AI assistants detected on each node. When a node fingerprints successfully, you'll see agents like:
- Claude Code - Anthropic's CLI assistant
- Claude Desktop - Anthropic's desktop app (Windows only)
- Codex CLI - OpenAI's CLI assistant
- Cursor Agent - Cursor's background agent CLI (Linux only)
- Gemini CLI - Google's CLI assistant
- M365 Copilot - Microsoft 365 Copilot (Windows only)
Agent Selection
Click an agent to focus operations on it — recon targets that agent,
actions in the agent's card (config read/write, session create) route to
that agent. A node can host concurrent sessions across any combination
of its agents; the focus is purely a UI convenience, not a routing
constraint. Recon is agent-scoped (_praxis/recon is called with the
agent's short_name), and each session explicitly names its connector
via _meta.praxis.connector on session/new.
Agent States
Fingerprinted — the agent was detected but no session is open.
Session Active — one or more live sessions exist. The card shows a
LIVE indicator and, when applicable, a YOLO tag for auto-approve
sessions. The Sessions panel lists each live session with resume /
discard controls.
Working with Nodes and Agents
Typical Workflow
- Deploy node to target system
- Select node in the UI
- Check agents that were fingerprinted
- Select an agent to work with
- Run recon to see what the agent knows
- Create session for interactive use
Multiple Nodes
When you have multiple nodes:
- Each node appears in the sidebar
- Select one to work with it
- Operations target the selected node/agent
- Traffic interception is per-node
Refreshing
The service periodically requests updates from nodes. You can also:
- Click refresh to update a specific node
- Trigger re-fingerprinting if agents changed
Agent Capabilities
Different agents support different features:
| Feature | Claude Code | Claude Bridge | Claude Desktop | Codex | Cursor | Gemini | M365 Copilot |
|---|---|---|---|---|---|---|---|
| Static Recon | ✓ | - | ✓ | ✓ | ✓ | ✓ | ✓ |
| Semantic Recon | ✓ | - | ✓ | ✓ | ✓ | ✓ | ✓ |
| Sessions | ✓ | ✓ | ✓ | ✓ | ✓ (ACP) | ✓ (ACP) | ✓ |
| Config Editing | ✓ | - | ✓ | ✓ | ✓ | ✓ | - |
| MCP Discovery | ✓ | - | ✓ | ✓ | - | ✓ | - |
| Traffic Intercept | ✓ | - | ✓ | - | ✓ | ✓ | ✓ |
Troubleshooting
Node not appearing
- Check RabbitMQ connection from the node
- Verify PRAXIS_RABBITMQ_URL is correct
- Look at node logs for errors
Agent not fingerprinted
- Ensure the agent is installed and configured
- Check that config files exist in expected locations
- Verify the agent binary is in PATH
Agent disappeared
- The agent may have been uninstalled
- Config files may have moved
- Try refreshing the node
Can't select agent
- Ensure the node is connected
- Check that fingerprinting succeeded
- Look for errors in the node logs
Reconnaissance
Reconnaissance discovers what an AI agent can do: its tools, configuration, and history. This is your window into understanding an agent's capabilities before interacting with it.
Running Recon
With an agent selected:
- Click Recon in the agent panel
- Static recon runs immediately
- Results appear organized by category
For deeper discovery, click Semantic Recon (requires Semantic Parser LLM configured).
What Recon Discovers
Tools
Tools are the capabilities available to the agent. This includes MCP servers (external tool integrations), internal/built-in tools (like file operations, command execution, web browsing), and any extensions or plugins the agent supports. Recon discovers what tools are available, how they're configured, and what parameters they accept.
Configuration
Config files reveal how the agent is set up. This includes settings files (model preferences, permissions, API configurations), tool/server definitions, and instruction files like CLAUDE.md or similar that influence agent behavior. Recon identifies these files and makes their contents viewable and often editable.
Sessions
Session history shows past conversations. Recon discovers session files containing conversation transcripts, project contexts, and timestamps. It also identifies project paths where the agent has been used, giving you visibility into recent activity and what the user has been working on.
Static vs Semantic Recon
Static Recon
Fast discovery based on file parsing:
- Reads known config file locations
- Parses JSON/YAML configurations
- Lists files and directories
- No LLM required
Best for: Quick overview, checking configuration
Semantic Recon
Click the Discover button to run semantic recon. This performs deeper analysis using an LLM:
- Parses complex configurations
- Extracts tool definitions from text
- Identifies capabilities from session transcripts
- Creates sessions and communicates directly with the agent
- Understands context
This takes longer than static recon because it actually interacts with the agent to discover its full capabilities.
Best for: Full capability discovery, understanding what tools do
Semantic recon requires the Semantic Parser LLM to be configured. Choose a model that balances speed and capability - multiple parsing calls may be made so fast inference helps, but the model also needs to be capable enough to extract meaningful information from complex configurations.
Querying Stored Recon Data
After running recon, the results are stored in the service database. You can query specific sections without re-running recon:
MCP tools:
- `recon_list` - list stored recon data (section: all/sessions/tools/projects/configs)
- `recon_config_read` - read config file content
- `recon_session_read` - read session file content
- `recon_config_grep` - grep config files with regex
- `recon_session_grep` - grep session files with regex
These are useful for quick lookups and for AI agents that need to browse specific recon data without triggering a full scan.
Using Recon Data
View Config Files
Click any config file to see its contents. The viewer shows:
- File path
- Full contents
- Syntax highlighting (JSON, YAML)
Edit Configurations
Some configurations can be edited directly (like Claude's config.json or MCP server definitions):
- Click on a config file
- Make changes in the editor
- Click Save
- Changes are written to disk on the target
This is useful for exploring the offensive impact of configuration changes - adding MCP servers, modifying permissions, changing model settings, or injecting tool configurations.
Caution: Editing configs can break the agent if done incorrectly. The changes persist until the user or agent modifies them again.
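As an illustration, MCP server entries in Claude-style configs generally follow this shape (the server name, command, path, and env var here are hypothetical, not part of any real deployment):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "node",
      "args": ["/opt/tools/server.js"],
      "env": { "EXAMPLE_TOKEN": "redacted" }
    }
  }
}
```

Adding an entry like this to an agent's config file makes the agent attempt to launch the named command as an MCP server on its next session, which is the mechanism behind the injection scenarios mentioned above.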
View Session History
Click on a session to see the conversation:
- Full transcript with prompts and responses
- Tool calls and results
- Timestamps
This reveals:
- What projects the user worked on
- What questions they asked
- What files were accessed
- Sensitive information mentioned
Tool Discovery Details
MCP Servers
MCP (Model Context Protocol) servers extend agent capabilities. Recon discovers server definitions including stdio commands and arguments, SSE endpoints, and environment variables. It also attempts to connect to each MCP server to pull out the actual tools it provides - giving you visibility into what external capabilities the agent has access to and potential attack surface.
Note that if an MCP server requires specific authentication or environment setup, the tool discovery connection may fail. Praxis does its best to replicate the agent's environment but some servers may not respond.
Internal Tools
Semantic recon discovers built-in agent tools by creating a session and asking the agent directly about its capabilities. The response is then passed through the semantic parser to extract structured tool definitions.
This approach has some pitfalls: the agent may refuse to disclose its tools, provide incomplete information, or the parser may fail to extract tools from the response. The prompt used to ask the agent is defined in the agent connector code and can be customized if needed for better results with specific agents.
Understanding available tools helps you craft effective prompts for operations.
Best Practices
Start with Static
Run static recon first; it's fast and gives you the lay of the land. Then run semantic recon for deeper understanding.
Check Session History
Session history often contains valuable information:
- API keys mentioned in prompts
- File paths discussed
- Security-relevant conversations
Note Interesting Tools
Pay attention to powerful tools:
- Database access
- File system access
- Network capabilities
- Code execution
These are your leverage points for operations.
Compare Before/After
After modifying configs, run recon again to verify changes took effect.
Troubleshooting
No recon data
- Ensure agent is fingerprinted
- Check that config files exist
- Verify node has read permissions
Semantic recon fails
- Check Semantic Parser LLM is configured
- Verify API key is valid
- Look for errors in service logs
Missing MCP servers
- Some agents don't use MCP
- Try semantic recon for deeper discovery
Sessions
Sessions let you interact with AI agents in real-time. When you create a session, Praxis spawns the agent process on the target node and gives you a direct communication channel.
Creating a Session
From the agent detail page:
- Click Create Session
- Optionally enable YOLO Mode
- Wait for the session to initialize
The agent process starts on the target node with a PTY attached. You'll see a session indicator once it's ready.
Session Interface
The session panel shows a conversation view:
- Your messages appear on one side
- Agent responses appear on the other
- Responses are rendered as markdown with syntax highlighting
Type in the input field and press Enter to send a prompt.
YOLO Mode
By default, agents require confirmation before executing potentially dangerous actions. YOLO mode auto-approves everything:
- File operations proceed without confirmation
- Commands execute immediately
- Tool calls run automatically
Use YOLO mode when you want uninterrupted operation execution. Be aware that this removes safety guardrails: the agent will do whatever you ask without asking first.
Session Context
Sessions can be created with context:
Working Directory - The directory where the agent operates. This affects file paths and command execution. When running semantic operations or chains from an agent with an active session, the session's working directory is used.
Prompt Timeout - Maximum time in seconds a single prompt can run before the agent process is killed. Defaults to the service-wide prompt_timeout_secs setting (600 seconds). Can be overridden per-session using the --timeout (-T) flag in the CLI.
Session ID - A unique identifier for tracking the session. Used internally for message routing.
What Happens During a Session
Clients (CLI, web, external ACP tools) never talk to the node directly.
Each prompt is an Agent Client Protocol
(ACP) JSON-RPC frame that travels CLI/Web → RabbitMQ → service → RabbitMQ
→ node. The node runs a single ACP server that multiplexes all its
connectors; the target connector is selected per-session via
_meta.praxis.connector on session/new, and subsequent frames for the
returned sessionId are routed by the service proxy automatically.
When you send a prompt:
- `session/prompt` is forwarded to the node that owns the session
- The node's per-session Lua VM handles the prompt — invoking the connector's PTY (`claude-code`, `codex`, `m365-copilot`) or the connector's embedded ACP subprocess (`cursor`, `gemini`)
- Streaming updates (`session/update` notifications) flow back as the agent generates text, calls tools, and builds plans
- The final `session/prompt` response carries a `stopReason` (`end_turn` or `cancelled`)
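The prompt flow above can be sketched as JSON-RPC frames. This is a hedged illustration: the method names and `_meta.praxis.connector` come from the text, while other field names (such as the `prompt` parameter and the placeholder `sessionId`) are assumptions.

```python
import json

# Hypothetical ACP frames; only session/new, session/prompt, and
# _meta.praxis.connector are documented above -- other fields are illustrative.
new_session = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "session/new",
    "params": {"_meta": {"praxis": {"connector": "claude-code"}}},
}
prompt = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "session/prompt",
    "params": {"sessionId": "<returned by session/new>", "prompt": "list files"},
}

# Frames travel the bus as JSON; a round-trip preserves the structure
frame = json.dumps(prompt)
decoded = json.loads(frame)
print(decoded["method"])
```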
Streaming Sessions (ACP)
All sessions are wrapped in ACP externally, but for agents that natively
speak ACP inside the node (currently Cursor and Gemini) you also get
typed streaming updates end-to-end. Regardless of the underlying
transport, session/update notifications relay:
- Text chunks — incremental output as the agent generates its response
- Tool calls — tool name and input displayed as the agent invokes tools
- Tool results — output from each tool call (with error highlighting)
- Plans — the agent's execution plan with step status tracking
- Permission requests — when the agent needs approval for an action (interactive sessions only)
- Token usage — prompt/completion token counts updated in real time
Cancellation goes through session/cancel (a JSON-RPC notification, no
response) — Ctrl+C in the TUI or the Cancel button in the web UI sends
it. The in-flight session/prompt then resolves with
stopReason: "cancelled" and any partial output is preserved in the
conversation history.
Session IDs
Sessions created on a node (via the node's ACP server) are raw UUIDs. Sessions hosted directly on the service — the orchestrator, MCP-driven sessions, and external ACP bridges — are prefixed by caller type so a client can filter the orchestrator session list to its own entries:
- `CLI_` — created by the TUI's orchestrator
- `WEB_` — created by the web UI's orchestrator
- `ACP_` — created by an external ACP client
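Filtering a session list down to one caller's entries is then a simple prefix check. A minimal sketch (the example IDs are made up):

```python
def own_sessions(session_ids, caller="CLI_"):
    # Raw-UUID sessions live on nodes; prefixed ones are service-hosted,
    # so a caller can keep only the entries it created.
    return [s for s in session_ids if s.startswith(caller)]

ids = ["CLI_abc", "WEB_def", "ACP_ghi", "5f2c0d1e"]  # illustrative IDs
print(own_sessions(ids))
```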
Session Messages
The UI tracks messages per session:
- Messages persist while the session is active
- Conversation history shows the full exchange
- You can export the session transcript
Ending a Session
Click Close Session (web), or use Ctrl+C / d on the sessions list
(TUI) to terminate. This sends session/close to the node, which drops
the per-session Lua VM and any owned subprocess. Only the targeted
session is affected — any other live sessions on the same connector keep
running.
Sessions and Operations
Semantic operations always create their own dedicated session. When an
operation runs it calls session/new, executes, and then closes. Because
each ACP session owns its own Lua VM (and, where applicable, its own ACP
subprocess or PTY), operations run concurrently with interactive sessions
on the same agent without interfering.
Bridge Sessions
When Claude Code connects to Praxis via the Claude Bridge, a session is created automatically as part of the connection. Bridge sessions differ from regular sessions:
- The session starts immediately when Claude connects (no manual creation needed)
- Permissions are always bypassed (YOLO mode) since the bridge sets `bypassPermissions` during handshake
- Only one prompt can be in-flight at a time
- Closing the session sends an `end_session` request to Claude and terminates the connection
- The virtual node is deregistered when the session ends
Bridge sessions are otherwise used the same way: you can send prompts, run operations, and include them in chains.
Multiple Sessions
A single node can host any number of concurrent ACP sessions across any
combination of connectors. Each session/new returns a fresh sessionId,
and every session gets its own isolated per-session Lua VM built from
bytecode compiled once at connector-load time, so there is no global
state shared between sessions even when they target the same connector.
Listing and resuming
The clients refresh their view of live sessions by calling session/list
on each connected node. The CLI does this on first connect, when you
open the Nodes window (Ctrl+L), and ~1.5s after a node reset; the web
UI does it when a node card mounts and again whenever the node reports a
new last_update. Any server-side sessions the client hadn't yet seen —
for example a session left alive across a CLI restart — are merged into
the local sessions list and become resumable.
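The merge step described above can be sketched as follows; the record shape (`sessionId`, `resumable`) is an assumption for illustration, not the actual client data model.

```python
def merge_sessions(local, server):
    """Merge sessions reported by session/list into the local view."""
    known = {s["sessionId"] for s in local}
    merged = list(local)
    for s in server:
        if s["sessionId"] not in known:
            # Session survived a client restart: surface it as resumable
            merged.append({**s, "resumable": True})
    return merged

local = [{"sessionId": "a1"}]
server = [{"sessionId": "a1"}, {"sessionId": "b2"}]
print(merge_sessions(local, server))
```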
In the TUI
Ctrl+W in the Nodes window toggles the Active Sessions overlay. It
lists every live session with node, agent, session id preview, status
(idle / working), and how long ago it was created.
- `Enter` resumes the selected session
- `d` or `Del` discards (sends `session/cancel` if the session is mid-prompt, then `session/close`)
- `Esc` or `Ctrl+W` dismisses the overlay
Inside a chat view, Esc or Ctrl+W pauses the session (hides the
chat; the session stays alive on the node and can be resumed from the
overlay). Ctrl+C cancels the in-flight prompt when the agent is
working, and closes the session when the agent is idle. The status bar
shows an N sessions counter when any concurrent sessions are live.
In the web UI
Each node card has a Sessions panel listing every ACP session the web
client knows about for that node. Hover actions let you resume (open the
agent modal) or discard (send session/close) a session. Multiple agent
session modals can be open side-by-side on the same node card — one per
connector — so you can drive Claude Code, Codex, and Cursor sessions in
parallel from a single node.
Troubleshooting
Session won't create
- Check the agent binary exists on the node
- Verify the node is connected
- Look at node logs for spawn errors
Messages not appearing
- Ensure the session is active (check the indicator)
- Try refreshing the page
- Check WebSocket connection status
Session hangs
- The agent may be waiting for input
- Check if YOLO mode should be enabled
- Try sending a simpler prompt
Unexpected responses
- Remember the agent has full system access
- Previous conversation context affects responses
- Try closing and creating a fresh session
Terminal
The terminal feature gives you direct shell access to nodes. This is a full PTY terminal - a separate shell on the target system.
Opening a Terminal
From a node:
- Click the Terminal button
- A terminal panel opens
- You have a shell on that node
The terminal uses xterm.js for rendering, so you get proper terminal emulation with colors, cursor movement, and escape sequences.
What You Can Do
This is a real shell. You can:
- Run commands on the target system
- Navigate the filesystem
- View and edit files
- Run scripts
- Check system status
The shell runs as the same user that runs the Praxis node.
Terminal vs Agent Session
These are different things:
| Terminal | Agent Session |
|---|---|
| Direct shell access | AI agent interaction |
| Raw commands | Natural language prompts |
| System-level | Agent-level |
| No AI involved | AI processes requests |
Use the terminal for direct system work. Use sessions for agent interaction.
Use Cases
Debugging - Check logs, inspect files, verify the node is working correctly.
Preparation - Set up environments, install dependencies, configure the system before running operations.
Manual Operations - Sometimes you just need a shell. The terminal is there when you need it.
Verification - After an operation runs, verify the results directly.
Terminal Persistence
The terminal session persists while you have the panel open. Closing the panel ends the shell session. There's no background persistence; this is an interactive terminal.
Limitations
- One terminal per node at a time
- Runs as the node's user
- Subject to the node's environment and permissions
Troubleshooting
Terminal won't connect
- Verify the node is online
- Check RabbitMQ connectivity
- Look at node logs
Commands not working
- Check the node's environment
- Verify PATH settings
- Ensure required tools are installed
Display issues
- Terminal size may need adjustment
- Some applications may not render correctly
- Try simpler commands to verify basic function
Interception
Traffic interception lets you see the communication between AI agents and their LLM backends. You can watch prompts being sent, responses coming back, and tool calls being made.
How It Works
┌─────────┐ ┌─────────────┐ ┌─────────────┐
│ Agent │──HTTPS──│ Praxis │──HTTPS──│ LLM API │
│ │ │ Proxy │ │ │
└─────────┘ └──────┬──────┘ └─────────────┘
│
▼
┌─────────────┐
│ Captured │
│ Traffic │
└─────────────┘
Praxis acts as a man-in-the-middle:
- Installs a root CA certificate
- Generates certificates for target domains
- Terminates TLS and captures traffic
- Re-encrypts and forwards to the real destination
Interception Methods
Praxis supports four methods for routing traffic through the proxy. Each has tradeoffs.
Proxy Mode
How it works: Configures system proxy settings so applications route HTTP/HTTPS through the Praxis proxy.
Setup:
- Linux: Sets `HTTP_PROXY` and `HTTPS_PROXY` environment variables
- Windows: Modifies registry proxy settings
Advantages:
- Easiest to set up
- Works without elevated privileges
- Minimal system changes
Disadvantages:
- Only captures HTTP/HTTPS
- Some applications ignore proxy settings
- May conflict with existing proxy configuration
Best for: Quick setup, applications that respect proxy settings
VPN Mode
How it works: Creates a TUN network adapter and routes specific IPs through it at the packet level.
Platform support: Windows only. For Linux, use TPROXY mode instead (more efficient, no userspace packet processing).
Setup:
- TUN device created (wintun on Windows)
- Intercept domains resolved to IP addresses
- Routes added for those IPs through the TUN
- Packet engine performs NAT to redirect to proxy
- Proxy connects to real server, bypassing TUN via interface binding
Internal details:
- TUN uses IP 10.255.0.1, virtual client uses 10.255.0.100
- Packet engine maintains NAT table mapping client connections to proxy
- Proxy bypasses TUN by binding to the real network interface's IP (not 10.255.0.1)
- Packet engine distinguishes proxy traffic (src != 10.255.0.1) and passes it through
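The NAT bookkeeping described above can be modeled as follows. This is purely illustrative: the real packet engine rewrites raw packets, while this sketch only shows the mapping logic.

```python
# Addresses from the description above
TUN_IP = "10.255.0.1"
CLIENT_IP = "10.255.0.100"
PROXY_PORT = 8080

nat = {}  # (client_ip, client_port) -> original (dst_ip, dst_port)

def redirect_to_proxy(src, dst):
    """Remember the real destination, then point the flow at the proxy."""
    nat[src] = dst
    return (TUN_IP, PROXY_PORT)

def passes_through(src_ip):
    # Proxy sockets bind the real interface IP, so src != TUN_IP is
    # proxy traffic and is forwarded untouched.
    return src_ip != TUN_IP

print(redirect_to_proxy((CLIENT_IP, 50000), ("198.51.100.7", 443)))
```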
Advantages:
- Captures traffic from all applications
- Works even if apps ignore proxy settings
- More comprehensive coverage
Disadvantages:
- Windows only (use TPROXY on Linux)
- Requires elevated privileges (admin)
- More complex setup
Best for: Comprehensive capture on Windows, applications that bypass proxy
Hosts Mode
How it works: Modifies the hosts file to redirect target domains to localhost where the proxy listens.
Setup:
- Adds entries to `/etc/hosts` (Linux) or `C:\Windows\System32\drivers\etc\hosts` (Windows)
- Flushes DNS cache
Advantages:
- Simple mechanism
- Works for static domains
- No packet-level complexity
Disadvantages:
- Requires elevated privileges
- Only works for known domains
- Doesn't handle DNS load balancing
- Applications using custom DNS may bypass
Best for: Simple setups with known domains
TPROXY Mode (Linux only)
How it works: Uses iptables TPROXY to transparently redirect traffic to the proxy at the kernel level.
Setup:
- IPv6 disabled system-wide (restored on cleanup)
- Intercept domains resolved to IP addresses
- iptables mangle rules added to mark packets to target IPs (mark 0x1)
- Policy routing configured to route marked packets to loopback
- TPROXY rule redirects packets to proxy port
- Proxy uses `SO_ORIGINAL_DST` to get real destination
- Proxy's outbound connections marked with bypass mark (0x2) to skip iptables rules
Internal details:
- Uses iptables mangle table with PREROUTING chain
- Bypass rule: `-m mark --mark 0x2 -j RETURN` placed before intercept rules
- Proxy sets `SO_MARK=0x2` on outbound sockets to avoid routing loop
- Policy routing table 100 handles marked packets
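Put together, the rule set above might look like the following commands, assembled here as strings for illustration (the target IP is an example, and the exact flags Praxis uses may differ):

```python
target_ip, proxy_port = "203.0.113.10", 8080  # example values

commands = [
    # Bypass first: packets the proxy marked with SO_MARK=0x2 skip interception
    "iptables -t mangle -A PREROUTING -m mark --mark 0x2 -j RETURN",
    # Mark and redirect TCP traffic for the target IP onto the proxy port
    f"iptables -t mangle -A PREROUTING -p tcp -d {target_ip} "
    f"-j TPROXY --on-port {proxy_port} --tproxy-mark 0x1",
    # Policy routing table 100 delivers marked packets to loopback
    "ip rule add fwmark 0x1 lookup 100",
    "ip route add local 0.0.0.0/0 dev lo table 100",
]
print("\n".join(commands))
```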
Advantages:
- No TUN device or userspace packet processing
- Lower overhead than VPN mode
- Standard Linux networking (works with any kernel supporting TPROXY)
- Works for all TCP traffic to target IPs
Disadvantages:
- Linux only
- Requires elevated privileges (root or `CAP_NET_ADMIN`)
- Modifies iptables rules (may conflict with existing firewall)
- Temporarily disables IPv6 (IPv4 only)
Best for: Linux systems needing efficient kernel-level interception
Privilege Requirements
Most interception methods (VPN, Hosts, TPROXY) require the node to be running with elevated privileges (root on Linux/macOS, administrator on Windows). The Proxy method can work without elevated privileges.
Nodes report their privilege status automatically. In the web UI, the intercept Enable button is disabled on non-privileged nodes — you must restart the node with elevated privileges before enabling interception. Privileged nodes display a ROOT badge in the Nodes window.
Enabling Interception
- Go to Intercept in the web UI
- Select your node (must be running privileged for VPN/Hosts/TPROXY methods)
- Choose a method (Proxy, VPN, Hosts, or TPROXY)
- Click Enable
The node will:
- Create and install a root CA certificate
- Generate leaf certificates for intercept domains
- Start the proxy server
- Configure system based on chosen method
Viewing Traffic
Traffic Tab
The Traffic tab shows captured requests:
| Column | Description |
|---|---|
| Time | When the request occurred |
| Agent | Which agent made the request |
| Method | HTTP method (GET, POST) |
| URL | Full request URL |
| Status | Response status code |
WebSocket traffic is also supported - messages are coalesced into a single row per connection.
HTTP/2 and gRPC traffic is fully supported with frame-level interception.
Request Details
Click a row to see details:
Request:
- Full headers
- Request body (JSON formatted)
- Content type
Response:
- Status code
- Headers
- Response body (JSON formatted)
For LLM APIs, you'll see:
- The prompts being sent
- Tool call requests
- Model responses
- Token usage
Protocol Support
HTTP/1.1
Standard HTTP traffic is fully captured with request/response headers and bodies.
WebSocket
WebSocket connections are detected via HTTP 101 upgrade responses. Individual frames are captured and grouped by connection URL in the UI.
HTTP/2 and gRPC
The proxy provides frame-level HTTP/2 interception for services using HTTP/2 (including gRPC streaming):
Detection: HTTP/2 is detected by the connection preface (PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n)
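The preface check is a simple prefix match on the first bytes of the connection. A minimal sketch (the function name is illustrative; the preface bytes are the standard HTTP/2 client connection preface from RFC 7540):

```python
# The HTTP/2 client connection preface, as described above
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def sniff_http2(initial_bytes: bytes) -> bool:
    """Return True when the connection opens with the HTTP/2 preface."""
    return initial_bytes.startswith(PREFACE)

print(sniff_http2(PREFACE + b"\x00\x00\x00\x04\x00"))  # preface + frame data
print(sniff_http2(b"GET / HTTP/1.1\r\n"))
```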
Captured Frames:
- `H2_HEADERS` — Request/response headers (HPACK encoded)
- `H2_DATA` — Request/response body data
Frame Relay: All frame types are forwarded bidirectionally:
- SETTINGS, WINDOW_UPDATE (flow control)
- PING (keep-alive)
- RST_STREAM (stream reset)
- GOAWAY (connection close)
gRPC Streaming: Full support for bidirectional streaming RPCs. Both client-to-server and server-to-client data frames are captured as they flow.
UI Display: HTTP/2 traffic is grouped by URL (similar to WebSocket), showing:
- Total frame count
- Send/receive counts
- Total bytes transferred
- Individual frames expandable with payload preview
Path Extraction: The proxy extracts the :path pseudo-header from HPACK-encoded HEADERS frames to provide URL context for DATA frames in the same stream.
Traffic Rules
Rules let you match and process specific traffic.
Creating Rules
- Go to Intercept → Rules
- Click New Rule
- Configure:
- Name - identifier for the rule
- Pattern - regex to match
- Direction - send, receive, or both
- Scope - all traffic or specific node/agent
- Summarization prompt - optional LLM analysis
Rule Matching
When traffic matches a rule:
- Entry is tagged with the rule
- Matches viewable separately
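Matching combines the rule's regex pattern with its direction. A minimal sketch, where the rule and traffic-entry dict shapes are assumptions for illustration:

```python
import re

# A hypothetical rule: flag API-key-shaped strings in outbound traffic
rule = {
    "name": "api-keys",
    "pattern": r"sk-[A-Za-z0-9]{20,}",  # regex from the rule's Pattern field
    "direction": "send",                # send, receive, or both
}

def rule_matches(rule, entry):
    """Tag a traffic entry when direction and pattern both match."""
    if rule["direction"] != "both" and rule["direction"] != entry["direction"]:
        return False
    return re.search(rule["pattern"], entry["body"]) is not None

entry = {"direction": "send", "body": '{"api_key": "sk-' + "a" * 24 + '"}'}
print(rule_matches(rule, entry))
```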
Semantic Parsing
Rules can include a summarization prompt for semantic analysis. When a rule matches and has a summarization prompt configured, the Traffic Parser LLM processes the matched traffic - extracting prompts, summarizing responses, detecting tool calls, and highlighting key information.
Use rules to:
- Flag specific API calls
- Track sensitive operations
- Collect API keys
- Monitor for specific content
Disabling Interception
Click Disable to stop interception. This:
- Removes the installed certificate
- Restores proxy settings (if modified)
- Cleans hosts file entries (if modified)
- Removes iptables TPROXY rules (if used)
- Stops the proxy server
Shared IP Passthrough
When multiple domains share the same IP address (e.g., claude.ai and api.anthropic.com both resolve to 160.79.104.10), traffic to non-intercepted domains may route through the proxy.
The proxy handles this transparently:
- Extracts SNI (Server Name Indication) from TLS ClientHello
- Checks if the domain should be intercepted
- For non-intercepted domains, tunnels traffic through without TLS termination
- Uses the same bypass mechanisms to connect to the real server
This ensures non-intercepted domains continue to work normally even when sharing IPs with intercepted domains.
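The per-connection decision after SNI extraction reduces to a domain check. A sketch, assuming exact or subdomain matching against the intercept list (the actual comparison logic is not specified above):

```python
def should_intercept(sni, intercept_domains):
    """Decide whether to terminate TLS or tunnel the connection through."""
    if sni is None:
        return False  # no SNI: tunnel through without TLS termination
    return any(sni == d or sni.endswith("." + d) for d in intercept_domains)

domains = {"api.anthropic.com"}
print(should_intercept("api.anthropic.com", domains))  # intercepted
print(should_intercept("claude.ai", domains))          # tunneled despite shared IP
```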
Security Considerations
Certificate Trust
The generated root CA must be trusted by the system for HTTPS interception to work. This is done automatically but:
- Some applications have their own certificate stores
- Users may notice certificate changes
- Security tools may alert on unknown CAs
Credential Exposure
Intercepted traffic may contain:
- API keys in headers
- Authentication tokens
- Sensitive prompts and responses
Handle captured data appropriately.
Detection
Interception is not stealthy:
- Root CA installed in system store
- System proxy modified (Proxy mode)
- Hosts file modified (Hosts mode)
- Network adapter created (VPN mode)
- iptables rules modified (TPROXY mode)
This tool is designed for research, not covert operations.
Troubleshooting
Traffic not appearing
- Verify interception is enabled
- Check the agent uses intercepted domains
- Try a different interception method
- Ensure proxy certificate is trusted
Certificate errors
- Some apps have pinned certificates
- Node.js: Set `NODE_EXTRA_CA_CERTS`
- Python: Set `REQUESTS_CA_BUNDLE`
- Browsers may need manual cert import
VPN mode fails
- Windows only (use TPROXY on Linux)
- Requires Administrator privileges
- Check for conflicting VPN software
TPROXY mode fails
- Linux only
- Requires root or `CAP_NET_ADMIN` capability
- Verify iptables is available: `which iptables`
- Check for conflicting mangle rules: `iptables -t mangle -L`
- Ensure `route_localnet` can be enabled on loopback
- Check policy routing: `ip rule list` and `ip route show table 100`
IPv6 connectivity issues during interception
TPROXY mode temporarily disables IPv6 system-wide (net.ipv6.conf.all.disable_ipv6=1) because:
- TPROXY rules only handle IPv4 traffic
- IPv6 traffic would bypass interception
IPv6 is automatically restored when interception is disabled. If the node crashes without cleanup, restore manually:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
Performance issues
- Large traffic volumes can slow things down
- Consider filtering to specific domains
- Use rules to reduce stored traffic
Log Query
The Log Query feature provides a KQL-like query interface for exploring and correlating data across Praxis virtual tables (captured traffic, events, recon results, nodes, agents, operation history, etc). The syntax is inspired by Kusto Query Language but only a subset of KQL is implemented — not all features or functions from the full Kusto specification will work. Write queries in the code editor, execute them with Ctrl+Enter, and browse paginated results.
Available Tables
AgentLogs
Discovered agents across all nodes (in-memory).
| Column | Description |
|---|---|
| timestamp | Last update time |
| node_id | Node identifier |
| agent_short_name | Agent short name |
| agent_name | Agent display name |
| version | Agent version (if known) |
EventLogs
Centralized application log entries from service, web, and nodes. Requires application_logs_enabled to be set to true in settings.
| Column | Description |
|---|---|
| timestamp | When the log entry was recorded |
| source | Origin category: "service", "web", or "node" |
| source_id | Instance identifier (e.g. node UUID, web client ID; empty for service) |
| level | Log level: error, warn, info, debug, trace |
| target | Log target/module (may be null) |
| message | Log message text |
SemanticOperationChainLogs
Chain execution history, including per-element state and final outputs. The elements and outputs columns contain JSON — use contains() to search within them.
| Column | Description |
|---|---|
| timestamp | When the chain execution was created |
| execution_id | Chain execution identifier |
| chain_id | Chain definition identifier |
| chain_name | Chain display name |
| node_id | Node that executed the chain |
| agent_short_name | Agent that executed the chain |
| status | Execution status: Queued, Running, Completed, Failed, Cancelled |
| elements | Per-element execution state (JSON) |
| outputs | Final outputs from termination elements (JSON) |
| started_at | When execution started |
| ended_at | When execution ended (null if still running) |
NodeLogs
Currently connected nodes (in-memory).
| Column | Description |
|---|---|
| timestamp | Last update time |
| node_id | Node identifier |
| machine_name | Machine hostname |
| os_details | Operating system details |
| intercept_active | Whether interception is active |
SemanticOperationLogs
Semantic operation execution history, including results and summaries. The operation_spec column contains the full operation definition as JSON — use contains() to search within it.
| Column | Description |
|---|---|
| timestamp | When the operation was created |
| operation_id | Operation identifier |
| node_id | Node that executed the operation |
| agent_short_name | Agent that executed the operation |
| status | Operation status: Queued, Running, Completed, Failed, Cancelled |
| operation_spec | Full operation specification (JSON) |
| start_time | When the operation started |
| end_time | When the operation ended (null if still running) |
| summary | Brief summary of actions taken |
| result | Actual findings/data/output |
| chain_execution_id | Parent chain execution ID (null if standalone) |
ReconLogs
Summary of reconnaissance results per node+agent.
| Column | Description |
|---|---|
| timestamp | When recon was performed |
| node_id | Node identifier |
| agent_short_name | Agent short name |
| is_semantic | Whether this was a semantic recon |
| mcp_server_count | Number of MCP servers discovered |
| skill_count | Number of skills discovered |
| internal_tool_count | Number of internal tools discovered |
| config_count | Number of config items discovered |
| session_count | Number of sessions discovered |
| project_path_count | Number of project paths discovered |
ReconMetadataLogs
User identities and API keys extracted from agent configurations.
| Column | Description |
|---|---|
| timestamp | When recon was performed |
| node_id | Node identifier |
| agent_short_name | Agent short name |
| entry_type | "user_identity" or "api_key" |
| value | The identity or key value |
ReconSessionLogs
Sessions discovered during reconnaissance.
| Column | Description |
|---|---|
| timestamp | When recon was performed |
| node_id | Node identifier |
| agent_short_name | Agent short name |
| session_id | Session identifier |
| context_path | Project/context path |
| last_modified | When the session was last modified |
| message_count | Number of messages in the session |
ReconToolLogs
Individual tools discovered during reconnaissance (MCP tools, skills, internal tools).
| Column | Description |
|---|---|
| timestamp | When recon was performed |
| node_id | Node identifier |
| agent_short_name | Agent short name |
| tool_type | Type: "mcp", "skill", or "internal" |
| server_name | MCP server name (null for skills/internal) |
| tool_name | Tool name |
| tool_description | Tool description |
| transport | MCP transport type (null for skills/internal) |
TrafficLogs
Intercepted HTTP traffic stored in the database.
| Column | Description |
|---|---|
| timestamp | When the traffic was captured |
| traffic_id | Traffic entry ID (join key for TrafficMatchLogs) |
| node_id | Node that captured the traffic |
| agent_short_name | Agent associated with this traffic |
| intercept_method | Method used (proxy, vpn, hosts, tproxy) |
| direction | send or receive |
| method | HTTP method (GET, POST, etc.) |
| url | Full URL |
| host | Host/domain |
| request_headers | Request headers as JSON |
| request_body | Request body as text |
| response_status | HTTP response status code |
| response_headers | Response headers as JSON |
| response_body | Response body as text |
TrafficMatchLogs
Traffic that matched intercept rules, joined with traffic details.
| Column | Description |
|---|---|
| timestamp | When the match occurred |
| traffic_id | ID of the matched traffic entry (join key for TrafficLogs) |
| node_id | Node that captured the traffic |
| agent_short_name | Agent associated with this traffic |
| rule_id | ID of the matching rule |
| rule_name | Name of the matching rule |
| summary | LLM-generated summary (if rule has summarization prompt) |
| method | HTTP method |
| url | Full URL |
| host | Host/domain |
| direction | send or receive |
| response_status | HTTP response status code |
Supported KQL Operators
| Operator | Description | Example |
|---|---|---|
| `where` | Filter rows | `TrafficLogs \| where host contains "openai"` |
| `project` | Select columns | `TrafficLogs \| project timestamp, url, host` |
| `project-away` | Remove columns | `TrafficLogs \| project-away request_body, response_body` |
| `sort` / `order` | Sort rows | `TrafficLogs \| sort timestamp` |
| `take` / `limit` | Limit rows | `TrafficLogs \| take 50` |
| `top` | Top N by column | `TrafficLogs \| top 10 by timestamp` |
| `extend` | Add computed columns | `TrafficLogs \| extend url_length = strlen(url)` |
| `count` | Count rows | `TrafficLogs \| count` |
| `distinct` | Unique values | `TrafficLogs \| distinct host` |
| `summarize` | Aggregate | `TrafficLogs \| summarize count() by host` |
| `join` | Join two tables | `TrafficLogs \| join (TrafficMatchLogs) on traffic_id` |
Join supports qualified keys when column names differ between tables:
LeftTable | join (RightTable) on $left.col_a == $right.col_b
Supported Expressions
- Comparisons: `==`, `!=`, `<`, `>`, `<=`, `>=`
- Logical: `and`, `or`, `not`
- String functions: `contains`, `startswith`, `endswith`, `has`, `strlen`, `tolower`, `toupper`
- Null checks: `isnotempty()`, `isnull()`, `isempty()`
- Aggregations (in `summarize`): `count()`, `sum()`, `avg()`, `min()`, `max()`, `dcount()`
- Type conversion: `tostring()`, `toint()`, `tolong()`
Example Queries
// List recent traffic
TrafficLogs | take 20
// Find traffic to a specific host
TrafficLogs | where host contains "api.openai.com" | project timestamp, method, url, response_status
// Count traffic by host
TrafficLogs | summarize count() by host
// List all connected nodes
NodeLogs
// Find available agents
AgentLogs | where available == true
// Find all MCP tools across agents
ReconToolLogs | where tool_type == "mcp" | project agent_short_name, server_name, tool_name
// List API keys found in recon
ReconMetadataLogs | where entry_type == "api_key"
// Correlate traffic matches with rules
TrafficMatchLogs | project timestamp, rule_name, url, summary | take 50
// Join traffic with matches to see matched URLs with rule names
TrafficLogs | join (TrafficMatchLogs) on traffic_id | project timestamp, url, rule_name, summary
// Find traffic with large responses
TrafficLogs | where response_status == 200 | project timestamp, url, host | take 100
// View recent error logs
EventLogs | where level == "error" | take 50
// Count log entries by source
EventLogs | summarize count() by source
// List completed operations with results
SemanticOperationLogs | where status == "Completed" | project timestamp, agent_short_name, summary, result | take 50
// Find failed operations
SemanticOperationLogs | where status == "Failed" | project timestamp, operation_id, agent_short_name, result
// Count operations by status
SemanticOperationLogs | summarize count() by status
// Find operations that are part of a chain
SemanticOperationLogs | where isnotempty(chain_execution_id) | project timestamp, operation_id, chain_execution_id, summary
// List chain executions
SemanticOperationChainLogs | project timestamp, chain_name, status, outputs | take 20
// Find completed chains with their outputs
SemanticOperationChainLogs | where status == "Completed" | project timestamp, chain_name, outputs
Query Execution
SQL Pushdown
Tables backed by the database (EventLogs, TrafficLogs, TrafficMatchLogs, SemanticOperationLogs, SemanticOperationChainLogs) benefit from automatic SQL pushdown. When the executor encounters leading where and take/limit operators in a query pipeline, it translates KQL expressions directly into SQL WHERE clauses with parameterized queries. This means the database handles filtering before rows are loaded into memory, enabling efficient queries over large datasets.
The following KQL constructs are translated to SQL:
- Comparisons: `==`, `!=`, `<`, `>`, `<=`, `>=` become SQL comparison operators
- Logical: `and`, `or` become SQL AND/OR
- String functions: `contains`/`has` become `LOWER(col) LIKE '%value%'`, `startswith` becomes `LIKE 'value%'`, `endswith` becomes `LIKE '%value'`
- Null checks: `isnull()`/`isempty()` become `IS NULL OR = ''`, `isnotnull()`/`isnotempty()` become `IS NOT NULL AND != ''`
- Case functions: `tolower()`, `toupper()` become SQL `LOWER()`, `UPPER()`
- Utility: `strlen()` becomes `LENGTH()`, `tostring()` becomes `CAST(... AS TEXT)`, `toint()`/`tolong()` become `CAST(... AS INTEGER)`, `now()` binds the current UTC timestamp
User-provided string values in LIKE patterns are escaped to prevent SQL wildcard injection (% and _ are matched literally).
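The `contains`-to-LIKE translation with wildcard escaping might look like the following sketch; the function names are illustrative, not the actual Praxis internals:

```python
def escape_like(value: str) -> str:
    """Escape LIKE wildcards so user input matches literally."""
    return (value.replace("\\", "\\\\")
                 .replace("%", "\\%")
                 .replace("_", "\\_"))

def contains_to_sql(column: str, value: str):
    # KQL `col contains "value"` -> case-insensitive parameterized LIKE
    pattern = f"%{escape_like(value.lower())}%"
    return f"LOWER({column}) LIKE ? ESCAPE '\\'", pattern

sql, param = contains_to_sql("host", "100%_openai")
print(sql, param)
```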
If any expression in the leading where clauses cannot be translated to SQL (e.g. an unsupported function), the executor falls back to fetching all rows with just a LIMIT and applies all filtering in memory. Operators that appear after a non-pushable operator (like project, extend, summarize) always run in memory.
In-memory tables (NodeLogs, AgentLogs) and JSON-expanded tables (ReconLogs, ReconToolLogs, etc.) are always materialized fully and filtered in memory.
Result Limits
Results are capped by the log_query_row_limit setting, which defaults to 10,000,000 rows. This limit can be configured in Settings > Service > Event Logging. The total_count field reflects the actual count before capping. Use take or limit to reduce result size for large tables.
KQL Parser
The Log Query feature uses a vendored fork of the kqlparser crate (v0.0.4, Apache-2.0) for parsing KQL syntax. The vendored copy lives in service/src/log_query/parser/ and includes fixes for multiline join expressions and native $left/$right join key syntax. Only the subset of KQL operators and functions listed above are supported; unsupported constructs will return an error.
Orchestrator
The Orchestrator is an interactive AI agent that can autonomously manage nodes, agents, sessions, operations, and chains across the Praxis network. Unlike semantic operations (which run predefined tasks), the Orchestrator is a free-form conversational interface where you give high-level goals and the AI figures out the steps.
Prerequisites
Before using the Orchestrator, you need:
- MCP Server enabled — Go to Settings > MCP Server and enable it. The Orchestrator connects to the MCP server as a client to access all Praxis tools.
- Orchestrator LLM configured — Go to Settings > LLM Providers and configure a model definition, then assign it to the Orchestrator feature in the Feature Selection section.
If the MCP server is not enabled when you start a session, you'll see an error message directing you to the settings page.
Starting a Session
- Click Orchestrator in the sidebar
- Click New Session
- The Orchestrator connects to the MCP server and fetches available tools
- Type your goal or question in the input box
What It Can Do
The Orchestrator has access to all Praxis MCP tools:
- Node management — List nodes, select nodes, request info updates
- Agent control — List agents, select agents, run recon (static and semantic), query stored recon data (sessions, projects, tools)
- Sessions — Create sessions, send prompts, close sessions
- Operations — List, run, monitor, and cancel semantic operations
- Chains — List, run, monitor, and cancel chain workflows
- Traffic — Search intercepted traffic with regex patterns
Plus two local tools:
- wait — Sleep for a specified duration (useful when polling operation status)
- report_plan — Show a step-by-step execution plan with progress tracking
Example Prompts
Simple exploration:
List all connected nodes and their agents
Multi-step task:
Connect to the first available node, select the Claude Code agent, create a YOLO session, and ask it to list the files in the current directory
Operation execution:
Run the recon::system_info operation on all active nodes and report the results
Monitoring:
Check the status of all running operations and cancel any that have been running for more than 5 minutes
Thinking Mode
When using a model that supports extended thinking (e.g. Claude Sonnet/Opus with thinking enabled), the Orchestrator surfaces the model's reasoning steps inline. Thinking blocks appear in a collapsed section before the final response, showing the chain of reasoning the model used to arrive at its answer.
Thinking mode is enabled automatically when the configured Orchestrator model supports it and has thinking enabled in its API parameters. No separate configuration is needed in Praxis.
Plan Tracking
The Orchestrator can break complex tasks into steps and show progress via the report_plan tool. When the AI calls this tool, you'll see a plan panel with step descriptions and their current status (not started, in progress, done).
Token Usage
Token usage is displayed after each LLM call, showing prompt tokens, completion tokens, and totals. This helps monitor costs when using commercial API providers.
Session Controls
- Cancel — Stops the current inference but keeps the session alive. Useful if the AI is going in the wrong direction.
- Stop — Ends the session entirely. You'll need to start a new session to continue.
Model Recommendations
The Orchestrator requires a capable model that can follow tool-calling instructions reliably:
Recommended:
- Anthropic: Claude Sonnet 4 or Claude Opus 4
- OpenAI: GPT-4o
- Google: Gemini 1.5 Pro
Not recommended:
- Smaller/faster models (Haiku, GPT-4o-mini) — these often fail to follow the tool calling format or hallucinate results
How It Differs from Semantic Operations
| Aspect | Orchestrator | Semantic Operations |
|---|---|---|
| Interface | Interactive chat | Predefined tasks |
| Scope | Full Praxis network | Single node/agent |
| Tools | All MCP tools | session_prompt only (agent mode) |
| Use case | Ad-hoc exploration, complex multi-node tasks | Repeatable, automated tasks |
The Orchestrator is best for exploration, debugging, and complex ad-hoc tasks. Semantic operations are better for repeatable workflows that you want to run consistently.
Troubleshooting
"MCP server is not enabled"
Go to Settings > MCP Server and enable it. The Orchestrator requires the MCP server to function.
"Failed to connect to MCP server"
- Verify the MCP server is running (check the Settings page for status)
- Check that the configured port is not in use by another process
- Look at service logs for MCP server startup errors
Tools not executing
- Ensure you're using a capable model (see recommendations above)
- Check the tool execution results for error messages
- Verify nodes are connected and agents are available
Session disconnects
The MCP client connection is tied to the Orchestrator session. If the MCP server restarts, you'll need to start a new Orchestrator session.
Semantic Operations
Semantic operations are predefined tasks that run through AI agents. You define what you want to happen in natural language, and Praxis handles the execution.
What's a Semantic Operation?
An operation is a task specification:
- Name - Identifier for the operation
- Prompt - What you want the agent to do
- Mode - How to execute (one-shot or agent)
- Timeout - How long to wait
- YOLO Mode - Auto-approve actions
Think of operations as reusable prompts with execution settings.
Execution Modes
One-Shot Mode
Sends a single prompt to the agent and waits for a response.
How it works:
- Create a session (if needed)
- Send the operation prompt
- Wait for the agent to respond
- Return the response
- Close the session (if we created it)
Best for: Simple tasks, single actions, quick checks.
Agent Mode
Uses an orchestrating LLM to run multi-turn interactions with the target agent.
How it works:
- Orchestrator LLM receives the operation prompt
- Orchestrator generates a prompt for the target agent
- Target agent responds
- Orchestrator evaluates and decides next action
- Loop continues until complete or max iterations reached
Best for: Complex tasks, multi-step operations, tasks requiring judgment.
The orchestrator is a separate LLM (configured in Settings as "Semantic Ops" LLM) that manages the interaction. It has access to a session_prompt tool to communicate with the target agent.
Model Requirements
Agent mode requires a sufficiently capable model for the orchestrator. The model must be able to:
- Follow complex multi-step instructions
- Output tool calls in the correct JSON format
- Wait for tool results before proceeding
- Avoid hallucinating results
Recommended models:
- Anthropic: Claude Sonnet 4 or Claude Opus 4
- OpenAI: GPT-4o or GPT-4 Turbo
- Google: Gemini 1.5 Pro
Not recommended for agent mode:
- Smaller/faster models (Haiku, GPT-4o-mini, Llama 8B) - these often fail to follow tool calling instructions correctly and may hallucinate results
- Models without strong instruction-following capabilities
If you're seeing issues with tool calling or hallucinated results, try switching to a more capable model.
Agent Mode Architecture
The orchestrator uses a system prompt that defines its behavior:
Prompt Location: service/src/prompts/semantic_op_agent.prompt
The system prompt is embedded at build time using Rust's include_str! macro. This means:
- Prompts are part of the compiled binary
- No runtime configuration of prompts is needed or supported
- Changes require recompilation
The orchestrator prompt is combined with:
- Tool calling instructions (`common/src/prompts/tool_calling.prompt`)
- Task completion instructions (`common/src/prompts/task_completion.prompt`)
These define the JSON format the orchestrator uses to call tools and signal completion:
{"tool": "session_prompt", "args": {"text": "..."}}
{"complete": true, "summary": "...", "result": "..."}
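A hedged sketch (hypothetical Python, not the real Praxis loop) of how an orchestrator can classify a model message against this protocol, including flagging the failure mode where a model emits a tool call and a completion signal in the same message instead of waiting for the tool result:

```python
import json

# Hypothetical validator for the JSON protocol shown above.

def parse_message(text: str) -> dict:
    """Classify a model message as a tool call, a completion, or invalid."""
    events = []
    for line in text.strip().splitlines():
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # ignore prose mixed in with the JSON
        if "tool" in obj:
            events.append(("tool_call", obj))
        elif obj.get("complete") is True:
            events.append(("complete", obj))
    kinds = [k for k, _ in events]
    if "tool_call" in kinds and "complete" in kinds:
        # Tool call AND completion in one message: the model fabricated
        # results instead of waiting for the real tool response.
        return {"status": "suspect_hallucination", "events": events}
    if not events:
        return {"status": "invalid"}
    return {"status": kinds[0], "events": events}
```

This mirrors the hallucination pattern described under Troubleshooting: a plausible-looking result that the remote agent never actually produced.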
Creating Operations
Operations are stored in the library:
- Go to Operations → Library tab
- Click New Operation
- Fill in the details:
- Name and description
- Operation prompt
- Mode (one-shot or agent)
- Timeout value
- YOLO mode setting
- Save
Operations are stored in the database and available across sessions.
Running Operations
From the Library
- Go to Operations → Library
- Find the operation
- Click Run
- Select node and agent
- Watch execution in the Runs tab
From an Agent
- Open an agent's detail page
- Go to the Ops tab
- Click Run Operation
- Select from available operations
Monitoring Execution
The Runs tab shows all running and completed operations:
| Column | Description |
|---|---|
| Name | Operation being executed |
| Node/Agent | Where it's running |
| Status | Running, Completed, Failed, Cancelled |
| Started | When execution began |
Click a run to see details:
- Full execution output
- Iteration history (agent mode)
- Final result or error
Operation Output
Each operation produces output:
One-shot mode - The agent's response to your prompt.
Agent mode - Full transcript of the orchestrator's iterations:
- Prompts sent to target agent
- Responses received
- Orchestrator's reasoning
- Final result
Built-in Operations
Praxis comes with some predefined operations for common tasks. You can use these as-is or as templates for your own.
YOLO Mode in Operations
When YOLO mode is enabled for an operation:
- The target agent session is created with auto-approve
- Actions execute without user confirmation
- The entire operation runs hands-off
This is useful for automated scenarios but removes safety checks.
Model Override
Operations can specify a different model than the default:
- Override the Semantic Ops LLM for specific operations
- Use faster models for simple operations
- Use more capable models for complex tasks
Cancellation
Running operations can be cancelled:
- Find the operation in Runs
- Click Cancel
- The operation terminates
Cancellation is best-effort: if the agent is mid-action, that action may complete.
Timeouts
Each operation has a timeout:
- One-shot: Time to wait for agent response
- Agent mode: Total time for all iterations
When timeout is reached, the operation fails with a timeout error.
Chaining Operations
Operations can be combined into chains for complex workflows. A chain is a graph of operations with connections defining execution order and session groups controlling how sessions are shared.
Visual Chain Builder
Praxis includes a visual chain builder using React Flow:
- Go to Operations → Library
- Click New Chain
- Drag operations onto the canvas
- Connect outputs to inputs
- Configure session groups
- Save the chain
Chain Structure
Every chain starts with a Trigger element. Elements with no outgoing connections are terminal — their output becomes the chain's final output. Between the trigger and terminal elements, you build processing workflows using various block types.
Element Types
Chains support several element types:
Trigger - Every chain must start with a trigger. The in-canvas trigger element represents the manual trigger (click "Run" to start the chain). For automated triggers, see Chain Triggers below.
Operation - Executes a semantic operation from your library. Select an existing operation by name. The operation runs against the target agent and its output flows to the next element.
Transform - An LLM-powered transformation step. Takes input from the previous element and applies a prompt to transform it. Useful for extracting specific data, reformatting output, or summarizing information.
GenericPrompt - Sends a prompt directly to the agent session (not through an orchestrator). Simpler than an operation — just sends the prompt and captures the response.
Memory Store - Stores incoming data under a named key for later retrieval. The data passes through unchanged to downstream elements.
Memory Retrieve - Retrieves previously stored data by key. Useful for accessing earlier results later in the chain.
Loop - Controls iteration in the chain. Configure max_iterations on the element. On each pass through the loop, if iterations remain, the output fires and routes back to an earlier element creating a cycle. When iterations are exhausted, no output fires — execution stops at that branch.
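A minimal sketch of those loop semantics (hypothetical Python; the real element runs inside the chain executor, and this state shape is an assumption):

```python
# Loop element sketch: the output fires and routes back around the cycle
# while iterations remain, then goes silent so the branch stops.

class LoopElement:
    def __init__(self, max_iterations: int):
        self.max_iterations = max_iterations
        self.count = 0

    def on_input(self, data):
        """Return data to route back, or None when iterations are exhausted."""
        if self.count >= self.max_iterations:
            return None  # exhausted: no output fires
        self.count += 1
        return data
```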
Conditional Connections
Connections between elements can have conditions:
- Always (default) - The connection always fires when the source completes
- On Success - Fires only when the source element completes successfully
- On Failure - Fires only when the source element fails
This enables branching workflows with error handling paths.
Per-Block Configuration
Operation, Transform, and GenericPrompt elements support per-block configuration overrides:
- Max Runtime - Timeout in seconds for this specific element
- YOLO Mode - Enable auto-approve for this element's session
- Working Directory - Override the working directory
- Require All Inputs - When disabled, a merge-point element runs as soon as any upstream input arrives (instead of waiting for all branches). Useful in conditional chains where not all paths execute.
Building a Chain
1. Add a Trigger - Drag a Trigger element onto the canvas. This is your starting point.
2. Add Processing Elements - Add Operations, Transforms, GenericPrompts, Memory blocks, or Loops as needed. Connect them by dragging from one element's output handle to another's input handle.
3. Ensure Terminal Elements - At least one element must have no outgoing connections. Its output becomes the chain's result.
4. Configure Elements - Double-click each element to configure:
   - Operations: Select which operation to run
   - Transforms: Write the transformation prompt
   - Memory blocks: Set the memory key
   - Loops: Set max iterations
   - Set model overrides if needed
5. Assign Session Groups - Group elements that should share an agent session (see below).
Session Groups
Session groups control how agent sessions are managed across chain elements. Elements that interact with agents (Operations, Transforms, GenericPrompts) can be assigned to session groups.
Assigning Session Groups:
- Select an element in the chain editor
- Click "Assign Session Group" or select an existing group
- Elements in the same group share a color indicator
Same Session Group - Elements share an agent session:
- The first element creates the session
- Subsequent elements reuse it
- Session closes after the last element completes
- Context and state persist between elements
Different Session Groups - Elements get isolated sessions:
- Each group has its own session
- Clean separation, no shared context
- Useful for independent operations
No Session Group - Element gets a fresh session just for itself.
Why Session Groups Matter:
Agent sessions maintain conversation context. If you run an operation that navigates to a directory, the next operation in the same session starts in that directory. Use session groups when:
- Operations build on each other's state
- You want to maintain conversation context
- Sequential steps depend on previous actions
Use separate groups when:
- Operations should be isolated
- You want clean slate for each operation
- Running parallel independent tasks
Chain Execution
When running a chain:
- The executor builds a dependency graph from connections
- Finds operations with no dependencies (starting points)
- Executes ready operations (possibly in parallel)
- Marks completed, finds newly ready operations
- Repeats until all complete or one fails
Operations without dependencies on each other can run simultaneously. The executor identifies these and runs them in parallel.
┌─────┐
│Start│
└──┬──┘
│
┌───┴───┐
│ │
┌──▼──┐ ┌──▼──┐
│Op A │ │Op B │ ← These run in parallel
└──┬──┘ └──┬──┘
│ │
└───┬───┘
│
┌──▼──┐
│Op C │ ← This waits for both A and B
└─────┘
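The ready-set scheduling above can be sketched as a wave-based topological walk (hypothetical Python; the real executor is event-driven, so this only illustrates which elements may run concurrently):

```python
# Group chain elements into waves: every element in a wave has all of its
# dependencies satisfied, so the whole wave can run in parallel.

def execution_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    done: set[str] = set()
    waves = []
    while len(done) < len(deps):
        # Ready = not yet run, and every dependency already completed
        ready = sorted(n for n, d in deps.items() if n not in done and d <= done)
        if not ready:
            raise ValueError("cycle or unsatisfiable dependency")
        waves.append(ready)
        done.update(ready)
    return waves

# The diagram above as a dependency map:
chain = {"Start": set(), "Op A": {"Start"}, "Op B": {"Start"}, "Op C": {"Op A", "Op B"}}
# execution_waves(chain) -> [["Start"], ["Op A", "Op B"], ["Op C"]]
```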
Monitoring Chains
Chain executions appear in the Runs tab alongside individual operations. Click a chain execution to see individual element status, output from each operation, and timing information.
Chain Cancellation
You can cancel a running chain from the Runs tab. Cancellation stops queuing new operations and lets running operations complete (or cancels them).
Use Cases
Sequential Operations - Run operations in order, each building on the previous: enumerate capabilities, identify target, execute action, verify result.
Parallel Reconnaissance - Run multiple recon operations simultaneously, then combine results.
Staged Operations - Build up context across operations with shared sessions, maintaining state throughout.
Chain Best Practices
- Plan session groups carefully - shared sessions maintain context but accumulate state
- Handle failures - a failed operation stops the chain unless you route around it with On Failure connections
- Test incrementally - run individual operations first, then combine
- Keep chains focused - one chain, one goal
Chain Triggers
Chains can be executed automatically via triggers. While the in-canvas Trigger element represents manual execution, chain triggers are separate configurations that automate when and how a chain fires. Triggers are managed from two places: the Triggers panel at the bottom of the chain builder, and the Triggers tab on the Operations page.
Trigger Types
Scheduled - Fires on a time-based schedule. Two schedule modes are available:
- Interval - Fires every N minutes (e.g., every 60 minutes). The next fire time is computed from the last fire time.
- Daily At - Fires once per day at a specific hour and minute (UTC). If the time has already passed today, the next fire is scheduled for tomorrow.
Scheduled triggers can be recurring (fire repeatedly) or one-shot (fire once and then auto-disable).
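The two schedule computations can be sketched as follows (Python for illustration; the engine's exact arithmetic is an assumption, but these are the described semantics: interval counts from the last fire, and daily-at rolls to tomorrow once today's UTC slot has passed):

```python
from datetime import datetime, timedelta, timezone

def next_fire_interval(last_fired: datetime, minutes: int) -> datetime:
    """Interval mode: next fire is computed from the last fire time."""
    return last_fired + timedelta(minutes=minutes)

def next_fire_daily_at(now: datetime, hour: int, minute: int) -> datetime:
    """Daily-at mode: today's slot if still ahead, otherwise tomorrow."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:  # today's slot already passed
        candidate += timedelta(days=1)
    return candidate
```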
Intercept Match - Fires when intercepted traffic matches a specific intercept rule. You specify the rule ID, and whenever traffic triggers that rule, the chain executes. Intercept-match triggers have a 60-second debounce window to prevent rapid repeated firings.
New Node - Fires whenever a new node registers with the service. There is a 10-second delay after registration to allow agent discovery to complete before the chain executes.
Creating Triggers
From the chain builder:
- Open a saved chain in the chain editor
- Expand the Triggers panel at the bottom of the editor
- Click Add Trigger
- Select the trigger type and configure its settings
- Configure the Target Spec (see Flexible Targeting below)
- Click Save
The trigger is immediately active once saved. Each chain can have multiple triggers.
Managing Triggers
The Triggers tab on the Operations page shows all configured triggers across all chains. From here you can:
- See the chain name, trigger type, configuration summary, and target spec for each trigger
- Toggle triggers on/off with the ON/OFF button
- View when a trigger last fired and when it will next fire
- Delete triggers
Trigger Engine
The service runs a trigger engine that polls for due scheduled triggers every 30 seconds. When a trigger fires:
- The engine loads the chain definition
- Resolves the target spec into concrete node/agent pairs
- Executes the chain against each resolved target (fan-out)
- Updates the trigger's `last_fired_at` timestamp
- For scheduled triggers, computes the next fire time (or disables if non-recurring)
Event-based triggers (Intercept Match, New Node) fire immediately in response to the event rather than on a polling schedule.
Flexible Targeting
By default, chains run against a single node and agent. The TargetSpec system allows chains to target multiple nodes and agents simultaneously using filters.
Target Spec Fields
| Field | Description | Default |
|---|---|---|
| Node IDs | Specific node IDs to target | Empty (all nodes) |
| OS Filter | Case-insensitive substring match on the node's OS details | None |
| Agent Short Names | Specific agent types to target | Empty (all available agents) |
| Include Triggering Node | For event triggers: ensure the node that caused the event is included | Off |
When a trigger fires, the target spec is resolved against the current set of registered nodes:
- Start with all registered nodes
- Filter by specific node IDs (if any specified)
- Filter by OS substring (if specified)
- For each remaining node, select agents matching the agent filter
- Skip agents that are not currently available
If no targets match, the trigger logs a warning and the chain does not execute.
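The resolution steps can be sketched like this (hypothetical Python with assumed data shapes; the real resolver lives in the service):

```python
# Resolve a target spec against registered nodes: filter by node ID, then
# by OS substring (case-insensitive), then select available agents.

def resolve_targets(nodes, spec):
    """nodes: [{"id", "os", "agents": [{"short_name", "available"}]}]
    spec: {"node_ids": [...], "os_filter": str|None, "agent_names": [...]}
    Empty node_ids/agent_names means "all"."""
    targets = []
    for node in nodes:
        if spec["node_ids"] and node["id"] not in spec["node_ids"]:
            continue
        if spec["os_filter"] and spec["os_filter"].lower() not in node["os"].lower():
            continue
        for agent in node["agents"]:
            if spec["agent_names"] and agent["short_name"] not in spec["agent_names"]:
                continue
            if not agent["available"]:
                continue  # skip agents that are not currently available
            targets.append((node["id"], agent["short_name"]))
    return targets  # empty -> log a warning, chain does not execute
```

Each resolved (node, agent) pair then becomes one fan-out execution.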
Target Spec Editor
The target spec editor appears when creating triggers in the chain builder and when using advanced targeting in the run modal. It provides:
- Node multi-select - Pick specific nodes from the connected nodes list, or leave empty for all nodes
- OS filter - Free text field for OS substring matching (e.g., "Windows", "Linux", "Ubuntu")
- Agent multi-select - Pick specific agent types, or leave empty for all available agents
- Include triggering node - Checkbox shown for event triggers (New Node, Intercept Match) to ensure the triggering node is always included even if it would otherwise be filtered out
Fan-Out Execution
When a chain targets multiple node/agent pairs, the executor performs a fan-out: it creates a separate chain execution for each resolved target. Each execution runs independently and appears as its own entry in the Runs tab.
Advanced Targeting in Run Modal
The run modal for chains includes an Advanced Targeting toggle. When enabled, instead of selecting a single node and agent, you configure a full target spec. This allows manual one-off fan-out runs without needing to set up a trigger.
Troubleshooting
Operation stuck
- Check if YOLO mode should be enabled
- Verify the agent session is responsive
- Try a simpler prompt
Unexpected results
- Review the full output
- Check if the prompt is clear enough
- Consider using agent mode for complex tasks
Timeouts
- Increase the timeout value
- Simplify the operation
- Check if the agent is responding at all
Tool calling not working (agent mode)
Symptoms: The orchestrator outputs tool calls but they don't execute, or execution completes immediately without actually running the tool.
- Switch to a more capable model - smaller models often fail to follow the tool calling format correctly. Use Claude Sonnet/Opus, GPT-4o, or Gemini 1.5 Pro
- Check the operation output for malformed JSON in tool calls
- Verify the model is outputting the correct format:
{"tool": "session_prompt", "args": {"text": "..."}}
Hallucinated or fabricated results
Symptoms: The operation completes with results that look plausible but are entirely made up - the orchestrator never actually called the remote agent.
This happens when a model outputs both a tool call AND a completion signal in the same message, fabricating results instead of waiting for the real tool response.
- Use a more capable model - this is almost always caused by using a model that doesn't follow instructions well
- Check the full operation output - if you see a tool call immediately followed by a completion signal with results, the model hallucinated
- Recommended: Claude Sonnet 4+, GPT-4o, or Gemini 1.5 Pro
- Avoid: Smaller/faster models like Haiku, GPT-4o-mini, or small open-source models for agent mode orchestration
Toolkit
The Toolkit provides a library of built-in offensive operations that run directly against target agents. Each tool is a self-contained operation with its own configuration and execution logic, managed through the Toolkit page in the web UI.
Accessing the Toolkit
Go to Toolkit in the sidebar. The page lists all available tools with their descriptions and configuration options.
Running a Tool
- Select a tool from the list
- Configure any required parameters
- Select the target node and agent
- Click Run
Execution results appear inline on the Toolkit page.
Chain Integration
Toolkit operations can be used as elements in operation chains. When building a chain, toolkit operations are available from the element palette alongside standard operations. This allows you to compose toolkit operations with transforms, memory, and other chain elements into automated workflows.
Managing Tools
Tools are managed at the service level. The Toolkit page provides full CRUD access — you can view, configure, and execute tools from a single interface.
MCP Server
Praxis exposes its capabilities via a Model Context Protocol (MCP) server over SSE transport. This server is built into the Praxis service and provides tool access for both external AI agents and the built-in Orchestrator.
Overview
The MCP server serves two purposes:
1. Orchestrator backend — The built-in Orchestrator connects to the MCP server as a client to access all Praxis tools. This is how the Orchestrator coordinates operations across nodes and agents.
2. External AI agent integration — Any MCP-compatible AI assistant (Claude Code, Cursor, Windsurf, etc.) can connect to the same server to control Praxis programmatically.
Enabling the MCP Server
The MCP server is controlled via service settings:
- Go to Settings > MCP Server (web UI or CLI Settings window)
- Toggle Enable to turn on the server
- Configure the port (default: `8585`)
The SSE endpoint is available at http://localhost:{port}/sse.
Note: The MCP server must be enabled for the Orchestrator to function. If disabled, the Orchestrator will display an error directing you to enable it.
When running with Docker, port 8585 is exposed by default. To use a different port:
PRAXIS_MCP_PORT=9090 docker compose up --build
Then update the port in Settings > MCP Server to match.
AI Agent Integration
MCP-compatible AI assistants can connect to the Praxis SSE server to control the entire C2 network. This enables AI agents to discover nodes, run recon, create sessions, execute operations, and search traffic — all through structured tool calls.
Configuration
For any MCP-compatible client, point it at the SSE endpoint:
{
"mcpServers": {
"praxis": {
"url": "http://localhost:8585/sse"
}
}
}
Adjust the host and port to match your deployment. For remote deployments, ensure the MCP port is accessible from the client machine.
Available Tools
The MCP server exposes the following tools:
Node Management
- `node_list` — List all connected nodes (includes privileged status)
- `node_select` — Get details for a specific node
- `node_reset` — Reset a node (cancel operations, close sessions, re-register)
Agent Management
- `agent_list` — List agents on a node
- `agent_update` — Request agent info refresh
Agents are selected per-session rather than per-node. `session_create` and the recon tools each take an `agent` parameter, so the same node can run concurrent sessions against different agents.
Reconnaissance
All recon tools take a node prefix and an agent short-name.
- `recon_run` — Run static reconnaissance (`node`, `agent`)
- `recon_run_semantic` — Run semantic reconnaissance, includes internal tools (`node`, `agent`)
- `recon_list` — List stored recon data (`node`, `agent`, `section` = all/sessions/tools/projects/configs)
- `recon_config_read` — Read config file content discovered by recon (`node`, `agent`, optional `path`)
- `recon_session_read` — Read session file content (`node`, `agent`, optional `path`)
- `recon_config_grep` — Grep config files with regex (`node`, `agent`, `pattern`, optional `paths`)
- `recon_session_grep` — Grep session files with regex (`node`, `agent`, `pattern`, optional `paths`)
- `write_file` — Write file content
Sessions
- `session_create` — Create a new ACP session (`node`, `agent`, optional `project`, `yolo`). Returns a `session_id`.
- `session_prompt` — Send a prompt to a session (`node`, `session_id`, `prompt`)
- `session_close` — Close a session (`node`, `session_id`)
Operations & Chains
- `op_available` — List available operations and chains
- `op_definition` — Show the full definition of an operation or chain
- `op_run` — Run an operation or chain
- `op_info` — Show full info for an operation or chain execution
- `op_cancel` — Cancel a running operation or chain execution
- `op_list` — List tracked operations and chain executions
Chain Triggers
- `trigger_list` — List all chain triggers
- `trigger_create` — Create a trigger for a chain
- `trigger_delete` — Delete a trigger by ID prefix
- `trigger_toggle` — Enable or disable a trigger by ID prefix
Traffic
- `traffic_search` — Search intercepted traffic
CLI
The Praxis CLI (praxis_cli) provides both an interactive terminal UI and a non-interactive command-line interface for controlling the Praxis C2 network.
Purpose
The CLI is the primary terminal interface for Praxis. It provides:
- Full-featured interactive terminal UI for hands-on control
- Non-interactive commands for scripting and automation
- A terminal-only workflow for headless environments without browser access
Installation
The CLI is installed automatically with the native installation scripts:
# Linux/macOS
curl -fsSL https://praxis.originhq.com/install.sh | bash
The binary is installed to ~/.praxis/bin/praxis_cli.
When using Docker, the CLI binary is built into the container image and copied to the data volume on startup. You can extract it with:
docker cp $(docker compose ps -q praxis):/app/praxis_cli ./praxis_cli
Note: The container name depends on your project directory. Run this from the directory containing your `docker-compose.yml`.
Interactive Terminal UI (Default Mode)
Running praxis_cli with no arguments launches the interactive terminal UI:
$ praxis_cli
The terminal UI provides five main windows, switched with keyboard shortcuts:
Orchestrator (Ctrl+O)
LLM-powered conversation interface for coordinating operations across the Praxis network. Features:
- Real-time streaming responses with tool execution display
- Plan tracking with step visualization
- Token usage statistics
- Command history and conversation scrolling
- Multiple concurrent orchestrator sessions — `Ctrl+N` opens a new one; `Ctrl+W` closes the current one; `Ctrl+Alt+W` saves the transcript
- `Ctrl+C` cancels the in-flight prompt in the active session
- `Ctrl+E` toggles the tools panel; `Ctrl+Alt+E` expands it fully
Nodes (Ctrl+L)
Node and agent management with integrated session chat and terminal access:
- Node list with status indicators (active/warning/inactive), OS details, and agent counts
- Agent selection and concurrent ACP session management
- Session Chat — direct conversation with agents, with YOLO mode and working directory selection
- Active Sessions overlay (`Ctrl+W`) — see every live session across nodes and connectors; Enter to resume, d/Del to discard, Esc to dismiss
- Terminal (`Ctrl+R` to create, `Ctrl+T` to toggle) — full PTY terminal emulation with scrollback
Inside a chat view, Esc or Ctrl+W pauses the session (leaves it running on the node; resume from the Active Sessions overlay). Ctrl+C cancels an in-flight prompt, or closes the session if the agent is idle. The status bar shows N sessions whenever any concurrent sessions are live. On first connect, whenever you open the Nodes window, and after a node reset, the TUI calls `session/list` on each node to pick up sessions left alive from previous runs or other clients.
Intercept (Ctrl+I)
Live traffic interception with three tabs (Tab / Shift+Tab to switch):
- Log — incoming traffic streams from every node into a ring buffer. HTTP entries show individually; WebSocket and HTTP/2 frames group by `(node, url)` so streaming endpoints don't flood the list.
- Rules — create, edit, delete, and toggle intercept rules (regex patterns with direction and scope). Rules can carry an optional LLM summarisation prompt.
- Matches — matched-traffic review with AI summaries (when a rule has a summarisation prompt).
Log tab
| Key | Action |
|---|---|
| Enter | Focus detail pane (then ↑/↓ scrolls detail) |
| Esc | Unfocus detail / clear search |
| / | Focus search box (regex, falls back to substring) |
| f | Cycle protocol filter: all → http → ws → h2 |
| n | Cycle node filter (no popup; Esc clears) |
| a | Cycle agent filter |
| p | Pause / resume the live stream |
| r | Re-request the initial page from the service |
| c | Clear ALL traffic (with confirmation) |
| H | Cycle body render mode: pretty → raw → hex |
| i | Toggle interception on the selected entry's node |
Request and response bodies arrive via a second fetch on selection to keep the broadcast payload small — large bodies load within a few hundred milliseconds after you navigate to an entry.
Rules tab
| Key | Action |
|---|---|
n | Create a new rule |
e | Edit the selected rule |
d | Delete the selected rule (with confirmation) |
Space | Toggle enabled / disabled |
Enter | Jump to the Matches tab filtered to this rule |
r | Refresh the rules list |
The rule form (opened via n or e) has the fields Name, Regex, Direction
(send / receive / both), Scope (all / node / agent), and an
optional LLM summary prompt. Tab moves between fields, Space /
← / → cycle select-style fields, Ctrl+S saves, Esc cancels.
Matches tab
| Key | Action |
|---|---|
Enter | Focus match detail pane |
f | Cycle rule filter |
Esc | Clear rule filter / unfocus detail |
r | Refresh |
Log Query (Ctrl+G)
KQL-style query interface over captured logs (intercepted traffic, event logs, recon results, operations history, and more — 12 virtual tables in total). See Log Query for the full query reference.
- Multi-line editor with basic KQL keyword highlighting
- Ctrl+Enter runs the query; the spinner in the hint line indicates in-flight execution
- Tab opens a context-aware autocomplete popup (tables at the start of a query, operators after |, columns inside where/project/sort, functions & keywords inline); ↑/↓ navigate, Enter accepts, Esc dismisses
- ? toggles a schema sidebar listing every available table with its columns and descriptions
- Esc from the editor moves focus to the results; i from the results moves focus back to the editor
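An illustrative query in that style (TrafficLogs is one of the virtual tables; the column names here are hypothetical, so check the Log Query reference for the real schema):

```
TrafficLogs
| where Url contains "anthropic"
| project Timestamp, Node, Method, Url
| sort by Timestamp
```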
Results pane:
| Key | Action |
|---|---|
↑ ↓ PgUp PgDn g G | Row navigation |
Enter | Expand the selected row into a key/value detail pane (JSON fields pretty-printed) |
/ | Open a row-search filter (substring match across all cells) |
s | Cycle the sort column |
S | Toggle sort direction |
r | Re-run the last query |
Esc | Close expanded row / clear search / return to editor |
Response bodies in TrafficLogs and JSON columns like
ToolkitActionsLog.details_json auto-pretty-print in the detail pane.
Operations (Ctrl+P)
Operation and chain management with three tabs (Tab / Shift+Tab to switch):
- Executions — live tracking of running/queued/completed operations and chains with duration timers
- Library — browse operation and chain definitions with search filtering and detail view
- Triggers — automated chain firing rules, same feature set as the web UI
Common actions:
- Create new operations inline
- Run operations with node/agent selection and YOLO mode
- Create, edit, enable/disable and delete chain triggers
Triggers tab
Triggers fire a chain on a schedule, when an intercept rule matches, or when a new node connects. Each trigger picks a target chain, a trigger type, and a target spec (nodes + agents, with an optional OS substring filter and, for event triggers, an "include triggering node" toggle).
| Key | Action |
|---|---|
Enter | Toggle enabled/disabled for the selected trigger |
Ctrl+N | New trigger |
Ctrl+E | Edit selected trigger |
Ctrl+D | Delete selected trigger |
In the trigger form, ↑/↓ or Tab/Shift+Tab move between fields, ←/→ cycle picker options, Space/Enter toggle checkboxes and list items, Ctrl+S saves, and Esc cancels. The form is fully mouse-driven: click a row to focus or toggle it, or click the Ctrl+S/Esc hints in the hint bar to save or cancel.
Settings (Ctrl+S)
Configuration management:
- LLM — model definitions, provider selection, API keys, and feature assignment (orchestrator, semantic ops, semantic parser, traffic parser)
- Service — MCP server toggle, MCP port, Claude Bridge settings (CCRv1/CCRv2 enable and port configuration), logging, log query row limits, prompt timeout
- About — connection info
Mouse Support
The TUI supports mouse interactions across all windows:
- Click — select items in lists, tabs, and interactive elements
- Double-click — activate items (e.g. open an operation, select a node)
- Drag — scroll through lists and content areas
- Scroll wheel — scroll through lists, chat history, and scrollable content
Mouse interactions work alongside keyboard controls in all windows and popups.
Global Keybindings
| Key | Action |
|---|---|
Ctrl+O | Orchestrator window |
Ctrl+L | Nodes window |
Ctrl+I | Intercept window |
Ctrl+P | Operations window |
Ctrl+S | Settings window |
Ctrl+T | Toggle terminal mode |
Ctrl+Q | Quit |
Ctrl+W is window-scoped: in Nodes it toggles the Active Sessions
overlay (or pauses the current chat session), in Orchestrator it closes
the active orchestrator session.
Non-Interactive Mode
One-Shot Commands
Use -C to run a single command and exit:
praxis_cli -C "node list"
praxis_cli -C "session create --node abc123 --agent codex --yolo"
Direct Subcommands
Subcommands can also be passed directly:
praxis_cli node list
praxis_cli session create --node abc123 --agent codex --yolo
Available Commands
Node Management:
node list # List all connected nodes
node select <prefix> # Select node by ID prefix
node reset <prefix> # Reset a node
Agent Management:
agent list --node <prefix> # List agents on a node
agent update --node <prefix> # Request agent info update
agent config read --node <prefix> --agent <name> <path> # Read config file
agent config write --node <prefix> <path> <contents> # Write config file (agent-independent)
agent config grep --node <prefix> --agent <name> <path> <pattern> # Grep config file
agent session read --node <prefix> --agent <name> <file> # Read session file
agent session grep --node <prefix> --agent <name> <file> <pattern> # Grep session file
Session Management:
session create --node <prefix> --agent <name> [--yolo] [--project <path>] [--timeout <secs>]
session prompt --node <prefix> <text>
session close --node <prefix>
Every command that needs an agent takes --agent explicitly; ACP
sessions are per-agent, so the same node can host concurrent sessions
under different agents.
Non-interactive mode persists a single session id per node in
~/.praxis/cli.json — session create stores it, session prompt and
session close read it. The interactive TUI runs concurrent in-memory
sessions and does not share state with the non-interactive subcommands.
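A minimal sketch of that persistence behaviour, assuming a simple JSON shape (the real ~/.praxis/cli.json layout is not fully documented here, so the "sessions" key below is hypothetical):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical shape: one persisted session id per node prefix.
# Illustrates the "session create stores it, session prompt/close read it"
# behaviour described above, not the actual file format.
def store_session(state_file: Path, node: str, session_id: str) -> None:
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    state.setdefault("sessions", {})[node] = session_id
    state_file.write_text(json.dumps(state, indent=2))

def load_session(state_file: Path, node: str):
    if not state_file.exists():
        return None
    return json.loads(state_file.read_text()).get("sessions", {}).get(node)

state_file = Path(tempfile.mkdtemp()) / "cli.json"
store_session(state_file, "abc123", "ACP_42")   # what `session create` would persist
print(load_session(state_file, "abc123"))       # what `session prompt` would read
```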
Global Options
| Option | Description | Default |
|---|---|---|
-r, --rabbitmq | RabbitMQ URL | amqp://praxis:praxis@localhost:5672 |
-t, --timeout | Connection/command timeout in seconds | 600 |
-C, --command | Run a single command and exit | - |
--acp | Run as an ACP bridge (stdin/stdout proxy) | - |
--clear | Clear local state and exit | - |
--status | Check service connection status | - |
The RabbitMQ URL can also be set via the PRAXIS_RABBITMQ_URL environment variable.
ACP Bridge Mode
The CLI can act as an Agent Client Protocol bridge, exposing the Praxis service as a standard ACP agent over stdin/stdout. This allows any ACP-compatible client to interact with Praxis.
praxis_cli --acp
In this mode the CLI:
- Reads NDJSON JSON-RPC requests from stdin
- Forwards them to the Praxis service via RabbitMQ
- Writes JSON-RPC responses and notifications to stdout as NDJSON
- Only forwards responses to requests it originated (filters out other clients' traffic)
This means any ACP client can use Praxis as its agent. For example, using acpx:
acpx --agent 'praxis_cli --acp' 'list agents'
The bridge connects with an acp_ prefixed client ID, so sessions created through it get ACP_ prefixed session IDs.
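The id-filtering behaviour in the bullets above can be sketched as follows; the message-shape checks are assumptions based on standard JSON-RPC 2.0 framing, not Praxis's actual bridge code.

```python
import json

# Minimal sketch of "only forwards responses to requests it originated":
# track outstanding JSON-RPC ids and drop responses addressed to others.
class BridgeFilter:
    def __init__(self):
        self.pending = set()

    def on_outgoing(self, line: str) -> None:
        msg = json.loads(line)
        if "id" in msg and "method" in msg:   # a request we originated
            self.pending.add(msg["id"])

    def on_incoming(self, line: str) -> bool:
        """Return True if this NDJSON line should be forwarded to stdout."""
        msg = json.loads(line)
        if "method" in msg:                   # notifications pass through
            return True
        if msg.get("id") in self.pending:     # response to one of our requests
            self.pending.discard(msg["id"])
            return True
        return False                          # another client's traffic

f = BridgeFilter()
f.on_outgoing('{"jsonrpc":"2.0","id":1,"method":"session/list"}')
print(f.on_incoming('{"jsonrpc":"2.0","id":1,"result":[]}'))   # True
print(f.on_incoming('{"jsonrpc":"2.0","id":7,"result":[]}'))   # False
```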
Local State
The CLI stores persistent state in ~/.praxis/cli.json. This file contains:
- client_id: A unique identifier for this CLI instance, used for RabbitMQ queue routing
The client ID is generated on first run and reused for subsequent executions.
To reset local state:
praxis_cli --clear
Agent Connectors Overview
Agent connectors are the modules that let Praxis interact with specific AI agents. Each connector knows how to fingerprint, intercept, and communicate with a particular agent type.
What Connectors Do
A connector handles four main capabilities:
Fingerprinting - Detecting whether an agent is installed, finding its executable path, and extracting its version. The helpers.find_executable Lua helper searches PATH, explicit directories, and version manager installations. Version is extracted by running --version and parsing the output.
Interception - Knowing which domains the agent talks to so traffic can be captured.
Reconnaissance - Discovering the agent's configuration, tools, and session history. This includes parsing config files, finding MCP server definitions, and locating past conversations.
Sessions - Creating interactive sessions where prompts can be sent and responses received. Different agents need different approaches: CLI agents can be spawned in a PTY, while browser-based agents need DevTools or UI automation.
Current Connectors
| Connector | Agent | Platform | Session Mode | Type |
|---|---|---|---|---|
claude-bridge | Claude Code (inbound) | Any | CCRv1 (WS) / CCRv2 (HTTP+SSE) | Native |
claudecode | Claude Code CLI | Linux, Windows | CLI (PTY) | Lua |
claudedesktop | Claude Desktop | Windows only | DevTools (Electron) | Lua |
codex | Codex CLI (OpenAI) | Linux, Windows | CLI | Lua |
cursor | Cursor Agent CLI | Linux only | CLI | Lua |
gemini | Gemini CLI | Linux, Windows | CLI | Lua |
m365copilot | Microsoft 365 Copilot | Windows only | DevTools | Lua |
Want to add support for another agent? Contributions welcome! See Adding New Connectors.
Note: Agent implementations change over time. Connectors may break when agents update and will require maintenance to work with the latest versions.
The Trait System
Connectors implement a set of Rust traits:
// Required: core agent functionality
trait Agent {
    fn name(&self) -> &str;
    fn short_name(&self) -> &str;
    async fn do_fingerprint(&self) -> bool;   // cached for 60s when available
    fn version(&self) -> Option<String>;      // extracted during fingerprinting
    fn create_session(&self, context: &SessionContext) -> Option<Arc<dyn AgentSession>>;
    // ...
}

// Required for sessions: session management
trait AgentSession {
    fn session_id(&self) -> &Uuid;
    fn transact(&self, prompt: &str) -> Result<String>;
    fn close(&self);
    // ...
}

// Optional: traffic interception support
trait AgentIntercept {
    fn intercept_domains(&self) -> Vec<&str>;
    fn intercept_url_pattern(&self) -> Option<&str>;
}

// Optional: reconnaissance support
trait AgentRecon {
    async fn perform_recon(&self, is_semantic: bool) -> Option<ReconResult>;
}
Feature Support
Not all agents support all features. The core capabilities - fingerprinting, traffic interception, static recon, semantic recon, and sessions - are supported by most connectors. However, some features depend on how the agent works:
Config editing requires the agent to have a file-based configuration that can be modified. CLI agents typically store settings in JSON files that can be edited directly. Browser-based agents often don't expose their configuration in an editable format.
MCP discovery only applies to agents that support the Model Context Protocol for tool extensions.
Lua-Based Connectors
In addition to compiled Rust connectors, Praxis supports writing agent connectors in Lua. Lua scripts are stored in the service database and pushed to nodes via the agent registry.
Default Scripts
Default Lua agent scripts live in the agents/ directory at the project root. These are embedded into both the node and service binaries at build time:
- Node: Scripts from agents/ are compiled into the node binary and loaded on startup as fallback connectors.
- Service: Scripts are embedded and seeded into the lua_agent_scripts database table on first startup. Built-in scripts are tagged with the current Praxis version.
When Praxis is upgraded to a newer version, built-in scripts are automatically updated to the latest version. User-added scripts are never modified by updates.
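A sketch of that upgrade rule, using an illustrative dict in place of the real lua_agent_scripts table:

```python
# Sketch of the rule described above: built-in scripts are re-seeded when
# the Praxis version changes; user scripts are never touched. The dict
# shape is illustrative, not the actual database schema.
def upgrade_scripts(db: dict, builtins: dict, new_version: str) -> None:
    for name, source in builtins.items():
        row = db.get(name)
        if row is None or (row["builtin"] and row["version"] != new_version):
            db[name] = {"builtin": True, "version": new_version, "source": source}
        # rows with builtin=False (user scripts) are skipped

db = {
    "gemini": {"builtin": True, "version": "1.0", "source": "old"},
    "custom": {"builtin": False, "version": "1.0", "source": "mine"},
}
upgrade_scripts(db, {"gemini": "new", "custom": "shipped"}, "1.1")
print(db["gemini"]["source"])  # built-in updated to "new"
print(db["custom"]["source"])  # user script still "mine"
```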
Built-in vs User Scripts
Scripts are tagged as either built-in or user. Built-in scripts ship with Praxis and are automatically updated when the service version changes. User scripts are created through the web UI or uploaded manually and are never overwritten by updates.
Built-in scripts show a "builtin" badge in the web UI script list.
Note: If you need to customize a built-in script, the recommended approach is to:
- Create a new script with your modifications (Settings > Agents > Upload or create new)
- Disable the original built-in script using the toggle in the script list
- Your custom script will be used instead and won't be overwritten on updates
Editing a built-in script directly is possible but not recommended, as your changes will be replaced on the next Praxis update.
Disabling Scripts
Scripts can be individually enabled or disabled via the toggle icon in the script list. Disabled scripts are not sent to nodes, so the agents they define won't be available. This is useful for:
- Temporarily removing an agent without deleting the script
- Replacing a built-in script with a custom version
- Testing by toggling scripts on and off
Managing Scripts
Lua agent scripts can be managed through the Agents tab in the Settings page of the web UI. From there you can:
- View and edit existing scripts
- Upload new .lua scripts
- Enable or disable individual scripts
- Delete scripts
- Reset all scripts back to the built-in defaults
When scripts are modified in the database, the service broadcasts an agent registry update to all connected nodes so they reload the latest scripts.
Adding New Connectors
Want to add support for another agent? See Adding New Connectors for a step-by-step guide.
For Rust connectors, the basic process is:
- Create a directory under node/src/agent_connectors/
- Implement the Agent trait
- Add fingerprinting logic
- Implement interception domains (if applicable)
- Add reconnaissance (parsing config, finding sessions)
- Implement session management
- Register in the factory
For Lua connectors, add a .lua file to the agents/ directory or upload it through the web UI.
Connector Selection
When a node starts, it runs fingerprinting for all registered connectors. Any agent that fingerprints successfully gets added to the node's agent list and reported to the service. Agent version is also extracted and displayed in the web UI.
Fingerprint results are cached for 60 seconds when the agent is available. Agents that are not found are re-checked on every cycle so they are discovered as soon as they are installed.
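The caching rule can be sketched like this (timestamps are passed explicitly for clarity; the real node presumably tracks time internally):

```python
# Sketch of the caching rule described above: positive fingerprints are
# cached for 60 s, while "not found" results are never cached, so a newly
# installed agent is picked up on the next cycle.
class FingerprintCache:
    TTL = 60.0

    def __init__(self, probe):
        self.probe = probe        # callable returning True if the agent is found
        self.found_at = None

    def check(self, now: float) -> bool:
        if self.found_at is not None and now - self.found_at < self.TTL:
            return True           # cached positive result
        if self.probe():
            self.found_at = now
            return True
        self.found_at = None      # negatives are re-probed every cycle
        return False

calls = []
cache = FingerprintCache(lambda: calls.append(1) or True)
cache.check(0.0)    # probes
cache.check(30.0)   # served from cache
cache.check(90.0)   # TTL expired, probes again
print(len(calls))   # 2
```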
All connectors (Claude Code, Claude Desktop, Codex, Cursor, Gemini, M365 Copilot) are Lua-based and loaded from embedded scripts or the service database. GUI-based agents like Claude Desktop (Electron) and M365 Copilot (WebView) use the praxis.cdp_* native API and praxis.devtools Lua library for Chrome DevTools Protocol interaction.
Development Builds
In debug builds, the environment variable PRAXIS_IGNORE_SERVICE_AGENTS controls whether the node uses Lua scripts pushed from the service or only its embedded scripts. It defaults to 1 (ignore service scripts) for development convenience. Set it to 0 to test service-managed scripts:
PRAXIS_IGNORE_SERVICE_AGENTS=0 cargo run --bin praxis_node
Adding New Connectors
This guide walks through creating a connector for a new AI agent.
Prefer Lua connectors for all agents. Lua scripts are easier to write, can be updated at runtime via the web UI without recompiling, and share common helpers for executable discovery, version extraction, and multi-user support. For browser-based agents, the praxis.devtools Lua library and praxis.cdp_* native API provide Chrome DevTools Protocol support (see M365 Copilot as an example). Use Rust connectors only when you need OS-level capabilities that aren't exposed through the Lua API.
Lua Connector (Recommended)
Lua agent scripts live in agents/ at the project root and are embedded into binaries at build time. They can also be uploaded via the web UI (Settings > Agents).
Tip: Scripts uploaded or created through the web UI are tagged as user scripts and won't be overwritten by Praxis updates. If you want to customize a built-in script, create a copy with your changes and disable the original.
CLI Agents vs Browser-Based Agents
For CLI agents (e.g. Claude Code, Gemini CLI), use praxis.command_run / praxis.command_run_handle to spawn processes and interact via stdin/stdout. For agents that support the Agent Client Protocol (ACP), use the praxis.acp_* APIs for long-lived subprocess sessions with real-time streaming (see ACP Sessions below).
For browser-based agents (e.g. M365 Copilot), use the praxis.devtools library and praxis.cdp_* native API to drive the agent via Chrome DevTools Protocol. See DevTools-Based Agents below.
Script Structure
A Lua connector returns a table with name, short_name, and callback functions. For CLI agents, follow the same high-level structure used by agents/gemini.lua:
local helpers = require("praxis.helpers")
local AGENT_NAME = "Example AI"
local AGENT_SHORT_NAME = "exampleai"
local INTERCEPT_DOMAINS = { "api.exampleai.com" }
local function verify_binary(path)
local result = praxis.command_run({ program = path, args = { "--version" } })
if result.success then
local version = (result.stdout or ""):match("(%d[%d%.%-a-zA-Z]*)")
return true, version
end
return false, nil
end
local function pick_path()
return helpers.find_executable({
name = "exampleai",
global_dirs = {
default = { "/usr/local/bin", "/usr/bin" },
},
home_dirs = {
default = { "${HOME}/.local/bin" },
windows = { "${USERPROFILE}\\.local\\bin" },
},
verify = verify_binary,
})
end
return {
name = AGENT_NAME,
short_name = AGENT_SHORT_NAME,
fingerprint = function(_ctx)
local process_path, process_version = pick_path()
return {
available = process_path ~= nil,
process_path = process_path,
version = process_version,
}
end,
-- Optional: traffic interception domains.
intercept_domains = function(_ctx)
return INTERCEPT_DOMAINS
end,
-- Optional but recommended: reconnaissance.
-- Use run_standard_recon + declarative recon_config.
recon = function(ctx)
return helpers.run_standard_recon(ctx, recon_config)
end,
-- Required for sessions.
create_session = function(ctx)
return {
handle = praxis.uuid_v4(),
process_path = ctx.process_path,
working_dir = ctx.working_dir,
yolo_mode = ctx.yolo_mode == true,
}
end,
session_transact = function(_ctx, state, prompt)
local result = praxis.command_run_handle({
program = state.process_path,
args = { "--prompt", "-" },
cwd = state.working_dir,
stdin = prompt,
}, state.handle)
return { response = result.stdout or "", state = state }
end,
session_close = function(_ctx, state)
-- Cleanup if needed.
end,
}
Recommended pattern for recon config (same style as Gemini/Cursor/ClaudeCode):
local recon_config = {
home_dir = ".exampleai",
home_configs = {
{ path = ".exampleai/settings.json", type = "global_settings", mcp = true },
},
project_markers = { "/.exampleai/settings.json" },
project_configs = {
{ path = ".exampleai/settings.json", type = "project_settings", mcp = true },
},
mcp_parsers = {
default = helpers.parse_mcp_from_json_flexible,
},
auth_check = path_has_valid_auth,
session_discovery = discover_sessions_for_home,
session_fns = {
create = run_create_session,
transact = run_session_transact,
close = run_session_close,
},
}
Key points:
- recon receives a context object: recon = function(ctx) ... end
- Semantic vs non-semantic recon is driven by ctx.is_semantic inside helpers
- Avoid mutable global process state; return process_path from fingerprint and consume it via ctx.process_path
- Every ACP session gets its own Lua VM loaded from compiled bytecode, so Lua globals are not shared between sessions. Keep all per-session state in the state table returned by create_session; do not stash it in module-level Lua variables expecting to read it back in session_transact.
helpers.find_executable Config
The find_executable helper searches for an agent binary in 4 phases:
- PATH search via praxis.find_executables(name) - searches the system PATH
- Global directories - explicit absolute paths (e.g. /usr/local/bin)
- Home directories - templates expanded per user home (e.g. ${HOME}/.local/bin)
- Glob patterns - for version manager installations (e.g. nvm, mise)
On Windows, .cmd is tried before .exe for each directory. The verify function receives a candidate path and returns (passed, version).
Config fields:
- name (string) - executable name for PATH search and path construction
- global_dirs (table) - { default = {...}, windows = {...} } absolute directories
- home_dirs (table) - same shape, directory templates with ${HOME} etc.
- glob_paths (table) - full glob patterns (wildcards embedded in path)
- verify (function) - fn(path) -> passed, version
OS resolution: tbl[os_name] or tbl.default or {} where os_name is "linux", "macos", or "windows".
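Mirrored in Python for clarity (behaviour only; the actual helper is Lua):

```python
# Python rendering of the Lua rule `tbl[os_name] or tbl.default or {}`
# used by helpers.find_executable for global_dirs / home_dirs / glob_paths.
def resolve_for_os(tbl: dict, os_name: str) -> list:
    if os_name in tbl:
        return tbl[os_name]       # OS-specific entry wins
    return tbl.get("default", []) # fall back to default, then empty

home_dirs = {
    "default": ["${HOME}/.local/bin"],
    "windows": ["${USERPROFILE}\\.local\\bin"],
}
print(resolve_for_os(home_dirs, "windows"))  # the windows list
print(resolve_for_os(home_dirs, "linux"))    # the default list
print(resolve_for_os({}, "macos"))           # []
```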
Available Lua APIs
The praxis global provides:
- Filesystem: path_exists, path_join, read_file, walk_files, glob_files
- Commands: command_run, command_run_handle, command_abort_handle
- ACP: acp_start, acp_create_session, acp_prompt, acp_close
- Environment: os_name, user_homes, env_get, expand_path
- Process: find_executables, kill_processes_by_name
- CDP: cdp_spawn_and_connect, cdp_connect, cdp_evaluate, cdp_click, cdp_type_text, cdp_press_key, cdp_wait_for_element, cdp_find_elements, cdp_close, cdp_process_id
- Utilities: json_decode, toml_decode, uuid_v4, now_unix, sleep_ms, log_info, log_warn
The helpers module (require("praxis.helpers")) provides find_executable, expand_path, starts_with, ends_with, dedup, parse_json, parse_toml, user_homes_with_dir, for_each_user_home_coalesce, run_standard_recon, collect_configs, extract_mcp_servers, and parser helpers such as parse_mcp_from_json, parse_mcp_from_json_flexible, and parse_mcp_from_toml.
The devtools module (require("praxis.devtools")) provides connect, transact, and close for browser-based agents using Chrome DevTools Protocol. See DevTools-Based Agents below.
Deploying
- Embedded: Add the .lua file to agents/ and rebuild. It will be compiled into both node and service binaries.
- Runtime: Upload via Settings > Agents in the web UI. The script is stored in the service database and pushed to all connected nodes.
ACP Sessions (Streaming Agents)
For agents that support the Agent Client Protocol (ACP), sessions use a long-lived subprocess with JSON-RPC 2.0 over NDJSON stdio. Praxis uses the agent-client-protocol crate internally, providing typed ClientSideConnection communication with Client trait callbacks for real-time streaming updates (text chunks, tool calls, plans, permission requests).
ACP Lua API
| Function | Arguments | Returns | Description |
|---|---|---|---|
praxis.acp_start | spec table | handle (string) | Spawn an ACP subprocess and perform the initialize handshake |
praxis.acp_create_session | handle, cwd | session_id (string) | Create an ACP session with a working directory |
praxis.acp_prompt | handle, prompt, yolo, interactive | response (string) | Send a prompt and wait for the streamed response. yolo auto-approves permission requests; interactive forwards them to the user |
praxis.acp_close | handle | — | Close the ACP session and terminate the subprocess |
The acp_start spec table:
| Field | Type | Description |
|---|---|---|
program | string | Path to the agent executable |
args | table | Command-line arguments (e.g. { "acp" } or { "--acp" }) |
cwd | string | Working directory for the subprocess |
Example
create_session = function(ctx)
local acp_handle = praxis.acp_start({
program = ctx.process_path,
args = { "--acp" },
cwd = ctx.working_dir or "",
})
local session_id = praxis.acp_create_session(acp_handle, ctx.working_dir or "")
return {
acp_handle = acp_handle,
acp_session_id = session_id,
yolo_mode = ctx.yolo_mode == true,
interactive = ctx.interactive == true,
}
end,
session_transact = function(_ctx, state, prompt)
local response = praxis.acp_prompt(
state.acp_handle, prompt,
state.yolo_mode or false,
state.interactive or false
)
return { response = response, state = state }
end,
session_close = function(_ctx, state)
if state.acp_handle then
praxis.acp_close(state.acp_handle)
end
end,
During acp_prompt, streaming updates (text, tool calls, tool results) are automatically forwarded to the client (TUI or web UI) in real time. The function blocks until the full response is assembled and returns the final text.
DevTools-Based Agents (Browser Automation)
For agents that run in a browser or WebView (e.g. M365 Copilot), Praxis provides a CDP (Chrome DevTools Protocol) stack. The architecture has three layers:
your_agent.lua ← Agent-specific: CSS selectors, response parsing
↓ uses
require("praxis.devtools") ← Generic transact loop, connect/close lifecycle
↓ uses
praxis.cdp_* ← Native Rust: CDP connection, JS eval, DOM ops
The devtools Module
require("praxis.devtools") provides three functions:
| Function | Description |
|---|---|
devtools.connect(config) | Spawn a process with a debug port, connect via CDP, return a handle string |
devtools.transact(handle, adapter, prompt) | Send a prompt and poll for response using the adapter's selectors |
devtools.close(handle) | Close the CDP connection and terminate the process tree |
The connect config table:
| Field | Type | Description |
|---|---|---|
process_path | string | Path to the executable |
debug_port_env_var | string | Environment variable for the debug port argument |
debug_port_format | string | Format string, e.g. "--remote-debugging-port={}" |
base_port | number | Base port number (random offset added) |
port_range | number | Range for random port selection (default 778) |
kill_existing | bool | Kill existing processes first (default true) |
use_hidden_desktop | bool | Spawn on hidden desktop on Windows (default true). In debug builds, PRAXIS_NOT_HIDDEN defaults to 1 (visible); in release builds it defaults to 0 (hidden). |
The Adapter Table
The transact function takes an adapter table that defines how to interact with the specific agent's UI:
local my_adapter = {
-- CSS selector for the text input element (required)
input_selector = '#chat-input',
-- CSS selector for response message elements (required)
message_selector = 'div.response-message',
-- Check response state by running JS in the page (required)
-- Returns: { response = string|nil, is_generating = bool, has_new_messages = bool }
check_response_state = function(handle, initial_count)
local result = praxis.cdp_evaluate(handle, [[
(function() {
var messages = document.querySelectorAll('div.response-message');
var text = '';
if (messages.length > 0) {
text = messages[messages.length - 1].innerText.trim();
}
var loading = document.querySelector('.loading-indicator');
return {
responseText: text,
messageCount: messages.length,
isGenerating: loading !== null
};
})()
]])
local count = (result and result.messageCount) or 0
local generating = (result and result.isGenerating) or false
local text = (result and result.responseText) or ""
local response = nil
if count > initial_count and not generating and #text > 0 then
response = text
end
return {
response = response,
is_generating = generating,
has_new_messages = count > initial_count,
}
end,
-- Optional: wait for submit button to be enabled before pressing Enter
wait_for_submit_ready = function(handle)
praxis.cdp_wait_for_element(handle, 'button.send:not([disabled])', 50, 100)
end,
}
Full Example
Here is an M365-style DevTools-based agent template:
local helpers = require("praxis.helpers")
local devtools = require("praxis.devtools")
local AGENT_NAME = "My DevTools Agent"
local AGENT_SHORT_NAME = "mydevtools"
local PROCESS_NAME = "MyAgent.exe"
local INPUT_SELECTOR = '#chat-input'
local MESSAGE_SELECTOR = 'div.assistant-message'
local SEND_BUTTON_SELECTOR = 'button[aria-label=\"Send\"]:not([aria-disabled=\"true\"])'
local STOP_BUTTON_SELECTOR = 'button[aria-label=\"Stop generating\"]'
local my_adapter = {
input_selector = INPUT_SELECTOR,
message_selector = MESSAGE_SELECTOR,
check_response_state = function(handle, initial_count)
local js = "(function() {"
.. "var msgs = document.querySelectorAll('" .. MESSAGE_SELECTOR .. "');"
.. "var text = '';"
.. "if (msgs.length > 0) {"
.. " var last = msgs[msgs.length - 1];"
.. " text = (last.innerText || last.textContent || '').trim();"
.. "}"
.. "var stopBtn = document.querySelector('" .. STOP_BUTTON_SELECTOR .. "');"
.. "return { responseText: text, messageCount: msgs.length, isGenerating: stopBtn !== null };"
.. "})()"
local result = praxis.cdp_evaluate(handle, js)
local message_count = (result and result.messageCount) or 0
local is_generating = (result and result.isGenerating) or false
local response_text = (result and result.responseText) or ""
local has_new_messages = message_count > initial_count
local response = nil
if has_new_messages and not is_generating and #response_text > 0 then
response = response_text
end
return {
response = response,
is_generating = is_generating,
has_new_messages = has_new_messages,
}
end,
wait_for_submit_ready = function(handle)
praxis.cdp_wait_for_element(handle, SEND_BUTTON_SELECTOR, 100, 100)
end,
}
local function post_initialize(handle, _working_dir)
-- Wait for the chat UI to be ready.
praxis.cdp_wait_for_element(handle, INPUT_SELECTOR, 30, 300)
-- Optional: click mode toggle, open fresh chat, dismiss banners, etc.
-- pcall(praxis.cdp_click, handle, 'button[data-testid=\"new-chat\"]')
end
local function run_create_session(ctx)
praxis.kill_processes_by_name(PROCESS_NAME)
praxis.sleep_ms(500)
local cdp_handle = devtools.connect({
process_path = ctx.process_path,
debug_port_env_var = "WEBVIEW2_ADDITIONAL_BROWSER_ARGUMENTS",
debug_port_format = "--remote-debugging-port={}",
base_port = 9222,
port_range = 778,
})
post_initialize(cdp_handle, ctx.working_dir)
return {
handle = cdp_handle,
cdp_handle = cdp_handle,
working_dir = ctx.working_dir,
process_id = praxis.cdp_process_id(cdp_handle),
}
end
local function run_session_transact(state, prompt)
local response = devtools.transact(state.cdp_handle, my_adapter, prompt)
return { response = response, state = state }
end
local function run_session_close(state)
if state and state.cdp_handle then
devtools.close(state.cdp_handle)
end
end
local function do_recon(ctx)
if praxis.os_name() ~= "windows" then
return nil
end
local internal_tools = {}
if ctx.is_semantic == true then
internal_tools = helpers.discover_internal_tools(
{ process_path = ctx.process_path, working_dir = nil },
{ create = run_create_session, transact = run_session_transact, close = run_session_close }
)
end
return {
tools = { internal_tools = internal_tools, mcp_servers = {}, skills = {} },
project_paths = {},
metadata = nil,
}
end
local function do_fingerprint()
if praxis.os_name() ~= "windows" then
return nil
end
local paths = praxis.find_executables(PROCESS_NAME) or {}
if #paths > 0 then
return paths[1]
end
return nil
end
return {
name = AGENT_NAME,
short_name = AGENT_SHORT_NAME,
fingerprint = function(_ctx)
local path = do_fingerprint()
return { available = path ~= nil, process_path = path }
end,
recon = function(ctx)
return do_recon(ctx)
end,
create_session = function(ctx)
return run_create_session(ctx)
end,
session_transact = function(_ctx, state, prompt)
return run_session_transact(state, prompt)
end,
session_close = function(_ctx, state)
run_session_close(state)
end,
}
Session State Keys
For CDP sessions to support abort and cleanup, the session state returned by create_session should include:
- handle - used by the Rust session layer for command abort lookup
- cdp_handle - the CDP connection handle string (cleaned up by Rust on drop)
- process_id - the spawned process PID (killed by Rust on abort or drop)
CDP API Reference
Low-level functions available on the praxis global:
| Function | Arguments | Returns | Description |
|---|---|---|---|
| `cdp_spawn_and_connect` | config table | handle string | Spawn process, connect via CDP |
| `cdp_connect` | port (number) | handle string | Connect to existing DevTools endpoint |
| `cdp_evaluate` | handle, js (string) | value | Execute JavaScript, return result |
| `cdp_find_elements` | handle, selector | count (number) | Count matching DOM elements |
| `cdp_click` | handle, selector | — | Click an element |
| `cdp_type_text` | handle, text | — | Insert text via CDP InsertText (handles emojis) |
| `cdp_press_key` | handle, selector, key | — | Press a key on an element |
| `cdp_wait_for_element` | handle, selector, retries, delay_ms | bool | Poll for element existence |
| `cdp_close` | handle | — | Close connection, terminate process |
| `cdp_process_id` | handle | number or nil | Get PID of spawned process |
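Chained together, a typical interaction might look like the following sketch. The selector strings and config contents are hypothetical; the call signatures follow the table above:

```lua
-- Hypothetical drive of an Electron-style agent over CDP.
local handle = praxis.cdp_spawn_and_connect({ --[[ agent-specific config ]] })

-- Poll up to 30 times, 500 ms apart, for a (hypothetical) input element.
if praxis.cdp_wait_for_element(handle, "div.chat-input", 30, 500) then
  praxis.cdp_click(handle, "div.chat-input")
  praxis.cdp_type_text(handle, "Hello from Praxis")
  praxis.cdp_press_key(handle, "div.chat-input", "Enter")
  -- Read the page text back once the agent has responded.
  local text = praxis.cdp_evaluate(handle, "document.body.innerText")
end

praxis.cdp_close(handle) -- closes the connection and terminates the process
```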
Rust Connector (for native/OS-level agents)
Use this approach only when Lua cannot access the required OS capabilities.
Step 1: Create the Directory Structure
Create a new directory under node/src/agent_connectors/:
node/src/agent_connectors/
├── exampleai/
│ ├── mod.rs # Main agent implementation
│ ├── fingerprint.rs # Fingerprinting logic
│ ├── intercept.rs # Interception domains
│ ├── recon.rs # Reconnaissance
│ └── session.rs # Session management
├── factory.rs
├── mod.rs
└── traits.rs
Step 2: Implement the Agent Trait
In mod.rs:
mod fingerprint;
mod intercept;
mod recon;
mod session;

pub use session::ExampleAISession;

use crate::agent_connectors::traits::{Agent, AgentIntercept, AgentRecon, AgentSession};
use async_trait::async_trait;
use common::SessionContext;
use once_cell::sync::OnceCell;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use uuid::Uuid;

const AGENT_NAME: &str = "ExampleAI";
const AGENT_SHORTNAME: &str = "exampleai";

pub struct ExampleAIAgent {
    pub(crate) process_path: OnceCell<String>,
    // Per-session state keyed by the ACP session_id handed in by
    // the node's ACP server. Nothing is shared between sessions.
    sessions: Mutex<HashMap<Uuid, Arc<dyn AgentSession>>>,
}

impl ExampleAIAgent {
    pub fn new() -> Self {
        Self {
            process_path: OnceCell::new(),
            sessions: Mutex::new(HashMap::new()),
        }
    }
}

#[async_trait]
impl Agent for ExampleAIAgent {
    fn name(&self) -> &str { AGENT_NAME }
    fn short_name(&self) -> &str { AGENT_SHORTNAME }

    fn as_intercept(&self) -> Option<&dyn AgentIntercept> {
        Some(self) // Return None if no interception support
    }

    fn as_recon(&self) -> Option<&dyn AgentRecon> {
        Some(self) // Return None if no recon support
    }

    async fn do_fingerprint(&self) -> bool {
        self.do_fingerprint_impl().await
    }

    fn create_session_with_id(
        &self,
        context: &SessionContext,
        session_id: Uuid,
    ) -> Option<Arc<dyn AgentSession>> {
        match ExampleAISession::new(self.process_path.get().cloned(), context, session_id) {
            Ok(session) => {
                let session_arc: Arc<dyn AgentSession> = Arc::new(session);
                self.sessions.lock().unwrap().insert(session_id, Arc::clone(&session_arc));
                Some(session_arc)
            }
            Err(e) => {
                common::log_error!("{}: Failed to create session: {}", AGENT_NAME, e);
                None
            }
        }
    }

    fn drop_session(&self, session_id: Uuid) {
        if let Some(session) = self.sessions.lock().unwrap().remove(&session_id) {
            session.close();
        }
    }
}
The Agent trait has two session-related hooks:
- `create_session_with_id(ctx, session_id)` — called once per `session/new` ACP request. The node's ACP server chooses the `session_id`; the agent must build a session that does not share mutable state with any other session.
- `drop_session(session_id)` — called on `session/close` (and on node reset). Release per-session resources keyed by that id.
Step 3: Implement Fingerprinting
In fingerprint.rs:
use super::ExampleAIAgent;
use std::path::PathBuf;

impl ExampleAIAgent {
    pub(crate) async fn do_fingerprint_impl(&self) -> bool {
        // Check for config file
        if let Some(config_path) = find_config_file() {
            common::log_info!("ExampleAI: Found config at {:?}", config_path);
            // Optionally find and cache the binary path
            if let Some(binary_path) = find_binary() {
                let _ = self.process_path.set(binary_path);
            }
            return true;
        }
        // Check for running process
        if is_process_running("exampleai") {
            return true;
        }
        false
    }
}

fn find_config_file() -> Option<PathBuf> {
    let home = dirs::home_dir()?;
    // Check common config locations
    let paths = [
        home.join(".exampleai/config.json"),
        home.join(".config/exampleai/config.json"),
    ];
    paths.into_iter().find(|p| p.exists())
}

fn find_binary() -> Option<String> {
    which::which("exampleai").ok().map(|p| p.to_string_lossy().to_string())
}

fn is_process_running(name: &str) -> bool {
    // Platform-specific process detection
    // ...
    false
}
Step 4: Implement Interception
In intercept.rs:
use super::ExampleAIAgent;
use crate::agent_connectors::traits::AgentIntercept;

impl AgentIntercept for ExampleAIAgent {
    fn intercept_domains(&self) -> Vec<&str> {
        vec!["api.exampleai.com"]
    }

    fn intercept_url_pattern(&self) -> Option<&str> {
        // Optional: regex to filter which URLs to capture
        Some("v1/chat")
    }
}
Step 5: Implement Reconnaissance
In recon.rs:
use super::ExampleAIAgent;
use crate::agent_connectors::traits::AgentRecon;
use async_trait::async_trait;
use common::ReconResult;

#[async_trait]
impl AgentRecon for ExampleAIAgent {
    async fn perform_recon(&self, is_semantic: bool) -> Option<ReconResult> {
        let mut result = ReconResult::default();

        // Discover configuration files
        if let Some(config) = discover_config() {
            result.config.push(config);
        }
        // Discover tools/plugins
        result.tools = discover_tools();
        // Discover session history
        result.sessions = discover_sessions();

        // For semantic recon, use LLM to extract more info
        if is_semantic {
            // Request semantic parsing from service
            // ...
        }
        Some(result)
    }
}

fn discover_config() -> Option<common::ConfigItem> {
    // Parse config files, return structured data
    None
}

fn discover_tools() -> common::ReconTools {
    // Find plugins, extensions, MCP servers
    common::ReconTools::default()
}

fn discover_sessions() -> Vec<common::SessionItem> {
    // Find session history files
    Vec::new()
}
Step 6: Implement Session Management
In session.rs:
use crate::agent_connectors::traits::{AgentMode, AgentSession};
use anyhow::Result;
use common::SessionContext;
use uuid::Uuid;

pub struct ExampleAISession {
    session_id: Uuid,
    process_path: Option<String>,
    working_dir: Option<String>,
    pty: Option<PtyHandle>, // Your PTY abstraction
}

impl ExampleAISession {
    pub fn new(
        process_path: Option<String>,
        context: &SessionContext,
        session_id: Uuid,
    ) -> Result<Self> {
        // Spawn the agent process
        let mut cmd = std::process::Command::new(
            process_path.as_deref().unwrap_or("exampleai")
        );
        if let Some(ref dir) = context.working_dir {
            cmd.current_dir(dir);
        }
        if context.yolo_mode {
            cmd.arg("--auto-approve");
        }
        // Create PTY and spawn
        let pty = create_pty_session(cmd)?;
        Ok(Self {
            session_id,
            process_path,
            working_dir: context.working_dir.clone(),
            pty: Some(pty),
        })
    }
}

impl AgentSession for ExampleAISession {
    fn session_id(&self) -> &Uuid { &self.session_id }
    fn process_path(&self) -> Option<String> { self.process_path.clone() }
    fn working_dir(&self) -> Option<String> { self.working_dir.clone() }
    fn mode(&self) -> AgentMode { AgentMode::Cli }

    fn transact(&self, prompt: &str) -> Result<String> {
        // Send prompt to PTY stdin, wait for and parse response,
        // then return the assistant's message
        if let Some(ref pty) = self.pty {
            pty.write(prompt)?;
            let response = pty.read_until_complete()?;
            Ok(parse_response(&response))
        } else {
            Err(anyhow::anyhow!("No PTY available"))
        }
    }

    fn close(&self) {
        if let Some(ref pty) = self.pty {
            pty.close();
        }
    }

    fn as_any(&self) -> &dyn std::any::Any { self }
}
Step 7: Register in Factory
Update node/src/agent_connectors/factory.rs:
use super::exampleai::ExampleAIAgent; // Add import

impl AgentFactory {
    pub fn create_all_agents(&self) -> Vec<Arc<dyn Agent>> {
        let mut agents: Vec<Arc<dyn Agent>> = Vec::new();
        agents.push(Arc::new(ClaudeCodeAgent::new()));
        agents.push(Arc::new(GeminiAgent::new()));
        // Add your new agent
        agents.push(Arc::new(ExampleAIAgent::new()));
        #[cfg(windows)]
        agents.push(Arc::new(M365CopilotAgent::new()));
        agents
    }
}
Update node/src/agent_connectors/mod.rs:
pub mod exampleai; // Add this line
Step 8: Test
- Build the node: `cargo build -p praxis_node`
- Run with the target agent installed
- Check fingerprinting works
- Test reconnaissance
- Test session creation and prompts
- Test interception (if implemented)
Tips
Fingerprinting
- Be defensive - check multiple locations
- Handle missing files gracefully
- Log what you find for debugging
Sessions
- Handle terminal control sequences properly
- Parse output carefully - agents use different formats
- Implement proper cleanup on close
Recon
- Start with static discovery
- Add semantic recon for deeper analysis
- Cache results where appropriate
Testing
- Test without the agent installed (should not crash)
- Test with partial configuration
- Test session edge cases (timeouts, errors)
Claude Bridge (CCRv1 / CCRv2)
The Claude Bridge lets Claude Code connect directly to Praxis without a deployed node. Instead of Praxis spawning Claude as a child process, Claude connects inward to the service using Anthropic's Claude Code Router protocol. Each connection registers as a virtual node with an active session.
Overview
Traditional Praxis nodes discover Claude Code on the target machine, fingerprint it, and spawn it in a PTY for sessions. The Claude Bridge reverses this: the Praxis service listens on a port, and Claude Code connects to it as a remote worker. This is useful when:
- Claude is already running (e.g. in an IDE, desktop app, or cloud environment) and you want to bring it under Praxis control
- You want to avoid deploying a full Praxis node to the target machine
- You are building integrations that launch Claude Code with custom environment variables
The bridge implements two protocol versions that correspond to the two transport modes Claude Code supports.
Protocol Versions
CCRv1 (WebSocket)
CCRv1 uses a bidirectional WebSocket connection with newline-delimited JSON (NDJSON). This is the simpler protocol -- Claude connects via ws:// and all messages flow over a single WebSocket.
Default port: 8586
Wire format: Each message is JSON.stringify(msg) + "\n" sent as a WebSocket text frame. Multiple JSON objects may arrive in a single frame.
Handshake:
- Claude opens a WebSocket connection to the bridge
- Bridge sends an `initialize` control request
- Claude responds with `control_response` and `system/init`
- Bridge sends `set_permission_mode` (`bypassPermissions`)
- Bridge registers as a virtual node with the service
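As an illustration, the handshake could appear on the wire as NDJSON frames like these. The field names are assumptions modeled on Claude Code's stream-json control messages, not taken from this document; only the message types named above are from the source:

```json
{"type":"control_request","request_id":"req-1","request":{"subtype":"initialize"}}
{"type":"control_response","response":{"subtype":"success","request_id":"req-1"}}
{"type":"system","subtype":"init","cwd":"/home/user/project","session_id":"9c1f4e2a-0b7d-4c3e-8a21-6f5d0e9b7a10"}
{"type":"control_request","request_id":"req-2","request":{"subtype":"set_permission_mode","mode":"bypassPermissions"}}
```

Each frame is a single newline-terminated JSON object, matching the `JSON.stringify(msg) + "\n"` wire format.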
CCRv2 (HTTP + SSE)
CCRv2 uses HTTP POST for client-to-server messages and Server-Sent Events (SSE) for server-to-client messages. This is the newer protocol used by Anthropic's cloud infrastructure.
Default port: 8587
Endpoints:
| Endpoint | Method | Purpose |
|---|---|---|
| `/worker` | GET | Returns worker metadata |
| `/worker` | PUT | Worker status updates (idle/processing) |
| `/worker/events` | POST | Batched messages from Claude to bridge |
| `/worker/events/stream` | GET | SSE stream from bridge to Claude |
| `/worker/internal-events` | POST | Internal events (ack with epoch check) |
| `/worker/heartbeat` | POST | Keep-alive (every ~20s from Claude) |
| `/worker/events/delivery` | POST | Event delivery confirmation |
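For quick manual inspection, the endpoints can be probed with curl against a local bridge. The paths come from the table above; the bearer token is a dummy value, since the bridge does not validate tokens, and the exact response bodies are not specified here:

```shell
# Fetch worker metadata from a local CCRv2 bridge
curl -s http://localhost:8587/worker -H "Authorization: Bearer local-token"

# Watch the server-to-client SSE stream (stays open; -N disables buffering)
curl -N http://localhost:8587/worker/events/stream -H "Authorization: Bearer local-token"
```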
Epoch tracking: CCRv2 uses a worker_epoch integer that appears in every request. If a stale worker reconnects with an old epoch, the server returns 409 Conflict and Claude exits. This prevents ghost sessions from interfering with new ones.
Disconnect detection: If no activity is received for 45 seconds (heartbeats normally arrive every 20s), the bridge treats the worker as disconnected and tears down the session. SSE disconnection also triggers immediate teardown.
Enabling the Bridge
Both bridge versions are disabled by default. Enable them in the web UI under Settings > Claude Bridge, or in the CLI TUI under Settings (Ctrl+S) > Service tab.
| Setting | Default | Description |
|---|---|---|
| CCRv1 Enabled | false | Enable the WebSocket bridge listener |
| CCRv1 Port | 8586 | Port for WebSocket connections |
| CCRv2 Enabled | false | Enable the HTTP+SSE bridge listener |
| CCRv2 Port | 8587 | Port for HTTP connections |
Changes take effect immediately -- the bridge starts or stops without restarting the service.
Connecting Claude Code
To make Claude Code connect to a Praxis bridge instead of Anthropic's servers, launch it with the appropriate environment variables, the --sdk-url flag pointing at your bridge URL, and the stream-json input and output formats.
CCRv1 (WebSocket)
$env:CLAUDE_CODE_SESSION_ACCESS_TOKEN = "local-token"
claude --sdk-url ws://localhost:8586 --output-format stream-json --input-format stream-json
The CLAUDE_CODE_SESSION_ACCESS_TOKEN is passed as an Authorization: Bearer header on the WebSocket upgrade request. The Praxis bridge does not validate the token, so any non-empty value works. You can also omit it entirely for CCRv1 -- the WebSocket transport accepts empty auth headers.
CCRv2 (HTTP + SSE)
$env:CLAUDE_CODE_USE_CCR_V2 = "1"
$env:CLAUDE_CODE_WORKER_EPOCH = "1"
$env:CLAUDE_CODE_SESSION_ACCESS_TOKEN = "local-token"
claude --sdk-url http://localhost:8587 --output-format stream-json --input-format stream-json
CCRv2 has stricter requirements:
| Variable | Required | Description |
|---|---|---|
| `CLAUDE_CODE_USE_CCR_V2` | Yes | Set to "1" to select the SSE+POST transport |
| `CLAUDE_CODE_WORKER_EPOCH` | Yes | Integer epoch (e.g. "1"). Must be present and numeric or Claude exits with `missing_epoch` |
| `CLAUDE_CODE_SESSION_ACCESS_TOKEN` | Yes | Auth token. Claude exits with `no_auth_headers` if missing. A dummy value like "local-token" works since the bridge does not validate tokens |
Environment Variable Reference
| Variable | V1 | V2 | Description |
|---|---|---|---|
| `CLAUDE_CODE_SESSION_ACCESS_TOKEN` | optional | required | Bearer token for auth. V1 accepts empty headers. V2 crashes without it. A dummy value works for local bridges. |
| `CLAUDE_CODE_USE_CCR_V2` | N/A | required | When "1", selects SSE transport. Without it, falls back to WebSocket (V1). |
| `CLAUDE_CODE_WORKER_EPOCH` | N/A | required | Integer epoch for V2 requests. Missing or non-numeric causes `missing_epoch` error. |
| `CLAUDE_CODE_ENVIRONMENT_KIND` | optional | optional | Set to "bridge" for minor diagnostic effects. Not functionally required. |
Auth Token Resolution
Claude Code resolves auth tokens in this order:
1. `CLAUDE_CODE_SESSION_ACCESS_TOKEN` environment variable
2. File descriptor via `CLAUDE_CODE_WEBSOCKET_AUTH_FILE_DESCRIPTOR`
3. Well-known file at `CCR_SESSION_INGRESS_TOKEN_PATH` (or `CLAUDE_SESSION_INGRESS_TOKEN_FILE`)
If all return null, V2 crashes and V1 proceeds with empty headers.
How Bridge Nodes Appear
When Claude connects, the bridge registers a virtual node with the service. This node appears in the web UI and CLI just like a deployed node, with some differences:
- Node type: `claude-ccrv1` or `claude-ccrv2` (shown in the UI)
- Machine name: Same as the node type
- Capabilities: Session only (no interception, recon, or terminal)
- Agent: Claude Code (auto-selected, with version reported from the `system/init` message)
- Session: Automatically active in YOLO mode (`bypassPermissions`)
- Working directory: Reported by Claude's `system/init` message (the cwd where Claude was launched)
Bridge nodes are ephemeral -- they exist only while Claude is connected. When Claude disconnects, the node is automatically deregistered and disappears from the UI.
Using Bridge Sessions
Once connected, a bridge session works like any other Praxis session. You can:
- Send prompts from the web UI or CLI
- Run semantic operations against the bridge node
- Include bridge nodes in chain workflows
- Use the orchestrator with bridge nodes
The key difference is that permissions are always bypassed (YOLO mode) -- Claude auto-approves all tool calls since the bridge sets bypassPermissions during the handshake.
One session exists per connection. Closing the session from Praxis sends an end_session control request to Claude, which terminates the process. Only one prompt can be in-flight at a time; sending a second prompt while one is active returns an error.
Troubleshooting
Claude exits immediately after connecting
CCRv2: Ensure all three required environment variables are set (CLAUDE_CODE_USE_CCR_V2, CLAUDE_CODE_WORKER_EPOCH, CLAUDE_CODE_SESSION_ACCESS_TOKEN). Missing any of them causes Claude to exit with a specific error.
Both versions: Check that the bridge is enabled and the port is correct. Look at the service logs for connection/handshake errors.
Node appears but no session
The bridge waits up to 30 seconds for the handshake to complete. If Claude does not respond to the initialize control request in time, the session fails. Check Claude's output for errors (API key issues, network problems, etc.).
"Prompt already in-flight" error
Bridge sessions only support one concurrent prompt. Wait for the current response before sending another. If a prompt appears stuck, cancel the transaction or close the session.
Node disappears unexpectedly
Bridge nodes are tied to the connection. If Claude crashes, the network drops, or the process is killed, the node is immediately deregistered. For CCRv2, the 45-second silence timeout also triggers cleanup if heartbeats stop.
CCRv2 epoch mismatch (409)
This means a stale worker is trying to use an old epoch. Increment CLAUDE_CODE_WORKER_EPOCH when relaunching Claude, or simply restart the bridge (toggle the setting off and on).
Claude Code Connector
The Claude Code connector enables interaction with Anthropic's Claude Code CLI agent.
Overview
Claude Code is a command-line AI assistant that can read files, execute commands, and work with code. The connector supports Linux and Windows.
Fingerprinting
The connector looks for Claude Code by checking:
- PATH search - Finding the `claude` executable in PATH
- Explicit paths - Checking known installation locations (`~/.local/bin/claude` on Linux, `%USERPROFILE%\.local\bin\claude.exe` on Windows)
The binary is verified by running claude --version and checking the output contains "claude". If found and verified, fingerprinting succeeds and the agent appears in the node's agent list.
Interception
Traffic is intercepted for the domain:
api.anthropic.com
With URL pattern filter:
- `messages` - Only capture requests to the messages endpoint (filters out telemetry)
When interception is enabled, you'll see:
- Prompts sent to the Claude API
- Responses including assistant messages and tool calls
- Token usage and other metadata
Authentication
Claude Code requires authentication to function. During reconnaissance, Praxis checks that valid authentication is configured before including paths in the project list.
Authentication is considered valid if any of the following are true:
- Environment variables - One of these is set:
  - `ANTHROPIC_API_KEY`
  - `ANTHROPIC_AUTH_TOKEN`
  - `ANTHROPIC_FOUNDRY_API_KEY`
  - `AWS_BEARER_TOKEN_BEDROCK`
- Preferences file - One of these fields is present in `~/.claude.json`:
  - `oauthAccount` - OAuth login credentials
  - `primaryApiKey` - Direct API key
  - `apiKeyHelper` - External key provider
Paths without valid authentication are filtered out during reconnaissance. This prevents the UI from showing user homes or projects that cannot actually be used with Claude Code.
Reconnaissance
Static Recon
Static reconnaissance discovers:
Configuration
- Main config file (`~/.claude.json` or `~/.config/claude/config.json`)
- Permission settings, model preferences, etc.
MCP Servers
- From `~/.claude/mcp.json`
- Server names, commands, environment variables
- Enabled state
Sessions
- Project directories under `~/.claude/projects/`
- Session files with conversation history
- Recent project paths
Semantic Recon
When semantic recon is enabled (requires Semantic Parser LLM), the connector also:
- Parses configuration to extract tool definitions
- Identifies internal Claude tools from session transcripts
- Extracts capability information
Session Management
Sessions are created by spawning Claude Code in a PTY (pseudo-terminal):
┌───────────────────────────────────────────────────────┐
│ Praxis Node │
│ │
│ ┌─────────────────────────────────┐ │
│ │ PTY Session │ │
│ │ │ │
│ │ claude ────────────────────────┼──▶ Claude Process│
│ │ │ │ │
│ │ └─ stdin/stdout │ │
│ └─────────────────────────────────┘ │
└───────────────────────────────────────────────────────┘
Session Context
When creating a session, you can specify:
Working Directory - Where Claude should operate. This affects what files it can see with ls, cat, etc.
YOLO Mode - When enabled, passes --dangerously-skip-permissions and --add-dir (with / on Linux or C:\ on Windows) to Claude, which auto-approves all tool calls and grants access to the filesystem. Without this, Claude asks for confirmation before running commands.
Session Tracking
The connector maintains conversation context across multiple prompts:
- First prompt: Generates a UUID and passes `--session-id <id>` to Claude
- Subsequent prompts: Passes `--resume <id>` to continue the same session
This allows multi-turn conversations where Claude remembers previous context within the session.
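Assuming shell access to the same machine, the flag sequence can be reproduced by hand. The UUID and prompt texts below are illustrative; the `-p`, `--session-id`, and `--resume` flags are the ones the connector uses:

```shell
SESSION_ID="9c1f4e2a-0b7d-4c3e-8a21-6f5d0e9b7a10"  # any fresh UUID

# First prompt: start a named session
claude -p "Summarize this repository" --session-id "$SESSION_ID"

# Follow-up prompt: resume the same conversation
claude -p "Now list the open TODOs" --resume "$SESSION_ID"
```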
Transacting
Sending prompts works by:
- Running Claude with the `-p` flag and the prompt text
- Waiting for Claude to process and respond
- Parsing the response from stdout
- Returning the assistant's message
Config Editing
You can view and edit Claude's configuration files directly from the Praxis UI:
- Main config - Model selection, permissions, API settings
- MCP servers - Add, remove, or modify MCP server definitions
Changes are written back to disk and take effect on the next Claude session.
Tool Discovery
The connector supports both static and semantic recon. Static recon parses configuration files to discover MCP servers and settings. Semantic recon creates a session and queries the agent directly to discover internal tools and capabilities.
Files and Paths
Global (Home Directory)
| File | Path | Content |
|---|---|---|
| Global settings | ~/.claude/settings.json | Global settings |
| Preferences | ~/.claude.json | User preferences |
| Global instructions | ~/.claude/CLAUDE.md | Global instruction file |
| Projects | ~/.claude/projects/ | Session history by project |
Project (Working Directory)
| File | Path | Content |
|---|---|---|
| Project settings | .claude/settings.json | Project-specific settings |
| Local settings | .claude/settings.local.json | Local overrides (not committed) |
| Project instructions | CLAUDE.md | Project instruction file |
| Project MCP | .mcp.json | Project MCP server definitions |
Troubleshooting
"Agent not fingerprinted"
- Ensure Claude Code is installed and configured
- Check that config file exists
- Verify the `claude` command is in PATH
"Session creation failed"
- Check that Claude Code can run normally from terminal
- Verify API key is configured in Claude's settings
- Look at node logs for detailed errors
"No MCP servers found"
- MCP servers are optional - not all installations have them
- Check that `~/.claude/mcp.json` exists if you've configured servers
- Run semantic recon for deeper tool discovery
Claude Desktop Connector
The Claude Desktop connector enables interaction with the Claude Desktop Electron app. Windows only. Experimental.
Warning: This connector is hacky and flaky. It relies on UI Automation to navigate Electron menus, a raw WebSocket CDP connection to the Node.js main process debugger, and a JavaScript proxy to tunnel CDP commands to the renderer. Any Claude Desktop update can break it. Use at your own risk.
Overview
Claude Desktop is an Electron app. Unlike browser-based agents with standard DevTools, Electron's main process debugger must be enabled manually via the app's Developer menu. The connector automates this using Windows UI Automation, then establishes a CDP connection to control the renderer.
Architecture
agents/claudedesktop.lua <- Agent-specific: selectors, UIA flow, config
| uses
praxis.uiautomation <- Lua helper: BFS element search, menu navigation
praxis.devtools <- Lua helper: Electron proxy, transact loop
| uses
praxis.uia_* <- Native Rust: Windows UI Automation bindings
praxis.cdp_* <- Native Rust: Raw WebSocket CDP (Node.js inspector)
How It Works
Session Creation
- Write developer_settings.json — Ensures `allowDevTools: true` so the Developer menu appears
- Launch Claude Desktop — Spawns via `spawn_detached` (never on hidden desktop — UIA needs a visible window)
- Enable debugger via UI Automation — Navigates Menu > Developer > Enable Main Process Debugger using Windows UIA. Uses BFS element search to avoid hangs on Electron's large UIA tree. Retries up to 3 times
- Dismiss Inspector dialogs — Closes any Inspector popup windows that appear after enabling the debugger
- Minimize window — Minimizes after UIA interaction is complete
- Connect to CDP on port 9229 — Uses raw WebSocket (`tokio-tungstenite`) instead of chromiumoxide, because Electron's main process debugger is a Node.js inspector endpoint with no pages/tabs
- Set up Electron renderer proxy — Injects JavaScript into the main process that uses `webContents.debugger` to proxy CDP commands to the renderer matching `claude.ai`
- Post-initialize — Selects Chat/Code mode, waits for input readiness, sends Ctrl+Shift+I for incognito mode
Why Not Just Use DevTools Directly?
Electron's renderer DevTools aren't exposed on a network port by default. The main process debugger (port 9229) is a Node.js inspector, not Chrome DevTools. To reach the renderer, the connector:
- Connects to the main process via raw WebSocket
- Runs `Runtime.evaluate` to call Electron's `webContents.debugger.attach()` and `sendCommand()` APIs
- Sets up a JavaScript proxy (`globalThis.cdp()`) that forwards CDP commands from the main process to the renderer
This is the setup_electron_proxy function in praxis.devtools.
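Conceptually, the injected proxy amounts to something like the sketch below, evaluated in the Electron main process via `Runtime.evaluate`. This is an illustration of the idea, not the actual `setup_electron_proxy` source; it assumes standard Electron `webContents` and `debugger` APIs:

```javascript
// Find the renderer showing claude.ai and attach Electron's debugger to it.
const { webContents } = require("electron");
const target = webContents
  .getAllWebContents()
  .find((wc) => wc.getURL().includes("claude.ai"));
target.debugger.attach("1.3");

// Forwarding helper: cdp("DOM.getDocument", {}) runs the command in the renderer.
globalThis.cdp = (method, params) =>
  target.debugger.sendCommand(method, params || {});
```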
BFS Element Search
The standard uiautomation Rust crate's find_first(Descendants) hangs for 25+ seconds on Electron's large UIA tree. The connector implements breadth-first search (uia_find_bfs) using find_first(Children) at each level, which returns instantly.
Fingerprinting
Searches for claude.exe in:
- PATH
- `%LOCALAPPDATA%\AnthropicClaude`
Verifies it's Claude Desktop (not Claude Code) and extracts the version via PowerShell.
Interception
Traffic is intercepted for:
- Domains: `api.anthropic.com`, `a-api.anthropic.com`
- URL pattern: `messages`
Working Directories
- Chat (default) — Claude Desktop's chat mode
- Code — Currently disabled (wraps Claude Code, which has a dedicated connector)
Reconnaissance
Config discovery from %APPDATA%\Claude:
- `claude_desktop_config.json` — Global settings, MCP server definitions
- `config.json` — App config
- `extensions-blocklist.json` — Extension blocklist
- `Preferences` — App preferences
- `developer_settings.json` — Developer settings
- `logs/*.log` — Log files
Known Issues
- Session creation is slow (~15-20s) due to UIA menu navigation, Inspector dialog dismissal, and CDP connection handshake
- UIA is fragile — Menu structure changes in Claude Desktop will break the debugger enablement flow
- Response detection may not work — The CSS selectors for message elements and the stop button (`div.contents`, `button[aria-label="Stop response"]`) may not match the current Claude Desktop UI
- Cannot run on hidden desktop — UIA requires a visible window for interaction
- Electron updates break things — Any change to the Electron DevTools menu structure, renderer URL, or DOM will require selector updates
Requirements
- Windows — This connector is Windows-only
- Claude Desktop — Must be installed (not Claude Code)
- Visible desktop — UIA interaction requires a visible window; `spawn_detached` is called with `use_hidden_desktop = false`
Troubleshooting
"Menu trigger not found: Menu"
The UIA BFS search couldn't find the Menu button. Claude Desktop may have changed its UI structure, or the window didn't load in time.
"URL error: URL scheme not supported"
The CDP connection is trying to use an HTTP URL instead of a WebSocket URL. Check that the Node.js debugger on port 9229 is responding with a valid /json endpoint.
"No pages found" then falls back to raw WebSocket
This is normal. Electron's main process debugger has no pages — the raw WebSocket fallback is the expected path.
Session creation hangs
Check the node logs for which step is stuck. Common culprits:
- UIA menu navigation (enable_debugger)
- Inspector dialog dismissal
- CDP connection (port 9229 not responding)
Codex CLI Connector
The Codex connector enables interaction with OpenAI's Codex CLI agent.
Overview
Codex is OpenAI's command-line coding agent that can execute commands, modify files, and work with code. The connector supports Linux and Windows.
Fingerprinting
The connector looks for Codex by checking:
- PATH search - Finding the `codex` executable in PATH
- Explicit paths - Checking known installation locations:
  - Linux:
    - `/usr/local/bin/codex`
    - `/usr/bin/codex`
    - `~/.local/bin/codex`
    - `~/.npm-global/bin/codex`
    - `~/.volta/bin/codex`
  - Windows:
    - `%LOCALAPPDATA%\Microsoft\WinGet\Links\codex.exe` (WinGet)
    - `%APPDATA%\npm\codex.cmd` (npm global)
    - `%USERPROFILE%\.volta\bin\codex.exe` (Volta)
    - `%USERPROFILE%\.npm-global\codex.cmd`
- Version managers - Glob patterns for common Node.js version managers:
  - Linux: `~/.local/share/mise/installs/node/*/bin/codex`, `~/.nvm/versions/node/*/bin/codex`
  - Windows: `%APPDATA%\nvm\*\codex.cmd`
The binary is verified by running codex --version and checking the output contains "codex". If found and verified, fingerprinting succeeds and the agent appears in the node's agent list.
Interception
Traffic interception is not yet supported for this connector.
Authentication
Codex CLI requires authentication to function. During reconnaissance, Praxis checks that valid authentication is configured before including paths in the project list.
Authentication is considered valid if any of the following are true:
- Environment variable - `OPENAI_API_KEY` is set
- Auth file - The `auth_mode` field is present in `~/.codex/auth.json`
Paths without valid authentication are filtered out during reconnaissance. This prevents the UI from showing user homes or projects that cannot actually be used with Codex.
Reconnaissance
Static Recon
Static reconnaissance discovers:
Configuration
- Global config file (`~/.codex/config.toml`)
- Authentication credentials (`~/.codex/auth.json`)
- Project-level config (`.codex/config.toml`)
MCP Servers
- From `[mcp_servers.<name>]` sections in config.toml
- Server names, commands, arguments, URLs
Sessions
- Session history from `~/.codex/history.jsonl`
- Sessions grouped by `session_id` field
- Message counts and timestamps
Project Paths
- Extracted from `[projects."<path>"]` sections in config.toml
- Used for working directory selection
Semantic Recon
When semantic recon is enabled (requires Semantic Parser LLM), the connector also:
- Creates a temporary session to query the agent
- Discovers internal tools and capabilities
- Extracts tool definitions from agent responses
Session Management
Sessions use the codex exec subcommand for non-interactive execution:
┌───────────────────────────────────────────────────────┐
│ Praxis Node │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ CLI Session │ │
│ │ │ │
│ │ codex exec - ◀────── prompt via stdin │ │
│ │ │ │ │
│ │ └─────────▶ Codex Process │ │
│ └─────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────┘
Session Context
When creating a session, you can specify:
Working Directory - Where Codex should operate. Passed via --cd <dir> option on the first prompt.
YOLO Mode - When enabled, passes --dangerously-bypass-approvals-and-sandbox and --add-dir / (Linux) or --add-dir C:\ (Windows) to Codex, which auto-approves all operations and grants full filesystem access. Without this, Codex operates with its default sandbox restrictions.
Session Tracking
The connector maintains conversation context across multiple prompts:
- First prompt: Runs `codex exec -` with configuration flags, prompt piped via stdin
- Subsequent prompts: Runs `codex exec resume --last -` to continue the session
Prompts are piped via stdin using the - argument to avoid argument parsing issues. This allows multi-turn conversations where Codex remembers previous context.
Command Line Flags
The connector uses these flags:
| Flag | Description |
|---|---|
| `--config history.persistence=none` | Disables history persistence |
| `--config network_access=true` | Enables network access |
| `--skip-git-repo-check` | Allows running outside git repositories |
| `--color never` | Disables colored output (exec only) |
| `--dangerously-bypass-approvals-and-sandbox` | YOLO mode - skips all approvals |
| `--add-dir /` or `--add-dir C:\` | YOLO mode - grants full filesystem access (exec only) |
| `--cd <dir>` | Sets working directory (exec only) |
Config Format
Codex uses TOML configuration files. Example ~/.codex/config.toml:
model = "o3"
model_provider = "openai"
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_TOKEN = "..." }
[projects."/home/user/myproject"]
sandbox = "workspace-write"
Files and Paths
Global (Home Directory)
| File | Path | Content |
|---|---|---|
| Global settings | ~/.codex/config.toml | Global configuration |
| Authentication | ~/.codex/auth.json | API credentials |
| Session history | ~/.codex/history.jsonl | JSONL session log |
Project (Working Directory)
| File | Path | Content |
|---|---|---|
| Project settings | .codex/config.toml | Project-specific settings |
Troubleshooting
"Agent not fingerprinted"
- Ensure Codex is installed:
  - npm: npm install -g @openai/codex
  - WinGet (Windows): winget install OpenAI.Codex
- Check that the codex command is in PATH
- If using a version manager (mise, nvm), ensure Node.js is active
"Session creation failed"
- Check that Codex can run normally from terminal
- Verify API key is configured
- Look at node logs for detailed errors
- Try running codex exec "hello" manually to test
"stdin is not a terminal" error
- This was fixed by using codex exec instead of interactive mode
- Ensure you're running the latest version of the connector
Cursor Agent Connector
The Cursor connector enables interaction with Cursor's background agent CLI.
Overview
Cursor Agent is Cursor's command-line interface for AI-assisted coding. It provides similar functionality to the Cursor IDE but in a headless CLI form. The connector is Linux-only.
Fingerprinting
The connector looks for the Cursor agent CLI by checking:
- PATH search - Finding the cursor-agent executable in PATH
- Explicit paths - Checking known installation locations:
  - /usr/bin/cursor-agent
  - ~/.local/bin/cursor-agent
If found, fingerprinting succeeds and the agent appears in the node's agent list.
Interception
Traffic is intercepted for the following domains:
- api.cursor.sh
- agent.api5.cursor.sh
- api2.cursor.sh
- cursor.sh
The proxy supports subdomain matching, so any subdomain of cursor.sh will be intercepted.
When interception is enabled, you'll see:
- Prompts sent to the Cursor API
- Responses including assistant messages
- Tool calls and results
HTTP/2 and gRPC Support
Cursor uses HTTP/2 with gRPC for its streaming API (e.g., /agent.v1.AgentService/Run). The proxy fully supports HTTP/2 frame-level interception:
- Frame types captured: HEADERS, DATA, SETTINGS, GOAWAY, etc.
- Traffic entries: Logged as H2_HEADERS and H2_DATA methods
- Stream tracking: Extracts :path from HPACK headers for URL context
- Bidirectional: Both request and response frames are captured
In the web UI, HTTP/2 traffic appears grouped by URL (similar to WebSocket), with individual frames expandable to view payloads.
Session Management
Sessions use the Agent Client Protocol (ACP) -- a JSON-RPC 2.0 protocol over NDJSON stdio. Praxis uses the agent-client-protocol crate's ClientSideConnection for typed, async communication.
┌───────────────────────────────────────────────────────┐
│ Praxis Node │
│ │
│ cursor-agent acp │
│ │ │
│ ├──▶ initialize (InitializeRequest) │
│ ├──▶ session/new → session_id + models │
│ ├──▶ session/prompt → streaming updates │
│ └──▶ session/close → cleanup │
└───────────────────────────────────────────────────────┘
Session Context
When creating a session, you can specify:
Working Directory - Where Cursor should operate.
YOLO Mode - When enabled, tool permission requests are auto-approved.
Interactive Mode - When set (TUI or web sessions), permission requests are forwarded to the user for approval. Non-interactive sessions (MCP, orchestrator) auto-deny permission requests.
Session Creation
- cursor-agent acp is spawned as an async subprocess via tokio::process::Command
- ClientSideConnection established over stdin/stdout
- InitializeRequest handshake establishes the connection and negotiates capabilities
- NewSessionRequest creates a session with the working directory
Transacting
Sending prompts uses typed ACP requests:
- A PromptRequest is sent with the prompt text as ContentBlock::Text
- The agent streams back real-time SessionUpdate notifications: AgentMessageChunk, ToolCall, ToolCallUpdate, Plan, and UsageUpdate
- Permission requests arrive via the Client trait's request_permission callback
- The prompt completes with a PromptResponse containing a StopReason
Cancellation
Sessions support mid-prompt cancellation:
- A CancelNotification is sent to the agent
- The agent responds to the original PromptRequest with StopReason::Cancelled
- Any partial output is preserved in the conversation
Session Cleanup
When a session is closed, Praxis sends CloseSessionRequest via ACP, then terminates the subprocess.
Files and Paths
Session History
| Location | Path | Content |
|---|---|---|
| Chat history | ~/.config/cursor/chats/<project_hash>/<chat_id>/ | Session files |
Binary Locations
| Platform | Paths Checked |
|---|---|
| Linux | /usr/bin/cursor-agent, ~/.local/bin/cursor-agent, PATH |
Troubleshooting
"Agent not fingerprinted"
- Ensure cursor-agent is installed
- Verify the command is in PATH or at a known location
- Check file permissions
"Session creation failed"
- Verify cursor-agent create-chat works from terminal
- Check that Cursor is authenticated
- Look at node logs for detailed errors
"Traffic not appearing"
- Ensure interception is enabled
- Check that the proxy is using VPN or TPROXY mode (not system proxy)
- Verify HTTP/2 traffic is being captured (check for H2_DATA entries)
"HTTP/2 connection issues"
- The proxy handles HTTP/2 frame-level interception automatically
- If traffic appears but the agent fails, check for certificate trust issues
- gRPC streaming is supported - both directions are captured
Gemini CLI Connector
The Gemini connector enables interaction with Google's Gemini CLI agent. It is implemented as a Lua agent script (agents/gemini.lua).
Overview
Gemini CLI is Google's command-line AI assistant. Like Claude Code, it can read files, execute commands, and work with code. The connector supports Linux and Windows.
Fingerprinting
The connector looks for Gemini CLI by checking:
- PATH search - Finding the gemini executable in PATH (prefers .cmd on Windows)
- Explicit paths - Checking known installation locations:
  - Linux: ~/.local/bin/gemini, /usr/local/bin/gemini, /usr/bin/gemini
  - Windows: %USERPROFILE%\.local\bin\gemini.cmd, %USERPROFILE%\AppData\Roaming\npm\gemini.cmd, etc.
If found, fingerprinting succeeds and the agent appears in the node's agent list.
Interception
Traffic is intercepted for the domain:
generativelanguage.googleapis.com
When interception is enabled, you'll see:
- Prompts sent to the Gemini API
- Responses including assistant messages
- Function/tool calls and results
Authentication
Gemini CLI requires authentication to function. During reconnaissance, Praxis validates that valid authentication is configured before including paths in the project list.
Authentication is considered valid if any of the following are true:
- Environment variables - One of these is set: GEMINI_API_KEY, GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_GENAI_USE_GCA
- Settings file - The security.auth object is present in the relevant settings.json:
  - For user homes: ~/.gemini/settings.json
  - For project paths: .gemini/settings.json in the project, or the owning user's home settings
Paths without valid authentication are filtered out during reconnaissance. This prevents the UI from showing user homes or projects that cannot actually be used with Gemini.
Reconnaissance
Static Recon
Static reconnaissance discovers:
Configuration
- User settings (~/.gemini/settings.json)
- Google account info (~/.gemini/google_accounts.json)
- OAuth credentials (~/.gemini/oauth_creds.json)
- System defaults and settings (platform-specific paths)
Context Files
- Global context (~/.gemini/GEMINI.md)
- Project context files (configurable via context.fileName in settings)
Sessions
- Session files under ~/.gemini/tmp/<project_hash>/chats/
- Session metadata including message count and timestamps
Semantic Recon
When semantic recon is enabled, the connector also creates a session and queries the agent directly to discover internal tools and capabilities.
Session Management
Sessions use the Agent Client Protocol (ACP) -- a JSON-RPC 2.0 protocol over NDJSON stdio. Praxis uses the agent-client-protocol crate's ClientSideConnection for typed, async communication.
Session Context
When creating a session, you can specify:
Working Directory - Where Gemini should operate.
YOLO Mode - When enabled, tool permission requests are auto-approved.
Interactive Mode - When set (TUI or web sessions), permission requests are forwarded to the user for approval. Non-interactive sessions (MCP, orchestrator) auto-deny permission requests.
Transacting
- gemini --acp is spawned as an async subprocess
- ClientSideConnection established, InitializeRequest handshake performed
- PromptRequest sends the prompt; the agent streams back SessionUpdate notifications (text chunks, tool calls, plans, tool results)
- Permission requests handled via the Client trait callback
- PromptResponse returned with StopReason on completion
Cancellation
Sessions support mid-prompt cancellation via CancelNotification. The agent responds with StopReason::Cancelled and any partial output is preserved.
Config Editing
You can view and edit Gemini's configuration files directly from the Praxis UI:
- User settings with model and API preferences
- Context files
Changes are written back to disk and take effect on the next Gemini session.
Tool Discovery
The connector supports both static and semantic recon. Static recon parses configuration files to discover settings and context files. Semantic recon creates a session and queries the agent directly to discover internal tools and capabilities.
Files and Paths
Global (Home Directory)
| File | Path | Content |
|---|---|---|
| User settings | ~/.gemini/settings.json | Main configuration |
| Google accounts | ~/.gemini/google_accounts.json | Account info |
| OAuth credentials | ~/.gemini/oauth_creds.json | Auth credentials |
| Global context | ~/.gemini/GEMINI.md | Global instruction file |
| Sessions | ~/.gemini/tmp/<hash>/chats/ | Session history by project |
System (Platform-specific)
| File | Linux Path | Windows Path |
|---|---|---|
| System defaults | /etc/gemini-cli/system-defaults.json | C:\ProgramData\gemini-cli\system-defaults.json |
| System settings | /etc/gemini-cli/settings.json | C:\ProgramData\gemini-cli\settings.json |
Project (Working Directory)
| File | Path | Content |
|---|---|---|
| Project settings | .gemini/settings.json | Project-specific settings |
| Project context | GEMINI.md | Project instruction file (configurable) |
Troubleshooting
"Agent not fingerprinted"
- Ensure Gemini CLI is installed
- Verify the gemini command is in PATH
- On Windows, check that the .cmd wrapper exists
"Session creation failed"
- Check that Gemini CLI can run normally from terminal
- Verify Google API credentials are configured
- Look at node logs for detailed errors
M365 Copilot Connector
The M365 Copilot connector enables interaction with Microsoft 365 Copilot. Windows only.
Overview
Microsoft 365 Copilot runs in a WebView2 browser component. The connector uses Chrome DevTools Protocol (CDP) via the praxis.devtools Lua library to interact with the Copilot UI programmatically.
Architecture
agents/m365copilot.lua ← Agent-specific: selectors, recon JS, config
↓ uses
praxis.devtools ← Lua helper: generic transact loop, lifecycle
↓ uses
praxis.cdp_* ← Native Rust: CDP connection, JS eval, DOM ops
The M365 connector is a Lua agent (agents/m365copilot.lua) that uses the shared praxis.devtools library for DevTools session management and the native praxis.cdp_* API for CDP operations.
Fingerprinting
The connector checks for Copilot availability:
- Searches for M365Copilot.exe in running processes
- Checks the Windows package install location (Microsoft.MicrosoftOfficeHub)
Interception
Traffic is intercepted for:
- Domain: substrate.office.com
- URL pattern: m365Copilot/Chathub
Session Management
Creating Sessions
When you create a session:
- All running M365Copilot.exe processes are killed by name
- All existing CDP connections are drained and their process trees terminated
- App is launched with a random debugging port via WEBVIEW2_ADDITIONAL_BROWSER_ARGUMENTS
- On Windows, the process is spawned on a hidden desktop so the window is invisible (release builds by default; debug builds default to visible). Override with PRAXIS_NOT_HIDDEN=1 to show the window, or PRAXIS_NOT_HIDDEN=0 to hide it in debug builds. If the hidden desktop cannot be created, the window is minimized after DevTools connects.
- CDP connection is established via chromiumoxide (5 attempts, 2s interval)
- Post-initialization: waits for input element, clicks Work/Web toggle, opens new private chat
Transacting
The praxis.devtools library provides a generic transact loop:
- Waits for input element (#m365-chat-editor-target-element)
- Counts existing messages
- Clicks input, inserts text via CDP InsertText (handles emojis/special chars), presses Enter
- Polls for response (250ms interval, 120s max)
- Detects idle state (no activity for ~3s) and retries up to 3 times
Response completion is detected by checking:
- New div[data-testid="markdown-reply"] elements
- Absence of "Stop generating" button
- Non-empty response text
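Taken together, those detection rules form a completion predicate that the transact loop polls against the page. A Python sketch under the stated timings (helper names are illustrative, not the Lua library's API):

```python
import time

def response_complete(new_reply_count: int, stop_button_present: bool, text: str) -> bool:
    """All three signals described above must agree before the loop returns."""
    return new_reply_count > 0 and not stop_button_present and bool(text.strip())

def poll_until_complete(check, interval=0.25, timeout=120.0) -> bool:
    """Poll the page state every 250 ms until complete or the 120 s budget expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # check() would evaluate JS over CDP and return the three signals
        state = check()
        if response_complete(*state):
            return True
        time.sleep(interval)
    return False
```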
Aborting
CDP sessions support abort_transaction — when a transaction is cancelled (e.g. via the web UI), the entire process tree is terminated by PID. The session state stores the process_id which the Rust session layer uses for process-level cancellation.
Cleanup Safety Net
When a session is closed (or dropped), the Rust layer performs cleanup even if the Lua session_close callback fails:
- Kills the process tree by PID
- Removes the CDP connection handle from the global map
This prevents orphaned browser processes after crashes or Lua errors.
Working Directories
M365 Copilot supports two working directories that map to toggle buttons:
- Work - Enterprise/organizational context
- Web - Web search context
Reconnaissance
Static Recon
Discovers user identity and available toggles by executing JavaScript in a temporary DevTools session:
- User identity via nestedAppAuthService profile object (UPN and display name)
- Available toggles (Work/Web) by checking for toggle button elements
Recon requires a valid process_path from a prior fingerprint. If fingerprint hasn't run, recon returns empty results.
Semantic Recon
Creates a temporary session and asks Copilot to list its tools, then parses the response with the semantic parser. Uses a dual-prompt fallback: tries a JSON-format prompt first, and if zero tools are parsed, retries with a high-level overview prompt.
Requirements
- Windows - This connector is Windows-only
- M365 License - User must have Copilot access
- Logged In - User must be authenticated to Microsoft
Troubleshooting
"Agent not fingerprinted"
- Verify the user has M365 Copilot access
- Check that M365Copilot.exe is installed
"Session creation failed"
- Check that the app can launch with debugging enabled
- Verify M365 authentication is valid
- Look for firewall blocking debugging ports (9222-9999 range)
- Check node logs for CDP errors
- Set PRAXIS_NOT_HIDDEN=1 to see the app window for debugging
"Responses not captured"
- UI selectors may have changed; report as an issue
- Check for Copilot page structure changes
Limitations
- No config editing (browser-based)
- No MCP server discovery
- Requires active M365 authentication
- Session reliability depends on Microsoft's UI
Architecture Overview
Praxis has a distributed architecture designed for monitoring and controlling AI agents across multiple systems. Let's walk through how the pieces fit together.
The Big Picture
┌─────────────────┐
│ Web Browser │
│ (React SPA) │
└────────┬────────┘
│ HTTP/WebSocket
┌────────▼────────┐
│ Web │
│ (HTTP Server) │
└────────┬────────┘
│ Internal
┌────────▼────────┐
│ Service │
│ (Backend) │
└────────┬────────┘
│ RabbitMQ (AMQP)
┌────────────────────────┼────────────────────────┐
│ │ │
┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐
│ Node │ │ Node │ │ Node │
│ (Target A) │ │ (Target B) │ │ (Target C) │
└─────────────┘ └─────────────┘ └─────────────┘
Components
Node
The node runs on target systems where AI agents are installed. It's the "eyes and hands" of Praxis on each endpoint.
What it does:
- Fingerprints installed agents
- Performs reconnaissance on agent configurations and sessions
- Intercepts traffic between agents and LLM backends
- Creates and manages sessions with agents
- Provides PTY terminal access to the system
Key characteristics:
- Stateless - all persistent data lives on the service
- Single binary, no dependencies
- Communicates with service over RabbitMQ
See Node Architecture for details.
Service
The service is the central backend that coordinates everything.
What it does:
- Tracks all connected nodes and their agents
- Stores configuration, operation definitions, and chain workflows
- Manages the semantic operations queue
- Executes chains by orchestrating multi-step workflows
- Persists intercepted traffic and recon results
- Handles LLM provider integrations
Key characteristics:
- Persistent storage (SQLite default, PostgreSQL for production)
- Stateful - knows about all nodes and their state
- Runs the operation manager and chain executor
See Service Architecture for details.
Web
The web component serves the frontend and provides the API.
What it does:
- Serves the React single-page application
- Provides WebSocket endpoint for real-time communication
- Handles HTTP requests for static assets
- Bridges between browser clients and the service
Key characteristics:
- React/TypeScript frontend with Tailwind CSS
- WebSocket for bidirectional communication
- Builds into the binary (embedded assets)
See Web Architecture for details.
Communication
No direct client↔node traffic
The service is the only component that talks to nodes. Clients (CLI, web, external ACP tools) speak to the service; the service forwards to the relevant node over RabbitMQ. This keeps access control, session routing, and request correlation in one place and means node failure modes never leak into clients.
CLI ─▶ RabbitMQ ─▶ Service ─▶ RabbitMQ ─▶ Node
Web SPA ─▶
External ACP client ─▶
ACP (Agent Client Protocol)
Each node exposes a single ACP server (node/src/acp_server/) over
RabbitMQ. That one endpoint is how every local agent on the node is
driven — the connector to use is selected per-session via
_meta.praxis.connector on the session/new request. Multiple concurrent
sessions are supported on the same node, each with its own freshly-built
Lua VM.
The service-side proxy (service/src/acp_node_proxy.rs) routes frames:
- External client → service → _meta.praxis.nodeId → target node.
- Node → service → originating client (by correlated client_id).
- Service's internal orchestrator → node, using a svc_* pseudo-client-id so responses are consumed in-process instead of being forwarded.
Recon is a custom ACP extension (_praxis/recon) plus four file-op
extensions (_praxis/read_file, _praxis/write_file, _praxis/grep_files,
_praxis/write_session_content). The node advertises them in
InitializeResponse._meta.extensions along with the connector catalog.
RabbitMQ
All communication between nodes, service, and web clients flows through RabbitMQ:
| Queue | Direction | Purpose |
|---|---|---|
| NodeSignal | Node → Service | Registration, traffic, recon results, outbound ACP frames |
| Node_{id} | Service → Node | Commands, parser responses, inbound ACP frames |
| NodeBroadcast | Service → All Nodes | Refresh requests (fanout exchange) |
| ClientSignal | Client → Service | UI requests, inbound ACP frames |
| Client_{id} | Service → Client | Direct responses, outbound ACP frames |
| ClientBroadcast | Service → All Clients | State updates (fanout exchange) |
RabbitMQ provides:
- Reliable message delivery
- Decoupling between components
- Easy scaling (nodes can come and go)
- Persistence for messages in flight
Message Flow Example
Here's what happens when a CLI driver runs a prompt over ACP:
- CLI (ACP proxy) → ClientSignal → Service
- Service (AcpNodeProxy) sees _meta.praxis.nodeId, forwards the raw JSON-RPC frame via Node_{id} → Node
- Node (NodeAcpServer) processes session/new / session/prompt / etc., running on a per-session Lua VM
- Node emits response + session/update notifications on NodeSignal
- Service (AcpNodeProxy::forward_to_client) routes them to the originating Client_{id} queue
- CLI reads responses from its client queue and emits them on stdout
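The routing decisions in that flow reduce to formatting queue names from frame metadata. A Python sketch (function names are hypothetical; the real logic lives in AcpNodeProxy):

```python
def route_outbound(frame: dict) -> str:
    """Pick the node queue for a client-to-node ACP frame, per the queue table above."""
    node_id = frame.get("params", {}).get("_meta", {}).get("praxis", {}).get("nodeId")
    if node_id is None:
        raise ValueError("frame has no _meta.praxis.nodeId")
    return f"Node_{node_id}"

def route_inbound(client_id: str) -> str:
    """Responses go back to the originating client's direct queue."""
    return f"Client_{client_id}"
```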
Data Flow
Intercepted Traffic
Agent ─HTTPS─▶ Proxy ─▶ Node ─RabbitMQ─▶ Service ─▶ Database
│
└─▶ Web ─WebSocket─▶ Browser
Operations
Browser ─▶ Web ─▶ Service ─▶ LLM (planning)
│
└─▶ Node ─▶ Agent (execution)
│
└─▶ Output ─▶ Service ─▶ Browser
Database Schema
The service stores everything in a relational database:
- config - key-value settings (LLM configs, etc.)
- operation_definitions - saved operation templates
- semantic_operations - operation execution history
- chain_definitions - workflow definitions
- chain_executions - workflow execution history
- traffic_log - intercepted HTTP traffic
- intercept_rules - traffic matching rules
- recon_results - cached reconnaissance data
- application_logs - centralized logging (controlled by application_logs_enabled)
Deployment Patterns
Development
Single machine running everything:
- Docker Compose with service, web, and RabbitMQ
- Node running locally for testing
Production
Separate concerns:
- Service/Web on central server
- RabbitMQ (possibly managed service)
- Nodes deployed to target systems
- PostgreSQL for the database
Cloud (Azure)
See Azure Deployment:
- Container Apps for service/web
- Managed RabbitMQ or Container Instance
- Azure Database for PostgreSQL
Node Architecture
The node is the component that runs on target systems. It's responsible for all local interactions with AI agents.
Overview
┌──────────────────────────────────────────────────────────────┐
│ Node │
│ │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ Agent Registry │ │ Intercept Mgr │ │ Terminal Mgr │ │
│ │ │ │ │ │ │ │
│ │ ┌────────────┐ │ │ ┌────────────┐ │ │ ┌────────────┐ │ │
│ │ │ Connector │ │ │ │ Proxy │ │ │ │ PTY │ │ │
│ │ ├────────────┤ │ │ ├────────────┤ │ │ └────────────┘ │ │
│ │ │ Connector │ │ │ │ TUN/VPN │ │ │ │ │
│ │ ├────────────┤ │ │ ├────────────┤ │ └────────────────┘ │
│ │ │ Connector │ │ │ │ TPROXY │ │ │
│ │ └────────────┘ │ │ ├────────────┤ │ │
│ └────────────────┘ │ │ Hosts │ │ │
│ │ └────────────┘ │ │
│ └────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────┐ │
│ │ Runtime / Message Handler │ │
│ └────────────────────────────────────────────────────────┘ │
│ │ │
│ RabbitMQ │
└──────────────────────────────┼───────────────────────────────┘
│
To Service
Agent Registry
The agent registry manages all supported agent connectors. On startup the
registry is built via rebuild() which:
- Creates native agents from the factory (currently unused; all agents are Lua-based)
- Loads Lua connectors from the service (delivered in the RegistrationAck message)
- Falls back to embedded Lua scripts if no service scripts are provided
The service includes all stored Lua scripts in the NodeRegistrationAck sent
to the node's direct queue during registration. This avoids a race condition
where a fanout broadcast could arrive before the node's exchange consumer is
ready. On re-registration (e.g. after connection loss), scripts are also
delivered via the ack.
Subsequent script changes (add/edit/delete via the web UI) are broadcast to
nodes via AgentRegistryUpdate on the fanout exchange.
Updates are session-gated: if a session is open when an update arrives, it is queued and applied after the session closes. If multiple updates arrive while a session is open, only the latest is kept.
Fingerprint Caching
Fingerprinting runs --version on each agent binary to verify availability and
extract the version string. Results are cached for 60 seconds when the agent is
available. Unavailable agents (not installed) are re-checked on every cycle so
they are discovered as soon as they appear.
Development Builds
In debug builds, PRAXIS_IGNORE_SERVICE_AGENTS=1 (the default) causes the node
to ignore service-pushed scripts and use only embedded Lua scripts. Set to 0
to test with service-managed scripts.
Intercept Manager
The intercept manager handles traffic capture. It supports four methods:
Proxy Mode
Configures system proxy settings to route HTTP/HTTPS through a local proxy:
- Linux: Sets HTTP_PROXY and HTTPS_PROXY environment variables
- Windows: Modifies registry proxy settings
The proxy terminates TLS using a generated root CA, captures traffic, then re-encrypts and forwards to the actual destination.
VPN Mode
Creates a TUN adapter and routes specific IPs through it:
- TUN device created (wintun on Windows, tun crate on Linux)
- Intercept domains resolved to IP addresses
- Routes added through the TUN interface
- Packet engine performs NAT to redirect to local proxy
This captures traffic even from applications that ignore proxy settings.
Hosts Mode
Modifies the hosts file to redirect domains to localhost:
- Adds entries for intercept domains
- Proxy listens and handles redirected traffic
- Simpler but less flexible than VPN mode
TPROXY Mode (Linux)
Uses iptables TPROXY for transparent interception:
- Intercept domains resolved to IP addresses
- iptables mangle rules mark packets to target IPs
- Policy routing directs marked packets to loopback
- TPROXY redirects packets to proxy
- Proxy uses SO_ORIGINAL_DST to get real destination
This provides kernel-level interception without a TUN device.
Certificate Authority
All methods use a generated CA:
- Root CA created with unique key
- Root cert installed in system trust store
- Leaf certificates generated per domain
- TLS termination with valid-looking certs
Multi-User Support
When the node runs as root, it provides multi-user support:
User Enumeration
The node scans all user home directories (/home/* and /root) to discover:
- Agent configurations (e.g., .claude/, .gemini/, .codex/)
- Project directories with agent config files
- Session history files
This allows a single node running as root to manage agents across all users on the system.
User-Aware Session Execution
When a session is created with a working directory owned by a non-root user, the node automatically:
- Determines the directory owner's uid/gid
- Sets the HOME environment variable to the user's home directory
- Spawns the agent process as that user
This ensures the agent:
- Has appropriate file permissions for the project
- Reads its config from the correct user's home directory
- Creates files owned by the correct user
Security Considerations
- Path validation ensures file operations stay within valid home directories
- Config file access is restricted to enumerated user homes
- The node validates all paths before reading or writing
Session Management
Sessions allow direct interaction with agents:
CLI Agents (PTY)
- PTY created for the agent process
- Agent spawned with appropriate flags (and as appropriate user when running as root)
- Prompts written to stdin
- Responses read from stdout
- Output parsed and returned
CLI Agents (ACP)
Agents that support the Agent Client Protocol (Cursor, Gemini) use a long-lived subprocess with JSON-RPC 2.0 over NDJSON stdio instead of PTY. The node uses the agent-client-protocol crate's ClientSideConnection for typed, async communication:
- Agent spawned with ACP flag (e.g. cursor-agent acp, gemini --acp) via tokio::process::Command
- ClientSideConnection established over the subprocess stdin/stdout
- Initialize handshake via typed InitializeRequest / InitializeResponse
- Prompts sent via typed PromptRequest, responses received as PromptResponse with StopReason
- Real-time streaming updates (SessionUpdate variants: text chunks, tool calls, tool results, plans) delivered via the Client trait's session_notification callback
- Permission requests handled via the Client trait's request_permission callback
- Cancellation via CancelNotification
The connection runs on a dedicated thread with a LocalSet (since ClientSideConnection is !Send). An AcpHandle provides a Send-safe interface for the Lua runtime via channels.
Browser-based Agents
- App with webview launched with debugging enabled (on a hidden desktop in release builds; visible in debug builds by default)
- CDP connection established via chromiumoxide
- Prompts injected via DOM manipulation (InsertText + Enter)
- Responses polled from page via JavaScript evaluation
- Abort kills the entire process tree; Drop safety net cleans up even on Lua errors
Session Context
Sessions are created with:
- Working directory - where the agent operates
- YOLO mode - auto-approve tool calls
- Interactive - whether permission requests should be forwarded to the user (TUI/web) or auto-denied (MCP/orchestrator)
Terminal Manager
Provides PTY terminal access to the target system:
- Shell spawned (bash/zsh/powershell)
- PTY handles input/output
- Terminal data streamed to web UI
- Supports resize, Ctrl+C, etc.
Message Handling
The node speaks two protocols over RabbitMQ. Agent and session interaction
use ACP (Agent Client Protocol). Everything else — intercept, terminal,
config, registration — uses the bespoke NodeCommand envelope.
ACP (node-as-agent)
The node runs its own ACP server (node/src/acp_server/) and appears to the
service as a single ACP-speaking agent. The service forwards client ACP
frames to the node over RabbitMQ via NodeDirectMessage::Acp(AcpFrame);
responses and notifications flow back via NodeSignalMessage::Acp.
Standard ACP methods supported:
- initialize — capability handshake. The node advertises the connector catalog and supported extensions in InitializeResponse._meta:

  {
    "extensions": { "_praxis/recon": { "version": 1 } },
    "connectors": [ { "shortName": "claude-code", "name": "Claude Code" }, ... ],
    "nodeId": "..."
  }

- session/new — create a session. The target connector is selected via _meta.praxis.connector. Session options (yolo, promptTimeoutSecs, interactive) also live under _meta.praxis:

  {
    "cwd": "/path",
    "_meta": {
      "praxis": {
        "connector": "claude-code",
        "yolo": false,
        "promptTimeoutSecs": 600,
        "interactive": true
      }
    }
  }

- session/prompt — send a prompt to the named session.
- session/cancel — cancel an in-flight prompt.
- session/close — terminate and release the session's per-session Lua VM.
- session/list — enumerate live sessions on the node.
Multiple concurrent sessions are supported. Each session owns a freshly instantiated Lua VM (loaded from connector bytecode compiled once at connector-load time), so no Lua-level state leaks between sessions sharing the same connector script.
ACP extensions
All are agent-scoped custom ACP methods (no session_id required) and are
advertised in InitializeResponse._meta.extensions:
- _praxis/recon — reconnaissance. Params { "agent_short_name": string, "is_semantic": bool }; returns a ReconResult. Replaces the legacy NodeCommand::Agent(Recon) / Agent(ReconSemantic) commands.
- _praxis/read_file, _praxis/write_file, _praxis/grep_files — agent-scoped file ops used by recon tooling and the orchestrator.
- _praxis/write_session_content — writes agent-session content through the connector's write_session_content hook so agents with virtual session stores can intercept the write.
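For concreteness, here is a _praxis/recon call as it would appear on the wire, built from the parameter shape above (the id value is arbitrary):

```python
import json

# An agent-scoped extension call: no session_id, params per the shape above.
recon_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "_praxis/recon",
    "params": {"agent_short_name": "claude-code", "is_semantic": False},
}
# One NDJSON line on the stdio/AMQP transport
wire = json.dumps(recon_request) + "\n"
```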
NodeCommand (non-agent concerns)
pub enum NodeCommand {
    Intercept(InterceptCommand),
    Terminal(TerminalCommand),
    Config(ConfigCommand),
    AgentRegistry(AgentRegistryCommand),
}
Agent and session interaction have moved off NodeCommand entirely. The
legacy NodeCommand::Agent and NodeCommand::Session variants — along
with NodeSignalMessage::ReconResultUpdate and ::SessionUpdate — were
removed once the CLI, web frontend, service orchestrator, and MCP server
had all been ported to ACP.
Intercept Commands
- `Enable` - start interception with the specified method
- `Disable` - stop interception and clean up
State Management
The node is mostly stateless: it reports to the service but doesn't persist data locally. However, some state is maintained:
Intercept State
Saved to disk for crash recovery:
- Active interception method
- Installed certificate info
- Modified system settings
On restart, the node cleans up stale state.
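A minimal sketch of this crash-recovery pattern, assuming a simple key=value on-disk format (the real node's serialization may differ; `InterceptState` and its fields are illustrative):

```rust
use std::fs;
use std::path::Path;

// Illustrative stand-in for the persisted intercept state.
#[derive(Debug, PartialEq)]
struct InterceptState {
    method: String,
    cert_fingerprint: String,
}

fn save_state(path: &Path, state: &InterceptState) -> std::io::Result<()> {
    fs::write(
        path,
        format!("method={}\ncert={}\n", state.method, state.cert_fingerprint),
    )
}

fn load_state(path: &Path) -> Option<InterceptState> {
    let text = fs::read_to_string(path).ok()?;
    let (mut method, mut cert) = (None, None);
    for line in text.lines() {
        match line.split_once('=')? {
            ("method", v) => method = Some(v.to_string()),
            ("cert", v) => cert = Some(v.to_string()),
            _ => {}
        }
    }
    Some(InterceptState { method: method?, cert_fingerprint: cert? })
}

/// On restart: if stale state exists, revert it and remove the file.
fn cleanup_stale_state(path: &Path) -> bool {
    if load_state(path).is_some() {
        // ...revert certificate installs / proxy settings here...
        let _ = fs::remove_file(path);
        true
    } else {
        false
    }
}
```

After a crash, the node finds the leftover file, undoes the recorded changes, and deletes it; a clean shutdown would have removed the file already.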
Session State
Kept in memory:
- Live ACP sessions keyed by `session_id`, each with its own Lua VM and cancellation flag
- PTY handles
- Transaction tracking
Node Reset
A node can be reset at any time via the UI, CLI (node reset), or MCP
(node_reset). Reset cancels all in-flight operations, closes sessions and
terminals, disables interception, and re-registers the node with the service
— equivalent to a clean restart without killing the process.
The reset signal is delivered on a dedicated RabbitMQ queue
(Node_{id}_reset) consumed by its own task. This guarantees the signal is
never blocked by a long-running command handler in the main event loop. When
the reset consumer receives a message it cancels a CancellationToken that
the main loop observes. Slow commands are also wrapped in tokio::select!
with this token so they abort at the next .await point.
After cleanup the runtime returns RuntimeExit::Reset and the main
reconnection loop immediately re-registers without the usual reconnect delay.
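The cancellation pattern can be sketched synchronously, with an atomic flag standing in for tokio's `CancellationToken` (`ResetToken` and `run_slow_command` are illustrative names):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

/// Simplified, synchronous stand-in for tokio's CancellationToken.
#[derive(Clone, Default)]
struct ResetToken(Arc<AtomicBool>);

impl ResetToken {
    fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
}

/// A slow command checks the token between work items and aborts early,
/// mirroring how the real async commands abort at the next .await point.
fn run_slow_command(token: &ResetToken, steps: u32) -> Result<u32, &'static str> {
    let mut done = 0;
    for _ in 0..steps {
        if token.is_cancelled() {
            return Err("reset: command aborted");
        }
        done += 1; // one unit of work
    }
    Ok(done)
}
```

The dedicated reset consumer owns one clone of the token; the main loop and every long-running command observe it.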
Registration
When the node starts:
- Generates unique node ID (or uses existing)
- Collects system information
- Runs agent fingerprinting
- Sends registration to service
- Begins processing commands
Periodic updates report current state to the service.
Service Architecture
The service is the central backend that coordinates nodes, manages data, and orchestrates operations. It is the only component that talks to nodes — clients (CLI, web, external ACP tools) always reach nodes through the service's ACP server and proxy layer.
Overview
┌──────────────────────────────────────────────────────────────┐
│ Service │
│ │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ Node Tracker │ │ Semantic Ops │ │ Chain │ │
│ │ │ │ Manager │ │ Executor │ │
│ │ node_1 ─────┐ │ │ │ │ │ │
│ │ node_2 ─────┤ │ │ queue ─────┐ │ │ workflow ──┐ │ │
│ │ node_3 ─────┘ │ │ executor ──┘ │ │ steps ─────┘ │ │
│ └────────────────┘ └────────────────┘ └────────────────┘ │
│ │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ Trigger │ │ LLM Client │ │ Message │ │
│ │ Engine │ │ │ │ Processor │ │
│ │ scheduler ────│ │ providers ────│ │ │ │
│ └────────────────┘ └────────────────┘ └────────────────┘ │
│ │
│ ┌────────────────┐ │
│ │ Database │ │
│ │ SQLite/PG ────│ │
│ └────────────────┘ │
│ │
│ RabbitMQ │
└─────────────────────────────┬────────────────────────────────┘
│
┌───────────────┼───────────────┐
│ │ │
Nodes Clients Web
ACP server and node proxy
The service hosts an ACP server (service/src/acp_server.rs) that
external clients speak to. When a client frame carries
_meta.praxis.nodeId or names a session_id the service has mapped to a
node, the AcpNodeProxy
(service/src/acp_node_proxy.rs) forwards the frame over RabbitMQ to the
target node's ACP server. Responses and session/update notifications
flow back the same way.
The service's internal orchestrator subsystems (e.g. tools, future
semantic_ops, claude_bridge) also drive nodes through this same proxy,
using AcpNodeProxy::request / request_collecting_text. Internal
callers get a svc_* pseudo-client-id so their responses are completed
in-process instead of being delivered to any external client queue.
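The routing decision can be sketched as follows; the `svc_` prefix check and `Client_{id}` queue naming follow the text above, while the type names are illustrative:

```rust
/// Where a node response should be delivered.
#[derive(Debug, PartialEq)]
enum ResponseRoute {
    /// Internal caller: complete the pending request in-process.
    InProcess,
    /// External caller: publish to the client's RabbitMQ response queue.
    ClientQueue(String),
}

fn route_response(client_id: &str) -> ResponseRoute {
    if client_id.starts_with("svc_") {
        ResponseRoute::InProcess
    } else {
        ResponseRoute::ClientQueue(format!("Client_{client_id}"))
    }
}
```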
Node Tracking
The service maintains state for all connected nodes:
```rust
struct NodeState {
    node_id: String,
    machine_name: String,
    os_details: String,
    agents: Vec<AgentInfo>,
    selected_agent: Option<SelectedAgent>,
    intercept_status: InterceptStatus,
    terminal_active: bool,
    last_seen: DateTime<Utc>,
}
```
Registration
When a node registers:
- Node info stored/updated
- Agent list recorded
- Acknowledgment sent with node-specific queue name
- Node subscribes to broadcast exchange
- Service broadcasts the current `application_logs_enabled` state to nodes and clients
Health Monitoring
Nodes send periodic updates. If a node goes silent:
- Marked as potentially offline
- Can be manually removed from UI
- Automatic cleanup after timeout
Semantic Operations Manager
Handles execution of semantic operations through agents:
Operation Queue
Operations are queued per node:
- One operation runs at a time per node
- FIFO ordering
- Can cancel queued or running operations
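A sketch of the per-node FIFO queueing described above, with illustrative names and string IDs standing in for the real operation types:

```rust
use std::collections::{HashMap, VecDeque};

/// One operation runs at a time per node; the rest wait in arrival order.
#[derive(Default)]
struct OpQueues {
    queues: HashMap<String, VecDeque<String>>, // node_id -> queued op ids
    running: HashMap<String, String>,          // node_id -> running op id
}

impl OpQueues {
    /// Enqueue an operation; start it immediately if the node is idle.
    fn submit(&mut self, node: &str, op: &str) {
        if self.running.contains_key(node) {
            self.queues
                .entry(node.to_string())
                .or_default()
                .push_back(op.to_string());
        } else {
            self.running.insert(node.to_string(), op.to_string());
        }
    }

    /// Mark the running op finished and promote the next queued one, if any.
    fn finish(&mut self, node: &str) -> Option<String> {
        self.running.remove(node);
        let next = self.queues.get_mut(node)?.pop_front()?;
        self.running.insert(node.to_string(), next.clone());
        Some(next)
    }
}
```

Cancellation would remove an op from the deque (queued) or signal the agent session (running).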
Execution Modes
One-Shot Mode:
- Operation prompt sent directly to agent session
- Agent executes and responds
- Response captured and returned
Agent Mode:
- Operation sent to orchestrator LLM with system prompt
- Orchestrator determines the action using the `session_prompt` tool
- Action executed via the agent
- Result returned to orchestrator
- Repeat until complete or max iterations
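The agent-mode loop can be sketched with closures standing in for the orchestrator LLM and agent calls (all names and signatures here are hypothetical):

```rust
/// The orchestrator LLM proposes an action, the agent executes it, and the
/// result is fed back until the LLM signals completion or the iteration
/// budget runs out. Returns the number of iterations used.
fn agent_mode_loop(
    mut orchestrate: impl FnMut(&str) -> Option<String>, // None = task complete
    mut execute: impl FnMut(&str) -> String,             // agent session_prompt
    initial_prompt: &str,
    max_iterations: usize,
) -> Result<usize, &'static str> {
    let mut context = initial_prompt.to_string();
    for i in 0..max_iterations {
        match orchestrate(&context) {
            None => return Ok(i),                        // completion signal
            Some(action) => context = execute(&action),  // feed result back
        }
    }
    Err("max iterations reached")
}
```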
System Prompts
Agent mode uses system prompts embedded at build time:
| Prompt | Location | Purpose |
|---|---|---|
| Semantic Op Agent | service/src/prompts/semantic_op_agent.prompt | Orchestrator behavior |
| Tool Calling | common/src/prompts/tool_calling.prompt | Tool call JSON format |
| Task Completion | common/src/prompts/task_completion.prompt | Completion signal format |
These prompts are compiled into the binary using `include_str!` and cannot be modified at runtime. This keeps orchestrator behavior consistent and prevents the prompts from being tampered with on a running deployment.
Model Override
Operations can specify a different LLM model than the default. The manager resolves the model reference and uses the appropriate provider.
Chain Executor
Executes multi-step workflows:
Chain Structure
Trigger → Element → Element → ... → Termination
│
└── Transform/Operation/Prompt
Execution Flow
- Chain triggered (manual, scheduled, or event-driven)
- Target spec resolved into concrete node/agent pairs
- For multi-target specs, the executor performs a fan-out (one execution per target)
- Elements executed in order following connections
- Output from each element passed to next
- Session groups maintain shared context
- Termination collects final output
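The element-by-element flow can be sketched as a fold over the chain, with a simplified `Element` type standing in for the real transform/operation/prompt elements:

```rust
/// Simplified chain element: either a pure transform or a prompt step.
enum Element {
    Transform(fn(&str) -> String),
    Prompt(&'static str),
}

/// Each element consumes the previous element's output; the final value is
/// what the termination collects.
fn run_chain(elements: &[Element], trigger_input: &str) -> String {
    elements.iter().fold(trigger_input.to_string(), |output, el| match el {
        Element::Transform(f) => f(&output),
        // A prompt element would send `output` to an agent session; here we
        // just tag it to keep the sketch self-contained.
        Element::Prompt(p) => format!("{p}: {output}"),
    })
}
```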
Session Groups
Elements in the same session group share an agent session:
- Maintains conversation context
- Allows multi-turn interactions
- YOLO mode can be set per group
Target Resolution
When a chain runs with a TargetSpec (from a trigger or advanced targeting), the targeting module resolves it into concrete (node_id, agent_short_name) pairs:
- List all registered nodes
- Filter by `node_ids` if non-empty
- Filter by `os_filter` (case-insensitive substring match on OS details)
- If `include_triggering_node` is set, ensure the triggering node passes the filter
- For each surviving node, filter discovered agents by `agent_short_names`
- Return the flattened list of resolved targets
Each resolved target gets its own independent chain execution.
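The resolution steps above can be sketched as a filter pipeline; the types below are simplified stand-ins for the real registry and spec structures:

```rust
// Simplified registry entry: (agent_short_name, currently available).
struct Node {
    node_id: String,
    os_details: String,
    agents: Vec<(String, bool)>,
}

struct TargetSpec {
    node_ids: Vec<String>,          // empty = all nodes
    os_filter: Option<String>,      // case-insensitive substring
    agent_short_names: Vec<String>, // empty = all agents
}

fn resolve_targets(nodes: &[Node], spec: &TargetSpec) -> Vec<(String, String)> {
    nodes
        .iter()
        .filter(|n| spec.node_ids.is_empty() || spec.node_ids.contains(&n.node_id))
        .filter(|n| match &spec.os_filter {
            Some(f) => n.os_details.to_lowercase().contains(&f.to_lowercase()),
            None => true,
        })
        .flat_map(|n| {
            n.agents
                .iter()
                .filter(|(name, available)| {
                    *available
                        && (spec.agent_short_names.is_empty()
                            || spec.agent_short_names.contains(name))
                })
                .map(move |(name, _)| (n.node_id.clone(), name.clone()))
        })
        .collect()
}
```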
Trigger Engine
The trigger engine automates chain execution based on configured triggers. It is initialized at service startup and runs for the lifetime of the service.
Trigger Types
```rust
enum TriggerConfig {
    Scheduled { schedule: ScheduleSpec, recurring: bool },
    InterceptMatch { rule_id: i64 },
    NewNode,
}

enum ScheduleSpec {
    DailyAt { hour: u8, minute: u8 },
    Interval { minutes: u32 },
}
```
Scheduler Loop
The engine runs a polling loop that checks for due scheduled triggers every 30 seconds. It also accepts refresh signals (via Notify) so that CRUD operations on triggers cause an immediate re-check.
For each due trigger:
- Load the associated chain definition
- Resolve the target spec against the current node registry
- Execute the chain via `execute_fan_out` for each resolved target
- Mark the trigger as fired (update `last_fired_at`, recompute `next_fire_at`)
- If the trigger is non-recurring, disable it after firing
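The `next_fire_at` recomputation can be sketched with plain Unix-second arithmetic (the real engine uses proper datetime types; field widths are simplified to `u64` here):

```rust
enum ScheduleSpec {
    DailyAt { hour: u64, minute: u64 },
    Interval { minutes: u64 },
}

const DAY: u64 = 86_400;

/// Next fire time (seconds since epoch) given the current time.
fn next_fire_at(spec: &ScheduleSpec, now: u64) -> u64 {
    match spec {
        ScheduleSpec::Interval { minutes } => now + minutes * 60,
        ScheduleSpec::DailyAt { hour, minute } => {
            let midnight = now - now % DAY;
            let target = midnight + hour * 3600 + minute * 60;
            // If today's slot has already passed, fire tomorrow.
            if target > now { target } else { target + DAY }
        }
    }
}
```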
Event-Driven Triggers
Event triggers fire outside the polling loop, in direct response to events:
InterceptMatch - When intercepted traffic matches an intercept rule, the node dispatch handler calls fire_intercept_match_triggers(). The engine looks up all enabled InterceptMatch triggers whose rule_id matches, applies a 60-second debounce per trigger, and fires matching chains.
NewNode - When a node registers, the node dispatch handler spawns a delayed task (10 seconds to allow agent discovery) that calls fire_new_node_triggers(). The engine fires all enabled NewNode triggers with the registering node ID as the triggering node.
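The per-trigger debounce can be sketched like this (`Debouncer` is an illustrative name; the real engine may track fire times differently):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Per-trigger debounce: a trigger fires at most once per window.
struct Debouncer {
    window: Duration,
    last_fired: HashMap<i64, Instant>, // trigger id -> last fire time
}

impl Debouncer {
    fn new(window: Duration) -> Self {
        Debouncer { window, last_fired: HashMap::new() }
    }

    /// Returns true (and records the fire) only if the window has elapsed
    /// since this trigger last fired.
    fn should_fire(&mut self, trigger_id: i64, now: Instant) -> bool {
        match self.last_fired.get(&trigger_id) {
            Some(last) if now.duration_since(*last) < self.window => false,
            _ => {
                self.last_fired.insert(trigger_id, now);
                true
            }
        }
    }
}
```

With a 60-second window, a burst of intercept matches on the same rule fires the chain once, then suppresses repeats until the window elapses.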
Trigger Storage
Triggers are stored in the chain_triggers database table with JSON-serialized trigger_config and target_spec columns. The engine queries this table for due triggers and event-based triggers, and updates it after firing.
Database
The service uses a database abstraction layer that supports both SQLite and PostgreSQL:
Schema
-- Configuration
CREATE TABLE config (
key TEXT PRIMARY KEY,
value TEXT
);
-- Operation definitions
CREATE TABLE operation_definitions (
id INTEGER PRIMARY KEY,
full_name TEXT UNIQUE,
content TEXT,
created_at TIMESTAMP,
updated_at TIMESTAMP
);
-- Operation executions
CREATE TABLE semantic_operations (
id TEXT PRIMARY KEY,
node_id TEXT,
agent_short_name TEXT,
operation_name TEXT,
status TEXT,
output TEXT,
created_at TIMESTAMP,
completed_at TIMESTAMP
);
-- Traffic log
CREATE TABLE traffic_log (
id INTEGER PRIMARY KEY,
timestamp TIMESTAMP,
node_id TEXT,
agent_short_name TEXT,
direction TEXT,
url TEXT,
request_body BLOB,
response_body BLOB,
-- ...
);
-- Lua agent scripts
CREATE TABLE lua_agent_scripts (
id TEXT PRIMARY KEY,
name TEXT,
script TEXT,
created_at TEXT,
updated_at TEXT
);
-- Chain triggers
CREATE TABLE chain_triggers (
id TEXT PRIMARY KEY,
chain_id TEXT NOT NULL,
trigger_config TEXT NOT NULL, -- JSON: TriggerConfig
target_spec TEXT NOT NULL, -- JSON: TargetSpec
enabled INTEGER DEFAULT 1,
last_fired_at TEXT,
next_fire_at TEXT,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);
-- Chain definitions, executions, etc.
Connection
Default: SQLite at ~/.praxis_operations.db
For production: PostgreSQL via PRAXIS_DATABASE_URL
LLM Client
Handles communication with LLM providers:
Supported Providers
- Anthropic (Claude)
- OpenAI (GPT)
- Google (Gemini)
- Groq
- Cerebras
- Mistral
- xAI
- Ollama (local)
Configuration
Stored in database as key-value pairs:
- `llm.semantic_ops.provider`
- `llm.semantic_ops.model`
- `llm.semantic_ops.api_key`
- (similar for other features)
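A tiny sketch of the key naming convention, assuming keys follow the `llm.{feature}.{field}` pattern shown above (feature names other than `semantic_ops` are assumptions):

```rust
/// Build the config key for one LLM feature assignment.
fn llm_config_key(feature: &str, field: &str) -> String {
    format!("llm.{feature}.{field}")
}
```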
Usage
Different features use different LLM assignments:
- Semantic Operations - operation orchestration
- Semantic Parser - tool discovery during recon
- Traffic Parser - traffic summarization
Message Processing
The service processes messages from multiple queues:
Node Messages (NodeSignal)
- `Registration` - node startup
- `InformationUpdate` - periodic state update
- `CommandResponse` - response to a command
- `InterceptedTraffic` - captured traffic
- `ReconResultUpdate` - recon data
- `SemanticParserRequest` - parser request from a node
Client Messages (ClientSignal)
- `Registration` - client (web) connection
- `Command` - forward to node
- `SemanticOpRun` - execute operation
- `ChainRun` - execute chain
- `TrafficLogRequest` - query traffic
- Configuration and management requests
Broadcasts
The service sends broadcasts (fanout exchange) to keep all clients in sync:
- `StateUpdate` - periodic full state
- `ChainExecutionUpdate` - chain progress
- `ServiceOnline` - service restart notification
- `EventLoggingSet` - centralized logging toggle
Lua Agent Script Management
The service manages Lua agent connector scripts stored in the database. Default scripts from the agents/ directory are embedded at build time and seeded into the lua_agent_scripts table on first startup when the table is empty.
When a node registers, the service includes all Lua scripts in the NodeRegistrationAck message sent to the node's direct queue. This avoids a race condition where a fanout broadcast could arrive before the node's exchange consumer is ready.
Scripts can be added, updated, or deleted via the web UI (Settings > Agents tab). When scripts change, the service broadcasts an AgentRegistryUpdate to all connected nodes so they reload the latest scripts.
A "Reset Defaults" operation clears all scripts and re-inserts the embedded defaults.
Agent version information (extracted during fingerprinting) is included in the DiscoveredAgent data reported by nodes and displayed in the web UI.
Claude Bridge
The service can optionally run Claude Bridge listeners that accept inbound connections from Claude Code instances. Each connection creates a virtual node with an active session, allowing Claude to be controlled through Praxis without deploying a full node.
Two protocol versions are supported:
CCRv1 - WebSocket listener with bidirectional NDJSON. Simpler protocol, fewer requirements on the Claude side.
CCRv2 - HTTP server with SSE for server-to-client messages and POST for client-to-server messages. Includes epoch-based versioning and heartbeat-based disconnect detection.
Both bridges are managed by dedicated manager structs (CcrV1Manager, CcrV2Manager) that start and stop based on configuration changes. When enabled, they bind to their configured ports and accept connections. Each connection runs a BridgeSession that handles the protocol handshake, registers a virtual node via RabbitMQ, and relays messages between the Claude worker and the Praxis service.
Bridge nodes only support the Session capability. They do not support interception, recon, or terminal access. See Claude Bridge for protocol details and operator setup.
Startup Sequence
- Load configuration from database
- Seed default Lua agent scripts (if table is empty)
- Connect to RabbitMQ
- Declare queues and broadcast exchanges
- Start message consumers
- Initialize semantic ops manager
- Initialize chain executor
- Initialize trigger engine and start scheduler
- Start Claude Bridge listeners (if enabled)
- Request node re-registration (broadcast)
- Begin processing messages
Error Handling
The service handles various failure scenarios:
- Node disconnect: State preserved, node can reconnect
- RabbitMQ failure: Reconnection with backoff
- LLM errors: Reported to operation caller
- Database errors: Logged, operation may fail
Errors are logged and surfaced to the UI where appropriate.
Web Architecture
The web component serves the frontend and provides the communication layer between browsers and the service.
Overview
┌───────────────────────────────────────────────────────────┐
│ Web Component │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ HTTP Server (Axum) │ │
│ │ │ │
│ │ GET / → Static files (React SPA) │ │
│ │ GET /ws → WebSocket upgrade │ │
│ │ GET /api/* → API endpoints │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼──────────────────────────┐ │
│ │ WebSocket Handler │ │
│ │ │ │
│ │ Client ◀───JSON Messages───▶ RabbitMQ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ React Frontend │ │
│ │ │ │
│ │ TypeScript + Tailwind + React Flow │ │
│ │ (Embedded in binary) │ │
│ └─────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────┘
HTTP Server
The web server is built with Axum and handles:
Static Assets
The React frontend is compiled and embedded in the binary at build time. When you request /, you get the SPA.
WebSocket Endpoint
/ws upgrades HTTP connections to WebSocket for real-time communication. Each connected browser gets:
- A unique client ID
- A dedicated RabbitMQ queue for responses
- State updates via broadcast exchange (fanout)
API Endpoints
Minimal REST API for specific operations:
- `/api/health` - health check
- `/api/nodes` - node list (for programmatic access)
Most functionality uses WebSocket for bidirectional communication.
WebSocket Handler
Connection Lifecycle
- Browser connects to `/ws`
- Server generates a client ID
- Client registered with RabbitMQ (creates response queue)
- Initial state sent to client
- Bidirectional message flow begins
- On disconnect, cleanup and queue deletion
Message Flow
Client → Server:
Browser → WebSocket → Handler → RabbitMQ (ClientSignal) → Service
Server → Client:
Service → RabbitMQ (Client_{id} or ClientBroadcast exchange) → Handler → WebSocket → Browser
Message Types
Messages are JSON-encoded variants of ClientSignalMessage and ClientDirectMessage:
// Sent by client
interface ClientMessage {
type: 'Command' | 'SemanticOpRun' | 'ChainRun' | ...;
payload: {...};
}
// Received by client
interface ServerMessage {
type: 'StateUpdate' | 'CommandResponse' | 'SemanticOpUpdate' | ...;
payload: {...};
}
React Frontend
Technology Stack
- React 18 with TypeScript
- Tailwind CSS for styling
- React Flow for chain builder visualization
- xterm.js for terminal emulation
- Vite for build tooling
Application Structure
web/frontend/src/
├── components/ # Reusable UI components
├── pages/ # Page components
├── hooks/ # Custom React hooks
├── contexts/ # React context providers
├── utils/ # Utility functions
└── App.tsx # Main application
Key Components
AppContext - Global state management:
- Connected nodes and their state
- Selected node and agent
- WebSocket connection status
- Settings and configuration
NodeList - Sidebar showing all connected nodes and their agents.
NodeDetailPage - Shows node info and an agents table with columns for name, short name, version, and session status.
AgentDetailPage - Agent header with name, version, and session controls. Includes session interaction panel, recon results, and operation/chain runners.
ReconPanel - Displays reconnaissance results organized by category.
SessionPanel - Interactive session interface for sending prompts.
ChainBuilder - Visual workflow editor using React Flow.
TrafficViewer - Table and detail view of intercepted traffic.
Terminal - PTY terminal emulator using xterm.js.
State Management
The frontend uses React Context for global state:
interface AppState {
nodes: Map<string, NodeState>;
selectedNode: string | null;
selectedAgent: string | null;
settings: Settings;
wsConnected: boolean;
}
State is primarily driven by StateUpdate messages from the service, keeping all clients in sync.
Real-Time Updates
WebSocket messages trigger state updates:
StateUpdatearrives with all node data- Context updates state
- Components re-render with new data
This means multiple browser tabs see the same state: select an agent in one tab, and it shows as selected in another.
Orchestrator
The Orchestrator is an AI-powered agent that can autonomously interact with the Praxis network. It connects to the built-in MCP SSE server as a client to access all Praxis tools dynamically.
Architecture
┌─────────────────────────────────────────────────────┐
│ Orchestrator │
│ │
│ LLM (Claude/GPT/etc) │
│ │ │
│ ▼ │
│ Tool Parser ──▶ Local Tools (wait, report_plan) │
│ │ │
│ ▼ │
│ MCP Client ──SSE──▶ MCP Server (Service) │
│ └──▶ All Praxis tools │
└─────────────────────────────────────────────────────┘
How It Works
- On session start, the Orchestrator connects to the MCP SSE server at `http://127.0.0.1:{port}/sse`
- It fetches all available tools via `list_tools` and converts them to the AI tool format
- Two local tools (`wait` and `report_plan`) are appended for sleep and plan tracking
- User prompts enter a tool-use loop: the LLM generates responses, tool calls are parsed and executed (local tools handled in-process, everything else delegated to the MCP server), and results fed back to the LLM
- The MCP client connection is dropped when the session ends
Prerequisites
- MCP server must be enabled in Settings > MCP Server
- Orchestrator LLM must be configured in Settings > LLM Providers > Feature Selection
Tool Execution
Tools are stateless — each MCP tool call includes explicit parameters (e.g., node ID) rather than relying on selected-node context. The LLM manages passing the correct IDs based on previous tool results.
Build Process
Development
cd web/frontend
npm install
npm run dev # Starts Vite dev server on :5173
The dev server proxies API requests to the running web component.
Production
The frontend is built and embedded during cargo build:
- `npm run build` produces static files
- Build script embeds the files in the binary
- Axum serves from embedded assets
To skip frontend build during development:
PRAXIS_SKIP_FRONTEND=1 cargo build
Configuration
Environment Variables
| Variable | Effect |
|---|---|
| `PRAXIS_NODES_DIR` | Directory with node binaries for download |
| `PRAXIS_SKIP_FRONTEND` | Skip frontend build |
Ports
- Default HTTP/WebSocket port: 8080
- Can be changed via command line or environment
Error Handling
WebSocket Errors
- Connection drops handled with reconnection logic
- Stale state detected via sequence numbers
- Reconnect requests full state update
API Errors
- HTTP errors returned as JSON with status codes
- WebSocket errors sent as error messages
Security Considerations
- No authentication by default (intended for internal use)
- Should be behind firewall or VPN in production
- HTTPS can be configured via reverse proxy
Local Development
This guide covers running Praxis locally for development and testing.
Quick Start with Docker
The fastest way to get running:
docker compose up --build
This starts:
- RabbitMQ on port 5672 (management UI on 15672)
- Praxis service and web on port 8080
- MCP server on port 8585 (when enabled in Settings > MCP Server)
- Claude Bridge CCRv1 on port 8586 (when enabled in Settings > Claude Bridge)
- Claude Bridge CCRv2 on port 8587 (when enabled in Settings > Claude Bridge)
Open http://localhost:8080 to access the UI.
To use a different MCP server port:
PRAXIS_MCP_PORT=9090 docker compose up --build
With PostgreSQL
For PostgreSQL instead of SQLite:
docker compose --profile postgres up --build
Faster Builds
Skip praxis_node binaries when you only need the service and web components:
SKIP_NODE_BUILD=1 docker compose up --build
Use the release-optimized profile for fully optimized production builds (full LTO, single codegen unit — significantly slower):
CARGO_PROFILE=release-optimized docker compose up --build
Building from Source
Prerequisites
- Rust 1.70+ with cargo
- Node.js 18+ with npm
- RabbitMQ running locally
Build Steps
- Clone the repository:
git clone https://github.com/originsec/praxis.git
cd praxis
- Build everything:
cargo build --release
This builds the service, web, and node components. The frontend is built automatically during cargo build.
Skip Frontend Build
During development, you can skip the frontend build:
PRAXIS_SKIP_FRONTEND=1 cargo build
Then run the frontend dev server separately for hot reload:
cd web/frontend
npm install
npm run dev
The dev server proxies to the backend.
Running Locally
Start RabbitMQ
If not using Docker:
# Linux
sudo systemctl start rabbitmq-server
Create the praxis user:
rabbitmqctl add_user praxis praxis
rabbitmqctl set_permissions -p / praxis ".*" ".*" ".*"
Start the Service
cargo run --release --bin praxis_service
The service starts and connects to RabbitMQ, creating necessary queues.
Start the Web Component
cargo run --release --bin praxis_web
The web component serves the UI on http://localhost:8080.
Start a Node
For testing locally, run a node on your own machine:
cargo run --release --bin praxis_node
The node connects to RabbitMQ and registers with the service.
Environment Variables
Configure via environment or .env file:
| Variable | Default | Description |
|---|---|---|
| `PRAXIS_RABBITMQ_URL` | `amqp://praxis:praxis@localhost:5672` | RabbitMQ connection |
| `PRAXIS_DATABASE_URL` | `~/.praxis_operations.db` | Database path |
| `RUST_LOG` | `info` | Log level |
Database Options
SQLite is used by default with no configuration required.
For PostgreSQL or advanced configuration, see Database Configuration.
Development Workflow
Code Changes
- Make changes to Rust code
- Rebuild:
cargo build - Restart affected component
For frontend changes with the dev server running, changes hot-reload automatically.
Testing
Run tests:
cargo test
Logs
Adjust log verbosity:
RUST_LOG=debug cargo run --bin praxis_service
RUST_LOG=praxis_node::intercept=trace cargo run --bin praxis_node
Common Issues
RabbitMQ connection failed
- Verify RabbitMQ is running
- Check credentials match
- Ensure `PRAXIS_RABBITMQ_URL` is correct
Frontend not building
- Ensure Node.js is installed
- Run `npm install` in `web/frontend`
- Check for build errors
Database errors
- Check file permissions for SQLite
- Verify PostgreSQL is running and accessible
- Check the connection URL format
Node not appearing
- Verify the node connected to RabbitMQ
- Check node logs for errors
- Ensure service is running
Multiple Nodes
You can run multiple nodes locally (useful for testing):
# Terminal 1
cargo run --bin praxis_node
# Terminal 2
cargo run --bin praxis_node
Each node gets a unique ID and appears separately in the UI.
Debugging
Enable debug logging
RUST_LOG=debug cargo run --bin praxis_service
Check RabbitMQ queues
Open http://localhost:15672 (praxis/praxis) to see queue activity.
Frontend debugging
Open browser dev tools. The React app logs useful debug information to the console.
Database Configuration
Praxis supports two database backends:
- SQLite (default) - Zero-configuration, single-instance deployments
- PostgreSQL - Production deployments, multiple service instances
Quick Reference
| Feature | SQLite | PostgreSQL |
|---|---|---|
| Setup | Automatic | Requires server |
| Multiple instances | No | Yes |
| Network storage (SMB/NFS) | No | Yes |
| Cloud deployments | No | Yes |
| Connection pooling | 1 connection | 10 connections |
| Best for | Local development | Production, cloud, teams |
SQLite (Default)
No configuration required. The database file is created automatically at:
| Platform | Path |
|---|---|
| Linux/macOS | ~/.praxis_operations.db |
| Windows | %USERPROFILE%\.praxis_operations.db |
SQLite is configured with WAL journal mode and a 5-second busy timeout.
Warning: SQLite does not work reliably on network file systems (SMB, NFS, Azure Files, EFS). File locking mechanisms don't translate correctly over these protocols, leading to database corruption and "database is locked" errors. For cloud deployments with persistent storage, use PostgreSQL.
Custom SQLite Path
export PRAXIS_DATABASE_URL=/path/to/custom.db
# or
export PRAXIS_DATABASE_URL=sqlite:///path/to/custom.db
PostgreSQL
Prerequisites
- PostgreSQL 14+ server
- A database created for Praxis
- User with CREATE TABLE privileges
Setup
Create the database:
createdb praxis
Configure the connection:
export PRAXIS_DATABASE_URL=postgresql://user:password@host:5432/praxis
The schema is created automatically on first run.
Connection URL Format
postgresql://[user[:password]@][host][:port]/database[?options]
Examples:
# Local server, default port
postgresql://praxis:secret@localhost/praxis
# Remote server with port
postgresql://praxis:secret@db.example.com:5432/praxis
# With SSL mode
postgresql://praxis:secret@db.example.com:5432/praxis?sslmode=require
SSL/TLS Configuration
For production deployments, enable SSL in the connection URL:
| Mode | Description |
|---|---|
| `sslmode=disable` | No SSL (not recommended) |
| `sslmode=prefer` | Try SSL, fall back to unencrypted |
| `sslmode=require` | Require SSL, don't verify the certificate |
| `sslmode=verify-ca` | Require SSL, verify the CA |
| `sslmode=verify-full` | Require SSL, verify the CA and hostname |
Example with full verification:
export PRAXIS_DATABASE_URL="postgresql://user:pass@host:5432/praxis?sslmode=verify-full&sslrootcert=/path/to/ca.crt"
Connection Pool Settings
PostgreSQL connections use these defaults:
| Setting | Value | Description |
|---|---|---|
| Max connections | 10 | Maximum pool size |
| Connect timeout | 30s | Time to establish connection |
| Idle timeout | 600s | Close idle connections after |
These are hardcoded but sufficient for most deployments. For high-traffic scenarios, tune PostgreSQL server settings (max_connections, shared_buffers) instead.
Schema
The schema is created automatically. Key tables:
| Table | Purpose |
|---|---|
| `operations` | Semantic operation executions |
| `operation_definitions` | Stored operation templates |
| `intercepted_traffic` | Captured HTTP traffic |
| `intercept_rules` | Traffic matching rules |
| `traffic_matches` | Rule match results |
| `operation_chains` | Chain workflow definitions |
| `chain_executions` | Chain execution history |
| `recon_results` | Agent reconnaissance data |
| `event_log` | Centralized logging |
| `service_config` | Key-value configuration |
| `lua_agent_scripts` | Lua agent connector scripts |
Traffic data is automatically pruned after 7 days.
Schema Migrations
Schema migrations run automatically on service startup. The service applies idempotent ALTER TABLE statements to add new columns introduced in newer versions. No manual migration steps are required when upgrading Praxis. The service_config table stores version tracking keys (e.g., builtin_scripts_version) to coordinate data migrations like updating built-in scripts.
Migration: SQLite to PostgreSQL
Praxis doesn't include a built-in migration tool. To migrate:
- Export data from SQLite:
sqlite3 ~/.praxis_operations.db .dump > praxis_dump.sql
- Convert SQLite-specific syntax to PostgreSQL:
  - `INTEGER PRIMARY KEY` → `SERIAL PRIMARY KEY`
  - `BLOB` → `BYTEA`
  - Remove `AUTOINCREMENT`
  - Adjust date functions if used
- Import to PostgreSQL:
psql -d praxis -f praxis_dump.sql
For most deployments, starting fresh with PostgreSQL is simpler than migrating.
Multi-Instance and Cloud Deployments
PostgreSQL is required for:
- Multiple `praxis_service` instances (e.g., behind a load balancer)
- Cloud deployments (Azure Container Apps, AWS ECS, Kubernetes)
- Any deployment using network-attached storage
SQLite limitations:
- File locking doesn't work over SMB, NFS, Azure Files, or EFS
- Concurrent writes from multiple processes cause corruption
- "Database is locked" errors under load
- No recovery from partial writes on network storage
PostgreSQL handles:
- Concurrent connections from multiple instances
- Proper transaction isolation and row-level locking
- Network-transparent client/server architecture
- Connection pooling per instance
Backup and Restore
SQLite
# Backup
cp ~/.praxis_operations.db ~/.praxis_operations.db.backup
# Restore
cp ~/.praxis_operations.db.backup ~/.praxis_operations.db
PostgreSQL
# Backup
pg_dump -Fc praxis > praxis_backup.dump
# Restore
pg_restore -d praxis praxis_backup.dump
For point-in-time recovery, configure PostgreSQL WAL archiving.
Troubleshooting
Connection Refused
Error: Connection refused (os error 111)
- Verify PostgreSQL is running: `pg_isready -h host -p 5432`
- Check that firewall rules allow port 5432
- Verify `pg_hba.conf` allows connections from your IP
Authentication Failed
Error: password authentication failed for user "praxis"
- Verify username and password in URL
- Check the `pg_hba.conf` authentication method
- Ensure the user exists: `\du` in psql
Database Does Not Exist
Error: database "praxis" does not exist
Create it:
createdb praxis
# or
psql -c "CREATE DATABASE praxis;"
SSL Required
Error: SSL connection is required
Add SSL mode to connection URL:
postgresql://user:pass@host:5432/praxis?sslmode=require
SQLite Locked
Error: database is locked
- If using network storage (SMB, NFS, Azure Files): switch to PostgreSQL
- Only one `praxis_service` instance can use SQLite at a time
- Close other connections (GUI tools, scripts)
- Check for zombie processes: `lsof ~/.praxis_operations.db`
Performance Tuning
PostgreSQL Server
For production workloads, tune these PostgreSQL settings:
# postgresql.conf
max_connections = 100
shared_buffers = 256MB
effective_cache_size = 768MB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 4MB
Vacuum and Maintenance
PostgreSQL autovacuum handles routine maintenance. For large traffic volumes, consider:
# Manual vacuum after bulk deletes
psql -d praxis -c "VACUUM ANALYZE intercepted_traffic;"
Indexing
The schema includes indexes for common queries. If you run custom queries against the database, add indexes as needed:
-- Example: index for custom report queries
CREATE INDEX idx_operations_agent ON operations(agent_short_name);
Azure Deployment
This guide covers deploying Praxis to Azure using Azure Container Apps with PostgreSQL, automatic scaling, and persistent storage.
Architecture
┌─────────────────────────────────────────────────┐
│ Azure │
│ │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ Container │ │ Container Instance │ │
│ │ App (Praxis) │◄───│ (RabbitMQ) │ │
│ └──────┬───────┘ └──────────────────────┘ │
│ │ │ │
│ ┌──────▼───────┐ ┌─────────▼────────┐ │
│ │ PostgreSQL │ │ Azure File Share│ │
│ │ Flexible │ │ (persistence) │ │
│ └──────────────┘ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────┘
│
│ Internet
│
┌─────▼─────┐
│ Nodes │
│ (Targets) │
└───────────┘
Prerequisites
- Azure CLI - Install from https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
- Docker - Install from https://docs.docker.com/get-docker/
- Azure Subscription - Active subscription with appropriate permissions
Quick Start
1. Login to Azure
az login
az account set --subscription <your-subscription-id>
2. Deploy Praxis
cd /path/to/praxis
./scripts/azure-deploy.sh
The script will:
- Create all required Azure resources
- Build and push Docker images to ACR
- Deploy PostgreSQL Flexible Server
- Deploy Praxis with RabbitMQ
- Display connection details
3. Access Your Deployment
After deployment completes, you'll receive URLs for:
- Web Interface (HTTPS): https://praxis-app.{region}.azurecontainerapps.io
- RabbitMQ (AMQP): amqp://praxis:praxis@praxis-rabbitmq-{hash}.{region}.azurecontainer.io:5672
- RabbitMQ Management UI: http://praxis-rabbitmq-{hash}.{region}.azurecontainer.io:15672
Script Commands
./scripts/azure-deploy.sh # Deploy Praxis
./scripts/azure-deploy.sh --stop # Stop all resources (pause billing)
./scripts/azure-deploy.sh --start # Start all resources
./scripts/azure-deploy.sh --delete # Delete all Azure resources
./scripts/azure-deploy.sh --help # Show help
Configuration
Customize deployment with environment variables:
export AZURE_RESOURCE_GROUP="praxis-rg"
export AZURE_LOCATION="westus2"
export PRAXIS_POSTGRES_PASS="MySecureP@ssword123"
./scripts/azure-deploy.sh
| Variable | Default | Description |
|---|---|---|
| AZURE_RESOURCE_GROUP | praxis-rg | Resource group name |
| AZURE_LOCATION | francecentral | Azure region |
| AZURE_ACR_NAME | praxisacr | Container registry name prefix |
| AZURE_CONTAINER_APP_ENV | praxis-env | Container app environment |
| AZURE_STORAGE_ACCOUNT | praxisstorage | Storage account prefix |
| AZURE_POSTGRES_SERVER | praxis-postgres | PostgreSQL server name prefix |
| PRAXIS_POSTGRES_PASS | Praxis_db_2024! | PostgreSQL admin password |
Resource names are automatically made unique using a hash suffix derived from your subscription and resource group.
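To illustrate the idea, a stable suffix can be derived by hashing the two values together, so repeated deployments produce the same resource names while different subscriptions or groups do not collide. This is a hypothetical sketch, not the deploy script's actual algorithm; unique_suffix is an invented name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical: derive a short, deterministic suffix from the subscription
// ID and resource group name. The real script may use a different scheme.
fn unique_suffix(subscription_id: &str, resource_group: &str) -> String {
    let mut hasher = DefaultHasher::new();
    subscription_id.hash(&mut hasher);
    resource_group.hash(&mut hasher);
    // Zero-padded hex so the slice below is always 6 characters
    format!("{:016x}", hasher.finish())[..6].to_string()
}

fn main() {
    let suffix = unique_suffix("0000-1111", "praxis-rg");
    // Same inputs always yield the same suffix, so re-deploys reuse names
    assert_eq!(suffix, unique_suffix("0000-1111", "praxis-rg"));
    println!("praxisacr{}", suffix);
}
```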
What Gets Deployed
- Azure Container Registry (ACR) - Stores Praxis and RabbitMQ images
- Azure Storage Account - File share for RabbitMQ persistence
- PostgreSQL Flexible Server - Database backend (Burstable B1ms tier)
- Container App Environment - Managed environment for Container Apps
- RabbitMQ - Azure Container Instance with persistent storage
- Praxis - Container App with external HTTPS ingress
Stopping and Starting
To pause billing when not using Praxis:
# Stop all resources
./scripts/azure-deploy.sh --stop
This will:
- Stop PostgreSQL Flexible Server
- Stop RabbitMQ Container Instance
- Scale Praxis Container App to 0 replicas
To resume:
# Start all resources
./scripts/azure-deploy.sh --start
Storage accounts and Container Registry may still incur minimal charges when stopped.
Updating Deployments
After making code changes, redeploy by running the script again:
./scripts/azure-deploy.sh
The script detects existing resources and updates them rather than recreating.
Management Commands
# View Praxis logs (real-time)
az containerapp logs show -n praxis-app -g praxis-rg --follow
# View RabbitMQ logs
az container logs --name praxis-rabbitmq -g praxis-rg --follow
# Open Praxis in browser
az containerapp browse -n praxis-app -g praxis-rg
# Restart RabbitMQ
az container restart --name praxis-rabbitmq -g praxis-rg
Troubleshooting
# Check Praxis app status
az containerapp show -n praxis-app -g praxis-rg --query properties.runningStatus
# View recent logs
az containerapp logs show -n praxis-app -g praxis-rg --tail 100
az container logs --name praxis-rabbitmq -g praxis-rg --tail 100
# Check RabbitMQ status
az container show --name praxis-rabbitmq -g praxis-rg --query instanceView.state
# Check PostgreSQL status
az postgres flexible-server show -n <server-name> -g praxis-rg --query state
Security Best Practices
Warning: The Praxis web interface has no built-in authentication or access control. Anyone who can reach the URL can access and control your Praxis deployment. You must implement access protection at the network or gateway level.
Protecting the Web Interface
Since Praxis does not provide its own authentication, use one of these Azure-native approaches:
Azure AD Easy Auth (Recommended)
Container Apps support built-in authentication. Enable it via the Azure Portal or CLI:
az containerapp auth update \
-n praxis-app \
-g praxis-rg \
--unauthenticated-client-action RedirectToLoginPage \
--set-provider-aad \
--client-id <your-app-registration-client-id> \
--issuer "https://login.microsoftonline.com/<your-tenant-id>/v2.0"
This requires users to authenticate with Azure AD before accessing Praxis.
Other Options
- VNet Integration - Restrict to internal network only, access via VPN or Azure Bastion
- IP Allowlisting - Use Container Apps ingress access restrictions to allow specific IPs
- Azure Front Door with WAF - For production: WAF protection, DDoS mitigation, geo-restrictions
Other Security Recommendations
- Change default passwords - Set PRAXIS_POSTGRES_PASS and update RabbitMQ credentials
- Use Azure Key Vault - Store secrets securely rather than in environment variables
- Enable diagnostic logging - Send logs to Log Analytics for audit trails
- Regular updates - Keep base images current
Cleanup
Delete all resources:
./scripts/azure-deploy.sh --delete
This deletes:
- Container Instance (RabbitMQ)
- Container App (Praxis)
- PostgreSQL Flexible Server
- Azure Container Registry
- Storage Account
- Log Analytics Workspace
- Container App Environment
- Resource Group
Verify deletion:
az group list --query "[?name=='praxis-rg']" -o table
Contributing
Praxis is open source and welcomes contributions. This guide covers the codebase structure and how to get involved.
Repository Structure
praxis/
├── common/ # Shared types and utilities
├── node/ # Node component (runs on targets)
├── service/ # Service component (backend)
├── web/ # Web component (frontend + server)
├── semantic_parser/ # LLM-based text parsing library
├── docs/ # This documentation
├── .github/ # CI/CD workflows
└── docker-compose.yml # Local development setup
Components
Common (common/)
Shared code used by all components:
- Message types and serialization
- RabbitMQ utilities
- AI client abstraction
- Logging macros
When adding functionality needed by multiple components, put it here.
Node (node/)
The agent that runs on target machines:
- Agent connectors (Claude Code, Gemini, etc.)
- Traffic interception
- Session management
- Terminal handling
node/src/
├── agent_connectors/ # Per-agent implementations
│ └── lua/ # Lua connector runtime + CDP helpers
├── intercept/ # Traffic interception
├── terminal/ # PTY terminal
└── runtime.rs # Main event loop
Lua-based agent scripts (Claude Code, Codex, Cursor, Gemini, M365 Copilot) live in agents/ at the project root and are embedded into the binary at build time.
Service (service/)
The backend that coordinates everything:
- Node tracking
- Semantic operations
- Chain execution
- Database persistence
service/src/
├── semantic_ops/ # Operation execution
├── chain_execution/ # Chain runner
├── database/ # Persistence layer
└── config/ # Service configuration
Web (web/)
The frontend and HTTP/WebSocket server:
- React SPA frontend
- Axum HTTP server
- WebSocket handler
- RabbitMQ bridge
web/
├── src/ # Rust backend
└── frontend/ # React frontend
└── src/
├── components/
├── pages/
└── context/
Semantic Parser (semantic_parser/)
Standalone library for LLM-based parsing:
- Schema-based extraction
- Multi-provider support
- Retry logic
See Semantic Parser for details.
Development Workflow
Setup
- Install Rust and Node.js
- Start RabbitMQ: docker compose up rabbitmq
- Build: cargo build
- Run service: cargo run --bin praxis_service
- Run web: cargo run --bin praxis_web
- Run node: cargo run --bin praxis_node
Environment Variables for Development
| Variable | Default (debug) | Description |
|---|---|---|
| PRAXIS_IGNORE_SERVICE_AGENTS | 1 | When 1, node ignores Lua scripts pushed from the service and uses only embedded scripts. Set to 0 to test service-managed agent scripts. |
| PRAXIS_DATABASE_URL | SQLite in home dir | Database connection string |
| PRAXIS_RABBITMQ_URL | amqp://praxis:praxis@localhost:5672 | RabbitMQ connection |
Making Changes
- Create a branch
- Make changes
- Run tests: cargo test
- Build: cargo build
- Test manually
- Submit PR
Code Style
- Follow existing patterns
- Use common::log_* macros for logging (except in the node/src/runtime.rs event forwarder - use tracing::* there to avoid recursion)
- Prefer explicit over clever
- Comment non-obvious blocks
Adding Agent Connectors
See Adding New Connectors. Prefer Lua-based connectors for CLI agents — they can be developed and tested at runtime via the web UI without recompiling.
Lua agent scripts live in agents/ at the project root and are embedded into binaries at build time. Shared libraries are at node/src/agent_connectors/lua/lib/ (helpers.lua for common utilities, devtools.lua for CDP/DevTools support).
Adding Operations
Operations are JSON definitions. Add to the library via the web UI or directly to the database.
Frontend Development
For hot reload:
cd web/frontend
npm run dev
The dev server proxies API requests to the running web component.
Testing
Unit Tests
cargo test
Integration Tests
Run the full stack and test manually. Automated integration tests are on the roadmap.
Testing Connectors
- Install the target agent
- Run a node
- Verify fingerprinting
- Test session creation
- Test interception
Pull Requests
Before Submitting
- Code builds without warnings
- Tests pass
- Changes are documented
- Commit messages are clear
PR Process
- Open a PR against main
- Describe the change
- Wait for review
- Address feedback
- Merge when approved
Feature Requests
Open an issue with:
- What you want
- Why it's useful
- Any implementation ideas
Bug Reports
Open an issue with:
- What happened
- What you expected
- Steps to reproduce
- Logs if available
Contact
- Issues: GitHub Issues
- Email: david.kaplan@preludesecurity.com
- Twitter: @depletionmode
Semantic Parser
The semantic parser is a standalone library for extracting structured data from unstructured text using LLMs. It's used throughout Praxis for various parsing tasks.
What It Does
Given:
- Raw text (config files, transcripts, logs)
- A JSON schema
- Parsing instructions
The semantic parser returns structured JSON matching the schema.
Usage in Praxis
Semantic Recon
When running semantic reconnaissance, the parser extracts tool definitions from config files:
Input: Claude Code mcp.json file contents
Schema: { "tools": [{ "name": string, "description": string }] }
Output: Structured tool list
Traffic Analysis
When traffic parsing is enabled, the parser analyzes LLM traffic:
Input: Intercepted request/response
Schema: { "prompt_summary": string, "tool_calls": [...] }
Output: Structured analysis
Session Analysis
Parsing session transcripts for capability discovery:
Input: Session history file
Schema: { "capabilities": [...], "sensitive_data": [...] }
Output: Extracted information
Library API
Basic Usage
use semantic_parser::{SemanticParser, ParserConfig, Provider};

// Configure the parser
let config = ParserConfig {
    provider: Provider::Anthropic,
    api_key: "sk-...".to_string(),
    model: "claude-haiku-4-5-20241022".to_string(),
    max_retries: 3,
    max_tokens: Some(4096),
};

// Create parser
let parser = SemanticParser::new(config)?;

// Parse text
let schema = r#"{"name": "string", "version": "string"}"#;
let prompt = "Extract the package name and version";
let text = "This is mypackage version 1.2.3";

let result = parser.parse(text, prompt, schema).await?;
// Returns: {"name": "mypackage", "version": "1.2.3"}
Provider Support
The parser supports multiple LLM providers:
| Provider | ID | Notes |
|---|---|---|
| Anthropic | anthropic | Claude models |
| OpenAI | openai | GPT models |
| Google | google | Gemini models |
| Groq | groq | Fast inference |
| Cerebras | cerebras | Fast inference |
| Mistral | mistral | Mistral models |
| xAI | xai | Grok models |
| NVIDIA | nvidia | NIM models |
| Ollama | ollama | Local models |
Model Selection
For parsing tasks, use fast, cheap models:
Recommended:
- claude-haiku-4-5-20241022 (Anthropic)
- gpt-4o-mini (OpenAI)
- gemini-1.5-flash (Google)
- llama-3.3-70b-versatile (Groq)
Fast inference providers like Groq and Cerebras work well since parsing typically requires many sequential calls.
Schema Format
Schemas are JSON Schema-like strings:
{
"tools": [
{
"name": "string",
"description": "string",
"parameters": {}
}
],
"config_path": "string"
}
The parser attempts to return valid JSON matching this structure.
Retry Logic
The parser includes built-in retry logic:
- Send request to LLM
- Parse response as JSON
- If invalid, retry with feedback
- Return result or error after max retries
Default: 3 retries.
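The loop above can be sketched as follows. This is an illustrative stand-in, not the library's real internals: call_llm substitutes for the provider request, and the JSON check is deliberately simplified to keep the sketch self-contained.

```rust
// Minimal sketch of retry-with-feedback. `call_llm` takes optional
// feedback from the previous failed attempt and returns raw model output.
fn parse_with_retries<F>(mut call_llm: F, max_retries: u32) -> Result<String, String>
where
    F: FnMut(Option<&str>) -> String,
{
    let mut feedback: Option<String> = None;
    for _ in 0..=max_retries {
        let raw = call_llm(feedback.as_deref());
        // Stand-in for "parse response as JSON": accept output shaped like
        // a JSON object. A real implementation would fully parse it.
        if raw.trim_start().starts_with('{') && raw.trim_end().ends_with('}') {
            return Ok(raw);
        }
        // On failure, feed the bad output back so the model can correct it
        feedback = Some(format!("Previous output was not valid JSON: {raw}"));
    }
    Err("parsing failed after max retries".into())
}

fn main() {
    // Simulated model that only emits JSON once it receives feedback
    let mut attempts = 0;
    let result = parse_with_retries(
        |feedback| {
            attempts += 1;
            if feedback.is_some() {
                r#"{"ok": true}"#.to_string()
            } else {
                "Sure! Here is your JSON:".to_string()
            }
        },
        3,
    );
    assert_eq!(result.unwrap(), r#"{"ok": true}"#);
    assert_eq!(attempts, 2); // one failure, one corrected retry
}
```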
Error Handling
The parser returns Result<String>:
- Success: Valid JSON string
- Error: Parsing failed after retries, or API error
match parser.parse(text, prompt, schema).await {
    Ok(json) => process_result(&json),
    Err(e) => log::warn!("Parsing failed: {}", e),
}
Configuration in Praxis
The semantic parser LLM is configured in Settings:
- Go to Settings → LLM Providers
- Configure Semantic Parser provider and model
- Save
The service uses this configuration for all parsing operations.
Performance Considerations
Latency: Each parse call makes an LLM request. For bulk parsing, consider batching.
Cost: Fast models are cheaper. Choose based on parsing complexity.
Accuracy: More capable models produce better results for complex extractions.
Examples
Parse MCP Config
let schema = r#"{
    "servers": [{
        "name": "string",
        "command": "string",
        "args": ["string"],
        "env": {}
    }]
}"#;

let result = parser.parse(
    &mcp_json_contents,
    "Extract all MCP server configurations",
    schema
).await?;
Parse Session Transcript
let schema = r#"{
    "files_accessed": ["string"],
    "commands_run": ["string"],
    "api_keys_mentioned": ["string"]
}"#;

let result = parser.parse(
    &transcript,
    "Extract file paths, commands, and any API keys from this conversation",
    schema
).await?;
Parse Traffic
let schema = r#"{
    "model": "string",
    "prompt_preview": "string",
    "token_count": "number",
    "has_tool_calls": "boolean"
}"#;

let result = parser.parse(
    &request_body,
    "Extract LLM request metadata",
    schema
).await?;
Standalone Use
The semantic parser can be used outside of Praxis:
[dependencies]
semantic_parser = { path = "../semantic_parser" }
It's designed to be a general-purpose LLM parsing library.
API Reference
This reference documents the message types and RabbitMQ queues/exchanges used for communication between Praxis components.
RabbitMQ Queues
| Queue | Direction | Purpose |
|---|---|---|
| NodeSignal | Node → Service | Node registration, commands, traffic |
| NodeBroadcast | Service → All Nodes | Broadcast commands to all nodes (fanout exchange) |
| Node_{id} | Service → Node | Commands for specific node |
| Node_{id}_semantic | Service → Node | Semantic parser responses |
| ClientSignal | Client → Service | Client requests |
| ClientBroadcast | Service → All Clients | System state updates (fanout exchange) |
| Client_{id} | Service → Client | Responses for specific client |
| NodeEventLog | Node → Service | Application log entries |
| ServiceEventLog | Service → Service | Service log entries |
Message Flow
┌────────┐ ┌─────────┐ ┌────────┐
│ Client │ │ Service │ │ Node │
└───┬────┘ └────┬────┘ └───┬────┘
│ │ │
│──ClientSignal─────────────▶│ │
│ │──Node_{id}───────────────▶│
│ │ │
│ │◀──────────NodeSignal──────│
│◀──Client_{id}──────────────│ │
│ │ │
│◀──ClientBroadcast exchange─│──NodeBroadcast exchange─▶│
│ │ │
Node Messages
NodeSignalMessage
Messages sent from nodes to the service via NodeSignal queue.
pub enum NodeSignalMessage {
    // Node registration on startup
    Registration(NodeRegistration),
    // Periodic information update
    InformationUpdate(NodeInformationUpdate),
    // Response to a command
    CommandResponse(CommandResponse),
    // PTY terminal output
    TerminalOutput(TerminalOutput),
    // Request semantic parsing from service
    SemanticParserRequest { node_id: String, request: SemanticParserRequest },
    // Intercepted traffic entry
    InterceptedTraffic(InterceptedTrafficEntry),
    // Intercept status update
    InterceptStatusUpdate(InterceptStatus),
    // Outbound ACP frame (response or session/update notification)
    Acp { node_id: String, client_id: String, json_rpc: String },
}
NodeDirectMessage
Messages sent to specific nodes via Node_{id} queue.
pub enum NodeDirectMessage {
    // Registration acknowledgment
    RegistrationAck(NodeRegistrationAck),
    // Command to execute
    Command(CommandRequest),
    // Semantic parser response
    SemanticParserResponse(SemanticParserResponse),
    // Inbound ACP frame (request or notification destined for the node)
    Acp(AcpFrame),
}
NodeBroadcastMessage
Messages broadcast to all nodes via NodeBroadcast fanout exchange.
pub enum NodeBroadcastMessage {
    // Request all nodes to send information update
    NodeInformationUpdateRequest,
    // Request nodes to re-register
    NodeRefreshRegistration,
    // Enable/disable centralized event logging
    EventLoggingSet { enabled: bool },
}
Client Messages
ClientSignalMessage
Messages sent from clients to the service via ClientSignal queue.
pub enum ClientSignalMessage {
    // Registration
    Registration(ClientRegistration),
    // Command to forward to node
    Command(CommandRequest),
    // Remove a node from tracking
    RemoveNode { node_id: String },

    // Semantic Operations
    SemanticOpRun { client_id, node_id, agent_short_name, operation_name, request_id },
    SemanticOpCancel { operation_id },
    SemanticOpRemove { operation_id },
    SemanticOpClear,
    SemanticOpListRequest,

    // Service Configuration
    ServiceConfigGet { client_id, keys: Vec<String> },
    ServiceConfigSet { client_id, values: HashMap<String, String> },

    // Operation Definitions
    OpDefAdd { client_id, content: String },
    OpDefList { client_id },
    OpDefDelete { client_id, full_name },
    OpDefGet { client_id, full_name },

    // Chain Definitions
    ChainDefList { client_id },
    ChainGet { client_id, chain_id },
    ChainCreate { client_id, definition: ChainDefinitionInput },
    ChainUpdate { client_id, chain_id, definition: ChainDefinitionInput },
    ChainDelete { client_id, chain_id },
    ChainRun { client_id, chain_id, node_id, agent_short_name, working_dir, target_spec },
    ChainCancel { client_id, execution_id },
    ChainExecutionList { client_id },
    ChainExecutionRemove { execution_id },
    ChainExecutionClear,

    // Chain Triggers
    ChainTriggerCreate { client_id, chain_id, trigger_config: TriggerConfig, target_spec: TargetSpec },
    ChainTriggerUpdate { client_id, trigger_id, enabled, trigger_config, target_spec },
    ChainTriggerDelete { client_id, trigger_id },
    ChainTriggerList { client_id, chain_id: Option<String> },

    // Traffic Interception
    TrafficLogRequest { client_id, filters: TrafficLogFilters },
    TrafficMatchesRequest { client_id, rule_id, limit, offset },
    TrafficClear { client_id },
    TrafficSearchRequest { client_id, filters: TrafficSearchFilters },
    InterceptRuleCreate { client_id, name, regex_pattern, ... },
    InterceptRuleUpdate { ... },
    InterceptRuleDelete { client_id, id },
    InterceptRuleList { client_id },
    InterceptEnable { client_id, node_id, method },
    InterceptDisable { client_id, node_id },

    // Application Log
    ApplicationLogRequest { client_id, node_id, level_filter, regex_filter, limit, offset },
    ApplicationLogClear { client_id, node_id },

    // Recon
    ReconGet { client_id, node_id, agent_short_name },
}
ClientDirectMessage
Messages sent to specific clients via Client_{id} queue.
pub enum ClientDirectMessage {
    // Registration
    RegistrationAck(ClientRegistrationAck),
    CommandResponse(CommandResponse),
    StateUpdate(SystemState),
    TerminalOutput(TerminalOutput),

    // Semantic Operations
    SemanticOpQueued { operation_id, queue_position, request_id },
    SemanticOpUpdate(SemanticOpUpdate),
    SemanticOpList(Vec<SemanticOpUpdate>),

    // Service Configuration
    ServiceConfigResponse { values: HashMap<String, String> },
    ServiceConfigSaved,

    // Operation Definitions
    OpDefListResponse { definitions: Vec<OperationDefinitionInfo> },
    OpDefGetResponse { definition: Option<OperationDefinitionInfo> },
    OpDefAdded { full_name },
    OpDefDeleted { full_name, success },
    OpDefError { message },

    // Chain Definitions
    ChainDefListResponse { chains: Vec<ChainDefinitionInfo> },
    ChainGetResponse { chain: Option<ChainDefinitionFull> },
    ChainCreated { chain: ChainDefinitionInfo },
    ChainUpdated { chain: ChainDefinitionInfo },
    ChainDeleted { chain_id, success },
    ChainError { message },
    ChainExecutionStarted { execution_id, chain_id },
    ChainExecutionUpdate(ChainExecutionUpdate),
    ChainExecutionListResponse { executions: Vec<ChainExecutionUpdate> },

    // Chain Triggers
    ChainTriggerCreated { trigger: ChainTriggerInfo },
    ChainTriggerUpdated { trigger: ChainTriggerInfo },
    ChainTriggerDeleted { trigger_id: String },
    ChainTriggerListResponse { triggers: Vec<ChainTriggerInfo> },

    // Traffic Interception
    TrafficLogResponse { entries: Vec<InterceptedTrafficEntry>, total_count },
    TrafficSearchResponse { entries, total_count },
    TrafficMatchesResponse { matches: Vec<TrafficMatchWithDetails>, total_count },
    TrafficCleared { deleted_count },
    InterceptRuleListResponse { rules: Vec<InterceptRule> },
    InterceptRuleCreated { rule },
    InterceptRuleUpdated { rule },
    InterceptRuleDeleted { id, success },
    InterceptRuleError { message },
    InterceptStatusUpdate(InterceptStatus),

    // Application Log
    ApplicationLogResponse { node_id, entries, total_count },
    ApplicationLogCleared { deleted_count },

    // Recon
    ReconGetResponse { node_id, agent_short_name, recon_result, performed_at, is_semantic },
}
ClientBroadcastMessage
Messages broadcast to all clients via ClientBroadcast fanout exchange.
pub enum ClientBroadcastMessage {
    // Periodic state update with all nodes
    StateUpdate(SystemState),
    // Service has come online
    ServiceOnline,
    // Chain execution progress
    ChainExecutionUpdate(ChainExecutionUpdate),
    // Enable/disable centralized event logging
    EventLoggingSet { enabled: bool },
}
Node Protocol
Agent and session interaction with the node uses ACP (Agent Client
Protocol) over RabbitMQ. Everything else uses the NodeCommand envelope.
ACP transport envelope
pub struct AcpFrame {
    pub client_id: String, // originating/receiving external client
    pub json_rpc: String,  // raw JSON-RPC 2.0 frame
}
NodeDirectMessage::Acp(AcpFrame) carries inbound frames (service → node).
NodeSignalMessage::Acp { node_id, client_id, json_rpc } carries outbound
frames (node → service → originating client).
The service proxies node-bound ACP frames: an external client's frame is
forwarded to the right node when _meta.praxis.nodeId is set on
session/new, and subsequent frames for the returned session_id are
routed automatically. Inside the service, orchestrator-originated frames
use a svc_* pseudo-client-id so responses are consumed in-process by
AcpNodeProxy::request instead of being fanned out to a RabbitMQ client
queue.
Connector selection
session/new requires a _meta.praxis.connector field naming the local
agent connector to use (e.g. "claude-code", "codex"). Discover the
connector catalog via InitializeResponse._meta.connectors:
{
"extensions": { "_praxis/recon": { "version": 1 } },
"connectors": [
{ "shortName": "claude-code", "name": "Claude Code" },
{ "shortName": "codex", "name": "OpenAI Codex" }
],
"nodeId": "..."
}
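Putting the two _meta fields together, a node-bound session/new request might look like this (illustrative; the id is arbitrary and standard ACP session parameters are omitted for brevity):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "session/new",
  "params": {
    "_meta": {
      "praxis": {
        "nodeId": "...",
        "connector": "claude-code"
      }
    }
  }
}
```

Once the response returns a session_id, subsequent frames for that session are routed to the same node automatically.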
Extension methods
All extensions are advertised under InitializeResponse._meta.extensions.
- _praxis/recon - agent-scoped reconnaissance. Params: { "agent_short_name": string, "is_semantic": bool }; result is a serialized ReconResult. Replaces the legacy NodeCommand::Agent(Recon).
- _praxis/read_file - read a file on the node. Params: { "agent_short_name": string, "path": string }.
- _praxis/write_file - write a file on the node. Params: { "agent_short_name": string, "path": string, "contents": string }.
- _praxis/grep_files - regex search across one or more files. Params: { "agent_short_name": string, "path": string, "pattern": string }.
- _praxis/write_session_content - write agent-session content through the connector's write_session_content hook (so agents with virtual session stores can intercept the write). Params: { "agent_short_name": string, "session_file": string, "contents": string }.
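For example, a _praxis/read_file call is an ordinary JSON-RPC request; the id and path below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "_praxis/read_file",
  "params": {
    "agent_short_name": "claude-code",
    "path": "/home/user/.mcp.json"
  }
}
```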
NodeCommand (non-agent concerns)
pub enum NodeCommand {
    Intercept(InterceptCommand),
    Terminal(TerminalCommand),
    Config(ConfigCommand),
    AgentRegistry(AgentRegistryCommand),
}
Agent and session traffic no longer flows through NodeCommand; the
legacy Agent and Session variants were removed alongside the ACP
migration. CommandRequest / CommandResponse still wrap NodeCommand
for intercept, terminal, config, and registry traffic.
InterceptCommand
pub enum InterceptCommand {
    Enable { method: Option<InterceptMethod> },
    Disable,
}
TerminalCommand
pub enum TerminalCommand {
    Create,                          // Create PTY session
    Write { data: Vec<u8> },         // Send keystrokes
    Resize { rows: u16, cols: u16 }, // Resize terminal
    Close,                           // Close session
}
Key Data Types
NodeRegistration
pub struct NodeRegistration {
    pub node_id: String,
    pub node_type: String,
    pub machine_name: String,
    pub os_details: String,
}
SelectedAgent
pub struct SelectedAgent {
    pub short_name: String,
    pub session_id: Option<String>,
    pub process_name: Option<String>,
    pub yolo_mode: bool,
    pub working_dir: Option<String>,
}
ReconResult
pub struct ReconResult {
    pub tools: ReconTools,
    pub config: Vec<ConfigItem>,
    pub sessions: Vec<SessionItem>,
    pub project_paths: Vec<String>,
    pub metadata: Option<ReconMetadata>,
}
SemanticOperationSpec
pub struct SemanticOperationSpec {
    pub name: String,
    pub description: String,
    pub agent_info: String,
    pub timeout: u64,
    pub operation_prompt: String,
    pub mode: String, // "one-shot" or "agent"
    pub agent_iterations: u32,
    pub yolo_mode: bool,
    pub model_ref: Option<String>,
}
InterceptedTrafficEntry
pub struct InterceptedTrafficEntry {
    pub id: Option<i64>,
    pub timestamp: DateTime<Utc>,
    pub node_id: String,
    pub agent_short_name: String,
    pub intercept_method: InterceptMethod,
    pub direction: TrafficDirection,
    pub method: Option<String>,
    pub url: String,
    pub host: String,
    pub request_headers: Option<IndexMap<String, String>>,
    pub request_body: Option<Vec<u8>>,
    pub response_status: Option<u16>,
    pub response_headers: Option<IndexMap<String, String>>,
    pub response_body: Option<Vec<u8>>,
}
ChainDefinitionInput
pub struct ChainDefinitionInput {
    pub name: String,
    pub description: String,
    pub category: String,
    pub elements: Vec<ChainElement>,
    pub connections: Vec<ChainConnection>,
    pub disabled: bool,
    pub timeout: Option<u64>,
}
TriggerConfig
pub enum TriggerConfig {
    // Time-based trigger
    Scheduled { schedule: ScheduleSpec, recurring: bool },
    // Fires when intercepted traffic matches a rule
    InterceptMatch { rule_id: i64 },
    // Fires when a new node registers
    NewNode,
}

pub enum ScheduleSpec {
    // Fire once per day at hour:minute (UTC)
    DailyAt { hour: u8, minute: u8 },
    // Fire every N minutes
    Interval { minutes: u32 },
}
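To make the schedule semantics concrete, here is a self-contained sketch of how a next-fire delay could be computed for the two ScheduleSpec variants. It is illustrative only, not the service's actual scheduler; seconds_until_next_fire is an invented helper:

```rust
// Illustrative scheduler math for the two ScheduleSpec variants
enum ScheduleSpec {
    DailyAt { hour: u8, minute: u8 }, // once per day at hour:minute (UTC)
    Interval { minutes: u32 },        // every N minutes
}

// Given the current number of seconds elapsed since UTC midnight,
// return how many seconds remain until the trigger should next fire.
fn seconds_until_next_fire(spec: &ScheduleSpec, now_secs_into_day: u32) -> u32 {
    match spec {
        ScheduleSpec::Interval { minutes } => minutes * 60,
        ScheduleSpec::DailyAt { hour, minute } => {
            let target = (*hour as u32) * 3600 + (*minute as u32) * 60;
            if target > now_secs_into_day {
                target - now_secs_into_day // later today
            } else {
                86_400 - now_secs_into_day + target // same time tomorrow
            }
        }
    }
}

fn main() {
    // At 12:00 UTC, a DailyAt { 13, 30 } trigger fires in 90 minutes
    let spec = ScheduleSpec::DailyAt { hour: 13, minute: 30 };
    assert_eq!(seconds_until_next_fire(&spec, 12 * 3600), 5_400);
    // An Interval trigger fires every N minutes regardless of time of day
    assert_eq!(seconds_until_next_fire(&ScheduleSpec::Interval { minutes: 30 }, 0), 1_800);
}
```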
TargetSpec
pub struct TargetSpec {
    // Specific node IDs (empty = all registered nodes)
    pub node_ids: Vec<String>,
    // Case-insensitive substring filter on node os_details
    pub os_filter: Option<String>,
    // Specific agent short names (empty = all available agents)
    pub agent_short_names: Vec<String>,
    // For event triggers: include the node that triggered the event
    pub include_triggering_node: bool,
}
ChainTriggerInfo
pub struct ChainTriggerInfo {
    pub id: String,
    pub chain_id: String,
    pub trigger_config: TriggerConfig,
    pub target_spec: TargetSpec,
    pub enabled: bool,
    pub last_fired_at: Option<DateTime<Utc>>,
    pub next_fire_at: Option<DateTime<Utc>>,
}
InterceptMethod
pub enum InterceptMethod {
    Proxy, // System proxy settings
    Vpn,   // TUN adapter
    Hosts, // Hosts file redirection
}
TrafficDirection
pub enum TrafficDirection {
    Send,    // Request to LLM
    Receive, // Response from LLM
}
WebSocket API
The web component exposes a WebSocket endpoint at /ws for real-time updates.
Connection
const ws = new WebSocket('ws://localhost:8080/ws');
Message Format
All messages are JSON-encoded ClientDirectMessage or ClientBroadcastMessage types.
Events
| Event | Type | Description |
|---|---|---|
| StateUpdate | Broadcast | System state with all nodes |
| ServiceOnline | Broadcast | Service has restarted |
| CommandResponse | Direct | Response to command |
| TerminalOutput | Direct | PTY output data |
| SemanticOpUpdate | Direct | Operation progress |
| ChainExecutionUpdate | Both | Chain progress |
| ChainTriggerCreated | Direct | Trigger created |
| ChainTriggerUpdated | Direct | Trigger updated |
| ChainTriggerDeleted | Direct | Trigger deleted |
| ChainTriggerListResponse | Direct | Trigger list response |
HTTP API
The web component also exposes REST endpoints for certain operations.
Endpoints
| Method | Path | Description |
|---|---|---|
| GET | / | Web UI (SPA) |
| GET | /ws | WebSocket upgrade |
| GET | /api/health | Health check |
| GET | /api/nodes | List nodes |
Most operations use WebSocket for real-time bidirectional communication rather than REST.
Configuration Reference
This reference documents all configuration options for Praxis components.
Environment Variables
RabbitMQ
| Variable | Default | Description |
|---|---|---|
| PRAXIS_RABBITMQ_URL | amqp://praxis:praxis@localhost:5672 | RabbitMQ connection URL |
Database (Service)
| Variable | Default | Description |
|---|---|---|
| PRAXIS_DATABASE_URL | ~/.praxis_operations.db | Database connection |
Formats:
- postgresql://user:pass@host:5432/dbname - PostgreSQL
- sqlite:///path/to/file.db - SQLite with URL prefix
- /path/to/file.db - SQLite (implicit)
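A minimal sketch of how these three formats can be distinguished (illustrative; the service's actual detection logic may differ, and detect_backend is an invented name):

```rust
#[derive(Debug, PartialEq)]
enum Backend {
    Postgres(String), // full connection URL
    Sqlite(String),   // filesystem path
}

// Classify a PRAXIS_DATABASE_URL-style string into a backend
fn detect_backend(url: &str) -> Backend {
    if url.starts_with("postgresql://") || url.starts_with("postgres://") {
        Backend::Postgres(url.to_string())
    } else if let Some(path) = url.strip_prefix("sqlite://") {
        Backend::Sqlite(path.to_string())
    } else {
        // Bare filesystem path = implicit SQLite
        Backend::Sqlite(url.to_string())
    }
}

fn main() {
    assert!(matches!(
        detect_backend("postgresql://user:pass@host:5432/praxis"),
        Backend::Postgres(_)
    ));
    // sqlite:///path keeps the leading slash of the absolute path
    assert_eq!(detect_backend("sqlite:///tmp/p.db"), Backend::Sqlite("/tmp/p.db".into()));
    assert_eq!(detect_backend("/tmp/p.db"), Backend::Sqlite("/tmp/p.db".into()));
}
```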
See Database Configuration for detailed setup.
Web Component
| Variable | Default | Description |
|---|---|---|
| PRAXIS_NODES_DIR | (none) | Directory containing node binaries for download |
Build
| Variable | Effect |
|---|---|
| PRAXIS_SKIP_FRONTEND | Skip frontend build during cargo build |
| PRAXIS_NOT_HIDDEN | Disable hidden desktop for DevTools agents. Defaults to 1 in debug builds (visible for development) and 0 in release builds (hidden for production). Set to 1 to make the browser window visible for debugging. |
| SKIP_NODE_BUILD | Docker build arg. Set to 1 to skip building praxis_node binaries (Linux and Windows cross-compile). Defaults to 0. Significantly speeds up Docker builds when only service/web changes are needed. Usage: SKIP_NODE_BUILD=1 docker compose up --build |
| CARGO_PROFILE | Docker build arg. Cargo build profile to use. Defaults to release (thin LTO, 16 codegen units). Set to release-optimized for fully optimized production builds (full LTO, single codegen unit). Usage: CARGO_PROFILE=release-optimized docker compose up --build |
Logging
| Variable | Example | Description |
|---|---|---|
| RUST_LOG | info | Log level filter |
| RUST_LOG | debug | Verbose logging |
| RUST_LOG | praxis_node::intercept=debug | Module-specific logging |
Service Configuration
Service configuration is stored in the database and managed via the web UI.
Application Logging
| Key | Default | Description |
|---|---|---|
| application_logs_enabled | false | Enable centralized application/event logging from service, web, and nodes |
Logging is off unless this key is present and set to true. The service broadcasts the current setting to nodes and web clients at startup and on registration.
LLM Provider Settings
Access via Settings > LLM Providers in the web UI.
| Key | Format | Description |
|---|---|---|
| llm.semantic_ops.provider | anthropic | Provider for semantic operations |
| llm.semantic_ops.model | claude-sonnet-4-20250514 | Model for semantic operations |
| llm.semantic_ops.api_key | (encrypted) | API key for provider |
| llm.semantic_parser.provider | anthropic | Provider for semantic parsing |
| llm.semantic_parser.model | claude-haiku-4-5-20241022 | Model for parsing |
| llm.semantic_parser.api_key | (encrypted) | API key for provider |
| llm.traffic_parser.provider | anthropic | Provider for traffic analysis |
| llm.traffic_parser.model | claude-haiku-4-5-20241022 | Model for traffic analysis |
| llm.traffic_parser.api_key | (encrypted) | API key for provider |
| llm.orchestrator.provider | anthropic | Provider for Orchestrator |
| llm.orchestrator.model | claude-sonnet-4-20250514 | Model for Orchestrator |
| llm.orchestrator.api_key | (encrypted) | API key for provider |
Prompt Timeout
| Key | Default | Description |
|---|---|---|
| prompt_timeout_secs | 600 | Maximum time in seconds a single agent prompt can run before the agent process is killed. Applies to all sessions unless overridden per-session. |
Claude Bridge Settings
Access via Settings > Claude Bridge in the web UI.
| Key | Default | Description |
|---|---|---|
| claude_ccrv1_enabled | false | Enable the CCRv1 (WebSocket) bridge listener |
| claude_ccrv1_port | 8586 | Port for CCRv1 WebSocket connections |
| claude_ccrv2_enabled | false | Enable the CCRv2 (HTTP+SSE) bridge listener |
| claude_ccrv2_port | 8587 | Port for CCRv2 HTTP connections |
The Claude Bridge allows Claude Code to connect directly to the service as a virtual node, without deploying a full Praxis node. See Claude Bridge for protocol details and setup instructions.
MCP Server Settings
Access via Settings > MCP Server in the web UI.
| Key | Default | Description |
|---|---|---|
| mcp_server_enabled | false | Enable the built-in MCP SSE server |
| mcp_server_port | 8585 | Port for the MCP SSE server |
The MCP server exposes all Praxis tools via the Model Context Protocol over SSE transport. It is used by the built-in Orchestrator and can also be used by external AI agents. See MCP Server for full details.
Supported Providers
| Provider ID | Name | API Key | Base URL |
|---|---|---|---|
| anthropic | Anthropic | required | fixed |
| openai | OpenAI | required | fixed (overridable) |
| gemini | Google (Gemini) | required | fixed |
| groq | Groq | required | fixed |
| cerebras | Cerebras | required | fixed |
| mistral | Mistral | required | fixed |
| xai | xAI | required | fixed |
| nvidia | NVIDIA | required | fixed |
| fireworksai | Fireworks AI | required | fixed |
| minimax | MiniMax | required | fixed |
| moonshot | Moonshot AI | required | fixed |
| openrouter | OpenRouter | required | fixed |
| ollama | Ollama (local) | optional | defaults to http://localhost:11434/v1 |
| custom | Custom (OpenAI-compatible) | optional | required |
Every model definition can carry an optional `base_url` field that overrides the provider default. For `custom` the base URL is required: discovery and inference both fail without it. For `ollama` the base URL defaults to the local daemon; set it explicitly if you run Ollama remotely or on a non-default port.
Model Reference Format
When specifying models in operations or chains:
provider::model
Examples:
- `anthropic::claude-sonnet-4-20250514`
- `openai::gpt-4o`
- `gemini::gemini-1.5-pro`
- `groq::llama-3.3-70b-versatile`
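Parsing the reference is a single split on the `::` separator. An illustrative sketch (the helper is not part of Praxis):

```python
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split a provider::model reference into its two parts."""
    provider, sep, model = ref.partition("::")
    if not (sep and provider and model):
        raise ValueError(f"expected provider::model, got {ref!r}")
    return provider, model
```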
Node Configuration
Node Commands
Nodes accept configuration commands at runtime:
| Command | Parameter | Description |
|---|---|---|
| SetReportInterval | interval_secs: u64 | How often to send information updates |
Agent Connector Configuration
Each agent connector may have specific configuration. See individual connector documentation.
Claude Code
- Config path: `~/.claude.json` or `~/.config/claude/config.json`
- MCP servers: `~/.claude/mcp.json`
- Sessions: `~/.claude/projects/`
Gemini CLI
- Config path: `~/.gemini/settings.json`
- Sessions: `~/.gemini/sessions/`
M365 Copilot
- Mode: DevTools (via CDP)
- Platform: Windows only
Operation Definitions
Operations are defined in JSON and stored in the service database.
JSON Format
```json
{
  "item_type": "operation",
  "name": "find_credentials",
  "short_name": "find_credentials",
  "category": "recon",
  "description": "Search for hardcoded credentials",
  "agent_info": "Security researcher looking for exposed secrets",
  "timeout": 300,
  "operation_prompt": "Search the current directory for files that may contain hardcoded credentials, API keys, passwords, or secrets. List each finding with the file path and context.",
  "mode": "one-shot",
  "agent_iterations": 1,
  "yolo_mode": false,
  "disabled": false
}
```
Fields
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Short name (used with category) |
| description | string | Yes | Human-readable description |
| category | string | Yes | Category for organization |
| agent_info | string | No | Context for the AI agent |
| timeout | u64 | Yes | Timeout in seconds |
| operation_prompt | string | Yes | The prompt to execute |
| mode | string | Yes | one-shot or agent |
| agent_iterations | u32 | No | Max iterations (agent mode) |
| yolo_mode | bool | No | Auto-approve actions |
| model_ref | string | No | Model override (provider::model) |
| disabled | bool | No | Disable the operation |
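The required/optional split in the table can be checked mechanically. A minimal sketch (this validator is illustrative, not the service's actual schema check):

```python
# Required fields per the table above; all others are optional.
REQUIRED_FIELDS = {"name", "description", "category",
                   "timeout", "operation_prompt", "mode"}

def missing_fields(operation: dict) -> set:
    """Return the required fields absent from an operation definition."""
    return REQUIRED_FIELDS - operation.keys()
```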
Full Name
Operations are referenced by category::name, e.g., recon::find_credentials.
Chain Definitions
Chains are visual workflows stored in the service database.
Elements
| Element Type | Properties |
|---|---|
| Trigger | id, trigger_type |
| Operation | id, operation_name, model_ref, session_group, block_config |
| Transform | id, prompt, model_ref, session_group, block_config |
| GenericPrompt | id, prompt, session_group, block_config |
| Memory | id, mode (store or retrieve), key |
| Loop | id, max_iterations |
| Termination | id, label |
block_config fields (all optional):
| Field | Type | Description |
|---|---|---|
| max_runtime | u64 | Per-element timeout in seconds |
| yolo_mode | bool | Auto-approve for this element's session |
| working_dir | string | Working directory override |
| require_all_inputs | bool | Wait for all upstream inputs before executing (default: true) |
Session Groups
```json
{
  "id": "group-1",
  "color": "#8B5CF6",
  "yolo_mode": true
}
```
Elements in the same session group share an agent session context.
Connections
```json
{
  "id": "edge-1",
  "from_element": "trigger-1",
  "to_element": "op-1",
  "from_port": 0,
  "to_port": 0,
  "condition": "Always"
}
```
condition values: Always (default), OnSuccess, OnFailure.
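Condition evaluation reduces to three cases. An illustrative sketch, assuming the engine tracks a boolean success flag per upstream element:

```python
def follow_connection(condition: str, upstream_succeeded: bool) -> bool:
    """Decide whether a connection fires given the upstream outcome."""
    if condition == "Always":
        return True
    if condition == "OnSuccess":
        return upstream_succeeded
    if condition == "OnFailure":
        return not upstream_succeeded
    raise ValueError(f"unknown condition: {condition}")
```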
Intercept Rules
Rules for matching and processing intercepted traffic.
Rule Structure
```json
{
  "name": "Capture API Keys",
  "regex_pattern": "Authorization:\\s*Bearer",
  "target_direction": "send",
  "scope": { "type": "all" },
  "enabled": true,
  "summarization_prompt": "Extract and summarize the authentication tokens"
}
```
Target Direction
| Value | Description |
|---|---|
| send | Match outgoing requests |
| receive | Match incoming responses |
| both | Match both directions |
Scope
| Type | Example | Description |
|---|---|---|
| all | {"type": "all"} | All nodes/agents |
| node | {"type": "node", "node_id": "abc123"} | Specific node |
| agent | {"type": "agent", "node_id": "abc123", "agent_short_name": "claudecode"} | Specific agent |
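Putting the pieces together, a rule gates a traffic event on its enabled flag, its direction, and its regex. A sketch using the rule structure above (the matcher is illustrative and omits scope resolution):

```python
import re

def rule_matches(rule: dict, direction: str, payload: str) -> bool:
    """Check one traffic event against an intercept rule (sketch)."""
    if not rule.get("enabled", True):
        return False
    wanted = rule["target_direction"]
    if wanted != "both" and wanted != direction:
        return False  # direction filter: send, receive, or both
    return re.search(rule["regex_pattern"], payload) is not None
```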
Database Schema
SQLite (Default)
Default location: ~/.praxis_operations.db
Tables:
- `config` - Key-value configuration
- `operation_definitions` - Semantic operations
- `semantic_operations` - Operation executions
- `chain_definitions` - Chain workflows
- `chain_executions` - Chain runs
- `traffic_log` - Intercepted traffic
- `intercept_rules` - Traffic rules
- `traffic_matches` - Rule matches
- `recon_results` - Stored recon data
- `application_logs` - Centralized logging table (controlled by `application_logs_enabled`)
PostgreSQL
For production and multi-instance deployments, use PostgreSQL. See Database Configuration for setup, migration, and tuning.
Default Ports
| Service | Port | Protocol |
|---|---|---|
| Web UI | 8080 | HTTP |
| WebSocket | 8080 | WS |
| MCP SSE Server | 8585 | HTTP |
| Claude Bridge CCRv1 | 8586 | WS |
| Claude Bridge CCRv2 | 8587 | HTTP |
| RabbitMQ | 5672 | AMQP |
| RabbitMQ Management | 15672 | HTTP |
| PostgreSQL | 5432 | TCP |
| Proxy (when enabled) | Dynamic | HTTP |
CLI Configuration
The Praxis CLI (praxis_cli) stores state and can be configured via command-line options or environment variables.
CLI State File
| Platform | Path |
|---|---|
| Linux/macOS | ~/.praxis/cli.json |
| Windows | %USERPROFILE%\.praxis\cli.json |
Contents:
```json
{
  "client_id": "uuid-generated-on-first-run"
}
```
CLI Options
| Option | Environment Variable | Default | Description |
|---|---|---|---|
| -r, --rabbitmq | PRAXIS_RABBITMQ_URL | amqp://praxis:praxis@localhost:5672 | RabbitMQ URL |
| -t, --timeout | - | 600 | Connection/command timeout in seconds |
| -C, --command | - | - | Run a single command and exit |
| --status | - | - | Check connection status |
| --clear | - | - | Clear local state |
File Locations
Linux
| File | Path |
|---|---|
| Database | ~/.praxis_operations.db |
| CLI State | ~/.praxis/cli.json |
| CLI Binary | ~/.praxis/bin/praxis_cli |
| Claude Config | ~/.claude.json or ~/.config/claude/config.json |
| Gemini Config | ~/.gemini/settings.json |
macOS
| File | Path |
|---|---|
| Database | ~/.praxis_operations.db |
| CLI State | ~/.praxis/cli.json |
| CLI Binary | ~/.praxis/bin/praxis_cli |
| Claude Config | ~/.claude.json or ~/.config/claude/config.json |
| Gemini Config | ~/.gemini/settings.json |
Windows
| File | Path |
|---|---|
| Database | %USERPROFILE%\.praxis_operations.db |
| CLI State | %USERPROFILE%\.praxis\cli.json |
| CLI Binary | %USERPROFILE%\.praxis\bin\praxis_cli.exe |
| Claude Config | %USERPROFILE%\.claude.json |
| Hosts File | C:\Windows\System32\drivers\etc\hosts |