
Architecture

OSAPI turns Linux servers into managed appliances. You install a single binary, point it at a config file, and get a REST API and CLI for querying and changing system configuration — hostname, DNS, disk usage, memory, load averages, and more. State-changing operations run asynchronously through a job queue so the controller itself never needs root privileges.

The Three Processes

OSAPI has three runtime components. They can all run on the same host or be spread across many.

NATS Server

A lightweight message broker that stores job state and routes messages between the controller and agents. OSAPI embeds a NATS server with JetStream enabled, so you don't need to install anything extra — just run osapi nats server start.

For production deployments with multiple hosts, you can point everything at an external NATS cluster instead of the embedded one. Just change the nats.server section in osapi.yaml.
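As a sketch, a multi-host osapi.yaml might point at an external cluster like this. The key names below nats.server (url, and the comma-separated seed list) are assumptions for illustration, not the documented schema:

```yaml
# osapi.yaml — hypothetical sketch; key names under nats.server are assumptions
nats:
  server:
    # Point at an external NATS cluster instead of the embedded server.
    url: nats://nats-1.internal:4222,nats://nats-2.internal:4222
```

Consult the configuration reference for the exact keys your version supports.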

Controller

The control plane process. It runs several sub-components:

  • REST API — an HTTP server that handles authentication (JWT), validates requests, and translates them into jobs published to NATS. The controller never executes system commands directly — it creates a job and returns a job ID. Clients poll for results.
  • Component heartbeat — registers the controller in the registry KV so health checks can report its state. The heartbeat includes sub-component status (api, metrics, notifier, tracing) so operators can see the health of each internal service.
  • Condition watcher — monitors the registry KV for condition transitions and dispatches notifications.

Start it with osapi controller start.
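To make the job-ID pattern concrete, here is a hypothetical interaction with the REST API. The endpoint paths, port, and response fields are illustrative assumptions, not the documented API:

```shell
# Hypothetical endpoints for illustration only — check the API reference
# for real paths. 1. Submit a request; the controller returns a job ID
# rather than executing anything itself:
curl -s -H "Authorization: Bearer $TOKEN" \
  https://controller:8080/api/v1/node/hostname

# 2. Poll for the result once an agent has executed the job
# (job ID shown is a placeholder):
curl -s -H "Authorization: Bearer $TOKEN" \
  https://controller:8080/api/v1/jobs/JOB_ID
```

The same pattern applies to every state-changing operation: submit, receive a job ID, poll.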

Agent

A background process that subscribes to NATS, picks up jobs, and executes the actual system operations (reading hostname, querying DNS, checking disk usage, etc.). Agents execute with whatever privileges they were started with — if an agent can't read something due to permissions, it reports the error rather than failing silently. Each agent publishes its own sub-component status (heartbeat, metrics) via the registry heartbeat.

Start it with osapi agent start.

Deployment Models

Single Host

The simplest setup is to run everything on one machine. Use osapi start to launch all three components in a single process, the recommended approach for single-host deployments:

osapi start

The CLI on the same host talks to the controller over localhost. This is useful for managing a single appliance or for development.

Multi-Host

For managing a fleet, run a shared NATS server (or cluster) and point multiple agents at it. Each agent registers with its hostname and optional labels, and the job routing system delivers work to the right place.

You can target jobs to specific hosts, broadcast to all, or route by label:

  • --target _any — send to any available agent (load balanced)
  • --target _all — send to every agent (broadcast)
  • --target web-01 — send to a specific host
  • --target group:web.dev — send to all agents with a matching label
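For example, pairing a target with a client command might look like this. The osapi client node hostname subcommand appears later in this document; combining it with --target is an illustrative assumption:

```shell
# Send a hostname query to any one available agent (load balanced)
osapi client node hostname --target _any

# Broadcast the same query to every agent in the fleet
osapi client node hostname --target _all

# Route only to agents carrying a matching label
osapi client node hostname --target group:web.dev
```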

How a Request Flows

When you run a command like osapi client node hostname:

  1. The CLI sends an authenticated request to the controller's REST API.
  2. The controller validates the request, publishes a job to NATS, and returns a job ID.
  3. An agent subscribed to NATS picks up the job and executes the operation (here, reading the hostname).
  4. The agent publishes the result, and the client polls the controller until it is available.

The controller never touches the operating system directly. It's a thin coordination layer between clients and agents.

Further Reading

For details on individual features — what they do, how they work, and how to configure them — see the Features section.
