Closed-loop execution intelligence

When execution drifts from intent,
you should know first.

vigilos watches what your team ships, compares it against what you said you’d ship, and surfaces the drift before it turns into a missed milestone.

GitHub

PRs · commits

Slack

decisions · scope

Linear

tickets · velocity

live · ingesting

Goal

Migrate auth to Auth0 by end of Q3

5/8 checkpoints done · Due Sep 30 · At risk
  • 2m ago

    github · merged PR #142

    auth-rewrite-v2: split SAML into separate service

    scope signal
  • 12m ago

    slack · #eng-platform

    We're rolling auth into the migration too — easier in one shot.

    decision
  • 1h ago

    linear · ENG-481 in progress

    OAuth provider abstraction layer

  • just now

    alert · drift detected

    Scope expanding beyond original plan — at risk

    at risk
$ ingesting next event

The problem

Most teams run on an open loop.

You set a goal. Work happens across half a dozen tools. Decisions get made in DMs. Scope creeps in commits. By the time someone checks — usually a week before the deadline — it’s already too late.

Today

Open loop

Intent goes in. Execution scatters. Nothing comes back.

INTENT (goal + plan) → PR · DM · Ticket · Mtg · Doc ↘ DRIFT UNNOTICED

vigilos

Closed loop

Execution feeds back. Drift surfaces. The system learns.

INTENT (goal + plan) → EXECUTION (PRs · Slack · Linear) → DRIFT DETECTED (alert + recommendation) → ACTION (tracked, re-assessed) ↻ FEEDBACK

How it works

Four stages, one loop. Always running.

The system observes, compares, and recommends — then watches you act on it and starts again. Not a dashboard. Not a chatbot. A closed loop.

01 — Intent

Capture what you said you'd ship.

A goal is more than a one-liner. Plans are decomposed into checkpoints, owners, and target dates — each with explicit acceptance criteria the system can evaluate against.

  • Goal · target date · success criteria
  • Plan decomposed into checkpoints
  • Stored as structured artifacts, not free text

Goal artifact

Migrate auth to Auth0 by end of Q3

Due Sep 30 · Owner: platform

  • Define provider abstraction
  • Migrate web sign-in flows
  • Migrate mobile sign-in flows
  • Decommission legacy SAML stack
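A structured goal artifact like the one above might be modeled as follows — a stdlib-only sketch with illustrative names, dates, and statuses, not vigilos's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Checkpoint:
    title: str
    target: date          # explicit target date the system can evaluate against
    done: bool = False

@dataclass
class Goal:
    title: str
    due: date
    owner: str
    checkpoints: list[Checkpoint] = field(default_factory=list)

    def progress(self) -> str:
        done = sum(c.done for c in self.checkpoints)
        return f"{done}/{len(self.checkpoints)} checkpoints done"

# Illustrative instance mirroring the goal card above
goal = Goal(
    title="Migrate auth to Auth0 by end of Q3",
    due=date(2025, 9, 30),
    owner="platform",
    checkpoints=[
        Checkpoint("Define provider abstraction", date(2025, 8, 15), done=True),
        Checkpoint("Migrate web sign-in flows", date(2025, 9, 5)),
        Checkpoint("Migrate mobile sign-in flows", date(2025, 9, 20)),
        Checkpoint("Decommission legacy SAML stack", date(2025, 9, 30)),
    ],
)
print(goal.progress())  # → 1/4 checkpoints done
```

The point of the structure: progress, ownership, and due dates are machine-checkable fields, not sentences in a doc.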

02 — Execution

Stream what your team is actually doing.

Connectors pull real signals from where work happens — GitHub PRs, Slack threads, Linear tickets — and store them with enough structure to reason about: who, what, when, and whether each event is a scope signal, a decision, or a blocker.

  • GitHub · Slack · Linear connectors
  • Tagged: scope · decision · blocker
  • Append-only, replayable timeline

Recent signals

  • github · PR #142 merged · auth-rewrite-v2
  • slack · #eng-platform · scope discussion
  • linear · ENG-481 in progress
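An append-only, replayable timeline can be sketched in a few lines — names and event kinds here are illustrative, not the actual connector model:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Event:
    source: str    # e.g. "github" | "slack" | "linear"
    kind: str      # e.g. "scope" | "decision" | "blocker" | "progress"
    summary: str
    at: datetime

class Timeline:
    """Append-only: events are never mutated or deleted, so any
    assessment can later be replayed from the exact same inputs."""
    def __init__(self) -> None:
        self._events: list[Event] = []

    def append(self, e: Event) -> None:
        self._events.append(e)

    def tagged(self, kind: str) -> list[Event]:
        return [e for e in self._events if e.kind == kind]

now = datetime.now(timezone.utc)
tl = Timeline()
tl.append(Event("github", "scope", "PR #142 merged · auth-rewrite-v2", now))
tl.append(Event("slack", "decision", "#eng-platform · scope discussion", now))
tl.append(Event("linear", "progress", "ENG-481 in progress", now))
print([e.source for e in tl.tagged("scope")])  # → ['github']
```

Freezing the event records and only ever appending is what makes the "same input, same assessment" replay guarantee possible.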

03 — Drift

Surface divergence before it costs you.

An LLM with a fixed rubric compares execution to intent and emits a structured assessment — status, severity, confidence, recommendations — validated against a strict schema. Same input, same output, every time.

  • Schema-enforced output (Pydantic + tool use)
  • Status: on track · at risk · off track
  • Specific recommendation, not a vibe

Drift assessment

At risk · scope creep

Auth migration scope is expanding to include legacy SAML decommission, which wasn’t in the original plan.

severity

0.62

confidence

0.84

04 — Action

Close the loop with tracked follow-ups.

Recommendations don't disappear into a ticket void. They become tracked action items the next assessment can see — so the system learns, and you stop hearing about the same drift twice.

  • Action items: owner · due date · status
  • Re-assess sees prior state and resolutions
  • Audit trail across the whole loop

Action items

  • Lock SAML decommission to next quarter · open
  • Update plan with explicit scope boundary · in progress
  • Notify stakeholders of timeline impact · done
Re-assess sees: at_risk → on_track
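One way the re-assessment can take prior action items into account — the promotion rule and names below are purely illustrative, not vigilos's actual logic:

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    title: str
    owner: str
    status: str  # "open" | "in_progress" | "done"

def reassess(prior_status: str, items: list[ActionItem]) -> str:
    """Illustrative rule: a goal previously at risk moves back to
    on_track only once every tracked action item is resolved."""
    if prior_status == "at_risk" and items and all(i.status == "done" for i in items):
        return "on_track"
    return prior_status

items = [
    ActionItem("Lock SAML decommission to next quarter", "platform", "done"),
    ActionItem("Update plan with explicit scope boundary", "platform", "done"),
    ActionItem("Notify stakeholders of timeline impact", "platform", "done"),
]
print(reassess("at_risk", items))  # → on_track
```

Because the next assessment reads prior state and resolutions, an already-acknowledged drift doesn't get re-raised as a fresh alert.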

What it catches

Four ways execution slips. We see all of them.

Scope creep

Plan said A. PRs say A + B + C.

We watch commit messages and Slack threads for scope-expanding language and flag it the day it happens — not at sprint review.

Signal

drift 68%

+3 unplanned features in last 9 commits
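The idea can be sketched as a screen over commit subjects against the planned scope — the keywords and commits below are invented for illustration, and the real signal extraction is richer than keyword matching:

```python
# Hypothetical planned-scope keywords for the auth-migration goal
PLANNED = {"auth0", "oauth", "sign-in"}

def unplanned(commit_subjects: list[str]) -> list[str]:
    """Commits whose subject references nothing in the planned scope."""
    return [s for s in commit_subjects
            if not any(k in s.lower() for k in PLANNED)]

commits = [
    "auth0: wire provider abstraction",
    "add SAML split-out service",        # not in the plan
    "oauth: token refresh flow",
    "feature-flag new billing banner",   # not in the plan
]
print(len(unplanned(commits)))  # → 2
```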

Velocity drop

Throughput is slipping.

We compare the checkpoint-completion rate against the planned trajectory. When the slope flattens, we surface it with a confidence score and the likely cause.

Signal

drift 78%

12 PRs/wk → 4 PRs/wk over the last 14d
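A minimal version of the slope check, assuming weekly PR counts are already aggregated (the threshold and numbers are illustrative):

```python
def velocity_drop(weekly_prs: list[int], threshold: float = 0.5) -> bool:
    """Flag when the latest week's throughput falls below
    threshold × the trailing average of earlier weeks."""
    if len(weekly_prs) < 2:
        return False
    baseline = sum(weekly_prs[:-1]) / (len(weekly_prs) - 1)
    return weekly_prs[-1] < threshold * baseline

# 12 PRs/wk dropping to 4, as in the signal above
print(velocity_drop([12, 11, 13, 4]))   # → True
print(velocity_drop([12, 11, 13, 12]))  # → False
```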

Hidden blockers

Someone's stuck. No one filed it.

Slack messages with blocker-shaped phrasing turn into structured blockers attached to the goal — even when nobody bothered to open a ticket.

Signal

drift 52%

Blocker mentioned 4× across #eng-platform, no ticket
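The simplest form of blocker-shaped phrasing detection is a pattern screen — the phrases and messages below are illustrative, and the real classification goes beyond a regex:

```python
import re

BLOCKER_RE = re.compile(
    r"\b(blocked on|waiting on|can't (?:proceed|merge)|stuck on)\b",
    re.IGNORECASE,
)

def blocker_mentions(messages: list[str]) -> list[str]:
    """Messages that read like a blocker, whether or not a ticket exists."""
    return [m for m in messages if BLOCKER_RE.search(m)]

msgs = [
    "still blocked on the Auth0 tenant provisioning",
    "shipping the abstraction layer today",
    "waiting on infra for the SAML cert",
]
print(len(blocker_mentions(msgs)))  # → 2
```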

Missed milestone

Due dates with receipts.

Every checkpoint has a target. As the clock runs, the system re-projects feasibility and tells you which milestone is going to miss — and why.

Signal

drift 84%

Checkpoint 4 will miss by 6d at current velocity
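The feasibility re-projection reduces to simple arithmetic — the numbers and function below are a hypothetical sketch:

```python
from datetime import date, timedelta

def projected_slip(remaining_checkpoints: int,
                   checkpoints_per_week: float,
                   due: date, today: date) -> int:
    """Days by which the milestone will miss at current velocity
    (negative means it finishes with slack)."""
    weeks_needed = remaining_checkpoints / checkpoints_per_week
    finish = today + timedelta(days=round(weeks_needed * 7))
    return (finish - due).days

# 3 checkpoints left at 1/week, due in 15 days → misses by 6 days
print(projected_slip(3, 1.0, due=date(2025, 9, 30), today=date(2025, 9, 15)))  # → 6
```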

Under the hood

Not a wrapper.
An execution OS.

Anyone can wire a chatbot to a Slack channel. We built the primitives that make AI’s output something you can ship a business on.

  • Schema-enforced reasoning

    The LLM doesn't get to make things up. Every response is structured output, validated against Pydantic models with CHECK constraints all the way down. Bad data fails loudly at the boundary.

  • Model-portable by design

    Prompts, rubrics, and tool schemas are versioned in code. Swap Claude for GPT or open-weights without a rewrite. The model is a strategy, not a dependency.

  • Audit trail you can defend

    Every assessment captures: input events, prompt version, model used, tokens, structured output, and the human acknowledgment that followed. Append-only, replayable, exportable.

assessment.json
schema.py
tool_use · validated
// drift_assessor LLM output → Pydantic.parse_obj()
{
  "goal_id": "a3f1...4c",
  "status": "at_risk",
  "alert_type": "scope_creep",
  "severity": 0.62,
  "confidence": 0.84,
  "summary":
    "Auth migration scope expanding to include legacy SAML decommission, which wasn't in the original plan.",
  "recommendation":
    "Lock SAML decommission to Q4 and document it as out of scope for this goal.",
  "evidence_event_ids": ["e_142", "e_148", "e_153"],
  "trace": {
    "prompt_version": "drift_v3",
    "model": "claude-sonnet-4-5",
    "tokens_in": 2418,
    "tokens_out": 312
  }
}
schema · ok · inserted into drift_alerts
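vigilos does this validation with Pydantic models; the same fail-loudly boundary can be sketched with only the stdlib (field names match the JSON above, the rules are a simplified illustration):

```python
import json

ALLOWED_STATUS = {"on_track", "at_risk", "off_track"}

def validate_assessment(raw: str) -> dict:
    """Fail loudly at the boundary: reject anything outside the
    schema instead of letting it reach storage."""
    a = json.loads(raw)
    if a["status"] not in ALLOWED_STATUS:
        raise ValueError(f"bad status: {a['status']!r}")
    for name in ("severity", "confidence"):
        if not isinstance(a[name], (int, float)) or not 0.0 <= a[name] <= 1.0:
            raise ValueError(f"{name} out of range: {a[name]!r}")
    return a

ok = validate_assessment(
    '{"status": "at_risk", "severity": 0.62, "confidence": 0.84}'
)
print(ok["status"])  # → at_risk
```

Anything that fails the check raises before insertion, which is what "bad data fails loudly at the boundary" means in practice.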

Try it

See it on a real workspace.

Two weeks of mock execution, a goal that visibly drifts, an LLM-assessed alert with structured recommendations, and the tracked action items that close the loop. All live, all clickable.

Local first · No signup · `/dashboard` is live