Panaptico is not fully autonomous and not fully manual. It uses governed execution — bounded, permissioned, auditable work where humans retain control at every decision point while AI handles discovery, sequencing, and routine work.

How tasks work

Every task in the implementation checklist follows a governed lifecycle:
1. Task is created with requirements. Each task has acceptance criteria, evidence requirements, dependencies, an owner, and optionally an approval gate.
2. Dependencies are checked. A task cannot start until its upstream dependencies are complete; blocked tasks are visible in the dependency graph.
3. Work is executed. The owner works through the task manually, with AI assistance, or through bounded automated execution in a sandboxed environment.
4. Evidence is attached. Completion requires proof: files, screenshots, test results, configuration exports, links, or verification output. Tasks cannot be marked done without evidence.
5. Approval is routed. If the task has an approval gate, it routes to the named approver, who can approve, request changes, or escalate.
6. State is recorded. Every state change (pending → in progress → done, approval granted, evidence attached, risk escalated) is timestamped and attributed in the audit trail.
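The lifecycle above can be sketched as a small state machine. This is an illustrative model only; the class names, field names, and states are assumptions, not Panaptico's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditEntry:
    # Every state change is timestamped and attributed.
    task_id: str
    event: str
    actor: str
    timestamp: datetime

@dataclass
class Task:
    id: str
    owner: str
    depends_on: list = field(default_factory=list)  # upstream task ids
    approver: Optional[str] = None                  # optional approval gate
    evidence: list = field(default_factory=list)
    state: str = "pending"                          # pending | in progress | done

class Checklist:
    def __init__(self):
        self.tasks: dict = {}
        self.audit: list = []

    def _record(self, task: Task, event: str, actor: str) -> None:
        self.audit.append(
            AuditEntry(task.id, event, actor, datetime.now(timezone.utc))
        )

    def start(self, task_id: str, actor: str) -> None:
        task = self.tasks[task_id]
        blocked_by = [d for d in task.depends_on if self.tasks[d].state != "done"]
        if blocked_by:
            raise RuntimeError(f"blocked by incomplete dependencies: {blocked_by}")
        task.state = "in progress"
        self._record(task, "started", actor)

    def attach_evidence(self, task_id: str, item: str, actor: str) -> None:
        task = self.tasks[task_id]
        task.evidence.append(item)
        self._record(task, f"evidence attached: {item}", actor)

    def complete(self, task_id: str, actor: str) -> None:
        task = self.tasks[task_id]
        if not task.evidence:
            raise RuntimeError("cannot mark done without evidence")
        if task.approver and actor != task.approver:
            raise PermissionError(f"requires approval by {task.approver}")
        task.state = "done"
        self._record(task, "done", actor)
```

Note how each rule from the lifecycle maps to a guard: dependency checks block `start`, missing evidence blocks `complete`, and every transition appends to the audit trail.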

Evidence model

Evidence is not optional documentation. It is structured proof linked to specific tasks:
Evidence types and examples:
  • Files: configuration exports, Terraform plans, test reports, architecture screenshots
  • Links: dashboard URLs, monitoring endpoints, ticket references
  • Execution results: output from AI-assisted task execution in sandboxed environments
  • Verification results: automated checks confirming acceptance criteria are met
Tasks can define what evidence is required before they can be completed.
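A minimal sketch of that requirement check, assuming a task declares required evidence types and completion compares them against what has been attached (the type names and dict shape here are illustrative, not Panaptico's schema):

```python
# Hypothetical evidence-type vocabulary, mirroring the table above.
EVIDENCE_TYPES = {"file", "link", "execution_result", "verification_result"}

def missing_evidence(required: set, attached: list) -> set:
    """Return the required evidence types not yet covered by attached items."""
    have = {item["type"] for item in attached}
    return required - have

required = {"file", "verification_result"}
attached = [
    {"type": "file", "ref": "terraform.plan"},
    {"type": "link", "ref": "https://dashboards.example/app"},
]
missing_evidence(required, attached)  # {"verification_result"}
```

A task with a non-empty missing set cannot be marked done; the extra link attachment is allowed but does not satisfy the verification requirement.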

Approval chains

Approvals in Panaptico are explicit:
  • Each approval gate has a named approver — not “someone from the team”
  • Approvers see the task context, evidence, and execution results
  • Three outcomes: approved, changes requested, or escalated
  • Approval history is recorded in the audit trail
  • Unresolved approvals are surfaced in the project overview as risks
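The rules above can be expressed as a small routing sketch, with a named approver, the three explicit outcomes, a decision history, and unresolved gates surfaced as risks. All names and fields here are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalGate:
    task_id: str
    approver: str                 # a named person, never "someone from the team"
    outcome: str = "pending"      # pending | approved | changes_requested | escalated
    history: list = field(default_factory=list)

    def decide(self, actor: str, outcome: str) -> None:
        # Only the named approver may decide, and only the three
        # explicit outcomes are accepted.
        if actor != self.approver:
            raise PermissionError(f"only {self.approver} may decide this gate")
        if outcome not in {"approved", "changes_requested", "escalated"}:
            raise ValueError(f"unknown outcome: {outcome}")
        self.outcome = outcome
        self.history.append((actor, outcome))  # recorded in the audit trail

def open_risks(gates: list) -> list:
    """Unresolved approvals surface in the project overview as risks."""
    return [g.task_id for g in gates if g.outcome == "pending"]
```

Keeping the decision history separate from the current outcome means an "escalated then approved" sequence remains visible after the fact.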

AI-assisted execution

For tasks that benefit from automation, Panaptico provides AI-assisted execution:
  • Task-scoped agents can generate diagnostics, remediation plans, and configuration files
  • Execution runs in sandboxed environments with explicit dependency handling
  • Output is captured as evidence and linked to the task
  • Agents cannot bypass approval gates or modify blueprint-wide state without explicit request
The AI assists — it does not override human judgment.
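One way to picture those boundaries is a wrapper around each agent step that enforces task scope, refuses to bypass approval gates, and captures output as evidence. This is a sketch under assumed names; the real sandbox interface is not shown in this document:

```python
def run_agent_step(task: dict, action: dict, sandbox_exec) -> str:
    """Run one task-scoped agent action inside a sandbox (illustrative only)."""
    # Agents are task-scoped: writes outside the task need an explicit request.
    if action.get("scope") != task["id"]:
        raise PermissionError("agent is task-scoped; cross-task changes need explicit request")
    # Agents cannot bypass approval gates by completing gated tasks themselves.
    if task.get("approval_gate") and action.get("kind") == "complete":
        raise PermissionError("agents cannot bypass approval gates")
    output = sandbox_exec(action["command"])  # runs in the sandboxed environment
    # Output is captured as evidence and linked to the task.
    task.setdefault("evidence", []).append(
        {"type": "execution_result", "output": output}
    )
    return output
```

The key design point is that the guard runs before execution and the evidence capture runs after, so every agent action is both bounded and auditable.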

Risk and blocker management

When work is blocked:
  • Tasks can be marked as blocked with a reason and downstream impact
  • Risks are tracked with severity, ownership, and resolution status
  • Blocking risks surface in the project overview intelligence dashboard
  • The dependency graph shows which downstream tasks are affected
  • Risk resolution is recorded with evidence
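Computing which downstream tasks a blocker affects is a graph traversal over the dependency edges. A minimal sketch, assuming tasks are stored as a mapping from task id to its `depends_on` list (field name borrowed from the lifecycle description, shape assumed):

```python
from collections import deque

def downstream_impact(tasks: dict, blocked_id: str) -> set:
    """Return ids of all tasks that transitively depend on `blocked_id`."""
    # Invert depends_on edges into a dependents adjacency list.
    dependents = {tid: [] for tid in tasks}
    for tid, deps in tasks.items():
        for dep in deps:
            dependents[dep].append(tid)
    # Breadth-first traversal from the blocked task.
    affected, queue = set(), deque([blocked_id])
    while queue:
        current = queue.popleft()
        for nxt in dependents.get(current, []):
            if nxt not in affected:
                affected.add(nxt)
                queue.append(nxt)
    return affected

tasks = {"a": [], "b": ["a"], "c": ["b"], "d": []}
downstream_impact(tasks, "a")  # {"b", "c"}
```

Here blocking "a" affects "b" directly and "c" transitively, while "d" is untouched, which is exactly the view the dependency graph surfaces.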

What makes this different from a ticket system

For each dimension, ticket systems (Jira, Linear) first, Panaptico governed execution second:
  • Structure: flat or loosely grouped tickets vs. phased tasks with formal dependencies
  • Evidence: optional attachments vs. required proof linked to acceptance criteria
  • Approvals: comment-based or external vs. named approvers with explicit routing
  • Context: isolated per ticket vs. every task sees the full implementation graph
  • Audit: activity log per ticket vs. full cross-task, cross-surface audit trail
  • Health: manual status reports vs. automated scoring, trend detection, A–F grading
  • Post-completion: ticket is closed vs. task state feeds into post-implementation monitoring

Next steps

  • Post-implementation: what happens after go-live
  • Executing tasks: step-by-step task execution guide