# acequia-devops vs. Acequia Platform: Convergence Analysis

> **Design** — comparative analysis, not implemented behavior.

**Date:** 2026-04-17
**Context:** Stephen Guerin's [acequia-devops](https://simtable.acequia.io/dev/ai-team/acequia-devops/) project (March 2026) compared against the production Acequia platform (localWebDAV, localDiscovery, acequia2).

---

## Overview

Stephen's acequia-devops and the Acequia platform share the same WebDAV substrate and several conceptual roots, but they're aimed at fundamentally different problems and have diverged significantly in architecture. This document maps the convergences, divergences, and opportunities for synthesis.

---

## Shared Foundations

| Concept | acequia-devops (Stephen) | Acequia Platform |
|---------|--------------------------|------------------|
| **WebDAV as truth** | Everything is a file-at-URL. WebDAV namespace is the system of record | Same — `acequia/{domain}/` is the canonical file tree, served by Nephele |
| **Browser as compute** | Browser tabs are runners, editors, and policy enforcers | Same — browser tabs run apps, register route handlers, serve files via BrowserDAV |
| **Service Worker as gatekeeper** | SW intercepts fetch, gates writes by policy tier | Same — SW intercepts all fetch, routes to group handlers, manages VFS mounts |
| **Ambient authority** | Session cookies = auth, no API keys | Similar — JWT in cookies, extended with delegation chains |
| **WebRTC for coordination** | `acequia-wrtc.js` — ephemeral peer mesh for real-time coordination | `acequia2/acequia/webrtc/` — peer mesh for route proxying and data channels |
| **BroadcastChannel for local tabs** | Bridges RTC messages to local tabs | Same — SW uses `client.postMessage()` as fastest transport for local routes |
| **Roles** | human, agent, coordinator, worker, watcher | owner, editor, viewer, pending (auth roles, not runtime roles) |

Both projects arrive at the same core insight: **the browser is a full compute node, WebDAV is the shared memory, and the service worker is the policy boundary.**

---

## Where They Diverge

### 1. Purpose

**acequia-devops** is a **CI/CD pipeline in the browser** — builds, tests, attestations, artifact promotion through channels (dev → stage → prod). The core loop is: *edit → build → attest → promote*.

**Acequia platform** is a **peer-to-peer application framework** — identity, delegation, real-time communication, and app hosting. The core loop is: *register → delegate → serve → coordinate*.

### 2. Authority Model

This is the sharpest divergence:

- **acequia-devops**: Ambient authority (cookies) for most operations, cryptographic attestations for gated writes. The SW verifies hash-based or signature-based attestation chains before allowing "control writes" (channel promotions). Trust model: *"prove this computation happened correctly."*

- **Acequia platform**: Delegation chains with PS256 JWTs. Authority flows from owner → user via chain tokens with scoped paths, write permissions, depth limits, and expiry. The server verifies the chain. Trust model: *"prove you were authorized by someone who was authorized."*

acequia-devops doesn't have delegation — everyone with a session cookie has the same ambient authority, differentiated only by what the SW policy allows. The Acequia platform doesn't have attestations — there's no concept of "prove a build passed before writing."

These are **orthogonal trust axes** that could complement each other.
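The platform's delegation-chain check described above can be sketched as a pure policy function. The field names below (`paths`, `writePaths`, `maxDepth`, `expiresAt`) are illustrative, not the platform's actual token schema, and the PS256 signature verification that a real chain requires is elided:

```javascript
// Sketch of delegation-chain scope checking (illustrative field names,
// not the platform's actual token schema; signature checks elided).
function checkDelegationChain(chain, request, now = Date.now()) {
  if (chain.length === 0) return false;
  let allowedPaths = ["/"];        // the root link starts with full scope
  let remainingDepth = Infinity;
  for (const link of chain) {
    if (link.expiresAt <= now) return false;  // an expired link kills the chain
    if (remainingDepth <= 0) return false;    // depth limit exceeded
    // each link may only narrow (never widen) its parent's scope
    const narrowed = link.paths.filter(p =>
      allowedPaths.some(parent => p.startsWith(parent)));
    if (narrowed.length !== link.paths.length) return false;
    allowedPaths = narrowed;
    remainingDepth = Math.min(remainingDepth - 1, link.maxDepth ?? Infinity);
  }
  // the final scope must cover the requested path; writes also need writePaths
  if (!allowedPaths.some(p => request.path.startsWith(p))) return false;
  if (!request.write) return true;
  const last = chain[chain.length - 1];
  return (last.writePaths ?? []).some(p => request.path.startsWith(p));
}
```

The key property, regardless of schema details, is that every link can only narrow its parent's scope — authority monotonically shrinks along the chain.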

### 3. Signaling Architecture

- **acequia-devops**: Signaling via WebDAV polling. Offers/answers are written as JSON files to `/.wrtc/signaling/{room}/{peerId}-to-{targetId}.json` and polled every 2 seconds. No dedicated signaling server.

- **Acequia platform**: Dedicated signaling server (localDiscovery, port 31313) with persistent WebSocket connections. Real-time route registry with versioning, heartbeat/stale peer reaping, HTTP-to-WebSocket proxy fallback, glare detection.

Stephen's approach is elegant in its simplicity (pure WebDAV, no extra server) but would struggle with latency and connection setup time at scale. localDiscovery solves exactly these problems.
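The polling loop is simple enough to sketch. The path pattern comes from the design doc; the `davGet` helper is hypothetical (injected so the loop isn't tied to any particular fetch client), and the real stubs in `acequia-wrtc.js` remain unimplemented:

```javascript
// Sketch of acequia-devops-style signaling over plain WebDAV (no extra
// server). Path pattern is from the design doc; `davGet` is a hypothetical
// injected client that returns the file body, or null on 404.
const signalingPath = (room, peerId, targetId) =>
  `/.wrtc/signaling/${room}/${peerId}-to-${targetId}.json`;

async function pollForSignal(davGet, room, from, to,
                             { intervalMs = 2000, maxTries = 5 } = {}) {
  const path = signalingPath(room, from, to);
  for (let i = 0; i < maxTries; i++) {
    const body = await davGet(path);     // null → peer hasn't written yet
    if (body) return JSON.parse(body);   // offer or answer payload
    await new Promise(r => setTimeout(r, intervalMs));
  }
  return null;                           // gave up: peer never showed
}
```

Even in sketch form the latency floor is visible: a full offer/answer exchange costs at least one polling interval per leg, which is exactly the setup-time problem localDiscovery's persistent WebSockets avoid.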

### 4. State of Implementation

- **acequia-devops**: A design exploration from March 5–6, 2026. `acequia-wrtc.js` has the class structure, but key methods (`_checkForOffers()`, `_checkForAnswers()`) are TODO stubs. The dashboard saves Pareto decisions to WebDAV but doesn't connect to any build system. Five ChatGPT agent summaries provide architectural research.
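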

- **Acequia platform**: Production system with 277 tests, multi-domain SNI SSL, delegation chains, BrowserDAV file serving, group coordination primitives (Shared State, WorkerPool, Event Bus), deployed on EC2 serving `acequia.io` and `santafe.live`.

### 5. Worker Model

- **acequia-devops**: Envisions WebContainers + esbuild-wasm running in the browser for builds/tests. Workers are compute nodes in a distributed build system. Message types include `work:offer`, `work:accept`, `run:start`, `run:result`.

- **Acequia platform**: WorkerPool is a general-purpose distributed job queue over the peer mesh, not build-specific. Workers declare capabilities, jobs are matched and dispatched. Already implemented with tests.
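The capability-matching step at the heart of both models can be sketched as follows (illustrative structures, not the platform's actual WorkerPool API):

```javascript
// Sketch of capability-matched job dispatch in the WorkerPool style.
// Worker and job shapes are illustrative, not the platform's actual API.
function matchWorker(workers, job) {
  // a worker qualifies if it declares every capability the job requires
  return workers.find(w =>
    job.requires.every(cap => w.capabilities.includes(cap))) ?? null;
}
```

Under this framing, acequia-devops's build workers are just WorkerPool workers whose declared capabilities happen to be `build` and `test`.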

### 6. Write Classification

acequia-devops introduces a 4-tier write model that doesn't exist in the platform:

| Tier | Examples | Gate |
|------|----------|------|
| Free | Source edits, notes, drafts | ETag only |
| Reviewed | Pipeline config, policy changes | ETag + lint/test |
| Guarded | Channel promotion to prod | Attestation + signature |
| Break-glass | Emergency overrides | Special signer + loud audit |

The Acequia platform currently has a simpler model: reads and writes are gated by delegation chain scope (paths + writePaths), not by the nature of the operation.
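As an SW-side policy function, the 4-tier model might look like the sketch below. The path patterns are illustrative — the design doesn't specify classification rules at this granularity:

```javascript
// Sketch of the acequia-devops 4-tier write classification as an SW policy
// function. Path patterns are illustrative, not from the design doc.
function classifyWrite(path, { breakGlass = false } = {}) {
  if (breakGlass)
    return { tier: "break-glass", gate: ["special-signer", "loud-audit"] };
  if (/^\/channels\/prod\//.test(path))
    return { tier: "guarded", gate: ["etag", "attestation", "signature"] };
  if (/pipeline\.json$/.test(path) || /^\/policy\//.test(path))
    return { tier: "reviewed", gate: ["etag", "lint", "test"] };
  return { tier: "free", gate: ["etag"] };
}
```

The interesting design point is that the tier is a function of the *write target*, not the writer — which is precisely why it composes cleanly with delegation chains, which classify the writer.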

---

## The Complementary Vision

These aren't competing projects — they're complementary layers. acequia-devops describes a **CI/CD application** that could be built *on top of* the Acequia platform.

### Direct Mappings

| acequia-devops Concept | Acequia Platform Equivalent |
|------------------------|---------------------------|
| `AcequiaRoom` (WebRTC mesh) | `acequia.groups.Group` (signaling, routes, transport fallback) |
| `WORK_OFFER` / `WORK_ACCEPT` | `group.workerPool` (capability matching, job dispatch) |
| `FILE_CHANGED` broadcast | `group.publish()` / `group.subscribe()` (Event Bus) |
| Session cookie auth | Delegation chain tokens (scoped, time-limited, revocable) |
| WebDAV polling for signaling | localDiscovery WebSocket (real-time, with reconnection) |
| `BroadcastChannel` bridging | SW `client.postMessage()` transport (already integrated) |

### What acequia-devops Adds

The **attestation chain** concept is genuinely novel relative to the platform. The idea that certain writes require cryptographic proof of valid computation — not just proof of authorization — is a different trust axis:

- **Delegation chains** answer: *"who authorized this write?"*
- **Attestation chains** answer: *"what evidence supports this write?"*

Combining both would enable policies like: "this channel promotion requires a valid delegation chain (authorized user) AND a passing attestation (CI checks passed)."
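The combined gate is a simple conjunction of the two axes. Both verifier functions below are hypothetical stand-ins — one for the platform's chain check, one for acequia-devops's attestation check:

```javascript
// Sketch of an "authorized AND verified" gate combining both trust axes.
// `verifyDelegation` and `verifyAttestation` are hypothetical stand-ins
// for the platform's chain check and the devops attestation check.
async function gateGuardedWrite(request, verifyDelegation, verifyAttestation) {
  const authorized = await verifyDelegation(request.chain, request.path);  // who may write?
  const verified = await verifyAttestation(request.attestation,
                                           request.artifactHash);          // did the checks pass?
  return authorized && verified;  // guarded writes require both axes
}
```

Because the two checks share no state, either can be added to an SW policy layer without touching the other — which is what makes the axes genuinely orthogonal rather than merely different.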

### What the Platform Provides

If acequia-devops were built as an Acequia app (`acequia/{domain}/DevOps/`), it would inherit:

- Real-time signaling without polling (localDiscovery)
- Transport fallback (postMessage → WebRTC → HTTP proxy)
- Scoped authority via delegation chains (AI agent gets limited token, not full session)
- Multi-domain isolation
- BrowserDAV for peer-to-peer file serving
- Existing coordination primitives (Shared State, WorkerPool, Event Bus)

---

## Architectural Decision Comparison

Stephen's Pareto decisions (recorded in `dashboard/pareto-state.json`), mapped against the platform's current state:

| Decision Area | Stephen's Choice | Platform Status |
|---------------|-----------------|-----------------|
| Merge Strategy | LWW + ETag conflict detect | ETags used for concurrency; no CRDT |
| Versioning | Snapshot + optional Git interop | No versioning layer (files are current state) |
| UI Framework | Vanilla JS + Web Components | Vanilla JS + Shoelace web components |
| Cross-Tab Coordination | BroadcastChannel + Web Locks | BroadcastChannel via SW postMessage; no Web Locks |
| Worker Architecture | SW + Worker pool | SW (policy) + WorkerPool (group primitive) |
| Policy Complexity | Path rules + ETag only | Delegation chain scope (paths + writePaths) |
| Signing Key Custody | Hash-only (no signing) | PS256 key pairs in IndexedDB (CryptoKey, non-exportable) |
| WebRTC Transport | Noted as "main transport between nodes" | Implemented with full signaling, glare detection, chunked binary |

Notable alignment on vanilla JS + web components, SW + worker pool architecture, and WebRTC as primary peer transport. The platform has gone further on signing (full PS256 key management) while Stephen chose hash-only for simplicity.

---

## Synthesis Opportunities

1. **Build acequia-devops as an Acequia app** — Replace `acequia-wrtc.js` with `acequia.groups.Group`, gaining real-time signaling, transport fallback, and the full peer coordination stack.

2. **Add attestation-gated writes to the SW policy layer** — The platform's SW could learn to verify attestation chains alongside delegation chains, enabling "authorized AND verified" write policies.

3. **Scoped AI agent tokens** — acequia-devops uses ambient authority for AI agents (same session cookie as the human). Delegation chains could give AI agents scoped, time-limited, revocable tokens — a significant security improvement for autonomous agents.

4. **Pipeline-as-data in WebDAV** — The `pipeline.json` contract concept could become a standard Acequia app pattern: declarative workflows stored as WebDAV files, executed by WorkerPool peers, with results attested and stored alongside.

5. **WebDAV-based signaling as fallback** — Stephen's polling-based signaling could serve as an additional transport fallback when localDiscovery is unavailable, extending the platform's transport chain: postMessage → WebRTC → HTTP proxy → WebDAV polling.
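The extended transport chain in idea 5 amounts to a first-available selection over an ordered list. The probe functions below are hypothetical — each would test whether that transport is currently reachable:

```javascript
// Sketch of the extended transport fallback chain from synthesis idea 5.
// Each probe (all hypothetical) reports whether that transport is usable;
// the first hit wins, with WebDAV polling as the last resort.
async function pickTransport(probes) {
  const order = ["postMessage", "webrtc", "httpProxy", "webdavPolling"];
  for (const name of order) {
    if (await probes[name]?.()) return name;  // missing probe → skip
  }
  throw new Error("no transport available");
}
```

Ordering encodes the cost model: same-origin `postMessage` is effectively free, WebRTC needs signaling, the HTTP proxy needs the server, and WebDAV polling works whenever the file tree itself is reachable — which is the point of adding it as the floor of the chain.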
