# Group Shared State

> **Platform Reference** — describes implemented, production behavior.

Groups support shared mutable state: any member can update it, and all members receive change notifications. The state is managed by a peer-elected leader that merges patches and broadcasts updates.

```javascript
// Update state (shallow merge)
group.setState({ counter: 1, status: 'active' })

// Read current state
console.log(group.state)  // { counter: 1, status: 'active' }

// Listen for changes (from any peer, including self)
group.on('stateChanged', ({ state, patch, peerId, version }) => {
  console.log('Changed:', patch)
})
```

---

## When to Use Shared State

| Use Shared State when... | Use [WorkerPool](worker-pool.md) when... | Use [Routes](creating-acequia-apps.md#route-registration) when... | Use [Event Bus](event-bus.md) when... |
|---|---|---|---|
| All peers need the same data | Work should go to ONE peer | One canonical handler per URL | Events are ephemeral |
| Updates should reach ALL members | You need load distribution | You need `fetch()` semantics | Fire-and-forget messaging |
| Data is small (metadata, config, status) | Payloads are large or compute-heavy | Simple request/response is enough | Topic-based filtering |
| You need last-write-wins convergence | Jobs need retry, timeout, progress | A stable handler is expected | High-frequency updates |

---

## Quick Start

```javascript
import acequia from '/acequia.js'

await acequia.acequiaReady()

const group = new acequia.groups.Group('my-group', {
  displayName: 'My App',
  capabilities: ['state-leader'],  // at least one member needs this for shared state
})
await group.ready

// Set state — shallow merge, null deletes keys
group.setState({ counter: 1, status: 'active' })
group.setState({ status: null })  // deletes 'status' key

// Read current state
const state = group.state  // { counter: 1 }

// React to changes from any peer
group.on('stateChanged', ({ state, patch, peerId, version }) => {
  // state   — full state after merge
  // patch   — just the keys that changed
  // peerId  — who made the change (null for leader broadcasts)
  // version — leader's version counter (null for optimistic local updates)
})
```

---

## How It Works

### Leader Election

On every `peersList` change, peers elect the leader with the **lowest `instanceId`** among those with the `state-leader` capability. All peers use the same deterministic sort, so they always agree on the leader without coordination.
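The election rule can be sketched as a pure function. This is an illustration, not the platform's implementation: the peer-list shape and string-valued `instanceId`s are assumptions here.

```javascript
// Sketch of the election rule: among peers with the 'state-leader'
// capability, the lowest instanceId wins. Every peer runs the same
// deterministic sort, so all peers agree without coordination.
function electLeader(peers) {
  const candidates = peers
    .filter(p => p.capabilities?.includes('state-leader'))
    .sort((a, b) => String(a.instanceId).localeCompare(String(b.instanceId)))
  return candidates[0] ?? null  // null if no peer can lead
}

electLeader([
  { instanceId: 'c', capabilities: ['state-leader'] },
  { instanceId: 'a', capabilities: [] },            // not a candidate
  { instanceId: 'b', capabilities: ['state-leader'] },
])
// → the peer with instanceId 'b'
```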

**WorkerPool integration:** Any WorkerPool with a `handler` automatically gets the `state-leader` capability. Shared state works out of the box with pools — no extra configuration needed.

If using Group directly (without WorkerPool), add `state-leader` to capabilities explicitly:

```javascript
const group = new acequia.groups.Group('my-group', {
  capabilities: ['state-leader'],
  // ...
})
```

### Update Flow

1. Peer calls `setState(patch)` — **optimistic local merge** + fires `stateChanged` immediately
2. Patch is sent to the leader (WebRTC if connected, WS relay otherwise)
3. Leader merges the patch, increments version, broadcasts `groupState:changed` to all peers
4. Each peer updates its state and fires `stateChanged` with the authoritative version
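The four steps above can be modeled with a toy simulation. Everything here is illustrative (`toySetState`, the in-memory `leader`); the real leader and transport live inside the platform:

```javascript
// Toy model of the update flow: one leader, one local peer.
const leader = { state: {}, version: 0 }
const events = []  // stands in for 'stateChanged' emissions

function toySetState(patch) {
  events.push({ version: null, patch })            // 1. optimistic local event
  Object.assign(leader.state, patch)               // 2–3. leader merges the patch...
  leader.version += 1                              //      ...and increments version
  events.push({ version: leader.version, patch })  // 4. authoritative broadcast
}

toySetState({ counter: 1 })
// events: first { version: null, ... }, then { version: 1, ... }
```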

### Collect-Then-Lead

When a new leader is elected (e.g., the old leader disconnected), it requests state from all peers before broadcasting. This prevents a fresh peer from overwriting established state with `{}`.
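The collect step reduces to a merge over the reported states. This is a sketch with an assumed merge order, not the platform's exact logic; its point is that a fresh peer reporting `{}` contributes nothing:

```javascript
// Merge the states collected from all peers into one baseline.
function mergeCollected(reportedStates) {
  const merged = {}
  for (const state of reportedStates) Object.assign(merged, state)
  return merged
}

mergeCollected([{}, { counter: 4, status: 'active' }, {}])
// → { counter: 4, status: 'active' }  — the empty reports change nothing
```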

### Transport

WebRTC data channel is preferred (fast, peer-to-peer). If the WebRTC connection fails, the leader falls back to WebSocket relay through the discovery server. State is always delivered — the transport is an optimization, not a requirement.
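The preference rule amounts to a check like the following. This is a hypothetical sketch, with `peer.dataChannel` standing in for an `RTCDataChannel`; the names are illustrative only:

```javascript
// Prefer the WebRTC data channel when it is open; otherwise relay
// through the discovery server's WebSocket.
function chooseTransport(peer) {
  const rtcOpen = peer.dataChannel && peer.dataChannel.readyState === 'open'
  return rtcOpen ? 'webrtc' : 'ws-relay'
}

chooseTransport({ dataChannel: { readyState: 'open' } })        // → 'webrtc'
chooseTransport({ dataChannel: { readyState: 'connecting' } })  // → 'ws-relay'
chooseTransport({})                                             // → 'ws-relay'
```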

---

## API

### Reading State

```javascript
group.state  // plain object — the current shared state
```

### Writing State

```javascript
group.setState(patch)  // shallow merge into shared state
```

- Keys set to `null` are deleted from the state
- The merge is shallow — nested objects are replaced, not deep-merged
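The two rules above can be written out as a pure function. `applyPatch` is a hypothetical name for illustration; the real merge happens inside `setState`:

```javascript
// Shallow merge with null-deletes, matching the documented semantics.
function applyPatch(state, patch) {
  const next = { ...state }
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) delete next[key]  // null deletes the key
    else next[key] = value                // shallow: nested objects are replaced whole
  }
  return next
}

applyPatch({ counter: 1, status: 'active' }, { status: null, mode: 'idle' })
// → { counter: 1, mode: 'idle' }
applyPatch({ nested: { a: 1 } }, { nested: { b: 2 } })
// → { nested: { b: 2 } }  — not deep-merged
```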

### stateChanged Event

```javascript
group.on('stateChanged', ({ state, patch, peerId, version }) => { ... })
```

| Field | Type | Description |
|-------|------|-------------|
| `state` | `object` | Full state after merge |
| `patch` | `object` | Just the keys that changed |
| `peerId` | `string \| null` | Who made the change (`null` for leader broadcasts) |
| `version` | `number \| null` | Leader's version counter (`null` for optimistic local updates) |

---

## Patterns

### Broadcast Notifications

Use shared state to notify ALL group members. Unlike WorkerPool dispatch (which goes to ONE worker), state changes reach everyone:

```javascript
// Sender: notify all peers of a completed upload
group.setState({
  lastUpload: { filename: 'photo.jpg', url: '/uploads/abc/photo.jpg', ts: Date.now() },
})

// All receivers:
group.on('stateChanged', ({ patch }) => {
  if (patch?.lastUpload) showUploadNotification(patch.lastUpload)
})
```

### Presence / Status

```javascript
// Each peer publishes its status
group.setState({
  [`peer:${acequia.getInstanceId()}`]: { status: 'recording', since: Date.now() },
})

// All peers see everyone's status
group.on('stateChanged', ({ state }) => {
  const activePeers = Object.entries(state)
    .filter(([k, v]) => k.startsWith('peer:') && v?.status === 'recording')
  updateUI(activePeers)
})
```

### Clearing State

```javascript
// Clear specific keys
group.setState({ tempData: null, oldStatus: null })

// Clear all state
const nullPatch = {}
for (const key of Object.keys(group.state)) nullPatch[key] = null
group.setState(nullPatch)
```

---

## Gotchas

**Shallow merge only.** `setState({ nested: { a: 1 } })` followed by `setState({ nested: { b: 2 } })` results in `{ nested: { b: 2 } }` — the second call replaces the entire `nested` object. To update nested data, spread it yourself:

```javascript
group.setState({ nested: { ...group.state.nested, b: 2 } })
```

**Optimistic updates fire before leader confirmation.** The local `stateChanged` event fires immediately with `version: null`. The leader's broadcast arrives later with an authoritative `version` number. If two peers set the same key simultaneously, the leader's merge order wins.
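If a handler should run only once per confirmed change, a small filter can skip the optimistic echo and any stale broadcasts. The helper below is a sketch, not part of the API:

```javascript
// Returns a predicate that accepts only fresh authoritative updates.
function makeAuthoritativeFilter() {
  let lastVersion = 0
  return ({ version }) => {
    if (version === null) return false      // optimistic local echo
    if (version <= lastVersion) return false // stale or duplicate broadcast
    lastVersion = version
    return true
  }
}

// Usage (hypothetical `render`):
// const isFresh = makeAuthoritativeFilter()
// group.on('stateChanged', (e) => { if (isFresh(e)) render(e.state) })
```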

**Requires `state-leader` capability.** At least one peer in the group must have `state-leader` in its capabilities. WorkerPool workers get this automatically. Without it, state updates are sent but no leader processes them.

**State is ephemeral.** It exists only while group members are online. When all peers leave, state is lost. For persistence, write state to the WebDAV server separately.
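One way to persist is to snapshot authoritative updates to the WebDAV server. This is a hypothetical hook; the `/dav/my-group-state.json` path is an assumption, not a platform convention:

```javascript
// PUT a snapshot on each confirmed change; skip optimistic echoes.
function persistSnapshot(state, version) {
  if (version === null) return null  // wait for the leader's version
  return fetch('/dav/my-group-state.json', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ version, state }),
  })
}

// group.on('stateChanged', ({ state, version }) => persistSnapshot(state, version))
```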

**Keep state small.** Every update broadcasts the full state to all peers. State is best for metadata, counters, and status — not large payloads. For large data, use [WorkerPool](worker-pool.md) jobs or direct file operations.

For the full set of patterns and gotchas, see [App Patterns & Field Notes](app-patterns-field-notes.md#group-shared-state).
