
Anonymity Addendum

Related: DDS: Verifiable Deliberation on AT Protocol
Date: 2026-02-05
Status: Discussion notes


Executive Summary

This addendum explores the anonymity challenges inherent in any decentralized deliberation system. The key finding is that achieving strong cross-deliberation unlinkability is fundamentally difficult, regardless of which underlying protocol (AT Protocol, custom federation, pure P2P) is chosen. These challenges are not specific to AT Protocol. They are inherent to any system that uses public infrastructure and aims to minimize trust in operators.

Scope: This addendum covers participant anonymity: pseudonymity, cross-deliberation unlinkability, correlation resistance, and metadata leakage. Deliberation access (restricting who can participate) is a separate concern, addressed in the main spec (Deliberation Access) and in the Implementation Addendum (Deliberation Access).

Two distinct concerns are often conflated:

  1. Participant identity: what other participants learn about you (Levels 0-3, see §4)
  2. Metadata privacy: what infrastructure operators can infer about you from traffic, timing, and PDS origin (see §5)

These are orthogonal. Solving one does not solve the other. True anonymity requires both, which leads to two fundamentally different implementation paths (see §6).


1. The Core Problem

Deliberation platforms face a fundamental tension:

| Usability | Privacy |
| --- | --- |
| See my voting history | Votes unlinkable across deliberations |
| Sync across devices | No correlation of my activity |
| One login for everything | Different identity per context |
| Fast, responsive UX | No metadata leakage |

| Trust-minimization | Privacy (from operators) |
| --- | --- |
| Don’t trust PDS operator | Operator can’t see my activity |
| Don’t trust relay operators | Operators can’t correlate my DIDs |
| Verifiable, not trustworthy | No single point of surveillance |

These goals conflict. Any design must choose trade-offs.

Decentralization increases sovereignty but can reduce privacy. In a centralized system, you trust one operator with everything: simple threat model, one entity to trust. In a decentralized system, data flows through public infrastructure (PDS, Relay, AppView). Multiple entities observe your activity. Privacy requires hiding from all of them, which is harder.

This is not specific to AT Protocol:

| Protocol | Same challenges? | Notes |
| --- | --- | --- |
| AT Protocol | Yes | Firehose is public, PDS origin visible |
| ActivityPub | Yes | Server origin visible, federation leaks metadata |
| Custom P2P | Yes | DHT queries reveal interest, timing correlation |
| Blockchain-based | Yes (worse) | All data permanently public, immutable |
| Pure IPFS | Yes | Request patterns observable, no crowd |
| Centralized | Different | Single trust point, but simpler threat model |

The correlation challenges are inherent to the trust-minimized public infrastructure model, not to any specific protocol. However, the difficulty of solving them varies: on Nostr and Logos Messaging, anonymity flows naturally from the architecture (no server knows your identity), while on AT Protocol, the PDS inherently knows the user’s identity, so anonymity requires workarounds (see Design Rationale, Anonymity-First Protocols). For anonymity-first applications, those protocols may be a better foundation. DDS builds on AT Protocol for other reasons (usability, interoperability, ecosystem maturity, sovereignty guarantees) and handles participant anonymity at the identity layer rather than the transport layer.


2. Correlation Vectors (Protocol-Agnostic)

Regardless of the underlying protocol, these correlation vectors exist:

2.1 Identity Linkage

| Vector | Attack | Mitigation | Cost |
| --- | --- | --- | --- |
| Same identifier | Email/phone used across deliberations | Per-deliberation pseudonyms | UX complexity |
| Same DID | DID used across deliberations | Per-deliberation DIDs | Architecture complexity |
| Same PDS origin | All DIDs from same server visible on Firehose | Large multi-tenant PDS (crowd) | Must trust PDS crowd |
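The per-deliberation pseudonym/DID mitigation in the table above can be sketched as client-side key derivation: one master secret, held only on the device, deterministically yields an independent identifier per deliberation. Without the secret, a public observer cannot link the outputs; with it, the client can re-derive every identifier locally, so no server-side linkage table is needed. The function name, derivation scheme, and truncation below are illustrative assumptions, not a DDS-specified mechanism:

```python
import hmac
import hashlib

def per_deliberation_id(master_secret: bytes, deliberation_id: str) -> str:
    """Derive a stable, deliberation-scoped pseudonym from a client-held secret.

    HMAC-SHA256 keyed by the master secret: outputs for different
    deliberations are computationally unlinkable without the secret.
    """
    mac = hmac.new(master_secret, deliberation_id.encode(), hashlib.sha256)
    return mac.hexdigest()[:32]

secret = b"client-held master secret (never leaves the device)"
id_a = per_deliberation_id(secret, "deliberation:climate-2026")
id_b = per_deliberation_id(secret, "deliberation:budget-2026")

assert id_a != id_b  # different deliberations: unlinkable identifiers
assert id_a == per_deliberation_id(secret, "deliberation:climate-2026")  # stable
```

Note that this addresses only the identity-linkage vector: the network and behavioral vectors in the next two tables apply unchanged.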

2.2 Network Linkage

| Vector | Attack | Mitigation | Cost |
| --- | --- | --- | --- |
| IP address | Same IP makes requests for multiple DIDs | Tor/VPN/Mixnet | Latency, complexity |
| Timing | Requests for multiple DIDs at similar times | Randomized delays | Latency |
| Session | Same auth session queries multiple DIDs | Separate sessions | UX friction |

2.3 Behavioral Linkage

| Vector | Attack | Mitigation | Cost |
| --- | --- | --- | --- |
| Activity patterns | User active at same times across deliberations | Activity noise | Unnatural UX |
| Writing style | Stylometry on opinions | Style obfuscation | Unnatural writing |
| Voting patterns | Statistical correlation of voting behavior | None practical | Fundamental limit |
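The "none practical" entry for voting patterns follows from simple arithmetic: any observer who can read per-pseudonym ballots can compute agreement rates over overlapping topics, and a user's own preferences reproduce themselves across identities. A toy sketch (the ballot data and the agreement metric are invented for illustration):

```python
def vote_similarity(votes_a: dict, votes_b: dict) -> float:
    """Fraction of shared topics on which two pseudonyms voted identically."""
    shared = set(votes_a) & set(votes_b)
    if not shared:
        return 0.0
    agree = sum(votes_a[t] == votes_b[t] for t in shared)
    return agree / len(shared)

# Hypothetical public ballots: +1 approve, -1 reject, keyed by topic.
pseudonym_1 = {"t1": 1, "t2": -1, "t3": 1, "t4": 1, "t5": -1}
pseudonym_2 = {"t1": 1, "t2": -1, "t3": 1, "t4": 1, "t6": -1}  # same person
stranger    = {"t1": -1, "t2": 1, "t3": 1, "t4": -1, "t5": 1}

assert vote_similarity(pseudonym_1, pseudonym_2) == 1.0  # strong linkage signal
assert vote_similarity(pseudonym_1, stranger) == 0.2
```

No protocol-level mitigation changes this: the signal lives in the votes themselves, which the system must publish for verifiability.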

2.4 Infrastructure Linkage (Trust-Minimization Specific)

| Vector | Attack | Mitigation | Cost |
| --- | --- | --- | --- |
| Firehose/Relay | Public observer sees all commits from same PDS | Hide in crowd OR encrypt | Trust crowd OR complex crypto |
| AppView queries | History view reveals DID linkage | No cross-deliberation history | Terrible UX |
| Self-hosted PDS | Trivially links all your DIDs (you’re the only user) | Don’t self-host for privacy | Ironic: self-host = less privacy |

3. Specific Challenges

The challenges below apply specifically to cross-deliberation unlinkability (Level 3, per-deliberation identity). For Levels 0-2, the user has a single persistent DID, so there is no multi-DID linkage problem: history views, PDS origin, and behavioral patterns all attach to one known identifier by design.

3.1 The History View Problem

The most fundamental challenge: users want to see their participation history.

Users expect to see “all deliberations I’ve participated in” and “my votes across all topics.” This requires the system to know that {DID_1, DID_2, DID_3} belong to the same user. The options:

  1. Server knows (queries reveal linkage): defeats trust-minimization.
  2. Client knows (local storage only): see note on sync below.
  3. Encrypted on server: access patterns still reveal linkage.
  4. No history view: unacceptable UX.

Note on cross-device sync: Privacy-preserving sync is possible via local-first / device-to-device direct sync (similar to the Type B device sync in the Implementation Addendum). However:

  • Must ensure any relay/proxy server doesn’t learn the DID linkage
  • Non-trivial to implement correctly
  • Adds significant complexity to achieve both privacy AND sync

Conclusion: Any system that provides cross-deliberation history while minimizing trust in operators faces a fundamental challenge. Solutions exist but require careful design.
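Option 2 above (client knows, local storage only) is the trust-minimized baseline. A minimal sketch of a device-local history store, assuming a hypothetical JSON file layout and omitting at-rest encryption and the device-to-device sync a real client would add:

```python
import json
import tempfile
from pathlib import Path

class LocalHistory:
    """Client-only record of which per-deliberation DID belongs to this user.

    The linkage {deliberation -> DID} exists solely on the device; no server
    ever stores or serves it, so no operator query can reveal it.
    """
    def __init__(self, path: Path):
        self.path = path
        self.entries = json.loads(path.read_text()) if path.exists() else {}

    def record(self, deliberation_id: str, did: str) -> None:
        self.entries[deliberation_id] = did
        self.path.write_text(json.dumps(self.entries))

    def my_deliberations(self) -> list[str]:
        return sorted(self.entries)

# Usage: the history view is served entirely from local state.
store = LocalHistory(Path(tempfile.mkdtemp()) / "history.json")
store.record("deliberation:alpha", "did:plc:aaa")
store.record("deliberation:beta", "did:plc:bbb")
assert store.my_deliberations() == ["deliberation:alpha", "deliberation:beta"]
```

The cost is exactly the sync problem noted above: this file is the only copy of the linkage, so losing the device loses the history unless a privacy-preserving sync channel is added.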

3.2 Self-Hosted PDS: Sovereignty ≠ Privacy

Expanding on the self-hosted PDS vector from §2.4:

A self-hosted PDS provides maximum sovereignty (you control everything) and is fully trust-minimized (you are the operator), but it is worse for privacy. Your PDS has only your accounts, so the Firehose sees “pds.alice.com committed for did:plc:a1, b2, c3,” making correlation trivial: all DIDs belong to Alice. There is no crowd to hide in.

A managed PDS (many users) mixes your DIDs with thousands of others. The Firehose sees “bigpds.example.com committed for 50,000 DIDs,” making it harder to correlate which are yours. But you must trust the managed PDS operator.

The trade-off: self-hosted = sovereignty without privacy. Managed = privacy (crowd) without full sovereignty.


4. Participant Identity Levels

Note: The canonical summary of identity levels (0-3) is in the main specification, Terminology. This section provides extended discussion, threat models, and implementation guidance for each level.

These levels describe what other participants see about you. This is independent of metadata privacy (what infrastructure operators can infer); see §5 for that concern.

Level 0: Identified Participation

  • Credentials attached to DID reveal real-world identity (name, email, organization).
  • Fully linkable across deliberations.
  • Other participants and observers can see who said what.
  • Appropriate for: public civic discourse, company town halls, formal consultations.
  • Trade-off: maximum accountability, minimum privacy.

Level 1: Pseudonymous Participation (DDS Default)

  • User authenticates with identifiers (e.g., email, phone, social login) or linkable credentials (e.g., EUDI wallet, W3C VC without ZK), but the AppView does not expose them to other participants.
  • Other participants see only the DID.
  • Same DID used across deliberations (linkable by DID).
  • Trust-minimized: can walk away from any operator.
  • Appropriate for: most deliberation use cases.
  • Threat model: protects against casual deanonymization by peers. The PDS operator knows the user’s identifiers; pseudonymity is from the participant perspective, not from the operator perspective.

Level 2: Anonymous, ZK-verified Participation (Persistent)

  • Persistent DID across the network, associated with a ZK nullifier.
  • Proves eligibility (e.g., one-person-one-identity, event ticket, membership) without revealing who you are.
  • No strong identifiers (e.g., email, phone, wallet) attached.
  • Same DID used across deliberations (linkable by DID) but no deanonymization path via credentials.
  • Appropriate for: participation where accountability is not required but sybil resistance is needed.
  • Threat model: same as Level 1, but with no credential-based deanonymization path. However, “anonymous” here refers narrowly to credential opacity. The persistent DID is a correlation anchor: an observer who sees the same DID across deliberations can trivially apply the correlation vectors from §2 (activity timing, writing style, voting patterns) to build a behavioral profile and potentially deanonymize the participant. In practice, this level is closer to pseudonymity with credential hiding than true anonymity. For unlinkability, Level 3 (per-deliberation identity) is required.

Level 3: Anonymous, ZK-verified (Per-deliberation)

  • Fresh ephemeral identifier per deliberation (see Implementation Addendum §5).
  • Unlinkable across deliberations: participation in deliberation A cannot be correlated with deliberation B.
  • ZK nullifiers scoped per deliberation ensure eligibility per context (e.g., one-person-one-vote, event ticket, membership).
  • Appropriate for: sensitive consultations, whistleblower platforms, political dissent, any context requiring unlinkability.
  • Trade-off: because each deliberation uses a fresh identity, no prior eligibility proof carries over. The user must re-present their credential (e.g., re-scan ticket, re-do ZK proof) for every deliberation they join. This is the core UX cost compared to Level 2, where a single verification persists across contexts.
  • Limitation: protects against other participants and public observers, but not against infrastructure operators (PDS, relay) who can correlate via IP, timing, and session metadata. For protection from operators, metadata privacy measures are needed (see §5).
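The per-deliberation nullifier scoping above can be illustrated without real ZK machinery. In deployed systems (e.g., Semaphore-style protocols) the nullifier is derived inside a zero-knowledge proof, so the identity secret is never revealed; the plain hash below is a hypothetical stand-in that shows only the linkability behavior:

```python
import hashlib

def nullifier(identity_secret: bytes, scope: str) -> str:
    """Scope-bound nullifier: stable within a deliberation, unlinkable across.

    A real deployment computes this inside a ZK proof so verifiers learn the
    nullifier but never the secret; this hash only models the scoping.
    """
    return hashlib.sha256(identity_secret + scope.encode()).hexdigest()

secret = b"identity secret held in the user's wallet"
n1 = nullifier(secret, "deliberation:alpha")
n2 = nullifier(secret, "deliberation:alpha")
n3 = nullifier(secret, "deliberation:beta")

assert n1 == n2  # a second submission in the same deliberation is detectable
assert n1 != n3  # participation in alpha and beta cannot be linked
```

The verifier keeps a set of seen nullifiers per deliberation: a repeat nullifier means double participation, while distinct scopes yield values with no computable relation.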

On Guest Participation: “Guest” describes an account status (no hard credentials), not a single identity level. Guest participation spans a spectrum:

  • Unverified guests (no credentials at all, device-bound identity) operate at Level 2 (persistent identifier) or Level 3 (per-deliberation identifier), but without sybil resistance.
  • Soft-verified guests (e.g., Zupass ticket holders with ZK proofs) also operate at Level 2 or Level 3, but with sybil resistance via per-context ZK nullifiers.

Both types are anonymous to other participants. The difference is sybil resistance, not anonymity level. See Implementation Addendum §5 for design exploration.


5. Metadata Privacy

Metadata privacy is orthogonal to participant identity levels. It addresses what infrastructure operators and network observers can learn about you, regardless of what identity level you use.

A user at Level 3 (per-deliberation anonymous) with no metadata protection is still visible to their PDS operator, who can correlate fresh DIDs by IP address, session timing, and request patterns. The ZK proof hides the user’s identity from other participants; it does not hide anything from infrastructure.
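The operator-side correlation described above requires no sophistication; an ordinary request log suffices. A toy sketch (the log format and helper function are hypothetical):

```python
from collections import defaultdict

def correlate_by_ip(entries):
    """Group DIDs by originating IP: trivial linkage for the operator."""
    groups = defaultdict(set)
    for ip, did in entries:
        groups[ip].add(did)
    # Any IP serving more than one DID links those DIDs to one user.
    return {ip: dids for ip, dids in groups.items() if len(dids) > 1}

# Hypothetical PDS access log: (client_ip, did) pairs the operator observes.
log = [
    ("203.0.113.7", "did:plc:aaa"),
    ("203.0.113.7", "did:plc:bbb"),
    ("198.51.100.4", "did:plc:ccc"),
    ("203.0.113.7", "did:plc:aaa"),
]

assert correlate_by_ip(log) == {"203.0.113.7": {"did:plc:aaa", "did:plc:bbb"}}
```

This is why the protection tiers below operate on the network layer (crowds, Tor) rather than the identity layer: fresh DIDs do nothing against an observer who sees where the requests come from.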

Protection tiers

These can be combined with any identity level (0-3):

No protection (default): Standard AT Protocol infrastructure. PDS origin visible on the Firehose. IP address visible to the PDS operator. Sufficient for most use cases where the threat model is casual deanonymization by other participants, not operator-level surveillance.

Crowd hiding: Large multi-tenant PDS where your DIDs are mixed with thousands of others. Correlation requires traffic analysis rather than trivial PDS-origin matching. Trade-off: must trust the PDS operator not to correlate internally. Resists public observation, not operator collusion.

Strong metadata privacy: Tor for all network requests. Local-first sync only (device-to-device, no server-side history). No server learns which DIDs belong to the same user. Resists all observers, including operators. Trade-off: significant engineering effort, challenging UX, substantial development investment.

The honest assessment

Per-deliberation anonymity (Level 3) without metadata protection is useful but partial. It hides you from other participants and public Firehose observers. But the same operators you are hiding from at the identity level can infer the same linkage from traffic metadata. Identity-level anonymity alone does not deliver full anonymity. True anonymity requires both identity-level protection (Level 3) and strong metadata privacy, which is a fundamentally different architecture (see §6).

The gap between crowd hiding and strong metadata privacy is significant. Crowd hiding trusts the PDS operator. Strong metadata privacy trusts no one, but requires Tor integration, local-first sync, and careful audit of all network paths.


6. Recommendation for Implementers

6.1 Two Paths, Not One

DDS defines two architecturally distinct implementation paths:

Pseudonymity path:

  • Levels 0-2: persistent identity, honest about linkability.
  • Simple architecture, standard AT Protocol infrastructure.
  • In practice, most deliberation use cases lean towards this path.
  • DDS prioritizes tooling for this path first (SDK, best practices, reference implementations).

Strong anonymity path:

  • Level 3 + strong metadata privacy (Tor, local-first sync).
  • Per-deliberation identity, crowd hiding or Tor routing.
  • Fundamentally different architecture.
  • Separate best-practice guides and SDK recommendations may follow later.

DDS does not prescribe which path is “better.” The choice depends on the product’s use case and threat model. DDS provides guidance for both, with recommendations tailored to product goals.

6.2 Why Not Both in One App?

The two paths make opposing architectural choices. They should not be mixed in the same application:

| Aspect | Pseudonymity | Strong anonymity |
| --- | --- | --- |
| Identity | Persistent DID | Ephemeral DID per deliberation |
| Infrastructure | Standard PDS | Tor-routed, crowd hiding |
| History | Server-side sync | Local-first only |
| Auth | Standard AT OAuth | Separate mechanism |
| Credential reuse | Verify once | Re-verify every deliberation |
| Complexity | Standard | Significant engineering effort |

Mixing them in one app means neither works well. The infrastructure choices for pseudonymity (standard PDS, server sync, persistent DID) actively undermine anonymity. The infrastructure choices for anonymity (Tor, local-first, ephemeral DIDs) sacrifice the usability that makes pseudonymity practical.

Implementers should choose one path and commit to it.