# Orange Anchor: Digital Collateral for Existing Accounts

**Version 2.2 — Published**
**May 2026**

*Cost-backed accounts and cross-platform flagging.*

---

> **Core invariant.** Per-commitment production cost is positive and scales linearly with volume up to operational constants, under a stated, adversary-favourable threat model. The architectural contribution is cost preservation and attribution: production cost paid by a holder persists as attributable scarcity across every system that recognises the commitment. See *Orange Anchor White Paper v2.3* for the full argument.

---

## About This Document

This document is an integration and value brief for platform decision-makers and engineering leads at existing digital platforms. It explains what Orange Anchor offers, how it attaches to the systems already in place, and what platforms gain by adopting it.

The architectural pattern, security composition, and parameter calibration are developed in the companion documents — the *Orange Anchor White Paper v2.3*, *BACC v1.9*, and the *Orange Anchor Technical Paper* (forthcoming). This document assumes those foundations and develops integration paths, platform value, and user experience.

The document is structured so that the bold claim that opens each subsection conveys the core argument on its own. The prose beneath each claim adds depth and justification but is not required for first-pass comprehension. A reading guide for different audiences appears in Appendix A.

---

## Abstract

Orange Anchor is a mechanism by which a user, on their own device, produces a self-issued digital object whose creation requires real, non-trivial resources, anchored to Bitcoin's public sequence. The object — referred to in this document as *collateral* — can be staked behind any account a user already holds, on any platform that chooses to recognise it. Misuse of the account can be flagged by the recognising platform, and flags are publicly verifiable against the same anchor by every other recognising platform. The result is a primitive that converts uncollateralized accounts into collateralized ones without replacing existing identity infrastructure: a single new property attached to existing accounts, verifiable against the Bitcoin chain alone, with no issuer, no operator dependency, and no protocol modification to either Bitcoin or the integrating platform. This brief describes the mental model, the integration paths, the user experience, and the platform outcomes.

---

## 1. Digital Collateral: The Core Concept

**1.1 — Orange Anchor produces digital collateral: a self-issued object that costs real resources to produce, can be staked behind any account, and loses standing when flagged for misuse.**

Collateral is used here in its original sense: a commitment placed alongside a claim, at risk if the claim proves false. The object — referred to throughout this document as an *Orange Anchor proof*, and as *collateral* when staked behind an account — is generated by the user's own device during a *bracketed interval*: the real-time window between two Bitcoin anchor transactions that bound the proof's production. Within that interval, the device commits memory bandwidth, computational time, sensor activity, and Bitcoin-anchored time to produce a single object. The object is the user's. It carries no claim about who they are. Its only claim is that producing it cost something real, and that the cost is now sunk into an object whose standing can be damaged through public flagging.

**1.2 — Production cost scales linearly in volume: the per-proof floor — bracketed real-time, anchor fees, energy, and memory bandwidth occupation — must be paid in full for every proof and cannot be compressed by capital.**

Each additional proof requires another bracketed real-time interval on another device-slot. The components that constitute the per-proof floor — the bracket itself, the on-chain anchor fees, the energy consumed during the interval, and the memory bandwidth held during memory-hard checkpoints — do not amortise across volume. Fixed engineering and orchestration costs *do* amortise as production scales, but they are bounded; the per-proof floor remains underneath them. An attacker producing a thousand proofs pays a thousand floors, not one. This structural property bounds the adversary class that defeats most identity infrastructure today: high-throughput automated account production at near-zero marginal cost.
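
The scaling claim can be stated as a one-line cost model, with all numbers invented for illustration: amortisation drives the per-proof cost toward the floor from above, never below it.

```python
def cost_per_proof(n: int, floor_usd: float, fixed_usd: float) -> float:
    """Average cost of one proof when producing n proofs.

    The fixed engineering/orchestration cost amortises across volume;
    the per-proof floor does not. Values are illustrative only.
    """
    return floor_usd + fixed_usd / n
```

At any volume the average cost stays strictly above the floor, and a thousand proofs cost a thousand floors plus the bounded fixed overhead.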

**1.3 — The collateral is anchored to Bitcoin, requiring no issuer, no platform, and no trusted third party for verification.**

Verification depends only on the Bitcoin chain and the proof itself. There is no Orange Anchor company in the verification path. There is no operator that must be online. There is no trust relationship a platform must enter into to recognise a proof. Bitcoin's public, ordered, work-protected sequence is the only external dependency, and it is one every modern platform can already access.

**1.4 — Misuse can be flagged, and flags are publicly visible to every system that recognises the collateral: ban once, cost persists everywhere.**

When a platform identifies a holder as acting maliciously, the flag is a public observation against a public anchor. Other platforms that recognise the same anchor can see the same flag. The attacker cannot detach the flag from the proof, transfer the proof to a clean identity, or rebuild reputation cheaply elsewhere — every new account requires new collateral, produced from scratch at full cost. The result is a network effect: the value of integration to any one platform increases as the set of recognising platforms grows.

**1.5 — Flagging is the enforcement mechanism that converts collateral from a one-time charge into persistent stake.**

Without flagging, collateral would be a gateway charge: paid once, then forgotten. With flagging, it is persistent stake: paid once, exposed continuously, recoverable only through honest behaviour across the systems that recognise it. The cost is not consumed by use; it remains at risk for the lifetime of the object, returned to the holder through accumulated trust and easier access over time.

*The mechanism that converts collateral from one-time cost into persistent stake is flagging — described next.*

---

## 2. How Flagging Works

**2.1 — A flag is a signed, publicly published observation against a proof's anchor, discoverable by any verifier.**

A flag is not a centralised registry entry. It is a signed statement — *"this platform observed this proof's holder acting maliciously, on this date, for this reason"* — published publicly against the proof's anchor reference, and verifiable independently of any operator or central authority.

**2.2 — Flags may be published on-chain or through operator feeds; the protocol specifies the flag format, and the publication channel is a deployment choice.**

A flag may be recorded as a signed Bitcoin transaction referencing the proof's anchor — the highest-availability, most censorship-resistant option — or distributed through operator feeds as a signed off-chain record, cryptographically verifiable independently of the operator. Both modes use the same flag format and the same signature scheme, so a verifier consuming flags from multiple channels reconciles them by anchor reference and publisher signature. Platforms choose the channel; verifiers choose what to ingest. The flag format and operator feed schema are documented in the companion *BAVAI Operator Specification v1.0*.
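
A minimal sketch of the flag life-cycle under assumed field names (the canonical format and feed schema live in the *BAVAI Operator Specification v1.0*). HMAC-SHA256 stands in for the publisher's real asymmetric signature purely to keep the sketch dependency-free:

```python
import hmac, hashlib, json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Flag:
    anchor_ref: str      # hex txid of the proof's anchor (32 bytes)
    publisher: str       # flagging platform identifier
    date: str            # ISO date of the observation
    reason: str          # misuse category, human-readable
    signature: str = ""  # publisher signature over the other fields

def _payload(flag: Flag) -> bytes:
    # Canonical serialisation: signature excluded, keys sorted.
    body = {k: v for k, v in asdict(flag).items() if k != "signature"}
    return json.dumps(body, sort_keys=True).encode()

def sign_flag(flag: Flag, publisher_key: bytes) -> Flag:
    sig = hmac.new(publisher_key, _payload(flag), hashlib.sha256).hexdigest()
    return Flag(flag.anchor_ref, flag.publisher, flag.date, flag.reason, sig)

def verify_flag(flag: Flag, publisher_key: bytes) -> bool:
    expected = hmac.new(publisher_key, _payload(flag), hashlib.sha256).hexdigest()
    return hmac.compare_digest(flag.signature, expected)

def reconcile(feeds):
    """Merge flags from several channels (on-chain, operator feeds),
    de-duplicating by anchor reference and publisher signature."""
    seen, merged = set(), []
    for feed in feeds:
        for f in feed:
            key = (f.anchor_ref, f.publisher, f.signature)
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return merged
```

Because both channels carry the same signed record, a verifier that ingests a flag from the chain and again from an operator feed ends up with one flag, not two.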

**2.3 — Flags are reputationally weighted: the publishing platform's signature carries its track record, and verifiers weight flags accordingly.**

A flag from a platform with a history of accurate enforcement carries more weight than one from an unknown source. False flags are themselves reputational risk: platforms that flag accurately accumulate trust over time among the verifiers that consume their feeds; platforms that flag carelessly are weighted down. The protocol assigns no weight; verifiers do.
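
One possible verifier-side weighting policy, sketched with invented numbers: the protocol assigns no weights, so the track-record model below is an illustration of verifier policy, not a specification.

```python
def publisher_weight(accurate_flags: int, disputed_flags: int) -> float:
    """Laplace-smoothed accuracy: an unknown publisher starts near 0.5,
    a careful one climbs toward 1.0, a careless one sinks toward 0.0."""
    return (accurate_flags + 1) / (accurate_flags + disputed_flags + 2)

def flag_score(publishers, history) -> float:
    """Total weight behind the flags on one anchor, given each
    publisher's (accurate, disputed) track record."""
    return sum(publisher_weight(*history.get(p, (0, 0))) for p in publishers)
```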

**2.4 — Holders may publish counter-statements signed with their own collateral key, creating a public dispute record.**

The protocol does not adjudicate disputes; it records them publicly for verifiers to evaluate. A dispute does not erase the original flag — both stand, signed by their respective parties, on the public record.

*With the collateral and flagging model established, the platform value proposition follows directly.*

---

## 3. Why Platforms Integrate

**3.1 — Most digital platforms operate on uncollateralized relationships: accounts are produced at near-zero cost, and the resulting identity inflation is paid for by every honest user through degraded trust, increased moderation, and fraud overhead.**

When the marginal cost of a new account is near zero, any actor can produce as many accounts as their automation budget allows. The cost of defending against this falls on the platform and its honest users: detection systems, moderation teams, fraud reserves, support overhead, diluted recommendation signals, manipulated reviews, suppressed organic engagement. Identity inflation is a hidden tax that platforms have paid for so long they have stopped noticing it. It removes the scarcity signal that allows a platform to distinguish valuable participation from noise; without that signal, economic calculation about account value becomes structurally impossible.

**3.2 — Orange Anchor lets platforms move from uncollateralized to collateralized relationships without modifying their existing identity infrastructure, account model, or login flow.**

The platform keeps its login system, its account schema, its session model, its existing identity providers. Orange Anchor attaches as a supplementary signal on top of whatever identity infrastructure already exists. There is no migration. There is no rip-and-replace. There is a single new property on each account: whether collateral is attached, and if so, of what strength, with what flag history.

**3.3 — Once a user has staked collateral behind an account, the platform's threat model shifts from "anyone can be a bot" to "misuse now costs the attacker real resources they cannot recover."**

The change in security posture is structural, not incremental. Before integration: every new account is equally suspicious, every interaction is equally weighted, every defensive control is applied uniformly because the platform cannot distinguish stake from no-stake. After integration: accounts with collateral carry economic exposure, accounts without collateral carry the platform's existing default risk, and the platform can route trust, rate-limits, and surfacing decisions on a real and verifiable distinction.

**3.4 — Platforms can calibrate trust based on the strength and history of attached collateral, reducing security overhead on accounts that carry credible stake.**

A platform does not have to apply uniform suspicion to every account. Collateral with substantial history, dense sensor binding, long brackets, and a clean flag record can be treated more permissively. Collateral with thin history or recent flags can be treated more strictly. Accounts with no collateral remain at the platform's existing default. The platform sets its own policy; the protocol imposes no tiers. The cost of defence migrates from the platform's moderation budget to the attacker's production budget.
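
A sketch of what routing on that distinction might look like. The tiers, thresholds, and limits below are invented; each platform sets its own policy, and the protocol imposes none.

```python
def rate_limit(collateral) -> int:
    """Posts-per-hour ceiling for one account, as one invented policy.

    `collateral` is None for unstaked accounts, else a dict summarising
    the bound proof's age and flag history (field names hypothetical).
    """
    if collateral is None:
        return 10      # platform's existing default for unstaked accounts
    if collateral["recent_flags"] > 0:
        return 5       # staked but recently flagged: stricter than default
    if collateral["age_days"] >= 365 and collateral["clean_history"]:
        return 1000    # long clean history: treated permissively
    return 100         # staked, thin history: intermediate treatment
```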

*The practical question is how a platform attaches. Two paths cover the range of existing infrastructure.*

---

## 4. Integration Paths

**4.1 — Orange Anchor supports two integration paths, chosen by the platform based on its existing identity infrastructure.**

Path A treats the proof as a primary credential: the user signs in with the collateral key itself. Path B binds the proof to an account the user already holds. Path B is the lower-friction option and the right starting point for most existing platforms; Path A serves new platforms and sovereignty-focused communities that prefer cost-backed credentials in place of traditional account identifiers. Both paths produce the same collateral guarantee; they differ only in how the collateral connects to the platform's existing notion of an account.

### 4.2 Path A — Cost-Backed Sign-In

**4.2.1 — A platform can accept Orange Anchor as a primary credential: the user authenticates by proving control of the collateral key, and the platform records that proof as the account's identity.**

In this model, the proof *is* the account. The user signs a challenge with their Orange Anchor key; the platform verifies the signature and the underlying collateral; an account is created or resumed bound to that key. There is no email, no password, no third-party identity provider. The user's collateral is their identity within the platform, and the platform's account record points directly at the proof's anchor. This path suits new platforms, sovereignty-focused communities, and any system willing to use cost-backed credentials in place of traditional account identifiers.
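
The sign-in flow above can be sketched as a challenge-response. HMAC-SHA256 stands in for the real asymmetric signature to keep the sketch dependency-free; in the actual flow the platform verifies against the holder's public key and also verifies the underlying collateral against the Bitcoin chain.

```python
import hashlib, hmac, secrets

def issue_challenge() -> bytes:
    """Platform side: a fresh random nonce per sign-in attempt."""
    return secrets.token_bytes(32)

def holder_sign(collateral_key: bytes, challenge: bytes) -> bytes:
    """Holder side: prove control of the collateral key."""
    return hmac.new(collateral_key, challenge, hashlib.sha256).digest()

def platform_verify(collateral_key: bytes, challenge: bytes,
                    response: bytes) -> bool:
    """Platform side: check the response; real deployments also verify
    the collateral itself before creating or resuming the account."""
    expected = hmac.new(collateral_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def account_id(collateral_key: bytes) -> str:
    """The proof is the account: the record derives from the key."""
    return hashlib.sha256(collateral_key).hexdigest()
```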

### 4.3 Path B — Collateral Attachment to an Existing Account

**4.3.1 — A platform can let users bind their Orange Anchor collateral to an existing account — Google, Apple, X, an SSI DID, an email login, a forum membership — through a dual-signed cryptographic binding.**

The user signs an association statement with their Orange Anchor key. The platform's account system signs the same statement with the user's existing account credential, or, equivalently, the user signs through the existing account's authenticated session. The result is a binding between the proof and the account that neither party can produce alone. Existing identity infrastructure does not need to be replaced; the platform annotates accounts with collateral rather than restructuring them.

**4.3.2 — The binding is dual-signed: the Orange Anchor key and the existing account's controlling key both sign the association, producing a co-signature that neither party can forge unilaterally.**

The dual-signature requirement prevents two failure modes. An attacker who steals an Orange Anchor proof cannot bind it to someone else's account, because they cannot sign with the account's key. A platform or attacker who controls an account cannot fraudulently associate it with someone else's proof, because they cannot sign with the proof's key. The binding only exists when both sides agree, and the agreement is publicly verifiable.
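
The dual-signature property can be sketched as follows, again with HMAC-SHA256 standing in for the two asymmetric signatures; the point the sketch makes is that verification requires both signatures, so neither key alone can forge the binding.

```python
import hashlib, hmac, json

def association_statement(anchor_ref: str, account_id: str) -> bytes:
    """The statement both parties sign: proof anchor plus account."""
    return json.dumps({"anchor": anchor_ref, "account": account_id},
                      sort_keys=True).encode()

def cosign(statement: bytes, collateral_key: bytes, account_key: bytes):
    """Produce the dual-signed binding: one signature per party."""
    return {
        "statement": statement,
        "collateral_sig": hmac.new(collateral_key, statement,
                                   hashlib.sha256).hexdigest(),
        "account_sig": hmac.new(account_key, statement,
                                hashlib.sha256).hexdigest(),
    }

def verify_binding(binding, collateral_key: bytes, account_key: bytes) -> bool:
    s = binding["statement"]
    ok_collateral = hmac.compare_digest(
        binding["collateral_sig"],
        hmac.new(collateral_key, s, hashlib.sha256).hexdigest())
    ok_account = hmac.compare_digest(
        binding["account_sig"],
        hmac.new(account_key, s, hashlib.sha256).hexdigest())
    return ok_collateral and ok_account  # both required, neither sufficient
```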

**4.3.3 — Integration is intentionally lightweight: a settings-page UI element, a verification endpoint, and a single new property on the account record storing the bound proof's anchor reference.**

A Path B integration consists of a "Connect Orange Anchor" element on the account settings page, a backend endpoint that accepts the dual-signed binding and verifies it against the Bitcoin chain, and a property on the account record storing the bound proof's anchor reference (a 32-byte transaction identifier) plus an optional verifier-assigned assessment. No personal data is shared with Orange Anchor, and there is no service to call. There is no SDK lock-in, no protocol-level dependency, and no operator subscription.

The technical integration — endpoint, UI, verification logic — is a small surface: the protocol message set is documented in *Orange Anchor Interaction Patterns v1.0*, and the verification algorithm has narrow inputs (a portable proof envelope, a verification package, the Bitcoin chain). Engineering effort is dominated by Bitcoin verification integration: full-node operation or SPV with a confirmation-depth policy per *BACC v1.9* §3.2.1. The organisational integration — quality policy, trust and safety workflow, flag handling, recovery procedures — is the variable cost and depends on platform scale and risk posture.
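
The data footprint is small enough to sketch in full. Field names are illustrative; the only protocol-level datum is the 32-byte anchor reference.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    user_id: str
    # Existing schema untouched; one new optional property pair:
    anchor_ref: Optional[str] = None   # hex txid of the bound proof's anchor
    assessment: Optional[str] = None   # verifier-assigned, platform policy

def attach_collateral(account: Account, anchor_ref: str,
                      assessment: Optional[str] = None) -> Account:
    """What the verification endpoint stores once the dual-signed
    binding has been checked against the Bitcoin chain."""
    if len(bytes.fromhex(anchor_ref)) != 32:
        raise ValueError("anchor reference must be a 32-byte txid")
    account.anchor_ref = anchor_ref
    account.assessment = assessment
    return account
```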

### 4.4 Native Integration for DID and SSI Systems

**4.4.1 — DID and SSI systems with user-controlled keys whose key types are supported by the cosign protocol (currently Ed25519, secp256k1, and NIST P-256 — see *Orange Anchor Interaction Patterns v1.0* §3.2 for the canonical list) can integrate natively: the dual-signature is produced entirely client-side, and no operator or third party participates.**

For DID methods and SSI wallets where the user already holds the controlling key and the key type is in the supported set, the dual-signature is produced on the user's own device. The DID document or verifiable credential records the binding as an attestation; verification is performed by any consumer of the credential against the Bitcoin chain. This adds a primitive that has been difficult to instantiate in the SSI ecosystem without an issuer dependency: an economic floor against Sybil production, without introducing a new authority. The integration is additive — existing DID methods and credential schemas continue to work unchanged, with the Orange Anchor anchor reference recorded as supplementary cost-backing. DID methods whose key types fall outside the supported set are not natively addressed at v1.0; future revisions of the cosign protocol may extend the supported set.

*The integration decision requires understanding what the experience looks like on the user side and how trust calibrates over time.*

---

## 5. User Experience and Trust Dynamics

**5.1 — Proof strength is not centrally rated: any platform, community, or verifier can publish its own assessment, and verifiers compose any combination of assessments they choose.**

There is no Orange Anchor quality board and no official tiers. A platform that values long brackets and dense sensor binding publishes a high assessment for proofs that meet its bar; a community that values intermediate anchor density publishes its own. Verifiers compose any combination of assessments they choose, and each use case settles on the floor that meets its needs. Users who produce stronger proofs gain access to higher-floor contexts. Quality assessment is itself a market in which publishers accumulate or lose reputation based on the accuracy of their assessments. The construction paper develops a candidate denomination unit for expressing relative proof quality — the BARU (Bitcoin-Anchored Resource Unit) — offered as a deployment convention rather than a protocol commitment, available to operators and verifiers who find a tangible unit useful.
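
A minimal sketch of assessment composition under one invented policy (accept a proof if any trusted publisher assesses it at or above the verifier's floor). The publisher names, score scale, and policy are all hypothetical; the protocol defines no scale and no tiers.

```python
def compose_assessments(assessments, trusted, floor: float) -> bool:
    """Verifier-side composition: `assessments` is a list of
    (publisher, score) pairs gathered from the public record;
    `trusted` is the set of publishers this verifier listens to."""
    scores = [score for publisher, score in assessments
              if publisher in trusted]
    return bool(scores) and max(scores) >= floor
```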

**5.2 — User approval for any binding or presentation is a single biometric or PIN gesture in the holder's app; integration friction is asymmetric — heavy on first proof, near-zero on every subsequent use.**

First-time setup is a one-time configuration taking seconds, followed by a bracketed background computation that runs while the device is idle and charging — typically completing overnight. The user is not actively involved in the production process; they install the app, approve the bracket, and wake up with collateral ready. Every subsequent binding or presentation is a challenge-response approved with a thumbprint or PIN. Connecting the proof to a Google account is a single tap. Presenting the proof to a new platform is a single tap. The friction profile matches what consumer applications already train users for: heavy onboarding, light ongoing use.

Pushing parameters higher imposes cost on legitimate users as well as on adversaries; the over-engineering principle developed in the *Orange Anchor White Paper v2.3* exploits the favourable cost asymmetry, but the lever is bounded by the friction users will tolerate. Calibration sits inside that bounded range, not above it.

**5.3 — For the user, collateral is portable reputation: produced once on their own device, carried to every platform that recognises it, and earning easier access on every honest interaction — and, as §5.5 develops, making those bindings publicly correlatable to anyone querying the chain.**

The user's incentive to produce collateral is compounding. A single proof, produced overnight on the user's own device, can be bound to any number of accounts across the platforms that recognise it. Every honest interaction on every recognising platform contributes to a clean flag history that all other recognising platforms can see. Reputation built on one platform travels with the user to the next, without that platform needing to issue, federate, or share data with the first. The user owns the proof; the platforms attest to it. The cost is paid once; the benefit accumulates indefinitely. This portability is, by construction, public — the same anchor reference visible to one platform is visible to all, and §5.5 develops what that means for users who want unlinkable accounts.

**5.4 — Per-proof cost to the user is small, paid once at proof creation, and predictable under normal Bitcoin fee conditions.**

Two cost components compose the per-proof figure. Device energy is small and bounded by bracket duration on hardware the user already owns. On-chain anchor fees vary with Bitcoin fee market conditions and are typically reduced by an order of magnitude through *operators* — independent services that aggregate many proofs into single Bitcoin transactions to amortise on-chain fees, and that may also provide indexing and audit feeds. Under the illustrative conditions worked out in *Strategic Cost Calibration Model v1.3* §9.4 (a 10 sat/vB fee snapshot and a 1:50 batching ratio), the amortised per-proof cost falls in the range of tens of cents to low single dollars; see SCCM §9.4 for the full derivation, the explicit assumptions, and the sensitivity analysis under fee-market and batching variation. The user pays once at proof creation, not per interaction, not per platform, not per binding. Fee-market variation is real: under sustained mainnet congestion the cost rises proportionally. The forthcoming Calibration Annex will publish snapshot-anchored numerical values; until that publication, integrators should treat the SCCM §9.4 illustration as the published budgeting reference and substitute their own snapshot values for production planning. Protocol-level operator behaviour — batching formats, indexing interfaces, audit feed schemas, and flag publication formats — is documented in the *BAVAI Operator Specification v1.0*.
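
The on-chain component of that figure is simple arithmetic. The transaction size and exchange rate below are invented placeholders; the authoritative derivation, including the non-fee components and the sensitivity analysis, is SCCM §9.4.

```python
SATS_PER_BTC = 100_000_000

def amortised_anchor_fee_usd(fee_sat_per_vb: float, tx_vbytes: int,
                             batch_size: int, btc_usd: float) -> float:
    """Per-proof share of one batched anchor transaction's fee.

    An operator pays fee_sat_per_vb * tx_vbytes for the anchor
    transaction and splits it across batch_size proofs.
    """
    per_proof_sats = fee_sat_per_vb * tx_vbytes / batch_size
    return per_proof_sats * btc_usd / SATS_PER_BTC
```

With the SCCM snapshot conditions (10 sat/vB, 1:50 batching) and invented values of 300 vB and $100,000/BTC, the fee share alone is a few cents; it scales linearly with the fee rate under congestion, and the quoted tens-of-cents-to-dollars range includes the components the sketch omits.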

**5.5 — A single proof can be bound to multiple accounts; doing so makes those accounts publicly correlatable through the shared anchor. Users who want unlinkable accounts produce separate proofs per account, paying the per-proof cost for each.**

The privacy trade-off is built in by design. Binding the same proof to multiple accounts is the cheaper path and the right choice for users building a single coherent reputation across many platforms — *"this is me on Google, this is me on the forum, this is me in the DAO, and they are publicly the same me."* Producing separate proofs per account is the more expensive path and the right choice for users maintaining deliberately distinct identities — *"this is one of my identities, that is another, and the protocol does not link them."* The protocol enforces no single model; it lets the user choose, and prices each choice transparently. Cost asymmetry is the privacy mechanism: unlinkability is paid for in additional proof production, by the user who needs it.

**5.6 — Verification depends only on the Bitcoin chain and the proof itself; operators are an optional convenience layer for batching and indexing, never a trust dependency.**

A platform that wants to integrate directly does so against the Bitcoin chain, with its own verification logic, its own caching, its own assessment policy. Operators provide convenience services — proof discovery by holder, batching of on-chain anchors to reduce fees, audit feeds across many proofs — but they are never in the trust path. A platform that distrusts every operator can still verify every proof, because verification is a property of the chain, not of any party.

**5.7 — Honest users gain easier access and better treatment as their collateral accumulates positive history; malicious actors face increasing cost with every flag and every burned proof.**

The system rewards persistence. A user whose collateral has been bound to multiple accounts, recognised by multiple platforms, and accumulated a clean flag history over years carries a strong signal that no fresh attacker can match without paying the same time and the same cost. Conversely, an attacker who burns a proof through flagged misuse must produce a new proof from scratch, with no history, at full marginal cost — and the new proof is itself flaggable the moment it is misused. Reputation is not transferable across proofs; the per-proof floor is not amortisable; misuse is not free.

*The user dynamics translate into concrete platform outcomes.*

---

## 6. Platform Outcomes

**6.1 — Collateralized relationships reduce security and moderation cost because every account interaction now carries a real economic threat against the misuser, not the platform.**

The platform stops paying the full cost of identity inflation. Moderation teams handle fewer cases because abuse is more expensive to produce. Fraud reserves shrink because the attacker has economic exposure. Detection systems are augmented by a hard economic signal, not just statistical inference. Trust and safety teams move from open-ended pursuit of unbounded fake accounts to bounded enforcement against attackers whose collateral they can flag and burn.

**6.2 — Identity inflation — the hidden tax that uncollateralized systems pay through fraud, support, and degraded trust — is replaced by a market in which honest users earn easier access and abusers face increasing cost until continued operation exceeds the value extracted.**

Today, every honest user pays for the fake users on the same platform: through worse recommendations, slower support queues, more aggressive default rate-limits, more friction on legitimate actions. Under collateralized relationships, those costs migrate to the population that produces them. Honest users escape the friction. Abusers pay it directly, repeatedly, and visibly, until the cost of operation exceeds the value they can extract — and the platform's defensive posture sharpens with every flag it raises.

**6.3 — Whether a given abuse class becomes uneconomic depends on the cost floor's level relative to attacker return on investment; the architecture makes the floor exist, makes it linear in volume, and makes it tunable upward.**

The architectural property — that a per-proof floor exists, scales linearly with volume, and can be raised by parameter adjustment — is what makes the system's equilibrium adjustable. The specific equilibrium for a given abuse context is a calibration question: low-margin abuse economies (mass spam, engagement farming, low-value review fraud) break at low cost floors; higher-margin abuse economies require higher floors. The technical paper develops this analysis per abuse class. The integration brief's claim is the architectural one: production cost is real, sunk, and verifiable, and persists as attributable scarcity across every system that recognises the commitment. The level at which any given platform sets its floor is a deployment decision, not an architectural one.
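
The break-even condition can be sketched as arithmetic. All numbers in the usage below are invented; per-abuse-class analysis is deferred to the technical paper.

```python
def attacker_cost(n: int, floor_usd: float, fixed_usd: float) -> float:
    """n accounts cost n floors plus a bounded fixed overhead."""
    return n * floor_usd + fixed_usd

def attack_pays(n: int, value_per_account_usd: float,
                floor_usd: float, fixed_usd: float) -> bool:
    """An abuse class is economic iff extracted value exceeds cost."""
    return n * value_per_account_usd > attacker_cost(n, floor_usd, fixed_usd)
```

Because the floor is linear in volume, scaling up never rescues an abuse class whose per-account return sits below the floor: with hypothetical numbers, mass spam worth $0.05 per account never pays against a $0.50 floor at any volume, while a $5-per-account fraud economy requires a higher floor or complementary defences.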

**6.4 — Orange Anchor turns identity from an inflationary commons into a scarce, collateralized asset, with minimal friction for honest users and meaningful cost for attackers — and it does so without requiring any platform to give up control of its existing account model.**

The platform retains its identity stack. The user retains their existing accounts. The protocol adds a single new property: a verifiable, transferable-to-no-one, flaggable cost commitment that backs whatever account it is attached to. Friction lives on the production of fake claims, not on honest participation. Adoption is incremental, additive, and reversible. The first platform that integrates gains a quality differentiator. The hundredth platform that integrates inherits a network of cross-platform flag propagation that compounds the value of integration for everyone already in.

The same architectural pattern — self-issued, cost-backed, Bitcoin-anchored claims with public flagging — applies wherever a digital claim must carry verifiable cost without an issuer; identity is the first application. The full generality argument is developed in the *Orange Anchor White Paper v2.3* §9.

---

## Appendix A — Reading Guide

For different readers, the load-bearing path through the document is:

- **Executive (5 minutes):** the bold claims of §1, §3, and §6. Conveys the mental model, the value proposition, and the bottom line.
- **Engineering lead (10 minutes):** all bold claims, plus the supporting prose under §2 (flagging mechanism), §4 (both paths), and §5.6 (verification independence).
- **Integrator scoping work (15 minutes):** full document, with attention to §2 (flagging), §4.3.2 (dual-signature property), §4.3.3 (integration cost and data footprint), §5.4 (per-proof cost), §5.5 (correlation property), and §6.3 (cost floor vs attacker ROI).
- **SSI / DID architect:** §4.4 is the load-bearing point, supported by §1.3 (no trusted third party), §4.3.2 (dual-signature binding), and §5.5 (correlation properties).
- **Privacy / trust and safety reviewer:** §2 (flagging mechanism and dispute), §5.5 (correlation), §5.1 (decentralised assessment), and §5.6 (verification independence).
- **End user / advocate:** §5.3 is the load-bearing user-incentive claim, supported by §5.2 (user experience), §5.4 (cost), and §5.7 (reputation accumulation).

---

## Appendix B — Companion Documents

This brief sits within the Orange Anchor document suite:

- *Orange Anchor White Paper v2.3* — architectural pattern, structural claims about cost-backed digital scarcity, generality argument.
- *BACC v1.9 — Bitcoin-Anchored Collateral Commitments* — construction philosophy, calibration rationale, and ecosystem dynamics for the Orange Anchor instance.
- *Orange Anchor Technical Paper* (forthcoming) — formal construction, parameter calibration, security composition, threat-model snapshot.
- *BAVAI Operator Specification v1.0* — protocol-level specification for batching, indexing, and audit feed services; flag publication formats; and operator interface definitions.
- *Orange Anchor Reference Implementation* — Android application, verification library, and diagnostic tooling.

This brief depends on the *Orange Anchor White Paper v2.3* for architectural foundations, on *BACC v1.9* for the construction-level rationale, and on the technical paper for parameter calibration and security composition. The *BAVAI Operator Specification v1.0* is the protocol-level companion to §2.2 (flag publication) and §5.4 (operator services).

---

## Changelog from v2.1 to v2.2

- **About This Document:** Replaced the single-companion-paper reference with the canonical three-document suite (white paper, construction paper, technical paper forthcoming).
- **§2.2:** Added closing reference to the *BAVAI Operator Specification v1.0* for flag format and operator feed schema.
- **§5.1:** Added one-sentence reference to the construction paper's BARU denomination unit, classifying it as a deployment convention rather than a protocol commitment. Closes the cross-document terminology gap (a reader of the integration brief who later picks up the construction paper now has prior signal for the unit).
- **§5.4:** Added closing sentence pointing to the *BAVAI Operator Specification v1.0* for protocol-level operator behaviour, parallel to white paper §5 and construction paper §7.
- **Appendix B:** Reorganised companion-documents listing. Used canonical titles *Orange Anchor White Paper v2.3* and *BACC v1.9*. Added the construction paper (which was missing from v2.1's listing). Added the *BAVAI Operator Specification v1.0*. Added closing sentence locating the operator specification as the protocol-level companion to §2.2 and §5.4.
- **Voice register:** §3.1 and §6 retained at their current warmth. The audience is platform decision-makers and the document needs to articulate value in language that lands; the bold-claim spine holds the document's architectural register, and the explanatory prose carries appropriate warmth for the audience.
- **No body claims changed.** The document's argument and load-bearing claims are unchanged from v2.1.

---

*This document is v2.2 of the Orange Anchor integration brief, published alongside the Orange Anchor White Paper v2.3 and BACC v1.9. Comments, integration questions, and proposed extensions are invited.*