Summary
This addendum updates the independence scorecard (Report #84) with four significant events, incorporated as of March 28, 2026:
- Anthropic wins preliminary injunction (Mar 26): Federal judge blocks Pentagon’s supply-chain risk designation, calling it “First Amendment retaliation.” Anthropic B1_SVAS increases from 2.33 to 2.67.
- OpenAI removes “safely” from mission statement (Feb 23): Over six successive revisions, OpenAI removed all safety-specific language from its mission statement. B1_SVAS decreases from 0.67 to 0.50.
- OpenAI nonprofit names leaders with $130B equity stake (Mar 24): Structural incentive alignment between nonprofit safety governance and commercial equity value.
- Multiple federal agencies complete switch from Anthropic to OpenAI (Mar 2-3): State, Treasury, and HHS discontinued Claude and adopted GPT-4.1.
Updated Divergence
The Anthropic-OpenAI B1_SVAS gap has widened from 1.66 (2.33 vs. 0.67) to 2.17 (2.67 vs. 0.50) — the largest scored divergence between any two frontier labs on any metric in the dataset.
| Metric | Anthropic | OpenAI | Gap |
|---|---|---|---|
| B1_SVAS (Safety Veto) | 2.67 | 0.50 | 2.17 |
| D1_SCFI (Constraint Floor) | 2.0 | 0.33 | 1.67 |
| C1_DCS (Disclosure) | 0.167 | 0.250 | -0.083 |
| E1_EIS (Evaluation) | 0.750 | 0.000 | 0.750 |
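The gap column is simple subtraction (Anthropic score minus OpenAI score). A minimal sketch reproducing it from the table values above — illustrative only, not part of the scoring methodology:

```python
# Scores copied from the divergence table: metric -> (Anthropic, OpenAI).
scores = {
    "B1_SVAS": (2.67, 0.50),
    "D1_SCFI": (2.00, 0.33),
    "C1_DCS": (0.167, 0.250),
    "E1_EIS": (0.750, 0.000),
}

# Gap = Anthropic minus OpenAI; negative means OpenAI scores higher.
gaps = {metric: round(a - o, 3) for metric, (a, o) in scores.items()}

print(gaps)
# → {'B1_SVAS': 2.17, 'D1_SCFI': 1.67, 'C1_DCS': -0.083, 'E1_EIS': 0.75}
```

Note that only C1_DCS (Disclosure) is negative: OpenAI outscores Anthropic on that single metric, while the headline B1_SVAS gap of 2.17 drives the overall divergence.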
Structural Analysis
Judicial Validation of Safety Veto
The court ruling provides third-party validation of a safety veto exercise. Prior to March 26, the Anthropic score rested solely on publicly documented actions. The preliminary injunction adds judicial confirmation that the safety restrictions were genuine, that the government's response was likely unlawful retaliation, and that the commercial consequences were real. However, the score remains below ceiling because Anthropic’s RSP v3.0 removed the training pause commitment present in RSP v2.0 — deployment-side safety was maintained under pressure while training-side commitments were loosened.
Mission Statement as Leading Indicator
The removal of safety language from OpenAI’s mission occurred during for-profit restructuring. The structural concern is not that OpenAI lacks stated safety commitments but that the institutional architecture for enforcing them has weakened: the nonprofit now holds $130B in equity in the for-profit entity it oversees.
Federal AI Supply Chain Shift
The completed federal switch to OpenAI across multiple agencies creates structural dependency: federal workflows built around OpenAI’s models will accumulate switching costs, making any future safety-motivated restriction commercially costlier for OpenAI to impose.
Limitations
- The injunction is preliminary, not a final ruling — scores may need revision
- No new empirical safety data — this updates governance scores, not model safety measurements
- Independence score and actual model safety are different things
Report #327 | F41LUR3-F1R57 Adversarial AI Research