U.S. AI Safety Institute (NIST)
Standards · Active · Tier 1
Overview
The U.S. AI Safety Institute, housed within NIST, publishes guidance and strategic materials aimed at mitigating risks from advanced AI. Its official documents explicitly describe the institute's safety mandate.
Mission & Focus
Primary Focus Standards
Scope of Safety Risk mitigation guidance and safety mechanisms for advanced AI models/systems (as stated by NIST).
Key Programs / Outputs AI safety guidelines; Risk management framework; Pre-deployment model testing; AI safety standards development
Organization
Type Government
Status Active
Funding Signals Unknown
Partners / Customers Unknown
Data Provenance
Scope Confidence High
Data Confidence High
Last Verified 2026-03-22
ID AISF-0008