AI Safety Organisations

Who is working on what — technical safety, evals, governance, and field-building

We track 117 organisations across 16 countries working on AI safety in its various forms: from technical alignment research to government policy, from evaluations to field-building. This directory complements our Humanoid Robotics Company Directory.

117 Organisations
29 Tier 1
117 Active
16 Countries
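Every entry in this directory follows the same card schema: country, founding year, organisation type, work category (Technical, Evals, Governance, Standards, Training, Field-building, or Mixed), tier, and status. A minimal sketch of that schema as data, using hypothetical placeholder records (the names and values below are illustrative, not taken from the directory), shows how the headline counts above can be derived:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Org:
    """One directory card. Field names are assumptions, not an official schema."""
    name: str
    country: str
    est: int
    org_type: str   # For-profit, Nonprofit, Government, Academic, Program, Resource, Coalition
    category: str   # Technical, Evals, Governance, Standards, Training, Field-building, Mixed
    tier: int       # 1 = core safety mission (T1), 2 = safety-adjacent (T2)
    active: bool = True

# Hypothetical sample records for illustration only.
ORGS = [
    Org("Example Lab", "United States", 2021, "Nonprofit", "Technical", 1),
    Org("Example Institute", "United Kingdom", 2023, "Government", "Evals", 1),
    Org("Example Forum", "International", 2022, "Resource", "Field-building", 2),
]

def summary(orgs):
    """Compute the headline counts shown at the top of the directory."""
    return {
        "organisations": len(orgs),
        "tier_1": sum(1 for o in orgs if o.tier == 1),
        "active": sum(1 for o in orgs if o.active),
        "countries": len({o.country for o in orgs}),
    }

def by_category(orgs):
    """Count entries per work category."""
    return Counter(o.category for o in orgs)
```

With the real 117-entry dataset, `summary` would reproduce the 117 / 29 / 117 / 16 figures; on the three placeholder records it yields 3 organisations, 2 Tier 1, 3 active, 3 countries.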
United States Est. 2024 For-profit
Technical Active
Scope Building 'safe superintelligence' as sole product/mission.
Programs Safe superintelligence development · Scalable alignment research · Safety-by-design AI systems
Funding Unknown
France Est. 2023 For-profit
Unknown Active
Scope Inclusion provisional; a safety-focused mission has not been confirmed from primary sources.
Programs Healthcare AI development · Agentic AI systems
Funding Unknown
United States Est. 2000 Nonprofit
Technical Active
Scope Technical research on alignment/control of advanced autonomous AI systems.
Programs Agent foundations research · Decision theory · Alignment theory · Nontrivial alignment
Funding Unknown
United States Est. 2022 Nonprofit
Mixed Active
Scope Reducing societal-scale risks from AI via research, field-building, and advocacy.
Programs AI safety research grants · Statement on AI Risk · Field-building programs · Compute cluster for safety research
Funding Unknown
United States Est. 2021 Nonprofit
Technical Active
Scope Technical alignment/interpretability and related research.
Programs Eliciting latent knowledge · Alignment theory research · Model evaluations
Funding Unknown
United Kingdom Est. 2018 Academic
Governance Active
Scope AI governance research for risk mitigation and policy design.
Programs AI governance research · Policy fellowships · Compute governance · International AI governance
Funding Unknown
United Kingdom Est. 2023 Government
Evals Active
Scope Understanding capabilities/impacts of advanced AI and testing risk mitigations.
Programs Frontier model evaluations · AI safety research · Pre-deployment testing · International safety cooperation
Funding Unknown
United States Est. 2024 Government
Standards Active
Scope Risk mitigation guidance and safety mechanisms for advanced AI models/systems (as stated by NIST).
Programs AI safety guidelines · Risk management framework · Pre-deployment model testing · AI safety standards development
Funding Unknown
United States Est. 2025 Government
Standards Active
Scope Testing, evaluation, and collaborative research to harness and secure commercial AI systems.
Programs AI standards development · Commercial AI testing · Safety evaluation frameworks · Industry collaboration
Funding Unknown
United States Est. 2022 Program
Training Active
Scope Research training program in model safety: control, interpretability, oversight, evals/red teaming, robustness.
Programs Alignment research scholars program · Mentorship cohorts · Interpretability training · Red teaming curriculum
Funding Unknown
United States Est. 2022 Program
Training Active
Scope Student-led research group reducing risk from advanced AI.
Programs Student alignment research · AI safety reading groups · Technical workshops
Funding Unknown
International Est. 2022 Resource
Field-building Active
Scope Included as a meta-resource; not an AI safety org doing safety work itself.
Programs Interactive safety org map · Organization directory
Funding Unknown
United Kingdom Est. 2023 Resource
Field-building Active
Scope Meta-map; not itself doing AI safety work.
Programs AI safety landscape mapping · Organization categorization
Funding Unknown
United States Est. 2018 Nonprofit
Governance Active
Scope Publishes a report cataloguing AI Safety Institutes worldwide; included as governance/meta-source org.
Programs Responsible tech pipeline · AI safety institute landscape mapping · Community building
Funding Unknown
Japan Est. 2024 Government
Evals Active
Scope Publishes red-teaming methodology guidance on AI safety.
Programs AI safety evaluations · International safety cooperation · Japan AI safety standards
Funding Unknown
United Kingdom Est. 2024 Coalition
Governance Active
Scope Evidence-based AI policy informed by scientific understanding of AI risks and mitigations.
Programs AI safety policy evidence base · Research synthesis · Public education on AI risk
Funding Unknown
Mixed Active
Scope International scientific synthesis of capabilities/risks of general-purpose AI systems.
Programs Expert synthesis on AI safety · Global risk assessment · International consensus building
Funding Unknown
United States Est. 2021 For-profit
Technical Active
Scope Unknown
Programs Constitutional AI · Responsible scaling policy · Interpretability research · Model evaluations
Funding Unknown
Canada Est. 2023 Nonprofit
Governance Active
Scope Unknown
Programs Canadian AI governance · Safety policy research · Regulatory advocacy
Funding Unknown
United Kingdom Est. 2019 For-profit
Technical Active
Scope Unknown
Programs Value alignment technology · Safe AI deployment tools · Alignment consulting
Funding Unknown

ALTER

T2
Israel Est. 2022 Nonprofit
Mixed Active
Scope Unknown
Programs AI safety research in Israel · Technical alignment · International collaboration
Funding Unknown

Astera

T2
United States Est. 2022 Nonprofit
Technical Active
Scope Unknown
Programs Scientific research incubation · AI safety-adjacent funding · Public benefit technology
Funding Unknown
United States Est. 2022 Nonprofit
Training Active
Scope Unknown
Programs AI policy research · Safety communication · Policy advocacy
Funding Unknown
International Est. 2023 Nonprofit
Training Active
Scope Unknown
Programs Global AI safety coordination · International safety community building
Funding Unknown
United States Est. 2014 Nonprofit
Mixed Active
Scope Unknown
Programs AI safety grants program · Open letters on AI risk · EU AI Act advocacy · Existential risk policy
Funding Unknown
United States Est. 2016 Academic
Technical Active
Scope Unknown
Programs Value alignment research · Cooperative inverse reinforcement learning · Human-compatible AI theory
Funding Unknown
Netherlands Est. 2021 Nonprofit
Governance Active
Scope Unknown
Programs Public awareness of x-risk · Media engagement on AI risk · Policy advocacy
Funding Unknown
International Est. 2022 Program
Training Active
Scope Unknown
Programs Career advising for AI safety · Mental health support · Community resources
Funding Unknown
United States Est. 2016 Coalition
Governance Active
Scope Unknown
Programs Responsible AI practices · ABOUT ML framework · Safety-critical AI workstream · AI incident database
Funding Unknown
France Est. 2019 Government
Governance Active
Scope Unknown
Programs OECD AI Principles · National AI policy tracker · AI governance best practices
Funding Unknown
United States Est. 2021 Nonprofit
Mixed Active
Scope Threat assessment/mitigation for AI systems; applied alignment/control; evals.
Programs Adversarial training for safety · Alignment faking research · Interpretability research · Control evaluations
Funding Unknown
United States Est. 2023 Nonprofit
Evals Active
Scope Independent evaluation of frontier models for catastrophic-risk-relevant capabilities.
Programs Autonomous capability evaluations · Frontier model threat assessments · Task-based eval frameworks
Funding Unknown
United States Est. 2023 Nonprofit
Mixed Active
Scope Reducing risks from dangerous capabilities in advanced AI systems; evaluations for scheming/deception; governance guidance.
Programs Scheming evaluations · Deceptive alignment detection · In-context scheming benchmarks
Funding Unknown
United States Est. 2022 Nonprofit
Mixed Active
Scope AI safety research & education nonprofit focused on safe and beneficial frontier AI.
Programs Adversarial robustness research · AI safety via debate · Red teaming · Alignment research incubation
Funding Unknown
United Kingdom Est. 2022 For-profit
Technical Active
Scope Alignment research startup; building controllable, safe development of advanced AI.
Programs Cognitive emulation theory · Interpretability research · CoEm alignment approach
Funding Unknown
Canada Est. 2024 Government
Evals Active
Scope Government institute supporting safe and responsible AI development/deployment in Canada.
Programs AI safety standards for Canada · Frontier model evaluations · Safety research grants
Funding Unknown
Canada Est. 2023 Nonprofit
Governance Active
Scope Catalyzing Canada’s leadership in AI governance and safety.
Programs Canadian AI governance policy · Safety research coordination · Regulatory frameworks
Funding Unknown
International Est. 2018 Program
Training Active
Scope Online, part-time AI safety research program organizing project teams.
Programs Research bootcamps · Alignment project mentorship · Field-building retreats
Funding Unknown
United Kingdom Est. 2018 Research org
Governance Active
Scope Governance research and talent development for managing risks/opportunities from advanced AI.
Programs AI governance research · Policy fellowships · Compute governance · International AI governance
Funding Unknown
United Kingdom Est. 2022 Program
Training Active
Scope Runs free courses on AI safety and governance; builds community for contributors.
Programs AI safety fundamentals course · AI governance course · Scalable safety education
Funding Unknown
International Est. 2022 Resource
Field-building Active
Scope Resource hub supporting AI existential safety ecosystem.
Programs AI safety resource hub · Organization directory · Reading groups coordination
Funding Unknown

SaferAI

T1
France Est. 2023 Nonprofit
Mixed Active
Scope AI risk measurement, risk management ratings, standards and policy work to make AI safer.
Programs AI risk management ratings · Safety benchmarking · Responsible scaling assessments
Funding Unknown
United States Est. 2016 Academic
Technical Active
Scope Reorient AI research toward provably beneficial systems (mission).
Programs Value alignment research · Cooperative inverse reinforcement learning · Human-compatible AI theory
Funding Unknown
United States Est. 2011 Nonprofit
Governance Active
Scope AI risk governance research as part of global catastrophic risks analysis.
Programs Global catastrophic risk modeling · AI risk analysis · Risk assessment frameworks
Funding Unknown
United States Est. 2017 Nonprofit
Governance Active
Scope Policy research challenging current AI trajectory; accountability and societal risk governance.
Programs AI accountability research · Regulatory policy · AI industry analysis · Workers and AI
Funding Unknown
Spain (Valencia) Est. 2024 Program
Evals Active
Scope Academic program dedicated to AI evaluation focusing on capabilities and safety.
Programs International AI evaluation standards · Cross-border model testing · Safety eval harmonization
Funding Unknown
International Est. 2024 Coalition
Mixed Active
Scope Scientific synthesis of risks and mitigations for general-purpose AI.
Programs Global AI safety synthesis report · Expert consensus building · International risk assessment
Funding Unknown
France (OECD HQ) Est. 2019 Government
Governance Active
Scope Trustworthy AI principles and global policy tracking and guidance.
Programs OECD AI Principles · AI policy observatory · National AI strategies tracker · AI incident monitoring
Funding Unknown
France Est. 2023 Program
Evals Active
Scope Company risk management practice ratings for frontier AI labs.
Programs AI company safety ratings · Risk management benchmarking · Responsible scaling assessments
Funding Unknown
United States Est. 2021 Resource
Technical Active
Scope Duplicate profile of the Redwood entry above; retained for the deduplication log.
Programs Adversarial training · Alignment faking research · Control evaluations
Funding Unknown
United States Est. 2023 Nonprofit
Evals Active
Scope Model evaluation and threat research; formerly ARC Evals.
Programs Autonomous capability evaluations · Task-based model assessments · Threat research
Funding Unknown
Canada Est. 2024 Program
Technical Active
Scope Multidisciplinary research program tackling AI safety issues.
Programs AI safety research grants · Academic safety research coordination
Funding Unknown
Standards Active
Scope Publishing norms to mitigate harms and risks from AI research dissemination.
Programs Publication norms for responsible AI · Dual-use research guidelines
Funding Unknown
France (OECD) Est. 2019 Standards
Governance Active
Scope Intergovernmental standard promoting trustworthy AI principles.
Programs AI governance principles · International policy standards
Funding Unknown
Evals Active
Scope Joint work on scheming evaluations; not a standalone org.
Programs Scheming evaluation collaboration · In-context deception detection
Funding Unknown
United States Est. 2019 Academic
Governance Active
Scope AI policy, national security, and emerging tech governance; safety-adjacent.
Programs AI and national security research · Emerging technology policy · AI workforce analysis
Funding Unknown
United States Est. 2023 Nonprofit
Technical Active
Scope Trustworthy, open AI research; safety adjacent.
Programs Trustworthy AI development · Open-source AI safety tools · Community-driven AI safety
Funding Unknown
United States Est. 2022 Nonprofit
Field-building Active
Scope Funding/support for safety research (ecosystem node).
Programs AI safety research grants · Scientific computing infrastructure · Emerging technology support
Funding Unknown
United States Est. 2014 Academic
Mixed Active
Scope Academic AI research umbrella; contains safety-aligned groups (e.g., CHAI).
Programs Foundational AI research · Safety-adjacent ML research · Robustness and fairness
Funding Unknown
United States/International Est. 2023 Nonprofit
Standards Active
Scope Industry-supported nonprofit addressing significant risks to public safety and national security from frontier models.
Programs Responsible development guidelines · Safety best practices · AI safety fund · Red teaming standards
Funding Unknown
United Kingdom Est. 2012 Academic
Mixed Active
Scope Research on existential and global catastrophic risks, including risks from artificial intelligence (technical + governance).
Programs Existential risk research · AI safety policy · Extreme technological risk analysis
Funding Unknown
Governance Active
Scope Interdisciplinary research on the future of intelligence and responsible AI development/governance.
Programs Future of intelligence research · AI narratives project · Kinds of intelligence · AI ethics and society
Funding Unknown
International Est. 2023 Resource
Field-building Active
Scope Community that helps people navigate the AI safety ecosystem and find projects.
Programs AI safety educational games · Public engagement on AI risk
Funding Unknown
International Est. 2023 Resource
Field-building Active
Scope Fortnightly meetings discussing AI safety papers and essays (community).
Programs Fortnightly reading group sessions · AI safety paper discussions
Funding Unknown
International Est. 2023 Resource
Field-building Active
Scope Community infrastructure mentioned as organizer for AISafety.com reading group.
Programs Community coordination · Alignment ecosystem development
Funding Unknown
Czech Republic Est. 2018 Resource
Field-building Active
Scope Program empowering students to use theses as a pathway to impact (career support).
Programs Thesis topic coaching · AI safety research mentorship · Academic career guidance
Funding Unknown
United States Est. 2019 Resource
Field-building Active
Scope Funding node for long-term survival and flourishing projects (funding).
Programs AI safety research grants · Existential risk funding · S-process grant allocation
Funding Unknown
International Est. 2023 Resource
Field-building Active
Scope Directory of funders offering financial support to AI safety projects.
Programs Funding directory for AI safety · Donor coordination
Funding Unknown
International Est. 2023 Resource
Field-building Active
Scope Directory to map current AI safety research teams and gaps.
Programs Volunteer project listings · Community contribution matching
Funding Unknown
International Est. 2022 Resource
Field-building Active
Scope Meta-post documenting AISafety.com map categories and ecosystem.
Programs AI safety field mapping · Research landscape visualization
Funding Unknown
United States Est. 2022 Resource
Field-building Active
Scope Publishes an impact assessment of AI Safety Camp.
Programs AI safety benchmarking · Forecasting research · Alignment evaluation tools
Funding Unknown
United States Est. 2019 Academic
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI policy research · Emerging technology analysis · National security and AI
Funding Unknown
United States Est. 1948 Nonprofit
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI policy research · National security and AI · Risk assessment frameworks · Technology governance
Funding Unknown
United States Est. 1916 Nonprofit
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI governance research · Technology policy analysis · Responsible AI frameworks
Funding Unknown
United Kingdom Est. 2015 Academic
Mixed Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI safety and ethics research · Data science for public good · AI governance frameworks
Funding Unknown
United Kingdom Est. 2018 Nonprofit
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI accountability research · Algorithmic auditing · Public engagement on AI · Regulatory policy
Funding Unknown
United States Est. 2023 Nonprofit
Training Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI policy research · AI governance strategy · Emerging technology policy analysis
Funding Unknown
Belgium/EU Est. 2024 Government
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs EU AI Act implementation · AI governance coordination · GPAI model oversight
Funding Unknown
International Est. 2023 Government
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs Global AI governance recommendations · International AI safety norms · Capacity building
Funding Unknown
Standards Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs Safety-critical AI guidelines · Industry safety standards
Funding Unknown
International Est. 2023 Nonprofit
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs Bio-AI risk assessment · Dual-use technology governance · Cross-domain risk analysis
Funding Unknown
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs Biosecurity research · AI misuse risk analysis · Health security policy
Funding Unknown
United States Est. 2001 Nonprofit
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs Nuclear risk reduction · AI and WMD risk · Biosecurity governance
Funding Unknown
Mixed Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs Existential risk research · AI governance theory · Macrostrategy research
Funding Unknown
United Kingdom Est. 2005 Academic
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI governance research · Digital ethics · Future of work and AI
Funding Unknown
United States Est. 1910 Nonprofit
Governance Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI and international order · Technology and democracy · Digital governance
Funding Unknown
United States Est. 2019 Academic
Mixed Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI Index report · Policy research · Interdisciplinary AI research · AI audit tools
Funding Unknown
United States Est. 2020 Resource
Evals Active
Scope Included as part of the AI safety ecosystem; mission verification may be needed for safety-first criteria.
Programs AI incident tracking · Incident taxonomy · Safety learning from failures
Funding Unknown
United States Est. 2023 Nonprofit
Governance Active
Scope Works on preventing misuse of advanced AI and strengthening safeguards; mission verification needed.
Programs AI security advocacy · Policy engagement on AI risk
Funding Unknown
United States Est. 2023 Nonprofit
Governance Active
Scope Publishes analysis/forecasts of AI trajectories; safety-adjacent.
Programs AI governance research · Future scenarios analysis · Policy recommendations
Funding Unknown

PauseAI

T2
Netherlands Est. 2023 Nonprofit
Governance Active
Scope Advocacy group focused on slowing AI progress until safe.
Programs AI development moratorium advocacy · Public protests and campaigns · International chapters
Funding Unknown
Belgium Est. 2018 Government
Governance Active
Scope EU monitoring and policy support for AI.
Programs AI landscape monitoring · Policy analysis for EU · AI uptake tracking
Funding Unknown
Belgium Est. 2018 Government
Field-building Active
Scope EU community platform; not a dedicated safety org.
Programs Stakeholder consultation · AI policy input to EU · Community engagement
Funding Unknown
France Est. 2014 Nonprofit
Governance Active
Scope AI governance think tank.
Programs AI governance research · UN and multilateral engagement · Responsible AI frameworks
Funding Unknown
United Kingdom Est. 2021 Nonprofit
Governance Active
Scope Catastrophic risk org with AI relevance.
Programs AI policy for UK government · Extreme risk policy · Biosecurity and AI governance
Funding Unknown
United States Est. 2017 Nonprofit
Field-building Active
Scope Funder; ecosystem node.
Programs AI safety research grants · Technical alignment funding · AI governance grants · Biosecurity and AI
Funding Unknown
United States Est. 2023 Nonprofit
Governance Active
Scope AI policy research and advocacy.
Programs Public opinion polling on AI · AI policy advocacy · Congressional engagement
Funding Unknown
United States Est. 2018 Resource
Field-building Active
Scope Community forum; meta node.
Programs Technical alignment discussion · Research publication platform
Funding Unknown
United States Est. 2009 Resource
Field-building Active
Scope Community platform; meta node.
Programs Rationality community platform · AI safety discussion forum · Research publication
Funding Unknown
United States Est. 2022 Nonprofit
Governance Active
Scope Tracks AI progress; safety-adjacent metrics.
Programs AI trends forecasting · Compute analysis · Key trends in AI publication
Funding Unknown
Canada Est. 2017 Academic
Technical Active
Scope Research institute with safety-related initiatives.
Programs AI for humanity research · Responsible AI development · AI safety research · Talent training
Funding Unknown
Governance Active
Scope Think tank work on AI governance.
Programs Digital governance research · AI and data governance · International policy
Funding Unknown
United States Est. 1999 Nonprofit
Governance Active
Scope AI accountability and governance work.
Programs Open Technology Institute AI work · Tech policy research · AI accountability
Funding Unknown
France Est. 2020 Government
Governance Active
Scope International governance partnership.
Programs Responsible AI working groups · International AI governance · Innovation and commercialization
Funding Unknown
Switzerland Est. 2017 Standards
Standards Active
Scope International AI standardization committee.
Programs AI management system standards · AI risk management standards · AI terminology standards
Funding Unknown
United States Est. 2016 Standards
Standards Active
Scope Standards work for A/IS.
Programs Ethically aligned design · P7000 series AI ethics standards · Autonomous systems standards
Funding Unknown
Switzerland Est. 2016 Nonprofit
Governance Active
Scope AI governance and risk work.
Programs AI governance frameworks · Responsible AI toolkit · Global technology governance
Funding Unknown
United States Est. 2016 Nonprofit
Governance Active
Scope Fairness/harms; safety-adjacent.
Programs Algorithmic bias research · Coded Bias documentary · Equitable AI advocacy
Funding Unknown
United States Est. 2014 Nonprofit
Governance Active
Scope AI governance/harms research.
Programs AI and automation research · Media manipulation studies · Labor and technology
Funding Unknown
United Kingdom Est. 1961 Nonprofit
Governance Active
Scope Human rights risks; safety-adjacent.
Programs AI and human rights research · Surveillance technology advocacy · Ban on autonomous weapons
Funding Unknown
United States Est. 1994 Nonprofit
Governance Active
Scope Policy and governance of AI risks.
Programs AI governance policy · Privacy and surveillance · Free expression and AI
Funding Unknown
United States Est. 2020 Resource
Evals Active
Scope Incident tracking; evaluation data.
Programs AI incident tracking · Incident taxonomy development · Safety learning database
Funding Unknown
United States Est. 2000 Academic
Governance Active
Scope Policy work including AI governance.
Programs Internet governance · AI policy research · Digital rights
Funding Unknown
United States Est. 1997 Academic
Governance Active
Scope Research on technology policy and AI governance.
Programs AI governance research · Internet and society · Ethics of AI
Funding Unknown
United Kingdom Est. 2015 Academic
Mixed Active
Scope AI safety interest group page.
Programs AI safety interest group · Data science research · Ethics advisory
Funding Unknown
United Kingdom Est. 2018 Nonprofit
Governance Active
Scope AI ethics & governance org.
Programs AI and society research · Algorithmic accountability · Public deliberation on AI
Funding Unknown
Belgium Est. 2024 Government
Governance Active
Scope EU governance office.
Programs EU AI Act implementation · AI governance coordination · GPAI oversight
Funding Unknown