AI and Biotechnology Risks

Emerging Risks from Artificial Intelligence

Artificial intelligence has the potential to make immense contributions to healthcare, drug discovery, basic science research, agriculture, and more. However, the same capabilities that allow advanced AI systems to accelerate scientific discovery may also be misused by malicious actors to cause harm. To understand these risks, SecureBio is building measurement tools to assess the potential for AI systems to contribute to global catastrophic biological risks, as well as mitigation strategies that can reduce those risks once AI capabilities cross specific thresholds. We are a member of the NIST U.S. AI Safety Institute Consortium.

Measurements: Building Robust Evaluations of Frontier Models

SecureBio is generating benchmarks and evaluations of the biosecurity-relevant capabilities of large language models to understand misuse risks. We aim to assess capabilities that lower the barrier to entry for reconstructing dangerous biological agents, as well as ways in which models could provide scientific insights that enable the development of novel biological threats. Good measurements provide clear assessments of risk, allow interventions to be deployed proactively once model capabilities cross a defined threshold, and can feed directly into model developers' responsible scaling policies.
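As a rough illustration of how such a capability threshold could be operationalized (the benchmark items, exact-match scoring rule, and threshold value below are placeholders for this sketch, not SecureBio's actual benchmarks or policies), a minimal evaluation harness might look like the following:

from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalItem:
    prompt: str     # question probing a biosecurity-relevant capability
    reference: str  # expected answer used for scoring

def run_eval(items: list[EvalItem], model: Callable[[str], str]) -> float:
    # Score the model by exact match against each item's reference answer.
    correct = sum(1 for item in items if model(item.prompt).strip() == item.reference)
    return correct / len(items)

def crosses_risk_threshold(score: float, threshold: float = 0.5) -> bool:
    # A score at or above the threshold would trigger mitigations under a scaling policy.
    return score >= threshold

if __name__ == "__main__":
    # Toy stand-ins: a two-item benchmark and a static "model".
    items = [EvalItem(prompt="placeholder question A", reference="A"),
             EvalItem(prompt="placeholder question B", reference="B")]
    dummy_model = lambda prompt: "A"
    score = run_eval(items, dummy_model)
    print(f"score={score:.2f}, mitigation trigger={crosses_risk_threshold(score)}")

In practice, scoring would rely on graded or model-assisted rubrics rather than exact match, and thresholds would be set as part of a developer's responsible scaling policy rather than hard-coded.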

Mitigations: Reducing Risks When Identified

Once risks are identified through comprehensive measurements, interventions should be deployed to address them. Examples of such mitigations include increased adversarial robustness against dual-use queries and unlearning methods that reduce models' knowledge of highly consequential biological information. SecureBio works with a range of organizations, including model developers and non-profits, to develop and implement such mitigations.
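One of the mitigations named above, unlearning, can be sketched at toy scale. The snippet below is a minimal illustration under assumed conditions: it uses a tiny linear classifier in place of a language model and plain gradient ascent on a "forget" set, which is only the simplest form of the idea and not SecureBio's actual method.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: a tiny model and a batch of examples whose learned mapping we want to remove.
model = nn.Linear(8, 2)
forget_x = torch.randn(16, 8)
forget_y = torch.randint(0, 2, (16,))

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(forget_x), forget_y)
    # Ascend (rather than descend) the loss on the forget set so the model
    # becomes less able to reproduce those input-output pairs.
    (-loss).backward()
    opt.step()

print(f"loss on forget set after unlearning: {loss_fn(model(forget_x), forget_y).item():.3f}")

Real unlearning on frontier models also has to preserve performance on a retain set and be checked for relearning and side effects, which this sketch omits.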