
Over 100 biologists call for tighter controls on infectious disease data amid AI misuse fears

Biologists from top universities worldwide are calling on governments and research funders to impose stricter controls on sensitive infectious disease datasets, warning that unrestricted access could allow artificial intelligence to design deadly viruses and other biological weapons.


The researchers — representing institutions including Johns Hopkins, Oxford, Stanford, Columbia and New York universities — issued their appeal in the scientific journal Science, published by the American Association for the Advancement of Science, under the title “Biological Data Governance in the Age of Artificial Intelligence.”

In their statement, the scientists highlighted the rapid progress being made in developing AI models trained on biological data and integrating them with advanced AI inference systems and intelligent agents.

They warned that while these technological advances are accelerating scientific discovery, they also raise serious biosecurity concerns if misused.

More than 100 researchers worldwide have endorsed the proposed measures, which aim to prevent advanced AI systems from being exploited for malicious purposes, including the development of biological weapons.

The researchers proposed introducing targeted controls on access to a limited category of pathogen-related data that may pose specific risks when combined with powerful AI tools.

They stressed, however, that open access to scientific data has historically benefited science and society and should remain the norm in most areas. Any restrictions, they said, should be carefully limited, regularly reassessed and improved to ensure they do not unnecessarily hinder legitimate research.

According to the appeal, modern biological AI models already allow scientists to design molecules with precision, predict protein structures, analyze genetic mutations and conduct complex life sciences experiments more efficiently.

While these capabilities hold enormous promise for medicine and scientific innovation, the researchers cautioned that increasingly advanced systems could also be used for unintended and dangerous applications.

They noted that current AI models already possess capabilities that could be misused, including designing new viral structures, predicting how pathogens may evolve, generating nucleic acid sequences capable of bypassing safety screening systems, and creating new bacteriophage genomes — specialized viruses that infect and destroy bacteria — that function with improved efficiency in the laboratory.

The scientists expressed concern that developers are releasing increasingly powerful biological AI models without conducting sufficient safety assessments, a practice they said falls short of the safety standards applied elsewhere in the life sciences.

The group argued that governments can reduce biosecurity risks by strengthening privacy policies and introducing regulated access frameworks for sensitive biological data, while maintaining scientific collaboration and innovation.

They expressed confidence that balanced regulation could significantly lower the risk of misuse without slowing scientific progress.

At the same time, they recommended that biosecurity measures be designed carefully to ensure researchers from less wealthy institutions and countries are not excluded from contributing to or benefiting from scientific advancements.

— KUNA

