The promise and perils of artificial intelligence (AI) are perhaps most starkly demonstrated in the worlds of healthcare and biomedical sciences. In these fields, striking the right balance between innovation and regulation is quite literally a matter of life and death. As scientists, policymakers, and practitioners around the world contemplate using AI to support research, transform the delivery of care, and protect us from the next pandemic, they must also contend with the potential of this technology to compromise privacy, curtail freedoms, and even create new biological threats.
These themes were at the center of “AI, Health, and the Future of Scientific Cooperation,” a seminar convened by Harvard’s Belfer Center on October 20, 2025. Moderated by Dr. Syra Madad, Belfer Fellow and Chief Biopreparedness Officer at New York City Health + Hospitals, the discussion brought together Dr. Derya Unutmaz (Jackson Laboratory for Genomic Medicine), Dr. Elizabeth Cameron (the Pandemic Center at Brown University), and Mr. Edward You (Founder of EHY Consulting LLC and retired FBI Supervisory Special Agent). The panel explored how AI is transforming discovery, biosecurity, and healthcare while surfacing the guardrails, governance choices, and cooperative frameworks that will determine whether this revolution improves global health or deepens existing risks. What follows is a synthesis of the conversation’s core insights and the key questions it raises for researchers, policymakers, and practitioners navigating this rapidly evolving landscape.
Key Point #1: AI is leading to a renaissance in biomedical discovery – Dr. Derya Unutmaz, Professor at the Jackson Laboratory for Genomic Medicine, spoke about the ability of AI to rapidly digest and analyze research data well beyond the capabilities of individual researchers. Drug discovery processes and complex simulations of biological systems that once took hundreds of hours and millions of dollars may soon be compressed by trained large language models (LLMs) into a matter of hours. These models can also suggest improvements to research design, predict results, and define future areas of exploration. While accuracy and verification remain challenges, the number of novel treatments for thousands of conditions is expected to increase exponentially with the advent of AI-enabled research.
Key Point #2: AI can strengthen biosecurity, with the right guardrails – Dr. Elizabeth Cameron, Professor and Senior Advisor to the Pandemic Center at Brown University, focused on the potential of AI to undergird global pandemic prevention and response efforts. Massive streams of data, from wastewater systems to hospitals and even climate surveillance, can be fed into algorithms capable of detecting disease hotspots in real time. This can in turn inform the AI-enabled design of novel vaccines and treatments tailored to each potential pathogen. These exciting developments come with two major caveats, however: (1) AI can only inform a response, not replace the capacity and willingness to act; and (2) without built-in guardrails, these same tools can be leveraged by bad actors to develop new bioweapons or other threats to public health and safety.
Key Point #3: Data is everything – and creates risks of its own – Mr. Edward You, Founder and Principal of EHY Consulting LLC and former FBI Supervisory Special Agent, warned against the rapid, unexamined adoption of AI in the health space without consideration of the attendant risks to data privacy, security, and sovereignty. The companies that create and manage AI tools are ever-hungry for new data to train their algorithms; in our brave new world of biometric tracking, Internet of Things (IoT)-enabled devices, and widespread social media usage, human data of all kinds, down to our personal genomes, is available for their use. Furthermore, the concentration of this data creates vulnerabilities beyond traditional bioterrorism, as cyberattacks and hacking can compromise health records or even halt the production of lifesaving drugs. New paradigms for privacy and security are necessary to ensure that the risks of AI adoption do not outweigh its benefits.
Key Point #4: AI can be a tool of cooperation or competition – Dr. Madad and the panelists spoke about the potential of AI to either enhance cooperation or deepen division in the global health landscape. Issues of equity that came into particular focus during the COVID-19 pandemic, such as unequal sharing of intellectual property and unequal production of vaccines and treatments, could be ameliorated by open-access AI tools that reduce differences in research and manufacturing capacity. However, competition over the development of AI models threatens this cooperation, as governments seek to hoard critical technologies and gain advantage in the AI and biotechnology races. Global data-sharing arrangements also have significant potential to promote health diplomacy, but they may run up against issues of sovereignty and divergent regulatory environments.
The integration of AI into health could produce the next revolution in cancer treatment or the next deadly act of bioterrorism. Lowering barriers to cooperation, installing robust safeguards, and promoting responsible use may mean the difference between one outcome and the other.
Kalwani, Gaurav, and Syra Madad. “AI, Health, and the Future of Scientific Cooperation.” February 2, 2026.