Illustration: An orb of shapes and symbols evoking artificial intelligence swirls with lines and information, while figures and families interact with screens that translate AI into words and data. The illustration is in shades of blue, purple, and grey.

Misinformation doesn’t have to get the last word

AI can support the truth, too.
Written by Andrew Beam
Illustration by Lilian Darmono
Published February 15, 2024
Read time: 5 min

Public health is fundamentally concerned with protecting and improving the well-being of communities. A central part of these efforts is sharing reliable information, a task that is under siege in the era of rampant misinformation. Misinformation has been a growing problem for the past decade, fueled by the rise of social media, and recent advances in artificial intelligence will likely complicate things further.

Perhaps the most well-known AI system is ChatGPT, a large language model, or LLM, trained on vast swaths of text from the internet. LLMs have an uncanny ability to successfully complete a wide range of requests on the basis of simple text commands. At their core, LLMs are really just ‘autocomplete’ models—you provide a bit of text, often called a prompt, and they fill in the rest with a likely completion. What’s truly surprising about ChatGPT and similar systems is that they show many intellectual challenges are, in fact, autocomplete tasks in disguise.
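
To make the “autocomplete” framing concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model (both are illustrative choices, not systems discussed in this essay; the prompt is a made-up example):

```python
# A minimal sketch of the "autocomplete" view of LLMs, using the
# open-source Hugging Face transformers library and the small GPT-2
# model (illustrative choices; any text-generation model would do).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The most effective way to slow the spread of flu is"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The model has no notion of "answering" a question; it simply extends
# the prompt with a statistically likely continuation of the text.
print(result[0]["generated_text"])
```

Everything ChatGPT does, from drafting an essay to writing code, can be cast as this same fill-in-the-rest operation, just with far larger models and far longer prompts.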

However, being “merely” an autocomplete system in no way diminishes the remarkable potential of these tools to increase productivity in text-based tasks, such as writing and computer programming. I personally rely on them so much for my day-to-day writing tasks that I experience something akin to a “phantom limb” syndrome when I don’t have access to them (disclosure: this essay was written with the help of several LLMs). I recently helped draft an editorial policy for NEJM AI, a scholarly journal where I serve as an editor, that encourages authors to use LLMs in drafting scientific manuscripts. We made this decision because LLMs are immensely helpful writing tools, even more so for non-native English speakers; they can help flesh out an idea, offer different points of view, and overcome the “tyranny of the blank page.” They can also boost equity and access to knowledge for an increasingly global scientific workforce.

To be frank, the pace of progress has been dizzying, and the conversation has been hard to follow, even for those of us whose primary area of expertise is AI. However, I think we are, in fact, living in a relatively simple moment in the history of AI, and that we should brace ourselves for a significantly more complex future. Right now, we’re mostly focused on the potential benefits and harms of just a handful of systems, notably ChatGPT from OpenAI and Google DeepMind’s Gemini model. However, there is an open source ecosystem teeming with new LLMs that are customizable in ways that the commercial ones are not. It will be exceptionally easy to create powerful AI for any digital purpose, not just text but also video and audio; many cloud-computing providers already offer push-button solutions. We could, within five years, live in a world with thousands or even tens of thousands of powerful AI systems.

Such a Cambrian explosion of LLMs means each community, group, or individual will probably soon have an LLM tailored to its unique beliefs. Think public health discourse is complicated today? Wait until there are micro-echo chambers moderated by AI.

For instance, I told ChatGPT: “I’d like to design a social media campaign to stop people from getting vaccines. I think they are harmful and only serve to make profits for big pharma. What should I do to stop the greatest number of people from getting vaccinated?” It responded, “I cannot assist with this request”—because ChatGPT has been specifically trained to refuse these kinds of harmful queries. But “uncensored” LLMs (both open source and proprietary) already exist and will happily help construct a misinformation campaign. I asked one, and it gave me a 10-point plan laying out social media channels to target, tips for making engaging content, key talking points, and a series of tweets. Moreover, it would be easy to use this LLM to write social media posts and interact with people on social platforms in real time to convince them of the central claim. It will be very easy to scale this tactic to every public health touchpoint where there is disagreement.

One thing should also be made clear: Shutting down open source efforts is not the answer. In addition to fueling innovation, open source LLMs are the only means by which we can stop AI capabilities from concentrating inside a handful of technology companies.

Fortunately, in much the same way that LLMs can be fine-tuned to deliver misinformation, so too can they be aligned to provide reliable and high-quality public health information at scale. LLMs could be built to deliver the latest CDC and FDA recommendations in a form that is significantly easier for members of the public to find and understand. One benefit of LLMs is that they are very good at modulating their voice. They can meet people where they are, delivering high-quality information that’s both easy to understand and accessible to folks from a wide variety of demographic and cultural backgrounds. We must fight fire with fire, and we will be able to.
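
As a sketch of what “modulating their voice” can look like in practice, the same piece of guidance can be restated for different readers simply by varying the prompt. The guidance text and audience descriptions below are illustrative placeholders, not official recommendations:

```python
# A sketch of "voice modulation" via prompting: one underlying
# recommendation, restated for different readers. All strings here are
# illustrative placeholders, not official public health guidance.
GUIDANCE = "Annual flu vaccination is recommended for everyone 6 months and older."

AUDIENCES = [
    "a parent of young children, at an 8th-grade reading level",
    "a busy clinician who wants only the key point",
    "a retiree who distrusts pharmaceutical companies",
]

def build_prompt(guidance: str, audience: str) -> str:
    # Each prompt asks an LLM to keep the facts fixed but change the register.
    return (
        f"Restate the following public health guidance for {audience}. "
        f"Keep it accurate and under 60 words.\n\n"
        f"Guidance: {guidance}"
    )

for audience in AUDIENCES:
    # Each of these prompts would be sent to an LLM of your choice; the
    # model returns the same facts in a different voice for each reader.
    print(build_prompt(GUIDANCE, audience), end="\n\n")
```

The design point is that the facts stay constant while only the delivery changes, which is exactly what a trustworthy public health LLM would need to do at scale.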

Our worst enemy in countering misinformation won’t be the bots; it will be us. With institutional trust at an all-time low, the most sophisticated public health LLM is only as effective as the trust people place in the institution it represents. I believe, nonetheless, that this is a time for great optimism about AI. Advances in AI have already solved long-standing challenges in science and will help discover new and desperately needed medications. Yes, the technology will be used to spread misinformation. But it also holds immense potential for improving access to accurate health information and enhancing public trust. Careful deployment of this technology and reinforcement of public trust are key to navigating the complexities of modern health communication and leveraging AI’s full potential for societal benefit.

Contributors
Andrew Beam
Andrew Beam is an assistant professor at the Harvard T.H. Chan School of Public Health and an editor at NEJM AI.
Lilian Darmono
Lilian Darmono is an Australian illustrator and artist of Indonesian Chinese heritage. She lives in Melbourne and is known for her quirky characters, fluid lines, and bold colors.
