Analysis & Opinions - Gizmodo
Can We Build Trustworthy AI?
AI isn't transparent, so we should all be preparing for a world where AI is not trustworthy, write two Harvard researchers.
We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.
Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?
For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.
Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI)—and large language models (LLMs) like ChatGPT and GPT-4—one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining. But we’re not there yet.
Want to Read More?
The full text of this publication is available via Gizmodo.
For Academic Citation:
Sanders, Nathan, and Bruce Schneier. “Can We Build Trustworthy AI?” Gizmodo, May 4, 2023.