The Download: conspiracy-debunking chatbots, and fact-checking AI

Plus: OpenAI’s new AI model can reason

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Chatbots can persuade people to stop believing in conspiracy theories

The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths.

Now, researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity.

The findings could represent an important step forward in how we engage with and educate people who espouse baseless theories. Read the full story.

—Rhiannon Williams

Google’s new tool lets large language models fact-check their responses

The news: Google is releasing a tool called DataGemma that it hopes will help to reduce problems caused by AI ‘hallucinating’, or making incorrect claims. It uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users. 

What next: If it works as hoped, it could be a real boon for Google’s plan to embed AI deeper into its search engine. But it comes with a host of caveats. Read the full story.

—James O’Donnell

Neuroscientists and architects are using this enormous laboratory to make buildings better

Have you ever found yourself lost in a building that felt impossible to navigate? Thoughtful building design should center on the people who will be using those buildings. But that’s no mean feat.

A design that works for some people might not work for others. People have different minds and bodies, and varying wants and needs. So how can we factor them all in?

To answer that question, neuroscientists and architects are joining forces at an enormous laboratory in East London—one that allows researchers to build simulated worlds. Read the full story.

—Jessica Hamzelou

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has released an AI model with ‘reasoning’ capabilities
OpenAI claims it’s a step toward its broader goal of human-like artificial intelligence. (The Verge)
+ It could prove particularly useful for coders and math tutors. (NYT $)
+ Why does AI being good at math matter? (MIT Technology Review)

2 Microsoft wants to lead the way in climate innovation
While simultaneously selling AI to fossil fuel companies. (The Atlantic $)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

3 The FDA has approved Apple’s AirPods as hearing aids
Just two years after the body first approved over-the-counter aids. (WP $)
+ It could fundamentally shift how people access hearing-enhancing devices. (The Verge)

4 Parents aren’t using Meta’s child safety controls 
So claims Nick Clegg, the company’s global affairs chief. (The Guardian)
+ Many tech execs restrict their own children’s exposure to technology. (The Atlantic $)

5 How AI is turbocharging legal action
Especially when it comes to mass litigation. (FT $)

6 Low-income Americans were targeted by false ads for free cash
Some victims had their health insurance plans changed without their consent. (WSJ $)

7 Inside the stratospheric rise of the ‘medical’ beverage
Promising us everything from glowier skin to increased energy. (Vox)

8 Japan’s police force is heading online
Cybercrime is booming, as criminal activity in the real world drops. (Bloomberg $)

9 AI can replicate your late loved ones’ handwriting ✍️
For some, it’s a touching reminder of someone they loved. (Ars Technica)
+ Technology that lets us “speak” to our dead relatives has arrived. Are we ready? (MIT Technology Review)

10 Crypto creators are resorting to dangerous stunts for attention
Don’t try this at home. (Wired $)

Quote of the day

“You can’t have a conversation with James the AI bot. He’s not going to show up at events.”

—A former reporter for Garden Island, a local newspaper in Hawaii, dismisses the company’s decision to invest in new AI-generated presenters for its website, Wired reports.

The big story

AI hype is built on high test scores. Those tests are flawed.

August 2023

In the past few years, multiple researchers have claimed to show that large language models can pass cognitive tests designed for humans, from working through problems step by step to guessing what other people are thinking.

These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs. But there’s a problem: There’s little agreement on what those results really mean. Read the full story.
 
—William Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s almost time for Chinese mooncake madness to celebrate the Moon Festival! 🥮
+ Pearl the Wonder Horse isn’t just a therapy animal—she’s also an accomplished keyboardist.
+ We love you Peter Dinklage!
+ Money for Nothing sounds even better on a lute.
