Amazon Is Investigating Perplexity Over Claims of Scraping Abuse

Amazon’s cloud division has launched an investigation into Perplexity AI. At issue is whether the AI search startup is violating Amazon Web Services rules by scraping websites that attempted to prevent it from doing so, WIRED has learned.

An AWS spokesperson, who spoke to WIRED on the condition that they would not be named, confirmed the company’s investigation of Perplexity. WIRED had previously found that the startup—which has backing from the Jeff Bezos family fund and Nvidia, and was recently valued at $3 billion—appears to rely on content scraped from websites that had forbidden access through the Robots Exclusion Protocol, a common web standard. While the Robots Exclusion Protocol is not legally binding, terms of service generally are.

The Robots Exclusion Protocol is a decades-old web standard that involves placing a plaintext file (like wired.com/robots.txt) on a domain to indicate which pages should not be accessed by automated bots and crawlers. While companies that use scrapers can choose to ignore this protocol, most have traditionally respected it. The Amazon spokesperson told WIRED that AWS customers must adhere to the robots.txt standard while crawling websites.
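For readers unfamiliar with the mechanism, the check itself is simple: a well-behaved crawler downloads a site’s robots.txt file and consults it before requesting any page. The short Python sketch below, which uses the standard library’s urllib.robotparser, illustrates the idea; the bot name matches the PerplexityBot user agent mentioned later in this story, and the article URL is an illustrative placeholder.

from urllib import robotparser

# A polite crawler fetches the site's robots.txt once, then checks each URL
# against its rules before requesting the page.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.wired.com/robots.txt")
rp.read()  # download and parse the robots.txt rules

# Ask whether a bot identifying itself as "PerplexityBot" may fetch a page.
# (The article URL here is a placeholder, not a real WIRED page.)
allowed = rp.can_fetch("PerplexityBot", "https://www.wired.com/example-article/")
print("allowed" if allowed else "disallowed by robots.txt")

A crawler that skips this check, or ignores its result, can still fetch the page; nothing in the protocol technically prevents it, which is why compliance has always rested on convention and, as AWS notes, on providers’ terms of service.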

“AWS’s terms of service prohibit customers from using our services for any illegal activity, and our customers are responsible for complying with our terms and all applicable laws,” the spokesperson said in a statement.

Scrutiny of Perplexity’s practices follows a June 11 report from Forbes that accused the startup of stealing at least one of its articles. WIRED investigations confirmed the practice and found further evidence of scraping abuse and plagiarism by systems linked to Perplexity’s AI-powered search chatbot. Engineers for Condé Nast, WIRED’s parent company, block Perplexity’s crawler across all of its websites using a robots.txt file. But WIRED found that the company had access to a server using an unpublished IP address—44.221.181.252—that visited Condé Nast properties hundreds of times in the past three months, apparently to scrape their content.

The machine associated with Perplexity appears to be engaged in widespread crawling of news websites that forbid bots from accessing their content. Spokespeople for the Guardian, Forbes, and The New York Times say they have also detected the IP address on their servers multiple times.

WIRED traced the IP address to an Elastic Compute Cloud (EC2) instance, a virtual machine hosted on AWS. Amazon launched its investigation after we asked whether using AWS infrastructure to scrape websites that forbade it violated the company’s terms of service.

Last week, Perplexity CEO Aravind Srinivas responded to WIRED’s investigation first by saying the questions we posed to the company “reflect a deep and fundamental misunderstanding of how Perplexity and the Internet work.” Srinivas then told Fast Company that the secret IP address WIRED observed scraping Condé Nast websites and a test site we created was operated by a third-party company that performs web crawling and indexing services. He declined to name the company, citing a nondisclosure agreement. Asked if he would tell the third party to stop crawling WIRED, Srinivas replied, “It’s complicated.”

Sara Platnick, a Perplexity spokesperson, tells WIRED that the company responded to Amazon’s inquiries on Wednesday and characterized the investigation as standard procedure. Platnick says Perplexity made no changes to its operation in response to Amazon’s concerns.

“Our PerplexityBot—which runs on AWS—respects robots.txt, and we confirmed that Perplexity-controlled services are not crawling in any way that violates AWS Terms of Service,” Platnick says. She adds, however, that PerplexityBot will ignore robots.txt when a user enters a specific URL in their prompt—a use case Platnick describes as “very infrequent.”

“When a user prompts with a specific URL, that doesn’t trigger crawling behavior,” Platnick says. “The agent acts on the user’s behalf to retrieve the URL. It works the same way as if the user went to a page themselves, copied the text of the article, and then pasted it into the system.”

This description of Perplexity’s functionality confirms WIRED’s findings that its chatbot is ignoring robots.txt in certain instances.

Digital Content Next is a trade association for the digital content industry whose members include The New York Times, The Washington Post, and Condé Nast. Last year, the organization shared draft principles for governing generative AI to prevent potential copyright violations. CEO Jason Kint tells WIRED that if the allegations against Perplexity are true, the company is violating many of those principles.

“By default, AI companies should assume they have no right to take and reuse publishers’ content without permission,” Kint says. If Perplexity is skirting terms of service or robots.txt, he adds, “the red alarms should be going off that something improper is going on.”
