
This Week in AI: OpenAI, Condé Nast collab; ‘Swifties for Trump’ campaign sparks debate

How OpenAI’s partnership with Condé Nast aims to offset AI’s impact on media revenue

OpenAI has recently announced a partnership with Condé Nast, the media company that owns brands such as Vogue, The New Yorker, GQ, and others. This collaboration will result in content from Condé Nast publications being integrated into ChatGPT, SearchGPT, and other OpenAI product outputs. This is not the first time OpenAI has inked a deal with a media company; at the end of 2023, OpenAI announced a deal with Axel Springer, the owner of Politico, Business Insider, and other major outlets.

These media deals provide OpenAI with a rich archive of high-quality content that can be used to train and refine its AI models. Media companies like Condé Nast see value in these collaborations because they help offset the declines in web traffic and revenue caused by integrating AI into search engines. With AI tools now embedded in popular search platforms, users often get direct answers to their queries without needing to click through links to articles. This has significantly reduced clicks on media websites, undercutting revenue models that depend heavily on advertising and user engagement.

Although the financial terms of the agreement were not disclosed, Condé Nast’s CEO, Roger Lynch, hinted at the financial benefits, saying:

“Our partnership with OpenAI begins to make up for some of that revenue, allowing us to continue to protect and invest in our journalism and creative endeavors.”

This underscores the growing trend of media companies seeking new revenue streams as they are forced to adapt to a changing digital landscape catalyzed by AI innovation. As long as AI innovation continues, especially in chatbots and AI search functions, media companies—which have historically been the gatekeepers of content—will need to find new ways to stay relevant and profitable in an increasingly AI-driven world.

AI data centers repurpose old power stations amid growing energy demands

AI computing service providers are taking a page from digital asset miners' playbook to solve their scaling problems: they are looking to transform old power stations and defunct industrial manufacturing sites into artificial intelligence data centers. This trend is driven by the substantial computing power required to train and run AI models, which in turn demands significant physical space and a continuous supply of electricity.

Old power stations and warehouses once used for energy-intensive manufacturing operations are top choices for conversion into AI data centers. These locations already have the infrastructure to support high energy consumption, making them prime candidates for housing AI operations.

However, transforming these sites is not as easy as just moving in and flipping a switch. It involves significant upgrades to the existing infrastructure, installation of advanced cooling systems, and ensuring that the power supply can meet the demands of AI workloads.

This approach mirrors the strategies used in digital asset mining, where companies repurpose similar sites to host mining operations. However, the parallels between AI and blockchain infrastructure raise an important question: Why doesn’t AI face the same level of scrutiny over its electricity consumption as blockchain operations?

Blockchain operations, particularly those involving digital asset mining, have been criticized for their high energy usage and environmental impact. Yet AI, despite its equally demanding energy needs, has largely escaped that level of scrutiny. AI operations reportedly consume as much electricity as small countries, and these figures are expected to rise as AI adoption grows. Arguments can be made about what each operation produces in return for the energy it consumes, but regardless, as AI operations continue to expand, the conversation around their environmental impact will likely gain traction if the industry cannot find a more efficient way to power its AI models.

Donald Trump’s ‘Swifties for Trump’ campaign sparks debate over AI-generated content

Former President Donald Trump recently shared images of Taylor Swift fans allegedly supporting his 2024 presidential campaign in what has been dubbed the “Swifties for Trump” movement. However, controversy quickly followed when it was revealed that only two of the images featured real people, while over 15 were AI-generated.

Critics accused Trump of attempting to mislead voters by using AI-generated content. In response, Trump claimed he was unaware that the images were AI-generated and issued a warning about the potential dangers of AI.

“I don’t know anything about them, other than somebody else generated them,” Trump said. “I didn’t generate them… AI is always very dangerous in that way… It’s happening with me too. They’re making — having me speak. I speak perfectly, I mean, absolutely perfectly on AI, and I’m, like, endorsing other products and things. It’s a little bit dangerous out there,” he added.

This incident highlights a growing concern in the age of AI: the use of AI-generated content to influence public opinion. As AI tools become more sophisticated, their ability to create compelling content—whether images, videos, or voices—has increased to the point where it is often difficult to distinguish between what is real and what is a replica. This becomes even more dangerous in the context of elections, where misinformation can have significant consequences.

The use of AI in political campaigns is not entirely new, but its prevalence and impact are growing. AI-generated content can be used to sway public opinion, spread misinformation, and even create deepfakes—realistic but fake videos or audio recordings that can deceive viewers. As the technology continues to evolve, the methods used to regulate and prevent its misuse must keep pace.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Improving logistics, finance with AI & blockchain

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.
