
Microsoft AI CEO thinks you agreed to a 'social contract' for AI training

Is everything on the open web fair game for AI training?
Published on June 28, 2024

[Image: Microsoft logo at MWC — Kris Carlon / Android Authority]
TL;DR
  • Microsoft's AI CEO sparked controversy when he likened content on the open Internet to “freeware” for AI training.
  • He suggested that the Internet’s “social contract” allows for the unrestricted use of public content for AI training.
  • The online community reacted strongly, seeing his stance as a misinterpretation of fair use and a disregard for content creators’ rights.

Mustafa Suleyman, CEO of Microsoft AI, recently found himself at the center of a heated debate following a contentious statement made at the Aspen Ideas Festival. He suggested that the Internet essentially functions as “freeware” for training AI models, a claim that has drawn sharp criticism from content creators and general users.

Around the 13-minute mark in this interview, the host raised concerns about AI training using online content, addressing the presence of many authors in the audience and mentioning OpenAI’s use of YouTube video transcripts for training its models.

The interviewer questioned who should own the intellectual property (IP) in such cases and how commercial agreements around them should be structured, hinting that AI companies might be “stealing” the world’s IP.

Here’s Suleyman’s response to the question:

With respect to content that’s already on the open web, the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” that’s been the understanding. There’s a separate category where a website, a publisher, or a news organization has explicitly said do not crawl or scrape me for any other reason than indexing me so that other people can find this content. That’s a grey area, and I think it’s going to work its way through the courts.

Suleyman’s remarks suggest that AI developers can freely use the vast amount of data available online to train their models. This view overlooks the complex legal and ethical questions surrounding content ownership and usage rights. Fair use does allow limited use of copyrighted material for purposes like criticism, teaching, or research. However, ingesting vast amounts of content to develop AI models arguably goes beyond those boundaries, especially when clear commercial motives are involved.

The comment didn’t sit well with the online community, and many X (formerly Twitter) users have since reposted the video with their takes on his views. Prominent figures in the tech industry, such as The Verge’s Tom Warren, questioned Microsoft’s double standard, asking whether the company would be comfortable with its Windows operating system being treated as freeware.

Others, like artist Denman Rooke, highlighted the difference between viewing or downloading art online and using it for commercial purposes without permission, emphasizing that the latter constitutes theft.

The Internet is full of content created by journalists, artists, and many others who rely on their work for income. When AI companies use this content to train their models without permission, they extract value without compensating the original creators. The interviewer compared this to an author referencing other books while writing their own: the author doesn’t pay the referenced authors directly, but they still buy the books or pay library fees.

To this, Suleyman argued that the cost of producing information would soon drop to almost zero because of AI: where creating information has traditionally been expensive, AI models could bring the cost of producing it down to nearly nothing.

For what it’s worth, OpenAI has recently been on a spree of securing content licensing deals with major media houses and online platforms, including Reddit, to use their content for training its GPT models.

This debate underscores the urgent need for clear guidelines and ethical standards in AI, and it raises broader questions about the future of information economics in a rapidly changing technological landscape.

What do you think about this issue? Share your thoughts in the comments below.

Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info, it's your choice.