Artificial Intelligence Runs Outside the Rules


The use of Scarlett Johansson’s voice without her consent is just the latest example, but there are far too many such “gray areas.”

Sam Altman confirmed OpenAI’s dominance by presenting its new “multimodal” artificial intelligence model 24 hours before Google launched its new model, once again relegating Google to runner-up. Just a few hours later, Altman was reduced to correcting, clarifying and apologizing to Scarlett Johansson for making his digital assistant speak with a voice identical to hers. Has the leader driving us toward a fascinating and mysterious world been caught red-handed?

The case is contested (OpenAI says, for example, that it used an unknown actress with a similar voice), but behind it lies something else altogether: the defiance and condemnation coming from the Hollywood star, who twice turned down OpenAI’s request to use her voice, have focused public attention on what experts have been noticing for some time. Altman began his artificial intelligence venture nine years ago as a philanthropic entrepreneur allergic to Google’s profit-driven mindset, promising to be a transparent, collaborative creator of safe, open-source technologies available to all. He has not only given up that idealism (which sometimes rested on wishful thinking) but increasingly resembles Mark Zuckerberg and his bulldozer mantra, “Move fast and break things.”

The Johansson incident shows how carelessly not only Altman but a large part of the tech industry believe they can treat the rights of the content producers whose work is used to train their AI models. Despite Johansson twice refusing permission, OpenAI used a voice that sought to imitate hers. Now the company is beating a retreat, telling The Washington Post that it sought out and hired a different voice actress last summer, before Altman ever approached Johansson, according to the paper, which cites the actress (who prefers to remain anonymous) and her agent (also anonymous). OpenAI is trying to show goodwill (it has suspended use of the incriminating voice), but that is difficult when an overly confident Altman revealed how hard he was pushing the association by posting a one-word message just as the new product launched: “Her,” the title of the 2013 cult film in which Joaquin Phoenix falls in love with an AI voiced by Johansson.

This is not an isolated case. A few days ago, Mira Murati, OpenAI’s chief technology officer, told The Wall Street Journal that she did not know whether content from YouTube, Instagram or other platforms had been used to train Sora, the company’s new generative model that turns text into video. The New York Times jogged her memory with an investigation by five of its journalists: having run out of good-quality text accessible online to train its AI models, OpenAI scraped millions of hours of video from YouTube. A violation of our terms of service, thundered YouTube, which is owned by Google, a company that itself uses the same methods to train its own AI.

Of course, YouTube belongs to Google, but the people who run channels on the platform have rights of their own. As competition among AI makers grows increasingly heated, they are retreating into the gray area of “fair use,” the notion that anything posted online may be used freely as long as the use is not commercial, to grab everything they can.

The issue of copyright violation weighs heavily on the publishing industry and content providers, and this complex area will end up working against the big tech giants themselves, which will find it ever harder to source quality material to feed their generative AI models. This will be a subject for further investigation.

Here we should point out that, since returning to head OpenAI after the failed coup by the ethical wing of the company’s board, Altman has pushed the technology’s development to the maximum, greatly reducing (or entirely removing) the safeguards he had promised to adopt, promises made with the negative consequences of social media’s unregulated growth in mind and in full awareness of AI’s enormous potential.

Ilya Sutskever, a cofounder of OpenAI and a creator of ChatGPT, but also a leader of the plot to oust Altman, whom the plotters accused of betraying the company’s ethical principles, has departed. Other scientists critical of how the company was run left both before and after him, bound to lifelong silence: they had been forced to sign non-disparagement agreements that covered everything, including the very existence of the agreements themselves, on pain of losing a major portion of the vested equity they had earned at the company.

When this extreme practice leaked to the press, Altman appeared surprised: “I did not know this was happening and I should have.” It is yet further confirmation that even Altman has now fully embraced the logic of pushing to the limit without worrying about damage done or rights violated, and of apologizing just in time afterward. Commitments to transparency will have to wait. Meanwhile, the Superalignment team, a group of 20 scientists created 10 months ago to ensure the safety of the new technologies OpenAI develops and offers to the public, is being dismantled.

Altman continues to call for regulation and insists he is determined to steer AI away from dangerous outcomes, but in the end even he has returned to the roots of Silicon Valley’s dominant culture: “It will happen, whether you like it or not,” he said.* Perhaps that is an inevitable springboard to success. But we should stop treating these pioneers as statesmen, even if they are more powerful than many heads of state.

*Translator’s Note: Although accurately translated, this quote could not be independently verified.
