In recent years, Sam Altman’s company has made extraordinary contributions to the progress of artificial intelligence, even before ChatGPT, which is on everyone’s lips because, at least initially, it was offered for free to the general public worldwide. However, Altman’s relationship with ethics has changed.
OpenAI, which gave rise to ChatGPT, was created in 2015 in a room at the Rosewood Hotel in the heart of Silicon Valley by a group of visionary entrepreneurs (which initially even included Elon Musk) convinced of AI’s amazing potential, both for good and for bad. They thus decided to dedicate themselves to researching AI, planning to pursue objectives that would benefit society rather than maximize profits. To avoid a transformation like Google’s, which morphed from the decency of its old slogan “Don’t be evil” to the cutthroat capitalism of Wall Street, OpenAI was born as a philanthropic foundation; a nongovernmental organization meant to ensure “that artificial general intelligence benefits all of humanity.” Those who licensed the software were forbidden to exploit it in ways “that may be used to influence the political process or to campaign” … [or] “for research related to citizens’ access to credit, the labor market, or other essential services.”*
Philanthropy, however, was not enough to support complex research, and the open-source model — gifting the world with elaborate code — limited the company’s horizons. Enter Microsoft: alongside the foundation, a new company with commercial objectives was founded.
Altman has continued to display ethical sensibilities. Last year, when he launched Dall-E 2 — which generates images from text, a revolution in art — Altman imposed restrictions to avoid aberrant uses, from disinformation to pornography. Other, more unscrupulous start-ups have adopted similar software without imposing such limitations, and many artists have preferred those tools, which give them more freedom. To return to pole position, Altman launched ChatGPT, which is extraordinary and problematic from many points of view. Disinformation experts who have tested ChatGPT say it could greatly multiply the conspiracy theories that already infect every corner of the web, producing new and more credible ones. This is yet more proof that self-regulation, even when pursued sincerely and with determination, as in Altman’s case, is not enough.
*Translator’s note: Although accurately translated, this original quoted remark could not be independently verified.