• Rolando@lemmy.world · 14 days ago

    OpenAI has historically positioned itself as a responsible actor trying to transcend mere commercial incentives

    Remember when Google positioned itself in the same way? See how that turned out?

  • AutoTL;DR@lemmings.world (bot) · 15 days ago

    This is the best summary I could come up with:


    Editor’s note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman’s tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

    “Her,” tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

    But the product release of GPT-4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company’s co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, along with that of his co-team leader Jan Leike (whom we put on the Future Perfect 50 list last year).

    Sutskever publicly regretted his actions and backed Altman’s return, but he’s been mostly absent from the company since, even as other members of OpenAI’s policy, alignment, and safety teams have departed.

    All of this is highly ironic for a company that initially advertised itself as OpenAI — that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner.

    “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” a recruitment page for Leike and Sutskever’s team at OpenAI states.


    The original article contains 1,615 words; the summary contains 213 words. Saved 87%. I’m a bot and I’m open source!