In the parable about the frog in the boiling pot, we’re told the water starts out comfortable for the frog. It stays comfortable just long enough for the frog to relax too much. Then, as the water starts to really heat up, it’s too late for our frog.

Small changes now become significant changes later.

It’s hard to notice the small changes, but because of a self-inflicted misstep at OpenAI, we’ve noticed a slightly bigger change in temperature. To celebrate OpenAI’s launch of GPT-4o, its new voice-driven AI, Altman posted the single-word tweet “her”. GPT-4o features a voice almost identical to that of Scarlett Johansson, who voiced the AI in “Her”.

GPT-4o’s voice was so similar to Ms. Johansson’s that it confused her closest friends. Altman’s post, an unwitting admission of guilt, might go down as one of the most expensive tweets in history. This wasn’t just a misstep on Altman’s part; it illuminates the stranglehold he and OpenAI already have on us. So untouchable did he feel that he could flippantly brag about ripping off somebody’s likeness.

Then, when caught red-handed, Mr. Altman just lied directly to the public:

“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers.” [emphasis mine] - OpenAI Press Release

That line, “it was never intended to resemble hers,” is hard to believe, especially since less than six months ago Altman was gushing about his admiration for the movie “Her”, and in particular how it showcases an interactive AI — the one voiced by Scarlett Johansson:

“The number of things that I think Her got right, that were not obvious at the time, like the whole interaction model with how humans are gonna use an AI—this idea that it is going to be this conversational language interface, that was incredibly prophetic, and certainly more than a little bit inspired us.”

I get it. For legal reasons, he has to say “it was never intended to resemble hers”. But he knows it’s not true. We know it’s not true. Even AI knows it’s not true. Well, not Sam’s AI. Given three articles from Reuters, Forbes, and Vanity Fair, ChatGPT 4 was oddly careful about not indicting its CEO, summarizing its analysis by saying “if one were to express this in probabilistic terms subjectively, one might argue there is a higher probability that Altman was being truthful about the lack of intent to mimic Johansson’s voice directly.”

So guess what? I asked ChatGPT’s friends: Microsoft Copilot, Twitter’s Grok, and Anthropic’s Claude 3. Here’s how they interpreted the same three articles:

  • Grok said: “The probability that Sam Altman was being untruthful about the voice of Sky not being intended to sound like Scarlett Johansson remains at approximately 95%.”
  • Copilot said: “it’s challenging to assign a precise percentage probability. However, based on the available information, skepticism toward Altman’s claim seems warranted.”
  • Claude 3 said: “It seems more likely that Sam Altman may not have been fully truthful when saying the ‘Sky’ voice was ‘never intended to resemble’ Scarlett Johansson’s voice from the movie ‘Her’.”
    • Considering Altman’s clear admiration for “Her”, his belief in its prophetic vision, his alleged attempt to get Johansson herself to voice the AI, and the overt movie references made by him and his team upon “Sky’s” launch, it seems dubious to claim there was never any intention for the voice to resemble Johansson’s.
    • The mounting evidence suggests Altman’s claim of never intending a resemblance is dubious at best. I would now estimate there’s a very high probability (around 90%) that this statement is not fully honest. While they may have used a different actress in the end, the choice of voice and Altman’s own actions strongly indicate an intent to reference and capitalize on the popularity of Johansson’s AI portrayal, without her consent.

When I confronted ChatGPT 4 with this glaring discrepancy, it had to admit that the results “are indeed intriguing” before rationalizing its own interpretation:

“Each model may weigh the significance of certain facts differently. For example, my analysis considers the possibility of unconscious influence and coincidental resemblance without intentional mimicry (which I doubt would be so favorable to someone not named Sam Altman), whereas the other models might focus more on the potential implications of Altman’s admiration for “Her” and his actions surrounding the voice’s public introduction (including most human ‘models’).”

I was initially going to call this essay, “Sam Altman just lied to our faces, but we’re cool with it.” But what if the AI models start lying to us? How are we protected from the people powering the AI language models? How do we know those models are not being manipulated to protect their creators, specifically, the people in their company, like the CEO?

I agree with the sentiment from my friend Kevin Espiritu that “something is fishy about OpenAI.” Kevin posted this on May 15th:

“It’s been obvious to me since November ’23 and all of the bizarre drama that ensued, in cagey interviews given by Sam/etc., and the recent no-info resignations of big players. I’m a dumb gardener, but this is my take.”

If you read OpenAI’s press release, where they try to present their best case in this situation, it looks pretty flimsy. Read it for yourself. Four days after Altman’s “her” tweet came the latest significant resignation from OpenAI: executive and head of alignment Jan Leike announced his departure in this thread.

“I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

His thread is another increase in the temperature of the pot of water we all find ourselves in with AI. Hopefully, these sudden jolts are enough to get us thinking a bit more about how to build a healthier integration with AI.

Companies like OpenAI are in a mad rush, sprinting at full speed to gobble up as much power and influence as fast as possible. The longer they have free rein, the more they can innovate in the short term, and the more we’ll have to live with the consequences of their power grab in the long term.

I’ll leave it to Scarlett Johansson’s statement on the OpenAI situation to convey the message:

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”



About the author

Nick Milo has spent the last 15 years harnessing the power of digital notes to achieve remarkable feats. He's used digital notes as a tool to calm his thoughts and gain a clearer understanding of the world around him.