Jony Ive and Laurene Powell Jobs say 'humanity deserves better' from technology

Posted in General Discussion

Following the acquisition of Jony Ive's AI startup by OpenAI, he and investor Laurene Powell Jobs discuss their collaboration, and how technology has damaged society as well as benefited it.

Jony Ive (center) and Laurene Powell Jobs (right) in 2022 -- image credit: Recode



Jony Ive and Sam Altman's startup "io" was acquired by OpenAI in May 2025, and is working on a new AI device. Before the acquisition, Laurene Powell Jobs invested in "io," and in a new Financial Times interview with both of them, Ive reveals that she has been crucial to his post-Apple work, including the formation of his own LoveFrom company.

"If it wasn't for Laurene," said Jony Ive, "there wouldn't be LoveFrom."

As well as recounting how they first met in the 1990s at the home of Powell Jobs and Steve Jobs, both discussed how technology has not always been a force for good.

"We now know, unambiguously, that there are dark uses for certain types of technology," said Powell Jobs. "You can only look at the studies being done on teenage girls and on anxiety in young people, and the rise of mental health needs, to understand that we've gone sideways."

"Certainly, technology wasn't designed to have that result," she continued. "But that is the sideways result."

Ive echoed the point that technology is usually developed with positive aims, yet can go wrong or be misused. As the designer of the world-changing iPhone, he includes his own work in that assessment.

"If you make something new, if you innovate, there will be consequences unforeseen, and some will be wonderful and some will be harmful," he said.

"While some of the less positive consequences were unintentional, I still feel responsibility," continued Ive. "And the manifestation of that is a determination to try and be useful."

Neither Ive nor Powell Jobs would be drawn on details of what the new OpenAI device will be, but Powell Jobs says she is following its development closely. "Just watching something brand new be manifested, it's a wondrous thing to behold."

The full interview also covers Ive's personal investment in redeveloping parts of San Francisco, and Powell Jobs's rescue of the San Francisco Art Institute from bankruptcy, as well as the long-running friendship between the two.

"It's funny... as I've got older, to me, it's [about] who, not what," said Ive. "The very few precious relationships become so increasingly valuable, don't they?"

Separately, Ive and Powell Jobs -- together with Tim Cook -- launched the Steve Jobs Archive in 2022.




Comments

  • Reply 1 of 12
    mattinoz Posts: 2,647, member
    It could start by delivering on existing promises before getting distracted by the next shiny thing, which it will also drop just as it needs a polish.
  • Reply 2 of 12
    thedba Posts: 844, member
    Honestly, does anyone have a clue as to what this new thing is?
    AI device? Why didn't you say so? Put me down for two. /s
  • Reply 3 of 12
    I dunno... It looks like a show driven by a group with inferiority complexes who tried so hard, but were never recognized in public because Steve Jobs was just too good to be true.

    Yeah... "Humanity deserves better", but with Sam Altman? With OpenAI, which is not open source?? Data collection for better humanity??

    This is a s*it show. 
    edited June 2
  • Reply 4 of 12
    Bottom line: unless AI programs are only using public domain material and/or material with the appropriate rights permissions secured then it's nothing more than theft. The LLMs themselves have no value whatsoever without the database used for training. It's basically a gigantic torrent with some techno jargon slathered on top as a smokescreen. 
    edited June 2
  • Reply 5 of 12
    Bottom line: unless AI programs are only using public domain material and/or material with the appropriate rights permissions secured then it's nothing more than theft. The LLMs themselves have no value whatsoever without the database used for training. It's basically a gigantic torrent with some techno jargon slathered on top as a smokescreen. 
    Does Apple have this database used for training? (Serious question).
  • Reply 6 of 12
    AppleZulu Posts: 2,456, member
    Bottom line: unless AI programs are only using public domain material and/or material with the appropriate rights permissions secured then it's nothing more than theft. The LLMs themselves have no value whatsoever without the database used for training. It's basically a gigantic torrent with some techno jargon slathered on top as a smokescreen. 
    Does Apple have this database used for training? (Serious question).
    Undoubtedly they do. 

    Here's the thing. Current LLM AI appears to be combining two relatively recent technological developments to essentially do what Eliza did a half century ago. Computational speed combined with terabytes of data is used to brute-force probabilistic mimicry of human language and thought. What's got everyone fooled is the fact that the mimicry is pretty good. What comes out of that process is something that looks like something a human thought and said or wrote. There is, however, no thought. It's an illusion. It is a much larger database doing Eliza's trick of coyly responding with "How does [regurgitate user input] make you feel?"

    The tell of course is the fact that, despite the use of all the internet's data and massive computational power, these things still produce output that includes "hallucinations" that an average human with nominal subject-matter knowledge can spot.

    Look at it another way. A reasonably intelligent high school student, trained on a tiny fraction of the data used to train LLM AI, and with access to an equally tiny fraction of subject-matter data, is perfectly capable of writing a better, more accurate research essay than these massive AI models can create. It'll take the human more time, but with very little experience and only a few (compared to LLM training) very low-bandwidth lessons from their teachers, the human kid is capable of sorting good sources of information from bad, and writing a simple research paper with no made-up content and no plagiarism. 

    Likewise, in creative arts like writing and musical composition and performance, most of the best start producing original and inspiring work in their teens and early twenties, when their "training data" is, once again, a tiny fraction of what's used for AI. While lots of young people show their influences and create derivative work, those who advance to originality are not unicorns by any stretch. Meanwhile, while violating IP rights consuming all the world's data, AI still can't produce original work, only sliced and diced collages of the training data. 

    Until such point that AI can be trained on relatively small amounts of data, you can be confident that it's all just advanced mimicry. It's not evident that Apple is doing that, and it's abundantly clear that the other well-known AI projects (including whatever it is Ive and Altman are up to) are still just brute-force mimicry. There's a utility and place for that, but none of it is really the paradigm shift or the impending robot overlords that the hype says is already here.
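    The Eliza trick described above, reflecting the user's own words back as a question, can be sketched in a few lines of Python. This is a toy illustration with a handful of made-up rules, not Weizenbaum's original DOCTOR script:

```python
import re

# Toy ELIZA-style responder: pattern-match the input and reflect it back.
# The reflection table and rules here are illustrative, not the real script.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    # Rule: "I feel X" / "I am X" -> reflect X back as a question.
    match = re.search(r"\bi (?:feel|am) (.+)", user_input, re.IGNORECASE)
    if match:
        return f"How does being {reflect(match.group(1))} make you feel?"
    # Fallback: regurgitate the whole input as a question.
    return f"Why do you say '{user_input.strip()}'?"

print(respond("I am anxious about AI"))
# → How does being anxious about ai make you feel?
```

    No understanding is involved at any point; the program only matches surface patterns and echoes them back, which is the commenter's analogy for what scaled-up statistical mimicry does.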
  • Reply 7 of 12
    michelb76 Posts: 748, member
    We definitely do deserve better, but first we need to survive the far-right dystopian era.
  • Reply 8 of 12
    Referring to the drastically increased anxiety, depression, suicide, and mental instability in our young people as "sideways" would be laughable if it weren't so sickeningly out of touch.
  • Reply 9 of 12
    Wesley_Hilliard Posts: 522, member, administrator, moderator, editor
    Bottom line: unless AI programs are only using public domain material and/or material with the appropriate rights permissions secured then it's nothing more than theft. The LLMs themselves have no value whatsoever without the database used for training. It's basically a gigantic torrent with some techno jargon slathered on top as a smokescreen. 
    Does Apple have this database used for training? (Serious question).
    So, it's complicated. While companies like OpenAI, Perplexity, Google, and others scraped every ounce of data they could conceivably touch or (at least in Meta's case) steal, Apple seems to have taken the best route we could ask for.

    Apple did scrape the "open" web, which means anything published without paywall, trademark, or copyright. That's still unfortunate because it still reduces the works of humanity to a pile of churn for an AI, essentially devaluing it, but in order to compete in this space, it was necessary. The only other option was not doing AI as we know it.

    It seems Apple's focus on the open web makes it the most conservative of the AI groups, and arguably the most ethical, though it can be argued it shouldn't have bothered at all and avoided the conundrum entirely. However, Apple went a step further and spent piles of money on licensing resources.

    So, Apple's AI data gathering and training base is a combination of public data and paid content. As far as I can tell, they're the only company that went so far out of its way to try and build its AI ethically.
  • Reply 10 of 12
    JamesCude Posts: 106, member
    So the solution is more devices? SMH
  • Reply 11 of 12
    entropys Posts: 4,450, member
    AI = new marketing term for machine learning. Marketing. 
    It is not capable of thinking for itself. This discussion is the kind of blowhard pontificating that would have been done by a bunch of rich old boys at the club over a few glasses of port.
    Now they do it on a studio floor. And sadly, without the port.
  • Reply 12 of 12
    macgui Posts: 2,597, member
    It's possible to know enough about a subject to think you're right, but not enough about the subject to know that you're wrong.

    So much said with authority, while possessing little to no clue about something that hasn't been announced.
