Apple is reportedly not investing in OpenAI

Posted:
in iPhone

Apple is said to have dropped out of a new funding round for OpenAI at the last minute, but this will have no impact on its plans to integrate optional ChatGPT queries into Apple Intelligence.

The OpenAI company logo, resembling an interwoven knot on a green background.
Apple has decided not to invest money into OpenAI at present.



OpenAI, the company behind ChatGPT, is closing a funding round that is expected to raise $6.5 billion, and which Apple was previously expected to help fund. Fellow tech giants Microsoft and Nvidia are among those expected to participate in the new funding drive. Microsoft is expected to add another $1 billion to the $13 billion it has already invested.

The report isn't clear as to why Apple isn't investing in the funding round.

While rare, it's not unheard of for Apple to invest in other tech firms and promising startups. The company set up its $5 billion Advanced Manufacturing Fund to invest in companies that help provide it with new technology, such as optical technology firm II-VI.

Another example is Finisar, a US-based company that provides some of the technology behind Face ID and Portrait Mode, and which II-VI later acquired outright. Apple has also invested in Globalstar, which provides the infrastructure that makes Emergency SOS via satellite possible.

Changes at OpenAI



The pullout by Apple could be related to the recent move by OpenAI to abandon its nonprofit status and become a for-profit company. The changeover will be complicated, and if OpenAI doesn't complete the transition within two years, investors in the current round may ask to have their investment money returned.

Apple's reported withdrawal from the funding round is not expected to have any effect on the company's current relationship with OpenAI. ChatGPT will continue to be an optional feature in Apple Intelligence as it rolls out across late 2024 and early 2025.

As Apple has noted, users will have the option of having queries answered by ChatGPT if the nature of the request is beyond Siri's knowledge base. The integration will allow users to access ChatGPT knowledge without having to create an account, though existing subscribers can integrate their paid features within those experiences.




Comments

  • Reply 1 of 15
    Pema Posts: 118 member
    No surprise there. Apple already invested a veritable fortune in Apple Car, then decided that it was not a profitable venture and pulled out. Look at what happened next: the Chinese flooded the market with EVs: Haval, BYD and whatever other brands. That made the European, Japanese and American EV makers pull back their plans to release EVs. Instead they are focusing their manufacturing on hybrids and PHEVs. Why? Main reason: buyers are reluctant to purchase an EV due to range anxiety from insufficient charging infrastructure. The EV infrastructure has to come up to snuff to match ICE. It has taken the latter 100 years to have servos at nearly every corner. You drive up, gas up and 10 minutes later you are on your merry way. With an EV, even if you can find a charging station you are going to have to wait in the queue. Next you will need to cool your heels for at least an hour while your battery is charging. The EV thing is one prime example of putting the cart before the horse.

    Then Apple poured billions into a mixed reality headset that has no mass market appeal. Whoops. It's not even a price issue. The Apple Vision Pro is the best out there but like the WSJ said, it's a beautiful device but no one wants it. 

    And the next crazy thing is AI. Like the Y2K hysteria. Much ado about nothing. Airplane hangars full of old discarded data being mined by algorithms to generate useful data. Haven't we done this nonsense before under a different name: Data Warehousing?

    To have true AI you will need to mine real-time data, not old junk.

    Apple wisely decided that the billions wasted on Apple Car and Vision Pro were enough Google-style moonshots. Apple is focused on making money. Lots of it. So the next Apple venture will be chosen for its money-making potential, not as another moonshot.
  • Reply 2 of 15
    jdw Posts: 1,418 member
    As I've mentioned under other articles in the past, my experience with ChatGPT-4o isn't that great. I would like to use it to check multiple online sources quickly, in the hope it can Google faster than I can on my own. And it is fast. But the problem is, it lies a lot. So I always ask it for sources. Then it gives me stupid links that, when clicked on, open nothing. So I then have to ask it for plain text URLs. It complies, but none of them ever work. EVER! They lead to the expected domain, but they always result in a 404 file not found. ALWAYS! I then complain to ChatGPT, saying it needs to read the articles it links for me to ensure the article truly exists and exists at the plain text URL it will give me. It apologizes and seemingly complies, but it continues to give me more bogus URLs. I have repeated that cycle multiple times in a row, until my free session with GPT-4o expires. It never learns from its mistakes. It never gets it right. I've been using it for months, and it hasn't improved at all in that regard. So I mostly find it useless. And this experience remains valid even if some GPT lover comes along and raves about how well it summarizes text. Fine and well, but it still lies and gives bogus URLs to its source info.

    This is why I won't shed many tears when and if OpenAI finally goes under.  There was so much promise with their creation.  But they've not done anything I can see to show it's worthy of sticking around when the funds run dry.  Let a better company come along and do an actual good job on AI for once.  Whether that can be Apple or not is yet to be seen.  Apple did come out with Apple Maps despite the global love for Google Maps, so you never know.  They may release their own ChatGPT style AI chatbot one day, with true intelligence that doesn't lie and gives working URLs.
  • Reply 3 of 15
    Marvin Posts: 15,446 moderator
    jdw said:
    As I've mentioned under other articles in the past, my experience with ChatGPT-4o isn't that great. I would like to use it to check multiple online sources quickly, in the hope it can Google faster than I can on my own. And it is fast. But the problem is, it lies a lot. So I always ask it for sources. Then it gives me stupid links that, when clicked on, open nothing. So I then have to ask it for plain text URLs. It complies, but none of them ever work. EVER! They lead to the expected domain, but they always result in a 404 file not found. ALWAYS! I then complain to ChatGPT, saying it needs to read the articles it links for me to ensure the article truly exists and exists at the plain text URL it will give me. It apologizes and seemingly complies, but it continues to give me more bogus URLs. I have repeated that cycle multiple times in a row, until my free session with GPT-4o expires. It never learns from its mistakes. It never gets it right. I've been using it for months, and it hasn't improved at all in that regard. So I mostly find it useless. And this experience remains valid even if some GPT lover comes along and raves about how well it summarizes text. Fine and well, but it still lies and gives bogus URLs to its source info.
    I find it gives very good answers for technical questions that have a correct answer that would be difficult to find online but it does make mistakes.

    DuckDuckGo has it integrated now:


    It offers GPT-4o, Claude 3, Llama 3 and Mistral. Try the other models to see how they compare.

    Very subjective questions, such as political, social, and moral ones, will have subjective answers depending on the training data.

    Getting access to high-quality training data is going to be the biggest challenge for AI models. It needs a trust/authority model to weight the answers. Medical answers should give more weight to peer-reviewed medical texts than to random Reddit/YouTube commenters.
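
    Roughly, such a trust/authority weighting could look like the following sketch. The source tiers, scores, and example answers are made-up illustrations for this comment, not taken from any real model:

```python
# Illustrative only: weight candidate answers by the authority of their source.
# The tiers and scores below are invented for this example.
AUTHORITY = {
    "peer_reviewed_journal": 1.0,
    "textbook": 0.8,
    "news_site": 0.5,
    "forum_comment": 0.2,  # e.g. random Reddit/YouTube commenters
}

def weighted_answer(candidates):
    """candidates: list of (answer_text, source_type, model_confidence) tuples."""
    best_text, best_score = None, float("-inf")
    for text, source_type, confidence in candidates:
        score = AUTHORITY.get(source_type, 0.1) * confidence
        if score > best_score:
            best_text, best_score = text, score
    return best_text, best_score

answers = [
    ("Dosage X is fine for everyone", "forum_comment", 0.9),
    ("Dosage X needs renal adjustment", "peer_reviewed_journal", 0.7),
]
# The peer-reviewed claim wins despite the lower raw confidence.
print(weighted_answer(answers))
```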

    It's important to remember that the models are not continually trained; they are snapshots. You can ask a model directly when it was trained. GPT-4o answers up to October 2021, so it doesn't know about the past 3 years. Some of its online sources will have expired since it was trained. The newer upcoming models have been trained after 2021 with more computing power:


    They now have metrics for how the AI compares to human baseline performance; future models will keep trying to outperform these baselines in different areas:


    It's easier for AI to excel at deterministic problems. Non-deterministic problems need huge amounts of high quality data.

    The processing power they are using on the servers is increasing significantly every year, and the models will improve substantially when they are updated.



    Some people won't be impressed with AI models until they reach AGI level; there are people projecting this before 2030.
  • Reply 4 of 15
    danox Posts: 3,294 member
    It’s getting better every day through iteration and hard work by a lot of people. Most people, however, seem to be disinterested when they don’t get an instantaneous answer or solution; most seem to want the lottery, bitcoin, crypto solution, where it must be now and it must be instantaneous or it’s no good. Eyes usually glaze over when you say it’s gonna take time and that the only way to get there (personal finance) is through compounding your investment over a period of time. AI is currently in the same boat.
  • Reply 5 of 15
    Maybe Apple poached enough OpenAI engineers to the point they don’t need to waste any more money on OpenAI.
  • Reply 6 of 15
    Maybe Apple poached enough OpenAI engineers to the point they don’t need to waste any more money on OpenAI.
    Or maybe Apple realized it makes more sense to spend billions on M4 Ultras to train their own models than to line the pockets of OpenAI VCs.
  • Reply 7 of 15
    danox Posts: 3,294 member
    blastdoor said:
    Maybe Apple poached enough OpenAI engineers to the point they don’t need to waste any more money on OpenAI.
    Or maybe Apple realized it makes more sense to spend billions on M4 Ultras to train their own models than to line the pockets of OpenAI VCs.
    Apple over the years has been very frugal with the way it spends money, unlike some of its competition such as Microsoft and Google. Hopefully, using their own in-house servers will lead Apple towards offering and supporting a more timely release of higher-end desktops.
  • Reply 8 of 15
    chasm Posts: 3,525 member
    The "Harry Potter" models are interesting, but I don't quite understand why "Harry" always looks freshly beaten up in the graphic provided.
  • Reply 9 of 15
    gatorguy Posts: 24,605 member
    chasm said:
    The "Harry Potter" models are interesting, but I don't quite understand why "Harry" always looks freshly beaten up in the graphic provided.
    Voldemort?
  • Reply 10 of 15
    tht Posts: 5,619 member
    chasm said:
    The "Harry Potter" models are interesting, but I don't quite understand why "Harry" always looks freshly beaten up in the graphic provided.
    My initial thought was that it just trended towards a cartoon Daniel Radcliffe at 20 years old. I never read the books. Does Radcliffe look like Harry Potter?
  • Reply 11 of 15
    gatorguy Posts: 24,605 member
    Oh, so this is new.

    Apple will be partnering with both OpenAI and Google to empower Apple Intelligence Visual Search, a feature very similar to Google Lens.
    Coming later next year. 
  • Reply 12 of 15
    Marvin Posts: 15,446 moderator
    chasm said:
    The "Harry Potter" models are interesting, but I don't quite understand why "Harry" always looks freshly beaten up in the graphic provided.
    The AI model training works like search engines, where they scan images and then add labels. This is why high-quality training data is needed for good models. The marketing material for Harry Potter shows images like this, where he is injured:


    The AI does face recognition on images like this to find that it contains an image pattern for Harry Potter, and stores the pattern. When someone asks for Harry Potter, it uses this pattern as a source. It may not understand that this pattern contains injuries, but AI models will eventually be trained to understand this. For now, people have to use negative prompts to tune image output, requesting that the output doesn't match certain patterns.
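
    For what it's worth, here is a minimal sketch of the negative-prompt idea using the Hugging Face diffusers library. The model ID, prompts, and GPU assumption are just illustrative examples; this isn't how any particular commercial image model is implemented:

```python
# Sketch: steer an image model away from unwanted patterns with a negative prompt.
# Assumes `pip install diffusers transformers torch` and a CUDA GPU; the model id
# and prompts are examples only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait of a young wizard with round glasses, studio lighting",
    negative_prompt="bruises, cuts, blood, injuries, dirt on face",  # patterns to avoid
    num_inference_steps=30,
).images[0]

image.save("wizard.png")
```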
  • Reply 13 of 15
    jdw Posts: 1,418 member
    Marvin said:
    I find it gives very good answers for technical questions that have a correct answer that would be difficult to find online but it does make mistakes.

    DuckDuckGo has it integrated now:


    It offers GPT-4o, Claude 3, Llama 3 and Mistral. Try the other models to see how they compare.

    Some people won't be impressed with AI models until they reach AGI level; there are people projecting this before 2030.
    While I do appreciate the DuckDuckGo recommendation, everything else you said was rather dismissive of what I wrote and basically just preaches "wait and see." Sorry, but I live in the here and now.

    My past experience with Bing's integration of ChatGPT has been mixed. Yesterday, when I told it to give me a summary of 10 paragraphs or less of the new book by Melania Trump, it responded stupidly about not being able to comply with any request pertaining to politics. When I said the book didn't have to do with that, it regurgitated the same text. I went back and forth with it until it suggested we end that topic. Infuriating!

    But for the sake of doing something new, I followed your link and tested all 4 chatbots using this text:

    Give me a plain text URL to a vintage news article from the 1980's that talks about Apple releasing the Macintosh 128K (which was released in 1984).

    Copy/Paste that line of text yourself in your chosen chatbot on DuckDuckGo and you'll find that two of the bots give the same URL to the New York Times, to a web page that doesn't exist.  Another said it couldn't help me. Another one told me to find it myself.  Crazy!

    All said, it doesn't matter to me if the current broken AI functionality gets fixed by 2030.  I want it to do BASIC STUFF today.  But it can't even do that.  And before anybody stupidly chastises me for having asked about a "vintage" news article, I've asked about many different things that are current or from a few years ago, and all the URLs provided to me lead to the correct domain, but to a non-existent web page.  That was the entire point of my previous post, and that point remains loud and clear.

  • Reply 14 of 15
    Marvin Posts: 15,446 moderator
    jdw said:
    Marvin said:
    I find it gives very good answers for technical questions that have a correct answer that would be difficult to find online but it does make mistakes.

    DuckDuckGo has it integrated now:


    It offers GPT-4o, Claude 3, Llama 3 and Mistral. Try the other models to see how they compare.

    Some people won't be impressed with AI models until they reach AGI level; there are people projecting this before 2030.
    Yesterday, when I told it to give me a summary of 10 paragraphs or less of the new book by Melania Trump, it responded stupidly about not being able to comply with any request pertaining to politics. 

    Many companies are censoring their AI chatbots. I imagine you'd get the most appropriate responses for your particular questions from Elon Musk's Grok chatbot, which is available on premium x.com accounts.


    However, if a book came out in 2024, it's not likely that the AI chatbots are trained on it yet. Also, given the lawsuits about training AI on copyrighted text, I expect many training systems will avoid using copyrighted text, except where it provides a measurable gain in quality.

    jdw said:

    Copy/Paste that line of text yourself in your chosen chatbot on DuckDuckGo and you'll find that two of the bots give the same URL to the New York Times, to a web page that doesn't exist.  Another said it couldn't help me. Another one told me to find it myself.  Crazy!

    AI chatbots aren't search engines, they are knowledge engines. If you ask them if they can search the web, they will reply that they can't and many haven't been trained on a web index.

    A search engine is for finding URLs and information on recent events; an AI model uses the information it was trained on to provide informative responses to questions.
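
    As a rough illustration of that split, a sketch like the one below keeps the two jobs separate: a search step finds current URLs, and the model only summarizes what the search returned. Here search_web is a hypothetical placeholder, and the OpenAI client call assumes the official Python package and an API key are set up:

```python
# Sketch of "search engine for URLs, language model for the explanation".
# search_web() is a hypothetical placeholder, not a real library call.
from openai import OpenAI

def search_web(query):
    """Placeholder: call your search API of choice, return (title, url, snippet) tuples."""
    raise NotImplementedError("plug in a real search backend here")

def answer_with_sources(question):
    results = search_web(question)  # search engine step: fresh, real URLs
    context = "\n".join(f"{title} ({url}): {snippet}" for title, url, snippet in results)
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided sources and cite their URLs."},
            {"role": "user", "content": f"{question}\n\nSources:\n{context}"},
        ],
    )
    return reply.choices[0].message.content
```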

    If you ask it for sources for the reception of the Macintosh 128K, it says:

    "Books:
    "Revolution in The Valley" by Andy Hertzfeld
    "Steve Jobs" by Walter Isaacson
    Publications:
    Early reviews from technology magazines like Byte and Macworld from the 1980s
    For specific citations, you may want to look up these sources or similar materials that cover the history of the Macintosh and its impact on the tech industry."

    The AI tells you to look these up using search engines, catalogs in libraries, digital archives and technology websites.
  • Reply 15 of 15
    jdw Posts: 1,418 member
    Marvin said:

    AI chatbots aren't search engines, they are knowledge engines. If you ask them if they can search the web, they will reply that they can't and many haven't been trained on a web index.
    Therein lies the crux. Knowledge is ever-changing. AI tech cannot rightfully claim genuinely useful "Intelligence" when only trained on a static model. You need to stay current with the latest information, and the way that is achieved today is largely through online search engines, online libraries, databases, etc.

    With that said, I do use ChatGPT-4o directly each day, and I know from experience that it does in fact search the web. I realize that other chatbots do not, but ChatGPT-4o (the key being "4o") does. It has told me it would prefer if I tell it more specifically with "search the web for..." or "browse the web for..." commands. Even if I do everything perfectly, though, and even though I see it thinking out loud with "searching the web...", it always returns a URL that yields a 404 (right domain, but missing article, even for things published online YEARS ago). So this is a real-world flaw that I am calling out here, and that stands true even if others come along and say they use ChatGPT-4o differently. A flaw is a flaw that ultimately needs to be fixed.
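
    For anyone who wants to sanity-check the links a chatbot hands back, here's a quick, purely illustrative sketch using the Python requests library (the URL at the bottom is a placeholder, not a real citation):

```python
# Quick sketch: check whether chatbot-supplied URLs actually resolve.
import requests

def check_urls(urls, timeout=10):
    for url in urls:
        try:
            # HEAD is cheap; fall back to GET if the server rejects it.
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code in (403, 405):
                resp = requests.get(url, allow_redirects=True, timeout=timeout)
            status = resp.status_code
        except requests.RequestException as exc:
            status = f"error: {exc}"
        print(f"{status}\t{url}")

check_urls([
    "https://www.nytimes.com/some/path/that/may/not/exist",  # placeholder example
])
```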

    I really do hope AI chatbots improve over time. And I hope they do that quickly, not at the sluggish pace of "improvements" we've seen with Siri. My fear is that if these companies take their sweet time, they may run dry of cash and go out of business, taking their AI tech down with them.