Apple has been working on its own ChatGPT-style AI tool for some time


Comments

  • Reply 21 of 36
mayfly · Posts: 385 · member
    Japhey said:
    mayfly said:
    Japhey said:
    mayfly said:
    timmillea said:
There was a time when Apple always led with new technologies - mostly a deeply unprofitable time. In later years, they work in secret, study what the competition is doing, innovate on top, patent to the hilt, then embarrass the competition. 

My first degree at Durham University, starting in 1992, was 50% in AI and 50% in software engineering. At the time, no one I met outside the university had even heard of artificial intelligence, nor believed in it when I explained what it was. Now AI is on the main broadcast news all the time. Even now, Nick Clegg of Meta was on the airwaves this morning explaining that the current generation of AI is simply predicting the next word or 'token' from big data. Back in 1992, Durham had a huge natural language processing system called LOLITA which was based on deep semantic understanding - an internal, language-independent representation based on semantic graphs. LOLITA read the Wall Street Journal every day and could answer questions on it with intelligence, not parrot-fashion. For my final year project, I worked on the dialogue module, including 'emotion'. Then the LOLITA funding ended, and that was the end of that. Had it been in the US, I can't help feeling that LOLITA would have morphed into one of the top corporations in the world. We don't support genius or foresight in the UK. 

It is truly depressing that 30 years later, the current state of AI is still neural nets trained on mediocre data sets.
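What "simply predicting the next word or 'token'" means can be sketched in a few lines of Python. This is a toy bigram counter standing in for a trained network - the corpus and numbers are made up purely for illustration; real LLMs use neural networks over subword tokens, but the generation loop is the same one-token-at-a-time idea:

    import random
    from collections import Counter, defaultdict

    # Toy "language model": count which word follows which in a tiny corpus,
    # then generate text by repeatedly predicting the next word.
    corpus = "the cat sat on the mat and the cat ate the rat".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1   # bigram counts stand in for learned weights

    text = ["the"]
    for _ in range(8):
        counts = follows[text[-1]]
        if not counts:            # dead end: no observed continuation
            break
        words, weights = zip(*counts.items())
        text.append(random.choices(words, weights=weights)[0])
    print(" ".join(text))

Scaled up to learned weights and a web-sized corpus, that loop is the "big data" prediction Clegg describes.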




But to bemoan the fact that AI hasn't achieved singularity in 30 years shows a lack of understanding of the enormous technical challenges involved. It will take processing power that does not even exist at the scale required at this time. Perhaps quantum computing will be the answer to the advances you're seeking. Decades from now.
Did you study AI and software engineering in college? If you did, well done. But if you didn’t, what makes you think that you know more than someone who did? 

    Also, who said anything about the Singularity?
When I was in college, there was no AI. There was no software. The only computer in Chicago was an IBM 360 mainframe at the Illinois Institute of Technology. That's where I went to college, and where I majored in EE with a computer science minor. The first engineering job I had was at Robert Bosch Corp., developing electronic fuel injection hardware and software. Then in the engineering dept. at Siemens, working on the implementation of integrated circuit technology in their medical devices. Followed by 17 years of self-employment in graphic arts (you could find my name on the original Adobe PageMaker/InDesign and Illustrator teams). Followed by working at Apple until I retired in 2014.

    Other than that, you're right, I'm probably unqualified to opine about the resources necessary to advance AI to pass the Imitation Game.
Well done. Thanks for the CV. 
    Now, what about the Singularity? The only thing I remember Mayfly “bemoaning” is the slow rate of progress in the area in which they specialized. Does this not make them qualified to opine as well? At least, without being told that they don’t understand and without putting words into their mouth? I think maybe it does. 
That was timmillea bemoaning the lack of progress, not me. And the "Imitation Game," the Turing Test, and the "Singularity" are all the same thing: the point where it is impossible to distinguish between an AI and a human in extended conversation. Surprised you didn't know that!
    Alex1N
  • Reply 22 of 36
Japhey · Posts: 1,770 · member
    mayfly said:
[…]
That was timmillea bemoaning the lack of progress, not me. And the "Imitation Game," the Turing Test, and the "Singularity" are all the same thing: the point where it is impossible to distinguish between an AI and a human in extended conversation. Surprised you didn't know that!
Sorry about that typo, I fixed it. 
And I do know about the Turing Test and the Singularity. I’m not sure why that was your conclusion, as it wasn’t even close to my point. 
  • Reply 23 of 36
chasm · Posts: 3,400 · member
    lukei said:
    Seems strange given all of this that Siri remains so poor
    I am so tired of hearing this crap.

    Siri does absolutely everything I ask it to do, accurately, and mistakes or misunderstandings are very rare IME.

I will be the first to say Siri is more limited in WHAT it can do than some other voice assistants. But in helping me organize my life, check my schedule, play my requests for entertainment, create reminders and calendar dates, answer general math questions, or give me directions to where I’m going (which is the bulk of what I ask it), it rarely fails. And it doesn’t have to pry into my private life and online history to do any of this, unlike Amazon and Google.

I’ve taken to observing people who say they have problems with Siri and asking them to demonstrate the problem for me. Upon observation, I note two consistent user-caused issues:

1. They cannot articulate a clear question or request. Instead of “Hey Siri, what is 1125 UK pounds in US dollars?”, which will get you the correct answer EVERY SINGLE TIME, their actual diction is more like “Hey Siri, if I have … wait … what is the UK name for their dollar … what’s 1100 and … wait, let me look it up … okay, what is 1125 UK money in US money?”

Of course, you’re going to get “here’s some information I found on the web” or “I’m sorry, I don’t understand” every time if you talk like that. Pre-forming the question in your mind before you ask it results in a far higher success rate.

2. People ask questions Siri simply isn’t designed to answer. It’s not a trivia machine, and though it can answer some questions like “who was the drummer for the Beatles?”, Apple has made it pretty clear that Siri is intended as more of a productivity tool that ties into other Apple software and services than a toy to amuse you.

    That said, Siri IS in urgent need of better jokes. :)
jas99, pscooter63, igorsky, williamlondon, watto_cobra, Alex1N
  • Reply 24 of 36
eriamjh · Posts: 1,681 · member
    lukei said:
    Seems strange given all of this that Siri remains so poor
Came here to agree. Siri is pretty damn worthless when it comes to actual information. I told her to call my brother today. Did she ask which contact my brother was? Nope. Just said, “I don’t know who that is” and then sat there like a dumb@ss. She knows who my wife is… most of the time. Every now and then she forgets. WTF? If I swear at her, she should at least have better responses than “I don’t know how to respond to that”. It gets worse if you name a contact like “Bob and Jill Smith” and then say “call Bob Smith.” She can’t do it.

    Dumb as a rock. 

I feel like she has not improved since the iPhone 6.

I’d like to point out that Siri isn’t AI. She’s barely a language model. The responses are canned. It’s keyword recognition and nothing more. I played adventure games in the ’90s that seemed more sophisticated.

    Sigh…
byronl, williamlondon, Alex1N
  • Reply 25 of 36
timmillea · Posts: 247 · member
    mayfly said:
    Other than that, you're right, I'm probably unqualified to opine about the resources necessary to advance AI to pass the Imitation Game.
Yes, Alan Turing is considered the father of computer science. He created the first actual computer, shortened WW2 by at least a year, and set the goals and tests for artificial intelligence before he was convicted of homosexuality, chemically castrated, and shunned by society, and ultimately committed suicide.

Even now, the mathematical proof required to demonstrate that something is a 'computer' is whether it can implement the purely theoretical 'Turing Machine' - published decades before any actual computer existed - 'Turing Machine Equivalence' is the necessary proof. In AI, the 'Turing Test' is passed when a human cannot discern whether they are interacting with a human or a computer. 

The adage regarding computers goes, "junk in, junk out," and that is precisely what current machine learning does - it learns from junk. ML is a tiny branch of AI which can, at its very optimum, perform at an average level, but is very unlikely ever to get even to that level. There is so much more to AI that can theoretically do so much better than ML. Let the big corporates exploit rather ancient tech for profit, but they will never produce a Beethoven, a Shakespeare, or a Da Vinci. AI theoretically can, but not with neural nets and big data. 
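To make 'Turing Machine Equivalence' concrete, here is a minimal sketch of a Turing machine in Python. The bit-flipping program is a made-up example, not any canonical machine; the point is that a tape, a head, a state, and a transition table are the entire formalism:

    # Minimal Turing machine: a tape, a head, a state, and a transition table.
    # rules maps (state, symbol) -> (symbol to write, head move, next state).
    rules = {
        ("scan", "0"): ("1", 1, "scan"),   # flip 0 to 1, move right
        ("scan", "1"): ("0", 1, "scan"),   # flip 1 to 0, move right
        ("scan", " "): (" ", 0, "halt"),   # blank cell: stop
    }

    def run(tape: str, state: str = "scan") -> str:
        cells = list(tape) + [" "]         # blank-padded tape
        head = 0
        while state != "halt":
            write, move, state = rules[(state, cells[head])]
            cells[head] = write
            head += move
        return "".join(cells).strip()

    print(run("10110"))                    # prints 01001

Anything that can emulate this table-driven loop, given unbounded tape, clears the equivalence bar described above.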
edited July 2023 · pscooter63, mayfly, williamlondon, watto_cobra, Alex1N
  • Reply 26 of 36
October 4, 2011 is when Siri was done coding and delivered. Twelve years of upgrades, and of listening and learning. Siri knows. We don't talk about Siri much, do we? This is a joke: who talks to Siri more, you or Siri? Siri answers, you ask.
    williamlondon
  • Reply 27 of 36
mayfly · Posts: 385 · member
    timmillea said:
[…]
AI will probably never attain the ability to create great art that connects with people on an emotional level. But inevitably, access to advanced AI models will become as ubiquitous as personal computers are today, and will lead to astonishing benefits for mankind. And inevitably, many will fall into the wrong hands and will most certainly be taught to manipulate financial markets and global weather patterns, disrupt ground, water, and air transportation, disable electric grids, foment civil disorder, create weapons we can't even imagine, topple governments, and eventually collapse societal order writ large.

    Never been happier to be 72 years old and childless. 1984 is going to look like Utopia compared to 2084.
watto_cobra, Alex1N
  • Reply 28 of 36
igorsky · Posts: 761 · member
    eriamjh said:
    lukei said:
    Seems strange given all of this that Siri remains so poor
Came here to agree. Siri is pretty damn worthless when it comes to actual information. I told her to call my brother today. Did she ask which contact my brother was? Nope. Just said, “I don’t know who that is” and then sat there like a dumb@ss. She knows who my wife is… most of the time. Every now and then she forgets. WTF? If I swear at her, she should at least have better responses than “I don’t know how to respond to that”. It gets worse if you name a contact like “Bob and Jill Smith” and then say “call Bob Smith.” She can’t do it.

[…]
Siri has never had an issue with reaching out to family members once I had Siri set up the relationship (e.g., “Hey Siri, X is my sister”).

    Sounds like you’re the problem in this situation. 
williamlondon, watto_cobra
  • Reply 29 of 36
canukstorm · Posts: 2,727 · member
Prediction: The 2024 developers conference will focus on a major AI announcement implemented across the ecosystem, with particular emphasis on integration into Apple Vision Pro and visionOS, allowing the consumer to create 3-D environments and applications.
    AKA "Siri 2.0"
watto_cobra, Alex1N
  • Reply 30 of 36
    mayfly said:
[…]
“you could find my name on the original Adobe PageMaker/InDesign and Illustrator teams”

That’s pretty neat. (The other stuff is, too.) Congrats on retirement! (I’m kinda low-key dreading the financial aspects. But I’ve got another ten years to go.)
mayfly, watto_cobra
  • Reply 31 of 36
MacPro · Posts: 19,778 · member
    lukei said:
    Seems strange given all of this that Siri remains so poor
I used to think that. So many times, Siri couldn't answer a question.

I have spent months using ChatGPT and, more recently, Bard. They 'know' everything. The only problem is that they are so often completely and utterly wrong, even in simple math, that I now appreciate Siri admitting she doesn't know something.

ChatGPT and Bard have a response to almost every one of my replies to their first answer: 'Sorry, you are correct, I got that wrong, I am only an LLM... blah, blah, blah.'

I fear the folks without the brains or education to spot when the answers are wrong are happily accepting the complete drivel these things so often spew.

I hope Apple does better.
edited July 2023 · waveparticle, watto_cobra, Alex1N
  • Reply 32 of 36
danox · Posts: 3,077 · member
    MacPro said:
[…]
    I hope Apple does better.

Apple will do hardware and software like it always does, and in doing both, its path will not be the same as Google's or Microsoft's. But I have a feeling that, in the end, its solution will be more usable to the public at large, because it will combine the best of both hardware and software created in-house.
williamlondon, watto_cobra, Alex1N
  • Reply 33 of 36
avon b7 · Posts: 7,861 · member
    timmillea said:
[…]
    That's a very pessimistic take on ML. 

ML has pushed many areas of science and industry forward in recent years. Yes, it is largely grunt work, but the results are there.

    If you don't want junk out, don't put junk in. 

    https://arxiv.org/abs/2211.02556

Moderna claims that within five years it will have vaccines for some cancers, and within ten years many more. 

    ML plays a big role in hunting down proteins and genes which might be key to developing treatments. 

During the pandemic, ML was used to diagnose Covid infections through analysis of MRI scans in minutes. 

    It is being used all over industry to increase efficiencies. 
edited July 2023 · Alex1N
  • Reply 34 of 36
mayfly · Posts: 385 · member
[…]
“you could find my name on the original Adobe PageMaker/InDesign and Illustrator teams”

That’s pretty neat. (The other stuff is, too.) Congrats on retirement! (I’m kinda low-key dreading the financial aspects. But I’ve got another ten years to go.)
You're getting real close. Max out that 401(k) or 403(b). If offered by your employer, designate it as a Roth 401(k). With a ten-year horizon, you won't have to pay a penny of taxes when you choose to take distributions. If you have to, cut your expenses to where you can max out that retirement plan. You'll be just fine.
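To put rough numbers on that, here is a minimal sketch; the $22,500 yearly contribution and 7% average return are assumptions for illustration, not a forecast or tax advice:

    # Rough Roth 401(k) illustration. The contribution amount and return
    # rate are assumptions for the sketch, not a forecast or tax advice.
    annual_contribution = 22_500   # hypothetical yearly maximum
    annual_return = 0.07           # assumed average market return

    balance = 0.0
    for year in range(10):         # the ten-year horizon mentioned above
        balance = (balance + annual_contribution) * (1 + annual_return)

    print(f"After 10 years: ${balance:,.0f}, tax-free at qualified distribution")

The point is that in a Roth account the entire ending balance, contributions and growth alike, can come out untaxed at qualified distribution.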
edited July 2023 · watto_cobra
  • Reply 35 of 36
    Me: Can you time flies?

    ChatGPT: As an AI language model, I don't have the ability to control time or make it pass faster or slower. I can provide information and answer questions, but I can't manipulate time. Time is a fundamental aspect of the physical world, and it flows continuously without any intervention from me or any other AI. If you have any questions about time or any other topic, feel free to ask!
  • Reply 36 of 36
mac_dog · Posts: 1,073 · member
    igorsky said:
    Sad day for the “Apple is falling behind” crowd. 
    I would think just the opposite. This is gonna stir them into a feeding frenzy. 