Apple's study proves that LLM-based AI models are flawed because they cannot reason


Comments

  • Reply 41 of 43
    JohnC1959 said:
    hexclock said:
    Of course they can’t reason. It’s not a living mind. It’s the illusion of intelligence. 
    I’m legitimately interested in hearing the definition of intelligence from anyone who will offer it.
    How about a very basic definition: giving correct answers to the questions asked.
    Which would knock out 99% of humans, depending on the questions and the people involved. So that's a terrible definition: if human beings can't manage it, why expect a machine that mimics human functions to do so? The underlying precepts are simply bad. We're not talking about adding machines and ballistic plotters here, things that have identifiable and agreed-upon constants in a layer of reality easily perceived by human beings. 
  • Reply 42 of 43
    We are at least a decade, and more likely two, away from AI reaching the basic level of college-educated reasoning. All this panic about AI taking jobs in the near term is nonsense. 
  • Reply 43 of 43
    ChatGPT and other "large language models" are not "artificial intelligence"; they are "imitation intelligence". That is, they try to imitate what they were trained on within the constraints of the "query" they are presented with.

    Knowledge and intelligence are orthogonal. That is, you can be intelligent (smart) but uneducated, or conversely you can be well educated but not "smart". I think current large language models should be considered a form of knowledge representation whose presentation can be constrained by a query, but without intelligence. Hence we get pizza recipes with Elmer's glue as an ingredient.
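
    To make that point concrete, here is a minimal toy sketch, invented for illustration only. It is not from Apple's paper and is nothing like a production LLM: a tiny bigram model that can only replay word sequences it has already seen, with the prompt acting as the "query" that constrains which memorized patterns come out. The tiny corpus and function names are made up for the example.

    ```python
    # Toy illustration of "imitation intelligence": a bigram model that replays
    # patterns from its training text. The prompt constrains which patterns are
    # surfaced, but nothing here checks whether the output makes sense.
    import random
    from collections import defaultdict

    corpus = (
        "good pizza needs dough sauce and cheese . "
        "good glue needs resin and water . "
        "good answers need evidence and reasoning ."
    ).split()

    # "Training": record which word follows which (pure pattern memorization).
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def complete(prompt, length=6, seed=0):
        """Continue the prompt by replaying observed word transitions; no reasoning."""
        rng = random.Random(seed)
        words = prompt.split()
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:  # never seen this word before: the model is simply stuck
                break
            words.append(rng.choice(options))
        return " ".join(words)

    # The model only knows what tends to follow what, so the pizza and glue
    # patterns blend freely, much like glue showing up in a pizza recipe.
    print(complete("good pizza needs"))
    print(complete("good glue needs"))
    ```

    Running it, the "glue" prompt can happily wander into the pizza pattern (and vice versa), because the model has no notion of what the words mean, only of which words co-occur.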