Of course they can’t reason. It’s not a living mind. It’s the illusion of intelligence.
I’m legitimately interested in hearing the definition of intelligence from anyone who will offer it.
How about a very basic definition: give correct answers to the questions you are asked.
Which would knock out 99% of humans, depending on the questions and the people involved. So that's a terrible definition: if human beings can't manage it, why expect a machine that mimics human functions to do so? The underlying precepts are simply bad. We're not talking about adding machines and ballistic plotters here, things that have identifiable and agreed-upon constants in a layer of reality easily perceived by human beings.
We are at least a decade, and more likely two, away from AI reaching the basic level of college-educated reasoning. All this panic about AI taking jobs in the near term is nonsense.
ChatGPT and other "large language models" are not "artificial intelligence", they are "imitation intelligence". That is, they try to imitate what they were trained on within the constraints of the "query" they are presented with.
Knowledge and intelligence are orthogonal. That is, you can be intelligent (smart) but uneducated, or conversely you can be well educated but not "smart". I think that current large language models should be considered a form of knowledge representation whose presentation can be constrained by a query, but without intelligence. Hence we get pizza recipes with Elmer's glue as an ingredient.
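A minimal sketch of that point, using the Hugging Face transformers library and the small "gpt2" checkpoint purely as stand-ins (any autoregressive model behaves the same way for this illustration): the "query" is just a text prefix, and the model extends it one token at a time with whatever continuation was statistically likely in its training data, with no check that the result is true.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small causal language model. "gpt2" is used here only as an example;
# the mechanism is identical for larger models.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The "query" is simply a prefix to be continued, not a question to be verified.
prompt = "A simple pizza recipe: preheat the oven, then"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation token by token; the model picks plausible-looking text,
# which is why a bad association in the training data can surface as "advice".
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```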