Can computer code and silicon have reason and logic and consciousness? How does one program AI to know right from wrong? Where is the morality in all of this?
It isn't present because machines cannot think. ....
You need to watch "The Imitation Game", where Alan Turing, the inventor of today's computers, addressed that question very directly....
'Because somebody or something thinks differently, does that mean that he/it is not thinking?' ... It's not clear whether he was speaking of computers or autistic people -- but the question remains relevant.
You don't have to be Alan Turing to think like that; men ask that question in their every interaction with women...
Joking aside, this is the wishful thinking of the writers. In fact, Alan Turing is the one who supplied one of the most powerful proofs against thinking machines: the Halting Problem. Roughly, it says this: it is impossible to write a computer program that can determine, for an arbitrary program and input, whether that program will get stuck forever or terminate successfully.
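To give a flavour of Turing's argument, here is a minimal sketch in Python (my own illustration, not taken from the film or the papers). The function halts() is purely hypothetical; the point of the sketch is that no such function can exist:

```python
# Sketch of the classic diagonal argument behind the Halting Problem.
# Suppose, for contradiction, that halts(program, data) could always
# correctly report whether program(data) eventually terminates.
def halts(program, data):
    ...  # assumed to return True (terminates) or False (runs forever)

def paradox(program):
    # Do the opposite of whatever halts() predicts about running program on itself.
    if halts(program, program):
        while True:   # halts() said "it terminates", so loop forever
            pass
    else:
        return        # halts() said "it runs forever", so stop immediately

# Now consider paradox(paradox): whichever answer halts() gives, paradox does
# the opposite, so a correct halts() cannot exist.
```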
Another proof comes from Gödel's Incompleteness Theorem. The British scientist Roger Penrose treats both proofs in depth in his famous book "Shadows of the Mind". That book may not be available as a PDF/EPUB, but two papers by the British philosopher J. R. Lucas are available online: "Minds, Machines and Gödel" and "The Gödelian Argument".
A video on the halting problem:
That takes it right back to the question in the Imitation Game: 'Because somebody or something thinks differently, does that mean that he/it is not thinking?'
Because machine A thinks differently than C or H or X, does that mean that they are not thinking? One of the failures of the logical argument is that it assumes "thinking" means "thinking the way that I think".
A play that I saw years ago, "Picasso at the Lapin Agile", portrayed a theoretically possible meeting between Einstein and Picasso. They were both geniuses, but neither could comprehend the thinking of the other. Does that mean that neither was thinking?
In "The Imitation Game", the true story was of people trying to outthink a machine -- and failing badly at it. Turing postulates, "What if only a machine can outthink another machine?" -- and proceeds to build it: the world's first computer....
I would say that much of the debate about AI and thinking machines is semantic: What do you mean by "Thinking"?
Added later: Perhaps self-driving cars epitomize these questions best. To be honest, I really do not understand these vehicles. And frankly, I really don't understand how I even drive: it requires countless decisions and judgements every moment of driving -- most of which are made below the level of conscious thought. For instance: you see a kid with a ball, or a little dog, standing by the side of the road. So, how do you as a driver respond? Or even, do you respond?
Does failing to solve the Halting Problem mean that machines cannot ever be capable of reasoned response, "thinking" for all intents and purposes even if not actually sentient?
Actually, I'd like machines to get that capability, and that's all we need, nothing more. If only that were possible...
Suppose that machine learning acquires some primary-school-level reasoning within a timespan of three years. That machine really talks with true reasoning, and it is a very friendly and emotional companion to humans.
Then a startup emerges with a blazingly fast database solution, and in three years it builds a companion able to provide university-level companionship. That one doesn't use machine learning; it uses pre-programmed reasoning supported by a continuously updated information vault, decorated with speech-synthesis and natural-language-parsing modules.
Which one would prevail? That is the problem of AI. As long as we commit resources and energy to an indeterminate AI journey, someone will always come up with more powerful conventional non-AI solutions that overshadow our AI efforts.
Here's the best part of all of this: Unless you directly work in that field... it's not your (or our) problem!
macplusplus said: Alan Turing is the one who supplied one of the most powerful proofs against thinking machines: the Halting Problem. Roughly, it says this: it is impossible to write a computer program that can determine, for an arbitrary program and input, whether that program will get stuck forever or terminate successfully.
This isn't a proof against thinking machines; human beings can't determine, given a question, whether they will be able to provide a definitive response either. The solution to this is a timeout, or giving a rambling, incoherent response to cover the fact that they didn't figure it out, or doing something else while the problem is being considered, and it always relies on asynchronous processing. People don't dwell on questions or inputs indefinitely; what differentiates machines from people here is motivations and priorities.
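Just to illustrate that strategy (this is my own sketch in Python, with made-up names), the "timeout" approach amounts to starting a possibly non-terminating computation asynchronously and simply abandoning it after a deadline, rather than trying to decide in advance whether it will halt:

```python
import multiprocessing as mp

def maybe_never_halts():
    # Stand-in for any computation we cannot prove will terminate.
    while True:
        pass

if __name__ == "__main__":
    worker = mp.Process(target=maybe_never_halts)
    worker.start()                 # run the dubious computation asynchronously
    worker.join(timeout=1.0)       # wait at most one second for an answer
    if worker.is_alive():
        worker.terminate()         # give up, reclaim the resources, move on
        print("No answer in time -- doing something else instead.")
```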
Machines don't have overarching drives or motives to produce a result; they are commanded to do tasks by people. An AI simulation is controlled by a person as to what inputs it is exposed to, how it processes the inputs and how it delivers the output. Human beings have biological motivations that determine how they process information. Every day there is an energy cycle that produces motivations for dealing with hunger, tiredness, boredom and so on. The motivations are what determine how long people dwell on certain activities, and they are dynamically culled if other motivations exceed their priority level.
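Purely as a toy sketch of that idea (Python, all names made up), a priority queue captures the "dynamically culled" behaviour: whichever motivation currently has the highest priority wins, and lower-priority activities are dropped when something more urgent comes along:

```python
import heapq

class MotivationQueue:
    """Toy model: the highest-priority motivation wins; low ones get culled."""
    def __init__(self):
        self._heap = []  # store (-priority, name) so the most urgent pops first

    def add(self, name, priority):
        heapq.heappush(self._heap, (-priority, name))

    def current(self):
        return self._heap[0][1] if self._heap else None

    def cull_below(self, priority):
        # Drop anything less urgent than the given level.
        self._heap = [item for item in self._heap if -item[0] >= priority]
        heapq.heapify(self._heap)

drives = MotivationQueue()
drives.add("boredom: browse the forum", priority=2)
drives.add("hunger: make lunch", priority=5)
print(drives.current())    # hunger outranks boredom
drives.add("tiredness: sleep", priority=9)
drives.cull_below(5)       # lower-priority activities are culled
print(drives.current())    # tiredness now dominates
```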
Another motivation that drives future decisions is the biological response, which is also missing in machines and is partly formed from our identity. We have motives to survive or to enhance quality of life that make us do certain activities, and failure to achieve the desired results creates an emotional response that determines future priorities.
Fundamentally, people don't want AI because we want to stay in control. We never want a response from an AI assistant to be 'do it yourself', and we don't want a vehicle AI to decide that it wants to see Canada on the commute to work, so we fence them in: think like a human, but don't behave like one. But you can't separate those two things and produce the same output, because they drive each other.
This limiting of inputs limits how well an AI can perform. The mind doesn't just have lots of processing power; it also has huge capacity, high bandwidth to that capacity, and complex mapping across it:
https://www.scientificamerican.com/article/what-is-the-memory-capacity/
The data covers a huge variety of information of differing classes and importance: visual and audio data, plus data on how to interpret those differing classes of data, which includes understanding of cultural, social and technological differences.
If we have machines with enough storage, properly formatted data, enough bandwidth, enough processing power and fully asynchronous processing, and we give them simulations of the motivations that drive people, then there's no reason they can't accurately simulate the output. I don't think there's a practical way to do that, and, like I say, there's not really a desire to do it, because it can't be controlled.
We can certainly have AI in specific areas that does as good a job as a human. One of the most obvious areas would be customer support, as there are limited inputs and responses. Driving also involves limited decision making. AI doesn't need to reach the level of a human being to replace a limited role that human beings currently fill.