1,000 years from now, when our personal assistant will be hovering right near us talking directly to our brain, she will be eagerly awaiting her next upgrade.
Surely you mean 50 years from now, when our personal assistant is us.
She’ll be looking for a more interesting peripheral.
We won’t see that kind of advance in 50 years. AI has turned out to be much more difficult than expected. Back in the early 1950s, the scientists working on it believed human-like intelligence would be achieved within a few years. Here we are, almost 70 years later, and we’re not much further along.
Besides, hovering without noisy and fuel-inefficient power sources is something we may never get. Hence, 1,000 years.
Not to dispute your point, but Elon Musk is developing a system called Neuralink in which thoughts will eventually be able to trigger an action. That may be coming at that assistant through the back door, but for those with spinal paralysis it could be life-changing. But, then, where does it stop?
"We are designing the Link to connect to thousands of neurons in the brain. It will be able to record the activity of these neurons, process these signals in real time, and send that information to the Link. As a first application of this technology, we plan to help people with severe spinal cord injury by giving them the ability to control computers and mobile devices directly with their brains. We would start by recording neural activity in the brain’s movement areas. As users think about moving their arms or hands, we would decode those intentions, which would be sent over Bluetooth to the user’s computer. Users would initially learn to control a virtual mouse. Later, as users get more practice and our adaptive decoding algorithms continue to improve, we expect that users would be able to control multiple devices, including a keyboard or a game controller."
Currently it appears to be at the stage where, after implanting the Link in a pig, they can accurately predict the movement of her legs from her neural activity. I suspect that actually moving her legs (or a mechanical substitute) may not be all that far away. And after that, it could be controlling some external device -- like one of his cars?
This is nothing new, either. Let Musk be the first to try it. Similar work has been done for over a decade: patients with artificial arms and hands can control them using thought. Again, Musk isn’t really doing anything new. He’s full of it.
But that’s still way, way down the list from artificial intelligence, which is completely different.
Heh, I have a pet theory that when the singularity is reached, when a computer program becomes "sentient" (however that is defined), we are going to learn that humans aren't sentient either: we are just biological input-output machines without "sentience," and therefore there is no singularity.
Anyway, yes, there is always a need for more performance along all axes. I want an Apple Glass that is basically like my current Rx glasses but provides a virtual display to augment my laptop's display, or my phone's, or my desktop's, or to be the display. This is going to require low-latency computing and rock-solid virtual placement of AR objects to not make me puke.
For phone tasks, the big gate is probably Internet latency and the performance of whatever website is doing the work of delivering the data for apps. The more of this is done on-device, the better. So I'd like Apple to keep driving web/JavaScript performance as hard as they can, and to encourage developers to move the compute portions of their services on-device.
I think the difference is that Musk is tapping into the central nervous system rather than the peripheral one. And no, I agree (and so would he) that it's not artificial intelligence. But it is "mind reading," which perhaps has even more potential to change the world. For example: you think the word and your computer types it for you... (Which could be embarrassing. Normally we think before we type. Well, some of us do.)
Musk doesn't seem to get much respect. But personally, I would not want to bet against the guy.
I doubt very much that there would be any mind reading involved. We simply don’t know enough to do that.
In the demo, he read thoughts from the pig's mind and used them to accurately predict the movement of her legs. I call that mind reading.
While there is a long way between reading the brain's instructions to move a limb and reading, say, emotions or desires ("I would like a chocolate ice cream cone"), it is still reading a mind's thoughts. In the case of moving limbs, those regions of the brain are fairly isolated and don't involve a lot of collaboration with other regions, so they are probably the simplest regions of the brain to read. But it is still reading the mind.
We can do that already; what was measured is an autonomic response. We know that with humans, the “black box” in our brain makes a decision about 300 milliseconds before we’re aware of it, and that 300-millisecond gap is what lets us respond at the end of it. This has been tested and known for almost two decades. So again, nothing new here. We don’t need to physically get into the brain for this, just a bunch of sensors on the head.
Well, yeah, sure... The brain operates using electrochemical signals, and those signals can be 'seen' and measured from outside the brain. We can even glean some information from them. It's analogous to using a telescope to look at the Moon: you can tell a lot, but its abilities are limited.
But to do what Musk is doing and plans to do, with that level of precision, you've got to get to the source of those signals -- the part of the brain responsible for that body part. Otherwise, you wouldn't know whether the brain is signaling to move a finger or a toe, much less in what direction and with what force it is signaling to do so.
A few Geekbench numbers won't silence the naysayers. Geekbench (SPEC2017 would be much, much better) will need to be complemented with standard productivity benchmarks, e.g. PCMark, and graphics benchmarks, e.g. GFXBench and 3DMark, to fill out the overall picture of A14/A14X performance. Even then, many Intel fanboys probably won't find it easy to accept that these 'puny' custom Apple (and licensed Arm) processors perform very well.
As for the 70% improvement in the Geekbench Compute (Metal) score: even for an advocate of the virtues of Arm-based computing, it is hard to believe that such large performance increases in a mobile processor are just there for the taking. We will want further confirmation of that performance claim in particular. If Imagination Technologies' IP can really achieve such high performance at low power, everybody is going to want it.
But we don’t know exactly which parts of the brain we need to sense. The problem is that even one millimeter of difference in position makes a big difference. No study has analyzed this at the level needed, and none is expected to for a couple of decades, as the technology and our knowledge of the brain improve.
Right now, we don’t even have a working theory of how the brain thinks, much less how to interpret it. Yes, we can find basic, gross workings, but that’s not enough to do what Musk is claiming he can do.
The thing is, we know how to move fingers and such; we’ve been doing it for some time. That’s easy. But to know what someone is thinking, well, that’s orders of magnitude harder. We don’t yet know where to begin.
"I'm sorry Dave. I'm afraid I can't do that"