Comments

I thought the concept was really neat. I'm not sure how accurate it will be when I have, say, 20 artists starting with "A" to choose from, but it seems like a very innovative idea to me.

Many years ago I spoke to Burt Rutan about this exact concept. He referred to a study showing that humans can resolve the azimuth of a sound source to within about 10°. Apparently our brains do this by resolving the subtle phase difference between the sounds reaching each ear. His idea was to design an audio-based traffic-alerting device that would prompt a pilot to look in the direction of the traffic. Of course, traffic-alerting devices are common now, but they're all display-based, keeping pilots' eyes inside the cockpit at exactly the time when looking outside would be preferable. As far as I know he never did anything with this idea; it's about time someone did!
It's not that either. Gmail Motion is a set of gestures that control the computer, essentially a form of sign language used to control a regular screen device with no keyboard (or less keyboard). This replaces both screen and keyboard.
It's distinctly different.
You're distinctly different!
This is wild stuff. Never heard of anything like it. Innovation at its finest.
I always wondered how people with a strong vision handicap might get better control of their iPods. I guess this is the answer, although I still have trouble seeing (or should I say hearing) how exactly this is going to work.
Another application for this patent might be audio-controlled gaming. It will be interesting to see what they build with this.
No, because Google Voice Actions controls the phone by voice. This, apparently, is control via the accelerometer, with sound as the display.
It's a bizarre idea, and I can't see how it would work in practice, but it's definitely not Google Voice Actions.
It works when you don't want to, or can't, look at the device.
Well, placing a tone at a certain location in the sound field is easy. Audio engineers have been using short delays between the left and right channels to do this for decades. (Anyone with digital audio software can try this; it works better than pan pots for creating a stereo image.) Pointing at the tone will be like those iOS games where you drive a car around a track or fly an airplane without hitting obstacles. I don't think it should be too hard, and it will certainly enable people who can't use their eyes for whatever reason.
I think they plan to use binaural recordings, which require headphones to work properly.
However, a good binaural recording can convey height along with 360-degree localization information. You can hear all around you, and below and above, but not directly up or directly down.
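The delay trick described above is just an interaural time difference: shifting one channel by a fraction of a millisecond is enough for the brain to place the tone off to one side. Here's a rough sketch in Python with NumPy; the Woodworth ITD approximation and the head-radius value are my own assumptions for illustration, not anything from the patent:

```python
import numpy as np

SR = 44100            # sample rate (Hz)
HEAD_RADIUS = 0.0875  # average head radius in metres (assumed)
SPEED_OF_SOUND = 343  # m/s

def itd_seconds(azimuth_deg):
    """Woodworth approximation of the interaural time difference
    for a source at the given azimuth (0 = straight ahead)."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

def place_tone(freq_hz, azimuth_deg, duration_s=0.5):
    """Return a stereo (N, 2) array with the tone delayed in the
    far ear so it seems to come from the given azimuth."""
    n = int(SR * duration_s)
    t = np.arange(n) / SR
    tone = 0.5 * np.sin(2 * np.pi * freq_hz * t)
    delay = int(round(abs(itd_seconds(azimuth_deg)) * SR))
    delayed = np.concatenate([np.zeros(delay), tone[:n - delay]])
    if azimuth_deg >= 0:   # source to the right: left ear hears it later
        return np.column_stack([delayed, tone])
    return np.column_stack([tone, delayed])

stereo = place_tone(440, 30)  # 440 Hz tone, 30 degrees to the right
print(stereo.shape)           # (22050, 2)
```

At 30° this works out to roughly a quarter of a millisecond of delay (about a dozen samples at 44.1 kHz), which is all it takes; adding a small level difference between channels would strengthen the effect further.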
No. This is clearly a rip-off of Gmail Motion.
http://mail.google.com/mail/help/motion.html
Fab! The gesturing guy in that video could definitely work for the 'Ministry of Silly Walks'; he might even be able to work on 'la marche futile'!
The utility and practicality of this method for individuals with vision or speech impairments was apparently lost on you.
The "let's think of ourselves" mentality at its finest!
Yep, completely lost on me. I don't get how it's a better idea to listen to a list of menu items and wave a device around, compared to listening to a list of menu items and pressing a button.
Okay... I'll take it slow for those who are mentally handicapped.
Let's say there are 6 items to choose from (list of songs), and you do not know what this list is comprised of.
Let's go through your method of pressing a button.
Audio reads: "Song A"... pause... "Song B"... pause... "Song C"... pause... "Song D"... pause... "Song E"... pause... "Song F"... pause...
You don't choose any of them, because you don't know which song you want until you've heard all the options. So now you have to listen through the list a second time and then make your pick. The same problem occurs with audio cues when there is only a single button to press. You wanted "Song E", so now it's a phukin pain in the arse.
Now, with this method you can hear all the songs, then point the device in the direction where "Song E" was located and select it. Also, for many applications you could play all the options (i.e. six songs) at the same time, saving time: you point in the direction of the option (i.e. the music) you like. And they could simulate the phase difference easily with simple audio processing.
Get it? Ya pompous ass.
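The selection step described above, mapping the direction the device is pointed back to whichever menu item was placed at the nearest azimuth, is just a nearest-angle lookup. A hypothetical sketch (the six-item layout and the angle spread are made up for illustration; nothing here comes from the patent):

```python
# Hypothetical sketch: six menu items spread across the stereo field,
# selected by pointing the device toward the remembered direction.
songs = ["Song A", "Song B", "Song C", "Song D", "Song E", "Song F"]

# Spread the items evenly from -75 deg (far left) to +75 deg (far right).
azimuths = [-75 + i * 30 for i in range(len(songs))]

def select(device_heading_deg):
    """Return the item whose assigned azimuth is closest to where the
    device (accelerometer/compass) says the user is pointing."""
    return min(zip(songs, azimuths),
               key=lambda sa: abs(sa[1] - device_heading_deg))[0]

print(select(40))  # closest to +45 deg -> Song E
```

The point isn't the exact angles; it's that the pointing gesture only needs to land closer to the right item than to its neighbours, which is well within the ~10° localization accuracy mentioned in the first comment.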