That sounds a lot like Apple. I can really see that happening, actually.
Me too.
My other suspicion is that the teaser image isn't a picture of a black Apple logo with rays of light behind it; rather, it's a sheet of aluminum-like material with an Apple logo cut out of it that's lit from behind -- with the light leaking through and playing across its surface. Which makes me think that Illuminous will use a kind of unified (non-brushed) aluminum texture for windows, shimmeringly lit from behind (as well as, probably, some additional light source) via dynamic lighting effects powered by Core Animation.
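That sort of backlit shimmer is exactly the kind of thing Core Animation makes cheap. A rough sketch in modern Swift of one way such an effect could be built -- purely illustrative, not anything Apple has shown:

import Cocoa
import QuartzCore

// Illustrative only: a soft "light source" drifting behind a flat
// aluminum-colored backdrop, animated entirely by Core Animation.
final class BacklitView: NSView {
    private let glow = CAGradientLayer()

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        guard window != nil, glow.superlayer == nil else { return }
        wantsLayer = true
        layer?.backgroundColor = NSColor(white: 0.72, alpha: 1).cgColor // flat aluminum

        glow.frame = bounds
        glow.type = .radial
        glow.colors = [NSColor(white: 1, alpha: 0.35).cgColor,
                       NSColor(white: 1, alpha: 0).cgColor]
        glow.startPoint = CGPoint(x: 0.3, y: 0.5) // center of the radial glow
        glow.endPoint = CGPoint(x: 1.0, y: 1.0)   // its outer edge
        layer?.addSublayer(glow)

        // Drift the light back and forth forever; the compositor does the
        // work, so the host app pays almost nothing for the shimmer.
        let drift = CABasicAnimation(keyPath: "startPoint")
        drift.fromValue = NSValue(point: NSPoint(x: 0.2, y: 0.5))
        drift.toValue = NSValue(point: NSPoint(x: 0.8, y: 0.5))
        drift.duration = 4
        drift.autoreverses = true
        drift.repeatCount = .infinity
        glow.add(drift, forKey: "drift")
    }
}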
Wow, I didn't know support was so sparse! Especially given the rate they throw Apple Loops Jam Packs out (or whatever they're called), which you would imagine are MORE time-intensive to prepare than dictionaries. Of course, I'm assuming most of the work has already been done or is freely available in a different format.
The .dict format was probably just made so they could separate the data from the application. It still has potential, but not if Apple doesn't care enough about it.
Ok, after sleeping on the issue, I came up with some thoughts.
........
But that aside, we are still left with a 1970s interface. I think natural language and gesturing will provide the basis of an operating system's interface. The question is when?
When speech recognition doesn't suck as much as it does right now. When a computer can properly determine exactly what the user is looking for, fill in the gaps when the user isn't quite sure what something is called, and not require a hotkey to use it, then we will have a natural language interface, likely combined with a multi-touch graphical interface. Gestures, probably not. I don't see the point in those...
Natural language is terribly inefficient and error-prone.
So is typing if you can't type.
As someone who uses the keyboard to do almost everything, not so. If you can't type, you can learn to type. If the speech recognition system cannot hear you clearly, refuses to follow your commands, requires you to calibrate your voice, etc., then it is very inefficient and much more error-prone, and not always through user error.
Essentially the same argument as for CLIs over GUIs. The CLI was considered to be far more efficient and direct in manipulating computer functions than using a mouse that you had to move all the way across your screen and click (or god forbid) double-click on something. Plus it was error-prone: you could miss!
Now look at where we are.
Realistically, the invention of a new UI paradigm doesn't completely erase the previous one. GUIs obviously dominate, but people still use the CLI when it makes sense -- even in OS X. The mouse didn't make the keyboard obsolete; it just added itself to the mix.
I imagine a bleeding-edge computer 20 years from now will read human gestures, talk naturally, predict your needs and all that... but it will still have a keyboard and mouse tucked away somewhere for those times when it makes sense. The more primitive interfaces that we know today (like 10.4) could be emulated using less than 1% of the computer's total processing power for those rare times when you actually need to type something =D
But then, if they won't add extra languages to the dictionary, why would they have the will to add speech recognition?
On learning to type: I'm partly dyslexic, so I've learnt to spell AND type.
Now if only what I had to say was important enough.
I still want to be able to type. When I want to be more private about my computer interaction, however, I can clearly see a benefit to walking away from my desk while still doing work on my computer. I could get up, get a soda, and dictate an e-mail or schedule an event.
That isn't too far off. I would suspect it is almost here now.
I think you need a WHOLE core dedicated to speech recognition ONLY... so you need 4 cores in an iMac to start with.
... I'm afraid I just can't see how it becomes EASY. Do I mouse with my right hand and wave my left to invoke stuff, as well as use my left for key shortcuts? Or lift my right hand to invoke them, thus taking my hand away from the mouse (my primary input)? It would get confusing... how many theremin players can you name?
Easy. You put your right foot in, you take your right foot out, you put your right foot in, then you shake it all about.
Terribly inefficient and error-prone.
AHHHH!!! it's all so simple now!
The software had better be using that core. A specialized core would be better, but far less efficient in computer design.
That's what I meant: dedicate one of the cores to that task via software control, leaving you 3 cores to do the usual tasks with... which is one more core than the current iMac. Hence the iMac needs to go 4-core to allow for all this voice command stuff AND see an increase in performance to boot!
I'm warming to this idea! A LOT!
AHHHH!!! it's all so not simple now!
It's too inefficient, especially for laptop design. By that time we'll have far more quad cores, but AMD may eventually have something like this now that Intel is off on a race for more cores... -_-
Oi! Don't quissmote me!
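For what it's worth, "dedicating a core via software control" is roughly what you'd get today by giving the recognizer its own high-priority queue and letting the scheduler keep it off the cores doing interactive work. A hypothetical Swift sketch (the recognizer function is a stand-in, not a real API):

import Foundation

// Hypothetical sketch: a dedicated serial queue for speech work, so the
// recognizer never contends with the "usual tasks" on the other cores.
// processBuffer() is a stand-in for a real recognition engine.
let speechQueue = DispatchQueue(label: "com.example.speech", qos: .userInitiated)

func processBuffer(_ samples: [Float]) -> String? {
    // ... hand the audio to the recognizer here ...
    return nil
}

func enqueue(_ samples: [Float], onResult: @escaping (String) -> Void) {
    speechQueue.async {
        guard let phrase = processBuffer(samples) else { return }
        DispatchQueue.main.async { onResult(phrase) } // UI work stays on the main queue
    }
}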
If I recall correctly, which I probably don't, I think it was either MIT or Columbia U that was working on a voice recognition accelerator chip. A dedicated processor just for that task.
One of the things that will be instrumental in getting voice recognition to work properly is to get it so that the computer can literally have a conversation with the user. One of the primary methods that we use to understand people who are speaking to us is simple verification: "What?" "Wait, did you want the hotdog or the sausage?" Being able to speak conversationally with the computer, with it able to interrupt and ask for clarification, derive context, etc., will be critical in making voice recognition work right. I don't believe that AI will be needed, but a good system that will interrupt you when it doesn't understand, but also keep listening so that you don't have to start all over again... That's the honey.
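To make the "interrupt and ask" idea concrete, here is a toy Swift sketch of such a loop. Everything in it is hypothetical -- a real recognizer would supply the hypotheses and confidence scores:

import Foundation

// Toy model of the clarification loop described above. The Hypothesis
// type and the thresholds are invented for illustration.
struct Hypothesis {
    let text: String
    let confidence: Double
}

func respond(to hypotheses: [Hypothesis]) -> String {
    // Heard nothing usable: ask again, but keep listening rather than
    // making the user start all over.
    guard let best = hypotheses.max(by: { $0.confidence < $1.confidence }) else {
        return "What?"
    }
    // Confident enough: just act on it.
    if best.confidence > 0.9 {
        return "OK: \(best.text)"
    }
    // Several near-ties: ask a narrow question instead of failing.
    let rivals = hypotheses.filter { $0.confidence > best.confidence - 0.1 }
    if rivals.count > 1 {
        return "Wait, did you want \(rivals.map { $0.text }.joined(separator: " or the "))?"
    }
    return "Did you mean \(best.text)?"
}

// Example: respond(to: [Hypothesis(text: "hotdog", confidence: 0.48),
//                       Hypothesis(text: "sausage", confidence: 0.45)])
// -> "Wait, did you want hotdog or the sausage?"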
What about the use of Xgrid in Leopard? This feature has been in the background for several versions of 10.x. Will Leopard and 802.11n finally allow all of the other Macs on your network to crunch data-intensive tasks? Would an idle iTV allow for that dedicated core that we might need for the voice recognition?
I think it's interesting that, thanks to Spotlight, the Finder is now almost completely redundant in Tiger. Think about it: what do you do on a daily basis that you couldn't do just as easily without the Finder?
The reason this concept sounds so alien is that, as experienced Mac users, we've cut our teeth on the Finder and never questioned its relevance. With some additional metadata controls in the open & save dialog boxes, it would be possible to save all our documents in one place and never have to venture into the Finder again. Instead of stipulating where you want to save a document, you would simply name it and add any additional metadata, such as client name, project number, revision, etc. Hell, you could even have the ability to add your own custom metadata categories like 'Design' or 'Artwork', 'LoRes' or 'HiRes', 'RAW' or 'retouched'.
A Finderless (or windowless) system is a million miles ahead of anything that Microsoft has. Whilst Windows users are having to wade through countless levels of windows, we'd just be steaming full speed ahead. What could be easier than a computer that has no windows and no folder hierarchy? After all, you don't need the Finder to use Front Row or that iTV interface that was demonstrated.
Perhaps that was what Apple was waiting for - for Microsoft to commit their next generation of operating systems to the WIMP GUI before launching their own paradigm shift?
I really don't expect a major paradigm shift. What would all the third-party apps do if Apple got rid of windows, menus, etc.? That's not the kind of change you can keep hidden and surprise developers with a few months before launch. "Oh, and by the way, you now have to redo your apps entirely"...
I think most of the stuff that needed developer support or integration was included in the preview copies they received at WWDC and/or through ADC. Things might look somewhat different in the final version, but functionally they'll be the same. For example: extending an app so that it is able to use Time Machine (changes they made for demonstration purposes to the Finder, iPhoto and Address Book), allowing an app to send its output through iChat Theater, and so on. Very little of what they showed at WWDC was stuff that didn't involve the developers in some way.
I think the "Top Secret" features are relatively invisible in terms of application development. A new Finder with enhanced Spotlight features, for instance, is still just the Finder to apps. So long as the way apps talk to the Finder stays the same, everything will still work. A new UI theme would be relatively invisible to applications as well, as a text field will still be a text field, and still accessed by the app in the same way.
Saying that, even a UI theme change is going to require some work for developers - most apps have a lot of custom widgets, and they may need to be updated to match any new UI theme. That's mostly updating artwork, though, so it isn't as intensive as a completely new interaction model.
There's a lot Apple can do without the developers needing to change their code or having to explicitly support the new features, but a wholesale change of UI paradigm isn't one of them.
Yeah, but you're hiding the Finder from the user, not the apps. All you're effectively adding is the facility to add metadata to a file right there in the save dialog box, and the ability to search by metadata in the open/import dialog boxes. It would be completely invisible to the apps themselves - after all, the Spotlight search field is already in all the open/import dialog boxes.
You'd still continue to use the Dock to open apps, and all the other UI elements like the menus would be there. You'd still go to System Preferences. But instead of wading through countless Finder windows, you'd simply go to Spotlight and call up all the files tagged with a particular piece of metadata. This could be all the files relating to a particular job number, or just the artwork files relating to that job number; or it could be all the HiRes CMYK files relating to a particular client.
The paradigm shift is the way the user interacts with the data on the computer - not the way the computer and the apps actually function. We're already so close to it at the moment!
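We really are close: the Spotlight index can already be queried for arbitrary metadata from code. A rough Swift sketch, assuming files have been tagged with a keyword ("Job1234" is just an example tag):

import Foundation

// Sketch: find every file carrying a given Spotlight keyword, straight from
// the metadata index -- no Finder navigation involved. kMDItemKeywords is a
// standard Spotlight attribute; "Job1234" stands in for whatever you tagged.
let query = NSMetadataQuery()
query.predicate = NSPredicate(format: "kMDItemKeywords CONTAINS[cd] %@", "Job1234")
query.searchScopes = [NSMetadataQueryLocalComputerScope]

let token = NotificationCenter.default.addObserver(
    forName: .NSMetadataQueryDidFinishGathering,
    object: query, queue: .main
) { _ in
    query.disableUpdates()
    for case let item as NSMetadataItem in query.results {
        print(item.value(forAttribute: NSMetadataItemPathKey) ?? "")
    }
    query.stop()
}

query.start()
// In a command-line demo, give the asynchronous query a moment to gather.
RunLoop.main.run(until: Date(timeIntervalSinceNow: 5))
NotificationCenter.default.removeObserver(token)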
Knowledge Navigator here: http://en.wikipedia.org/wiki/Knowledge_Navigator
Here is the original movie produced by Apple in 1987: http://www.bu.edu/jlengel/kn65kfs.mov
Sorry for a non-tech-related comment, but as an environmental scientist I found that video hilarious!!! It obviously wasn't written by anyone with much of a scientific background. I love the fact that they compare deforestation in the Amazon and desertification in the Sahara and imply there is some relationship between the two! It's also funny how they pull up all these fancy animations and think they're wonderful without even asking how they were produced, what the source data were, where they came from, and their underlying statistics and assumptions... Sorry again, but it's just my critical science background coming out. If that guy was my lecturer I'd be asking for my money back!!!
The time was supposed to be 2010 for the device; we are almost there, and you can see the basics with iCal integration, Google, Spotlight, Address Book, iChat, the iChat answering machine, and speech recognition. All that is left is the Apple-ness bringing all of these technologies together.
I wonder if this will be some of the secret features in Leopard.