New imacs to become "Chin-less"??? Say it ain't so! :-)


Comments

  • Reply 41 of 44
    vinea Posts: 5,585 member
    Quote:
    Originally Posted by Duddits View Post


    I guess they're both computers. However, the fact that iPhone runs a full OS rather than a watered-down mobile version of one, and the fact that the iPhone interface is endlessly versatile and not tethered to physical limitations optimized for (and limited to) specific functions, elevates the iPhone's potential to do more of the things we associate with larger computers.



    Windows Mobile is reasonably powerful. There are also Linux-based PDAs and smartphones.



    The iPhone interface is no less tethered to physical limitations than any other interface that size. You're not about to run Photoshop on it. Or write a novel (except slowly). It may or may not even be a very good eBook reader, depending on eye strain from the resolution (160 ppi vs. the 200+ ppi of the better eBook devices). You can surf the web on an iPhone better than on other PDAs. Okay, let's grant that, but you can hardly say you can't surf the web on PDAs or smartphones, given we've been doing that for a while now.



    So which general-purpose functions done on a larger PC can an iPhone do that other handhelds cannot?



    Quote:

    Again, I half agree. The practical limits of the form factor are... the practical limits of the form factor. But other limits - storage and horsepower - do not have the sole solution of offloading work to more capable machines. Advances in storage and horsepower will at some point make handheld computers sufficiently powerful for general computing.



    At some point is not in the near future. Going to 45 nm is not going to break that barrier. Nor is 32 nm or 22 nm. Sure, by the time we have 32 nm or 22 nm parts we'll have Core Duo power in our PDAs. But by then we'll expect desktops to do much more and to handle more layers of UI and software abstraction, which costs cycles.



    Quote:

    Yes, bigger computers will always have the potential to be more powerful than smaller ones. However, as components continue to decrease in size, the limits you are describing will evaporate.



    Components have been decreasing in size for 20 years and will for the next 20 years. Looking at the rate of change, have small form-factor computers gained or lost ground - from the HP calculators of old vs. desktops, to PDAs vs. desktops today? I'd say relative capability has remained about the same, given the rapid advances in both.
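    The point about relative capability holding steady is just arithmetic: if both classes of machine improve at the same exponential rate, the ratio between them never changes, only the absolute numbers do. A minimal sketch (the starting figures and the 18-month doubling period are made-up illustration values, not measurements):

```python
# If desktop and handheld performance both double every N months,
# the ratio between them stays constant forever.

def performance(start, months, doubling_period=18):
    """Exponential growth from an arbitrary starting value."""
    return start * 2 ** (months / doubling_period)

desktop0, handheld0 = 100.0, 5.0  # arbitrary units, illustration only
for years in (0, 10, 20):
    months = years * 12
    d = performance(desktop0, months)
    h = performance(handheld0, months)
    print(f"year {years:2d}: desktop/handheld ratio = {d / h:.1f}")
```

    The absolute performance of each grows enormously, but the printed ratio is 20.0 at every step, which is the "relative capability has remained about the same" observation in numeric form.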



    Quote:

    From a physics point of view, you are making distinctions between objects whose size differences are negligible. As the physics of computing drills down into ever smaller particles, the size difference between an iPhone and a desktop machine won't matter for general computing, and one need not access the other strictly for the power benefits of scale.



    From the computer science point of view the distinctions are quite large and will remain so until molecular computing exits the laboratory and enters engineering practice.



    If you wish to say that handhelds in 2027 will replace desktop and laptop computing, feel free. Of course, things don't quite turn out how we expect. Expected paradigm shifts are often replaced by unexpected ones.



    Quote:

    As a cat, I am often not taken seriously. But I assure you this is simply not true. However, if you are not aware of the various storage technologies in development, then I can understand why you would believe this.



    That was somewhat cute once but getting rather old.



    In any case, I'm reasonably well aware of the various storage devices under development. Holographic storage is the most likely to enter the market first (in terms of high-density, non-magnetic storage), but, like all new technology, it is going to appear first in larger form factors with higher power requirements. It also isn't going to instantly take over the market when it does appear... the adoption rate will be no faster than that of other emerging technology (translation: the iPhone will be pretty old news by then).



    Quote:

    The issue is at what point a device the size of iPhone becomes sufficiently powerful and robust that its size alone - and not its power or storage - becomes the limiting factor. You seem to be saying never. I am saying at some point. What that point is could be estimated by combining a bunch of trajectories and seeing where they connect.



    I did not say never. I was talking in the near term within the expected product life of the iPhone Rev 1. Probably Rev 2-5 as well.



    Quote:

    For a computer to interact in a natural way through language is the most difficult task in artificial intelligence. Nonetheless, the field has made solid progress and continues to improve. If you compare your experience with phone voice recognition systems of today (as annoying as they still are) with those from five years ago, you know what I mean.



    Again, I'm reasonably well aware of the state of the art, but by no means an expert. My office mate uses Dragon a lot, and I've sampled the various commercial and non-commercial options. I've seen the demos at various industry shows, including some requiring special access.



    Meh. It's close... and yet far away. Tantalizingly close, and so much more refined than when I was actually doing speech recognition work in the '80s, but... just not quite there yet. So close you'd be tempted to say "within 5 years," even though every time you've said "in 5 years" over the last 25 you've been wrong...



    NLP... I keep seeing interest in studying it, but the offers keep promising 3 years of research budget at most, so I keep laughing and picking lower-hanging fruit where I have a hope of showing tangible progress. Like multi-touch.



    hmurchison: computing power hasn't really been a limitation in the lab... IBM, AT&T and others have thrown huge computing power at it because, in the lab, you can. The results 5 years ago were decent and "almost there." The last time I saw one of these demos (AT&T, I think) was... hmm... I guess about 4 years ago, using speech recognition for language translation (speak English, speech to English text, English text to Spanish, Spanish voice synth out). Worked GREAT in the constrained demo. Not so much when I started mucking with it. Then it was "you need to train... error rates are higher in a noisy environment... error rates are higher outside the trained domain vocabulary... etc." They had a largish Sun driving the speech recognition, if I remember right... maybe not. It wasn't bad given the constraints, and if I HAD to guess, it was a hybrid NN and HMM, or perhaps an HMM backed by a more complex DBN.
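    The staged pipeline described above (speech in, English text, Spanish text, voice out) can be sketched as a chain of components. Everything below is a toy stand-in - the function names, the hard-coded lexicon, the fake "recognition" stage - not any real recognizer or translator; a real system would put an acoustic model (e.g. the HMM mentioned above) behind the first stage and a statistical model behind the second:

```python
# Toy sketch of the speech -> text -> translation -> synth pipeline.
# Each stage is a hypothetical stand-in for illustration only.

EN_TO_ES = {  # tiny hard-coded lexicon, illustration only
    "hello": "hola",
    "world": "mundo",
    "good": "buen",
    "day": "día",
}

def recognize_speech(audio_frames):
    # Stand-in for the recognition stage: pretend the "audio" is
    # already a list of recognized word tokens.
    return " ".join(audio_frames)

def translate_en_to_es(text):
    # Word-by-word lookup; unknown words pass through unchanged,
    # which is roughly the "outside the trained domain vocabulary"
    # failure mode described above.
    return " ".join(EN_TO_ES.get(w, w) for w in text.split())

def synthesize(text):
    # Stand-in for voice synthesis: just return the text to "speak".
    return f"[spoken] {text}"

def pipeline(audio_frames):
    text = recognize_speech(audio_frames)
    translated = translate_en_to_es(text)
    return synthesize(translated)

print(pipeline(["hello", "world"]))  # [spoken] hola mundo
```

    The structural point survives even in the toy: errors compound across stages, so a recognition mistake in stage one propagates through translation and synthesis, which is why the constrained demo worked and the unconstrained one didn't.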



    Vinea
  • Reply 42 of 44
    Marvin Posts: 15,586 moderator
    Quote:
    Originally Posted by hmurchison View Post


    I find that voice recognition is perhaps the most natural of input modalities; it never misspells, and most of us, by the time we finish the second grade, are well-versed in how our native language works.



    I agree but its biggest flaw is privacy.



    *Says out loud* Safari go to www.pornotube.com

    quick, quick close down VLC, stop playing For Your Thighs Only, someone's just walked into the room.

    Mute, mute.



    It could also get annoying in offices if you have everyone constantly chattering away.



    I think that both touch gestures and speech together would be a very powerful combination though and could easily replace the keyboard and mouse entirely for most tasks.
  • Reply 43 of 44
    Quote:
    Originally Posted by Marvin View Post


    I agree but its biggest flaw is privacy.



    *Says out loud* Safari go to www.pornotube.com

    quick, quick close down VLC, stop playing For Your Thighs Only, someone's just walked into the room.

    Mute, mute.



    It could also get annoying in offices if you have everyone constantly chattering away.



    I think that both touch gestures and speech together would be a very powerful combination though and could easily replace the keyboard and mouse entirely for most tasks.



    Speech recognition will definitely have its place in the future, but just as you described, it also has its limitations. Touch gestures can also be considered for the future, but they have their limitations too, ergonomics being the biggest problem. I don't think the mouse and keyboard are going anywhere in the near future, but I see user interfaces slowly getting more predictive and intuitive, so that contact with the computer slowly minimizes.
  • Reply 44 of 44
    vinea Posts: 5,585 member
    Quote:
    Originally Posted by Marvin View Post


    It could also get annoying in offices if you have everyone constantly chattering away.



    I regularly mess with my office mate who uses Dragon a lot by waiting for him to say a name (our boss's, say) and saying loudly "IS A MORON" or something equally juvenile so his mic picks it up. Then I hear cursing, and then more cursing as he deletes his cursing as well as my comment.



    It's the simple things in life.



    The big thing is that he can't use Dragon if I'm on a conference call. He needs a better mic setup, I think, but even if he had one, unless it was a mask, he'd interfere with my calls or periods of quiet work (see DeMarco/Lister for the impact of noisy workspaces on productivity).



    Quote:

    I think that both touch gestures and speech together would be a very powerful combination though and could easily replace the keyboard and mouse entirely for most tasks.



    I can still type faster than he can dictate. He's not a touch typist, so for him dictation is faster than hunt-and-peck. Touch is still less accurate than a stylus or mouse. Any task using fine motor skills will be at a disadvantage.



    Vinea