I'm relatively certain that whatever gesture vocabulary they've dreamed up will feel very natural and obvious, once you've used it. Which is why I'm also pretty certain the "three finger down" thing is just made up.
I'd like to think so, but the last shuffle makes me nervous.
Originally Posted by pixelcruncher
I'm not sure what Apple has in store, but the idea that the regular buying public wants to learn multi-touch gestures is at odds with how little they use the existing ones. I would bet 99% of casual Mac buyers don't know more than a couple of gestures, or that many of their apps have differing multi-touch gestures of their own.
There's also the very serious consideration that Mac devices don't always register touches correctly. That would be incredibly frustrating for working in an office app suite where speed and precision would be valued.
Then again, this is all rumor. Apple doesn't even think the average user is capable of dealing with folders on their iPhones, much less learning a set of "complex" multi-touch gestures that may differ between apps. In the end the device will be judged by Apple's core fans on its forward thinking, but by the rest of the public on how easy it is to operate versus their laptops. If this is supposed to be a productivity device with office programs like iWork, then it stands to reason that it will need to compete with the devices that offer the same functionality. More than ever, I hope Apple allows the tablet to be used with an external keyboard, something they have refused to allow with the iPhone.
1. CES saw a keyboard you can just plug a Touch and/or an iPhone into.
2. Most of my MB Pro using friends do not in fact really know or use any of the gestures except maybe two finger scrolling.
3. I've been computing for 25 years, and because I reboot so infrequently, when my old iPod click wheel freezes I can never remember which two buttons I'm supposed to press (I actually recently learned online that you're supposed to toggle the hold switch on and off before whatever the keypress is).
So I repeat: multiple memorized gestures are not going to be the next great thing. Mice took some adaptation, sure, but you saw something moving on the screen as your hand moved, and you could see your destination. With umpteen gestures doing multiple things, most of which are NOT screen targets, you're in another universe entirely.
Originally Posted by Gazoobee
I've been an enthusiastic user of computers since almost the day they first became available and in all that time, the one thing a computer has never been able to successfully replace is that simple thing you do when you pull out a pen and a piece of paper, or draw on the back of a napkin in a restaurant. It's one of the most natural human communication activities but it simply can't be done on a computer. The closest I ever got was a series of Palm and PocketPC devices with styluses but there were innumerable problems ...
I don't want to see a stylus, but if there isn't some kind of solution in the tablet for simple drawing, then it will not be up to spec as far as I can see.
Sure, you'll be able to draw with your finger, and sure, the bigger screen will make that even easier, but anyone who uses a pen seriously can tell you that a finger is not a pen and never will be. It's pretty obvious that pens and writing devices wouldn't have been invented at all if fingers sufficed. The pen is probably the biggest single invention of all time, and there is as yet no handy, universal, natural, computerised replacement for it. If a tablet is going to replace pads of paper, it has to have some kind of pen or pen-like input.
The tablet either has to have a stylus (the easy solution), or it has to have some completely unknown, totally new way of making your fingers mimic a pen in software. If it doesn't, then it's only really solved the problem of keyboard entry, not stylus entry.
The stylus preceded the pen - likely by a few hundred thousand years - when the first human used a long stick to draw in the sand without getting down on the ground, or found he or she could make more complicated marks with a sharpened point.
On the other hand, cave paintings show that "ink" has been around for tens of thousands of years, although many ancient cultures opted for inscribing with chisel-like "pens" (much more stylus than pen) on clay tablets, stone, and wood (runes, for example).
Dipping a writing instrument like a quill in ink was the next great innovation, along with paint brushes and other tools for slathering liquid media on solid surfaces. The fountain pen, carrying its own ink supply, was likely the revolutionary output-device sensation of its relatively recent era (I hear people lined up outside pen shops for days to be among the early adopters), holding its own until supplanted by the ballpoint, Sharpies, etc.
But meanwhile, mechanical personal typemaking devices - from typewriters to Linotype to word processors to PCs - took over writing, and cameras, then cameras coupled with software, took over image-making for the average person. These technologies currently dominate.
Charting the arc of that progress: a virtual keyboard, a touch screen that somehow knows when a finger means a precise fine line and when another nuance means a thick one, plus carefully learned sets of gestures to control the project at hand, all cut against that grain, and seem a tall order for a technology bent on writing the next chapter in how humans create writing and images.
Surprise me, Apple! Please...