Originally Posted by ArticulatedArm
Imagine if you are using the tablet to draw on and your primary PC monitor to view the overall picture? I think this could work very well. I also think you would get used to using our finger to draw on if you chose to do that. Though it would probably be easier for most people to just use a pen stylus as we are used to doing. We are creatures of habit.
Well, in a machine that's primarily portable, having to have a monitor attached would be pretty fatal.
It's interesting, too, that people are talking about it as if it's a choice between pen OR touch. It doesn't have to be. The current generation of Wacom tablets, for example, let you use either: they sense when a pen is close and "switch off" touch sensitivity at that point. No pen, and your fingers will work.
This solved a major issue with the stylus on the Newton: the "lean-on" problem. The Newton's touch screen was passive: that is, it responded to anything that touched it, including your hand as well as the pen. Most people who write tend to lean on the surface they're writing on to a greater or lesser degree. So, with the Newton, you had to train yourself not to lean on it, which disrupted your normal handwriting - and so made it more "scrawly" and less easy to recognise.
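The pen-priority behaviour described above amounts to a simple event filter: while the pen is in proximity, touch events are discarded so a resting palm can't draw stray strokes. Here's a minimal sketch of that logic - the event names and shapes are invented for illustration, not any real driver's API:

```python
# Hypothetical sketch of pen-priority palm rejection.
# Event kinds and fields are made up for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    kind: str      # "pen_proximity", "pen_leave", or "touch"
    x: int = 0
    y: int = 0

def filter_events(events: List[Event]) -> List[Event]:
    """Drop touch events while the pen is in proximity, so a
    resting palm or finger can't produce stray input."""
    pen_near = False
    accepted = []
    for ev in events:
        if ev.kind == "pen_proximity":
            pen_near = True
        elif ev.kind == "pen_leave":
            pen_near = False
        elif ev.kind == "touch" and pen_near:
            continue  # palm/finger ignored while pen hovers
        accepted.append(ev)
    return accepted
```

The key design point is that proximity, not contact, gates the switch: touch is suppressed as soon as the pen hovers near the screen, before the palm ever lands.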
Microsoft's TabletPC solved this initially by simply dictating that all tablets had to have what's called "active" pen digitisers. That is, they didn't respond to touch, but to the proximity of electronics in the pen. This was great, because it meant you could lean on the screen - and as screens got bigger, people did this more. And it massively improved the legibility of what you were writing. Couple that with a seriously good handwriting recognition engine by the time of XP SP2, and you had a very good system.
Two big problems, though. First, active digitisers were expensive, and accounted for a good proportion of the $200-500 more you'd pay for a TabletPC compared to an equivalent conventional laptop. Second, having no finger touch at all locked you out of gestural interfaces, and meant that if you lost your pen you were screwed.
(Microsoft later relented, and started allowing passive, resistive touch in TabletPC. This meant you could touch the screen - but in classic PC-world fashion, manufacturers all used cheap resistive touch screens to drive down the cost, which gave a really shitty user experience.)
Now, though, we have the technology to do touch which is capacitive, giving a good slick experience with fingers, AND active, which gives a good experience with active pens. In graphics tablet form it's not expensive - but I don't know how expensive it would be to integrate it into a screen.
Will Apple use it? I don't know. They could deliver something which allows you to use an active pen for great quality drawing, handwriting or diagrams but also your fingers for a virtual keyboard and that slick iPhone-style user interface.
My gut feeling is that it comes down to cost. If they can make something that combines "the world's greatest touch interface" with "the world's greatest pen interface" they'll do it - but if it's a choice between the two for cost reasons, touch will win out.
We'll see come Wednesday!