Originally Posted by palegolas
I think so too. iPad as a visual control surface will be very handy. The only bad thing about touch screen surfaces is that you have to look at the surface to be precise. You need to look at the surface, touch it, then look at the screen while dragging to see the changes, then look at the surface again etc. A lot of looking up and down and up again. Hardware buttons and knobs you can operate blindfolded. But still it'd be much appreciated to have a dedicated control surface. Very neat.
I understand what you are saying about looking up at the screen (to see the modifications) and down at the iPad (to see the controls).
I was using microcomputers for 5+ years before the Mac came out. The indirection
caused by the mouse felt really strange and took some getting used to.
A touch pad or a graphics tablet has a similar level of indirection
that takes getting used to.
What makes indirection
easier is a little hint shown on the computer display -- the cursor or brush.
It doesn't really matter where the mouse is positioned on the desktop
-- it's where the cursor is shown on the computer display.
Now, take that same concept and expand it to 10 multitouch points on a tablet:
If you placed both hands on the tablet -- you could see 10 hints (cursors, fingertips, whatever) on the computer display.
That would be OK, but confusing and not too useful.
But there are other possibilities -- here's one:
1) Place 1 finger of one hand anywhere on the tablet -- this connects to the cursor, wherever it is currently positioned on the computer display.
2) Move around as normal.
3) Place all 5 fingers of the other hand anywhere on the tablet -- a semitransparent set of sliders appears off to the side of the computer display.
4) The spacing of the sliders is customized to fit the hand on the tablet, and each finger is connected to an individual slider.
5) You can slide the fingers together, or lift 2-4 fingers to deal with individual controls.
6) The UI knows which fingers are on the tablet and which are not -- denoting the active sliders.
7) The UI can determine when a finger not on the tablet is placed on the tablet (and which one) -- and activate the associated slider.
8) As long as 1 of the original 5 fingers remains on the tablet, the associated slider "heads-up" remains on the computer display -- with active sliders highlighted.
Here, we have used the tablet as a touch surface with no need to show anything on its screen -- the operative display is the screen on the computer.
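The slider scheme above could be sketched roughly as follows. This is a toy model, not a real touch API -- the event names (touch_down, touch_move, touch_up), the finger IDs, and the 500-pixel slider travel are all assumptions made for illustration. The idea it demonstrates is the one in the steps above: sliders are laid out to mirror the hand (sorted by each finger's x position), lifting a finger deactivates its slider without losing its value, and the heads-up panel stays visible while at least one finger remains down.

```python
class SliderPanel:
    """Sketch of the five-finger slider idea: each finger on the
    tablet maps to one on-screen slider (hypothetical event model)."""

    def __init__(self):
        self.sliders = {}    # finger_id -> slider value (0.0 .. 1.0)
        self.positions = {}  # finger_id -> last (x, y) on the tablet
        self.order = []      # slider order on screen, left to right

    def touch_down(self, finger_id, x, y):
        # A finger touching down (re)activates its slider.
        self.positions[finger_id] = (x, y)
        if finger_id not in self.sliders:
            self.sliders[finger_id] = 0.5  # start centred
        if finger_id not in self.order:
            self.order.append(finger_id)
            # Slider layout mirrors the hand: sort by x on the tablet.
            self.order.sort(key=lambda f: self.positions[f][0])

    def touch_move(self, finger_id, x, y):
        # Dragging a finger vertically moves its slider.
        if finger_id not in self.positions:
            return
        _, old_y = self.positions[finger_id]
        dy = y - old_y
        self.positions[finger_id] = (x, y)
        # Moving up increases the value; 500 px = full travel (assumed).
        v = self.sliders[finger_id] - dy / 500.0
        self.sliders[finger_id] = max(0.0, min(1.0, v))

    def touch_up(self, finger_id):
        # Lifting a finger deactivates its slider but keeps its value.
        self.positions.pop(finger_id, None)

    def active(self):
        # Only fingers currently on the tablet drive active sliders.
        return [f for f in self.order if f in self.positions]

    def panel_visible(self):
        # The heads-up panel stays while any finger remains down.
        return len(self.positions) > 0
```

For example: put five fingers down, drag the middle one up 100 px, lift the pinky -- four sliders stay active, the panel stays on screen, and the pinky's slider keeps its last value.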
Currently we use a few fingers of one hand (on the keyboard) in combination
with a few fingers of the other hand (on the mouse) to manipulate and control a complex UI like FCP.
With the iPad, using all 10 fingers opens up many more possibilities -- more combinations, shortcuts, etc.
But, rather than more (and more complex) shortcuts, there is a better way -- gestures.
If the FCP application is made "gesture-aware" we can easily do things with simple gestures that are cumbersome and contrived with a kb/mouse.
Pinch-zooming a scene or a timeline, for example, is more natural with the fingers.
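At its core, pinch/zoom recognition is simple: the zoom factor is just the ratio of the current finger spread to the initial finger spread. A minimal sketch, assuming (x, y) touch points in arbitrary tablet units:

```python
import math

def pinch_zoom_factor(start_a, start_b, cur_a, cur_b):
    """Zoom factor for a two-finger pinch: ratio of the current
    distance between the fingers to their starting distance.
    Points are (x, y) tuples in hypothetical tablet coordinates."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    initial = dist(start_a, start_b)
    if initial == 0:
        return 1.0  # degenerate: fingers started at the same spot
    return dist(cur_a, cur_b) / initial
```

Spreading the fingers from 100 units apart to 200 units apart gives a factor of 2.0 (zoom in); bringing them from 100 units to 50 gives 0.5 (zoom out). The app would then scale the scene or timeline by that factor.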
I am not suggesting that you need to remove the established kb/mouse interface and unlearn habits -- rather, add gestures and use them where they can improve efficiency and workflow -- and learn better, more-efficient habits.