Originally Posted by ThePixelDoc
This, from the Macworld review, is all you need to know... and why at some point in the future Avid and Adobe will have to rewrite their software, or get left in the dust by Apple again.
I can't and won't comment on the state of the art at Avid, but they do seem to be a bit more expensive, and YES... do cater to the traditional tape crowd, as well as a broadcast workflow.
So what's the belly-aching about? Get out your checkbooks and "upgrade" to an Avid station.
I think you are "GETTING IT" as far as where FCPX is going.
I figure that Apple is laying the groundwork for something like a desktop "FLAME", or "NITRO" or other real-time video creation station.
If you look at Motion -- I think you can see where their paradigm is headed.
>> The "STRANGE" missing piece is the Preview-on-monitor support -- but that, along with the XML file format, is to me pretty telling. There is NO EFFORT to waste time on anything that is not forward-looking in this product. Which is a pretty bold move.
So, if "I WERE CREATING A PERFECT EDITING SUITE" -- I'd get rid of the old "cut and paste timeline" spreadsheet methodology. What you really WANT is one big interface for your canvas and hot-keyed monitors for available video. INSTEAD of spending all your time cutting the best performance out of 20 takes, and then archiving reels based on where they should go in the movie -- you "audition" 20 takes at one time -- the BIG HINT there is how audio tracks are used to sync clips. I suspect that, in the not too distant future, you'll be able to automatically scan for FACES, VOICES, and "FX" like a CLAPBOARD and some key phrase like "Scene 3, take 2" -- and THAT will create your automatic groupings for your project.
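To make the "automatic groupings" idea concrete, here is a minimal sketch of what that detection step could look like once speech-to-text is in place: scan each clip's transcript for a slate phrase and bucket the takes by scene and take number. The clip names and transcripts are invented for illustration; nothing here is FCPX's actual mechanism.

```python
import re

# Hypothetical slate-phrase pattern, e.g. "Scene 3, take 2".
SLATE = re.compile(r"scene\s+(\d+)\s*,?\s*take\s+(\d+)", re.IGNORECASE)

def group_takes(transcripts):
    """Map (scene, take) -> list of clip names, based on detected slate phrases."""
    groups = {}
    for clip, text in transcripts.items():
        m = SLATE.search(text)
        if m:
            key = (int(m.group(1)), int(m.group(2)))
            groups.setdefault(key, []).append(clip)
    return groups

# Made-up speech-to-text output for three clips.
clips = {
    "A001_C001.mov": "Scene 3, take 2. Action!",
    "A001_C002.mov": "scene 3 take 3, rolling",
    "A001_C003.mov": "no slate on this one",
}
print(group_takes(clips))
# {(3, 2): ['A001_C001.mov'], (3, 3): ['A001_C002.mov']}
```

Clips with no detectable slate simply fall out of the grouping and would need manual sorting -- which is still 19 fewer takes to wade through.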
The TIMELINE is OLD -- and not really HOW we want to edit a video. You either want to manipulate or SEE content related to the scene you are working on, or you are jumping to some other point to manipulate and see. When you scroll across the timeline, it's really just a spreadsheet for the content you place on it. Sometimes it has thumbnails or names of the content -- sometimes you might even label a "phrase" someone spoke. But I could imagine that you could "see through time" on multiple tracks, and "see the text" of what is said, like you were navigating "Time Machine" backups in reverse -- with some sort of onion-skinning.
>> I don't think this latest FCPX fully realizes its editing and navigating interface yet (with, of course, a HUD that you can customize) -- I just think this is a stepping stone on the way to the future interface.
I figured that the "face detection" and "speech-to-text" technologies would eventually catch up. It would be SO MUCH BETTER to have a script, have it match up to the timecode, and then track to various points in various videos.
The XML file format, to me, hints at MORE INTEGRATION: the power of this might be that someone in a web browser could be helping clean up dailies or tag things on the project -- only the main editor needs to deal with the hi-res data. And it all magically syncs in the cloud. It seems like an oversight that Apple doesn't have "save these files to this drive" as the default for all projects anymore -- but I've always wanted a "save here for this project" option myself.
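The browser-collaborator idea only works if clip metadata lives in something lightweight and text-based, separate from the heavy media. Here is a toy sketch of that separation -- a per-clip XML record a remote tagger could edit without ever touching the hi-res file. The element names are invented for illustration and are not Apple's actual FCPXML schema.

```python
import xml.etree.ElementTree as ET

def clip_metadata(name, speaker, tags):
    """Build a small, text-only metadata record for one clip.

    The hi-res media stays on the editor's drive; only this XML would
    need to travel to a collaborator or sync through a cloud service.
    """
    clip = ET.Element("clip", name=name)
    ET.SubElement(clip, "speaker").text = speaker
    tags_el = ET.SubElement(clip, "tags")
    for t in tags:
        ET.SubElement(tags_el, "tag").text = t
    return ET.tostring(clip, encoding="unicode")

xml = clip_metadata("A001_C001.mov", "Bob Smith", ["keynote", "day4"])
print(xml)
```

A record like this is a few hundred bytes, so syncing a whole project's tags over the network is trivial even when the media itself is terabytes.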
With ThunderBolt, the streaming of Previews and Cache files is going to get a lot quicker, with no need for the CPU to be involved in disk-to-disk transfers anymore. So FCPX is moving towards a "content-based" file system, rather than a Drive/Folder-based one. But BEFORE that can work, Terabytes have to be cheap, transfers have to be fast, and you need something like "iCloud" to make sure that everything stays in sync.
>> MORE than the "real time" special effects, I think that MOST pro video shops are going to start having a "paradigm shift" to over-use a term, when the "content based" file system becomes ubiquitous.
Imagine all the FX, sound clips, and clip content that you DON'T use right now, because it takes too much time to find and incorporate, just "presenting itself when and where it made sense." And imagine that you don't have to play ANYTHING from twelve hours of video to have all the clips track the outline, or to find "speaker Bob Smith in Room Twelve on Day 4 of Convention 2009"... you just have a file on Bob Smith, and every video clip he ever did is sortable, and you can view a quick preview or download a hi-res version for finishing.
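The "file on Bob Smith" amounts to querying a metadata index instead of browsing drives and folders. A minimal sketch of that kind of content-based lookup, with made-up records standing in for a real catalog:

```python
# Invented example records -- in practice this index would be built
# automatically from voice recognition and tagging.
clips = [
    {"file": "conv2009_d4_r12.mov", "speaker": "Bob Smith",
     "event": "Convention 2009", "day": 4},
    {"file": "conv2009_d1_r03.mov", "speaker": "Jane Doe",
     "event": "Convention 2009", "day": 1},
    {"file": "interview_bs.mov", "speaker": "Bob Smith",
     "event": "Studio interview", "day": None},
]

def clips_by_speaker(index, speaker):
    """Every clip a speaker appears in, sorted, without scrubbing any video."""
    return sorted(c["file"] for c in index if c["speaker"] == speaker)

print(clips_by_speaker(clips, "Bob Smith"))
# ['conv2009_d4_r12.mov', 'interview_bs.mov']
```

The point is that the question "where did I put that clip?" becomes "what is this clip about?" -- and the answer comes back instantly, across every drive and every project.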
>> The CHANGES to workflow/production are going to be staggering once those types of capabilities become ubiquitous -- kind of like when we were first able to easily search for things on our hard drive. But it takes a LOT of infrastructure to make this work;
Speech-to-text that works.
Voice recognition that identifies the speaker.
XML and a text-based file format to allow these different features to communicate.
Some sort of "cloud" infrastructure that makes editing, reviewing and grabbing files transparent.
True multi-threaded, multi-core support that allows networked computers to "pitch in" without getting in the way of dedicated work (Grand Central Dispatch).
All image and sound manipulations processed in optimized engines that can take advantage of graphics cards or OTHER computing power (OpenGL, OpenCL, etc.).
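The multi-core piece of that list boils down to a familiar pattern: hand independent jobs to a pool of workers so whatever capacity is free can "pitch in". A rough sketch of the idea using Python's standard library (not Grand Central Dispatch itself); render() is just a stand-in for real per-frame processing:

```python
from concurrent.futures import ThreadPoolExecutor

def render(frame):
    # Stand-in for an expensive per-frame effect or export step.
    return frame * frame

# The pool hands frames to workers as they become free; the editor's
# foreground work never has to wait on any single job.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render, range(8)))

print(results)
# [0, 1, 4, 9, 16, 25, 36, 49]
```

Grand Central Dispatch does the same kind of scheduling system-wide, which is what would let background renders soak up idle cores -- on your machine or, eventually, on networked ones -- without stepping on dedicated work.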
>> My advice: if you don't have time to "learn" FCPX -- then don't buy it. Don't use it on something critical. But if you are interested in much faster creative editing and more "content-aware" content control -- it makes sense to jump in.