Comments
If you are 4:2:0 and you know what that means, you are "prosumer".
Or a starving indie filmmaker!
Or a starving indie filmmaker!
Or you do work for the World Wide Interwebs.
But that's just a phase--and by no means as big as, much less bigger than, Hollywood et al.
Don't confuse bits/color-space for ultimate quality, format specs for caliber of content, or gadgetry for artistry, mon frere, or you'll end up like Lucas (or worse, Lucas acolytes who dream of his success w/out half his tech genius or early inspiration).
Most incredible is a new, intelligent computerized assistant, called "Clippy." Clippy knows what you are doing, and helpfully asks questions.
No, that sounds too much like the old Microsoft Office Assistant that waves goodbye to you when you kill him out of frustration.
Besides the Cohen brothers, Murch, Hammer and Copula, and based on the positions available now, it appears that FCP is well entrenched in the pro market.*
In fact, 9 out of 10 of this year's (2010) nominees in the "Documentary Feature" and "Documentary Short" categories used Final Cut Studio to make their films."
Do you really not know how to spell those director's names?
Outside of some indie films, docus and basic cable reality programming, AVID is the standard in Hollywood, regardless of your FCP fantasies.
I can't wait to see the new version, but it's going to take more than slobbering comments from a guy whose life and business revolve around FCP before I get excited.
No, that sounds too much like the old Microsoft Office Assistant that waves goodbye to you when you kill him out of frustration.
Whooooosh!
Don't confuse bits/color-space for ultimate quality, format specs for caliber of content, or gadgetry for artistry, mon frere, or you'll end up like Lucas (or worse, Lucas acolytes who dream of his success w/out half his tech genius or early inspiration).
While I don't equate high-end gear with creative excellence, it is worth keeping bits and color spaces in mind if you want quality (not creative quality, but the quality of the digital end product). A lot of people will pull footage the camera has already compressed to 8-bit into their editor, change the codec, send it from FCP to AE in one format without protecting the overbrights, work in 8-bit, render it out in a different format, pull it back into FCP, and transcode it to something else again, losing quality all along the way. Save your angst for your film school classes.
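To make that concrete, here is a minimal, hypothetical sketch in plain Python (no video libraries, made-up pixel values) of the two problems described above: clipping the overbrights and re-quantizing to 8 bits on every hand-off between apps.

```python
# Hypothetical illustration only: one pixel, round-tripped the way the post describes.

def to_8bit(v):
    """Quantize a float pixel value to an 8-bit code; anything over 1.0 clips."""
    return max(0, min(255, round(v * 255)))

def from_8bit(code):
    return code / 255.0

def grade(v):
    """Stand-in for whatever work the next app does (a mild gamma tweak)."""
    return v ** 0.9

overbright = 1.3          # a highlight sitting above 1.0
pixel = 0.4137            # an ordinary mid-tone

# One 8-bit export without protecting the overbrights and the highlight detail is gone for good.
print("overbright after one 8-bit export:", from_8bit(to_8bit(overbright)))   # -> 1.0

clean, lossy = pixel, pixel
for hop in range(5):                           # FCP -> AE -> FCP -> ... five hand-offs
    clean = grade(clean)                       # same work, but staying in float
    lossy = grade(from_8bit(to_8bit(lossy)))   # re-quantized to 8 bits before each hop
print("float pipeline :", round(clean, 5))
print("8-bit hand-offs:", round(lossy, 5))
```

The per-hop error looks tiny for one pixel, but it accumulates with every transcode and applies to every pixel in the frame, which is the generational loss being described.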
Do you really not know how to spell those director's names?
Outside of some indie films, docus and basic cable reality programming, AVID is the standard in Hollywood, regardless of your FCP fantasies.
I can't wait to see the new version, but it's going to take more than slobbering comments from a guy whose life and business revolve around FCP before I get excited.
My apologies to Coppola. http://www.apple.com/finalcutstudio/in-action/
But I gather that, with your expertise, Disney, Pixar, Industrial Light & Magic, NBC, etc., would be below your expectations. They are among hundreds of production houses that seem to have jobs for anybody with FCP expertise. Hell, even ILM Singapore was looking.
http://tbe.taleo.net/NA9/ats/careers...&cws=6&rid=312
http://www.simplyhired.com/a/jobs/li...y/l-california
http://www.simplyhired.com/a/jobs/li...ney/l-new+york
http://www.simplyhired.com/a/jobs/li...pro/l-new+york
http://www.simplyhired.com/a/jobs/li...new+york%2C+ny
Oh, and a little-known school, the Carnegie Mellon University College of Fine Arts…
Uh, when you say the exact same thing for everything, it ceases to have any meaning.
He doesn't though. And as was pointed out already, he didn't even say that.
It's called PIXELMATOR, so Apple doesn't have to.
It's available on the Mac App Store.
I love it!
Well, that program can compete with Photoshop Elements, but not with Photoshop CS5. That's an order of magnitude more sophisticated.
First, I am not a pro -- I use FCP as a hobbyist, for experimentation, home movies, etc.
I, too, think that node-based editing is opaque.
But then a post by @palegolas, in another thread, got me thinking that it need not be that way.
Here's my post.
Thanks for the considered answer.
I, too, find the Motion UI a bit "clunky" -- it just seems to take soooo many levels to accomplish something -- you do lose your place.
I dabbled a bit with QC but it quickly becomes too cluttered.
I briefly experimented with a node-based (FCP Color FX) system -- but found it less than intuitive.
Of all the points you made, the following hit home:
Can you spread your "images" out on a light table, and have each effect (filter, whatever) you create display the result rather than the underlying details of the node?
That, to me, would be very intuitive -- something like:
1) Tap an image to create a duplicate of the original
2) Select the new duplicate and specify whatever filter, effect, etc.
3) Deselect the duplicate and the result is shown
Repeat the above as many times as desired and one result node can be dropped on top of another to form a composite result.
Possible?
Here's his answer.
I didn't see this until a few minutes ago... it was posted after my bedtime.
What I get from his answer is that there may be a way to:
1) display each node as a partial result rather than the underlying processes used to achieve the result.
2) manipulate these results as images on a light table
There is still the business of wires connecting the nodes -- but I can think of several ways of handling that:
-- optionally display the wires or not
-- optionally replace the wires with numbered stubs
-- option-select a result or stub to see the wires
If that's doable, then a node-based system could be more intuitive than a stack like Motion uses.
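As an illustration of that "node as picture" idea, here is a rough, hypothetical sketch (plain Python, invented names, a string standing in for an image) in which every node caches the result it produces, so a light-table view can show the pictures and only reveal the wiring on request.

```python
# Speculative sketch of the idea discussed above, not any real app's API.

class ResultNode:
    def __init__(self, name, operation, inputs=()):
        self.name = name
        self.operation = operation          # function of the input results
        self.inputs = list(inputs)          # the hidden "wires"
        self._cache = None

    def result(self):
        """What the light table displays: the rendered output, not the wiring."""
        if self._cache is None:
            upstream = [node.result() for node in self.inputs]
            self._cache = self.operation(*upstream)
        return self._cache

    def wires(self):
        """Only shown on option-click: which nodes feed this one."""
        return [node.name for node in self.inputs]

# Tap an image -> duplicate -> apply a filter -> see the result:
source = ResultNode("clip", lambda: "clip")
blurred = ResultNode("blur", lambda img: f"blur({img})", [source])
graded = ResultNode("grade", lambda img: f"grade({img})", [blurred])
composite = ResultNode("over", lambda a, b: f"over({a}, {b})", [graded, source])

print(composite.result())   # over(grade(blur(clip)), clip)
print(composite.wires())    # ['grade', 'clip'] -- revealed only on demand
```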
There is a way to spread out all elements in a Motion project, with a simple key command... but I don't remember what it is. See the videos of Mark Spencer at http://www.applemotion.net/
Well, that program can compete with Photoshop Elements, but not with Photoshop CS5. That's an order of magnitude more sophisticated.
Yes, CS5 is pretty nice. Still discovering all the little things that have been changed and how they help my workflow. The 3D implementation is still clunky. They need to hire some good people for the 3D UI refinement... More like Motion or something simpler. The shadow catcher and model textures implementation are especially bad.
With the Canvas selected, simply press X, and all of the layers at the playhead location will fan out so you can see them. If you press SHIFT+X all of the layers in the entire project will fan out.
I was just training for Apple Pro certification in Motion 4. Brand new version coming out is good and bad news for me: I guess training goes back to square one when the new Final Cut Studio is released. Hopefully the new version of Motion won't freeze and crash so much...
The biggest change was that you never saw the red "RENDER NEEDED" line -- it just played. Very deep, but with a much easier learning curve.
Of course, that's one major thing they need to fix. Can't believe I forgot one of the biggest annoyances in FCP. The audio waveforms take so long to render too, and I don't get why. Surely the computer can fast-forward through hours' worth of audio in seconds and just mark the highs and lows to build a graph.
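That "mark the highs and lows" approach is roughly how waveform overviews are usually drawn: keep only the min and max sample per screen-pixel bucket. A hedged sketch, with fake audio data standing in for a real clip:

```python
# Illustrative only: synthetic samples instead of decoded audio.
import math

def waveform_overview(samples, pixels):
    """Return (min, max) pairs, one per horizontal pixel of the overview."""
    bucket = max(1, len(samples) // pixels)
    peaks = []
    for start in range(0, len(samples), bucket):
        chunk = samples[start:start + bucket]
        peaks.append((min(chunk), max(chunk)))
    return peaks

# Fake "hours of audio": a cheap 440 Hz tone with a slow wobble, 48 kHz for 10 seconds.
rate = 48000
samples = [math.sin(2 * math.pi * 440 * t / rate) * (0.5 + 0.5 * math.sin(t / rate))
           for t in range(rate * 10)]

overview = waveform_overview(samples, pixels=800)
print(len(overview), "columns, first column:", overview[0])
```

Since each column only needs one pass over its bucket, building the graph scales with the length of the audio, not with how long it takes to play it.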
Longer story short, she is now in the ICU with less fluid coming out than going in. I thought my wife and I would ask for positive results, PRAYER (her name is Olga), positive vibes, and to just PRAY SHE IS AROUND TO SEE ME AND MY WIFE, Sheri, have a few children and know we are doing great. Plus we go to Maine every summer, which we haven't done for years.
Yuck, old-people-fluid-talk but I'm sure everyone here wishes your mother well. I know that grandparents seeing their grandkids grow up is one of the best experiences they can have and I hope your mother has that privilege.
also pray I find some great deals on airline tickets/car, if need be.
Matthew 19:14: Bring all your bargains unto me.
If you use the last-minute flight sites you should be OK, and book as far in advance as you can.
If that's doable, then a node-based system could be more intuitive than a stack like Motion uses.
It is possible: a stack is essentially just a single branch of a node tree, so you just need a way to have multiple branches while also allowing them to interact.
After Effects does this in an annoying way using pre-comps and multiple comps, and they have a node visualizer. The idea with Shake/Nuke etc. is that you have an infinite workspace, you just throw things around as you please, and you can connect things in a lossless way so compatible filters combine before computing an image. AE does this transparently to some extent, and actually does some things better than the node editors, like layer transforms.
AE takes pre-comps way too far though - you can't even apply effects to a group of layers without making a separate comp. That, combined with no separation of x, y, z in earlier versions and no bezier animation curves, is head-smackingly stupid. At least Motion gets this stuff right.
All they need to do is to allow you to link properties and layers. I think they added some link mechanism in a later version but the thing about nodes is it actually is easy and intuitive when you need that flexibility as you literally just pull a cable and connect two things. That's the simplest behaviour for that process.
I agree, though, that layers are easier to grasp most of the time, and it's easier to control their timeline. Nodes are quite easy to disconnect from the timeline, and you end up screwing things up where you didn't want to because they have complex relationships. Nodes have stability issues too when you create cyclical dependencies, which get computers confused: A depends on B, but B depends on C, which depends on A, and now C wants to depend on B but... ah, forget it, crash. Five-minute autosaves are a lifesaver.
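For what it's worth, the A-B-C loop is something an app could refuse up front rather than crash on. A small sketch (made-up node names, not any real compositor's code) of checking a new connection before accepting it:

```python
# Hypothetical dependency check: graph maps each node to the nodes it depends on.

def depends_on(graph, node, target, seen=None):
    """True if `node` already depends on `target`, directly or transitively."""
    if seen is None:
        seen = set()
    if node == target:
        return True
    if node in seen:                 # guard against graphs that are already cyclic
        return False
    seen.add(node)
    return any(depends_on(graph, dep, target, seen) for dep in graph.get(node, []))

def connect(graph, child, parent):
    """Make `child` depend on `parent`, unless that would create a cycle."""
    if depends_on(graph, parent, child):
        print(f"refused: {child} -> {parent} would create a cycle")
        return
    graph.setdefault(child, []).append(parent)

graph = {"A": ["B"], "B": ["C"]}     # A depends on B, B depends on C
connect(graph, "C", "A")             # refused: would close the A -> B -> C -> A loop
connect(graph, "C", "D")             # fine
print(graph)
```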
The solution to multiple root branches is easy, as you just allow multiple scenes/comps, or even the ability to keep adding root nodes. The complex part comes when you try to connect a child node of one branch to a child node of another, or make an entire scene a child of another scene.
The child-child link can be done by using a kind of ghost comp. It could exist separately in the comp but you just set its parent inputs manually and it would display as an icon in the graph to let you select it easily.
The majority of a node tree can condense into a layer-based structure, especially with Motion's grouping and having independent groups can take care of the rest. The question is how complex that process becomes on a complex tree with multiple child-child links like when a child-child branch links to another and then the 'ghost comps' become nested themselves.
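A toy sketch of that condensing idea (invented graph and names, far simpler than a real comp): walk down from an output node and treat every single-input run as a layer stack, falling back to groups or a graph view only where the tree actually branches.

```python
# Speculative illustration of collapsing a node tree into layers.

def chain_as_stack(graph, output):
    """Follow single-input links down from `output`; stop where the tree branches."""
    stack = [output]
    node = output
    while len(graph.get(node, [])) == 1:
        node = graph[node][0]
        stack.append(node)
    return stack, graph.get(node, [])     # the linear run, plus where it branches

# grade <- blur <- merge <- (fg, bg): the top of this tree is really just a stack.
graph = {"grade": ["blur"], "blur": ["merge"], "merge": ["fg", "bg"]}
stack, branches = chain_as_stack(graph, "grade")
print("layer stack:", stack)             # ['grade', 'blur', 'merge']
print("branches under it:", branches)    # ['fg', 'bg'] -> needs groups or a graph view
```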
Apple already has a node-based compositor anyway in their Quartz Composer and I think you can insert these into Motion but again you can see the disconnect from the timeline.
In the end, they are trying to visualise 2D non-linear spatial connections + 1D temporal constraints = 3D in a 2D interface. A single branch is essentially 1D as you can collapse the hierarchy at every level. As soon as you join two child nodes, you can't.
I don't think a 3D interface (even isometric) is the solution but it's the only way to display the above in full without using these separated comps. You also have to consider that GPUs are data parallel not task parallel so they can't process multiple branches at once. Not that it's a huge deal as they are fast but it means caching results more in memory - this is fine in Quartz Composer but not so much in Motion.
No matter what they do, they either over-complicate the UI for people who just use layers or dumb down the UI for people who need the flexibility of nodes. Apple have some of the best UI designers so if anyone can figure out a compromise, it's probably them but they will side on simplicity primarily and that's not good enough for the Shake users who went to Nuke instead.
Wow! Thanks for the great post. I think I understand everything except pre-comps -- I have never used AE.
All the talk about node-based editing piqued my interest. I had an old trial version of Shake -- but it would not open... sigh.
As a last resort I found a torrent -- haven't gone that route in years -- but seemed valid as there was no other way to try.
I have been watching some tutorials and playing around with Shake 4.1.
It looks pretty good.
It is superior to Motion for some things and inferior for others -- to be expected, I guess.
I previously bought Silhouette, a fairly expensive plugin for FCP, to do complex rotos.
Anyway, after a few more Shake tutorials, I think I'll try using it for some of the things I found difficult to do with FCP, Silhouette and Motion.
I suspect that I will like Shake.
In some of the tutorials they show how Shake can optionally display the visual result of a node: the node/image -- I think that's part of what's needed to be able to simplify and visualize what's going on.
As was discussed earlier, this is similar to the power and elegant simplicity of looking at images on a light table.
Where Shake fails at this, IMO, is in showing all the noodles, all the time -- it becomes a snake pit, or can of worms, that clutters the picture.
QC, on the other hand, goes too far -- it encapsulates complexity in a summary node with no patch cords (good) but gives no visual clue as to what the summary node does.
What I think would be superior would be just to show the node/images juxtaposed in some natural way -- left to right, top to bottom, whatever.
Then the user could Option-click-hold (or two-finger-press-hold) to expose the noodles connected to that node/image, and their connections to other node/images -- to expose the interrelationships of the node/images. When selected this way, the underlying parameters for the node/image could be displayed.
Here are some thoughts that come to mind:
1) Apple bought the company that made Shake for some reason!
2) Apple discontinued Shake for some reason.
3) Apparently, Apple still owns the Shake technology.
4) Apple may still employ the creative people who created Shake.
5) Assimilating creative people from different, competing technologies into a single product team can be a real challenge -- a cat fight, with some really big cats.
Maybe, the stars are aligned for time, technology, creatives, and UIs!
It could be that Apple has found a way to integrate the use of stack-based and node-based editing -- in such a way that the creative user could use them interchangeably.
Hopefully, this could also be done in such a way as to hide (or modularize) complexity while, at the same time, not reducing any capabilities.
A product like this could span the range from newbie to prosumer to pro.
It could be:
1) easy to learn (the basics)
2) easy to progress to more capable hardware, software modules, and training/learning tools
3) easy to become expert.
Maybe it's just dreaming -- but you appear to have similar thoughts.
I think I understand everything except pre-comps
I don't think Adobe understand pre-comps either to be fair.
They make you separate layers into a pre-comp to apply effects to a group and then you get the choice to continuously rasterize them so you're like, why did I have to separate them in the first place then? If they are continuously rasterized then by definition they aren't pre-composed. It's just extra work for no benefit.
It is superior to Motion for some things and inferior for others -- to be expected, I guess.
The Multiplane node is essentially Motion's predecessor. That's really all Apple added to Shake beyond Quicktime support. It's very limited but you can see the hardware and software switch in that node and the performance difference, which probably led them to the decision to go GPU-only. It supports point-clouds and 3D objects too.
This might have been the decision that split the Shake team. I think it's the right one, but at the time premature. When you see the difference in speed between Motion and AE at processing, it's pretty incredible. Until you start adding things that choke Motion up, of course. OpenCL is really the ideal development for this type of app and I hope one day the benefits will be seen here.
Where Shake fails at this, IMO, is in showing all the noodles, all the time -- it becomes a snake pit, or can of worms, that clutters the picture.
You can group nodes together, which simplifies things but opening/closing nodes has an odd behaviour as it doesn't prevent overlapping.
I actually find the spaghetti view shows more about what's going on. When you have just a scrollable list of layers that you have to keep collapsing and opening, it's hard to see how the elements affect each other and something as simple as masking a layer becomes a chore.
1) Apple bought the company that made Shake for some reason!
2) Apple discontinued Shake for some reason.
I think they bought it for the same reason as they bought SoundJam/iTunes, FCP, Color, Logic etc - they are powerful apps that they will use themselves as well as their partner companies and good enough to be industry standard. The problem with Shake is the UI is all OpenGL so it's very hard to rework it all, the scripting language and plugin UI is old and there's probably a bit of cross-platform code in there.
I definitely think Shake would have had to be rewritten from the ground up with a GPU-compatible language like GLSL or OpenCL so it made sense to start over. Trouble is they decided to make a motion graphics app which isn't the same thing. You can't for example take passes out of a 3D app and combine them in the way the equations work. If you check out the manual, you can see an example of Abe's Exoddus - some of the best game CGI that still rivals today's FMV made back in 1998. You can't do that with Motion. You should also see a 'monkey boy' reference in the manual too, which is Ballmer's happy dance video.
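For readers who haven't used a pass-based workflow, here is a hedged, single-pixel illustration (invented values, deliberately simplified equations) of what combining passes "the way the equations work" means: lighting passes multiply and add back into the beauty, and elements stack with the premultiplied "over" operator.

```python
# Illustrative compositing math only; real renderers expose more passes than this.

# Rebuild a beauty pixel from its passes (a common, simplified decomposition):
albedo, diffuse_light, specular, emission = 0.8, 0.6, 0.15, 0.05
beauty = albedo * diffuse_light + specular + emission
print("rebuilt beauty:", beauty)          # tweak any single pass and re-combine

# Premultiplied "over": result = fg + bg * (1 - fg_alpha)
def over(fg_rgb, fg_a, bg_rgb, bg_a):
    rgb = tuple(f + b * (1 - fg_a) for f, b in zip(fg_rgb, bg_rgb))
    alpha = fg_a + bg_a * (1 - fg_a)
    return rgb, alpha

fg = ((0.2, 0.3, 0.1), 0.5)       # a half-transparent element, premultiplied
bg = ((0.53, 0.42, 0.30), 1.0)    # the rebuilt background
print(over(fg[0], fg[1], bg[0], bg[1]))
```

A compositor built around these equations can re-balance a shot pass by pass; a motion graphics tool that only stacks finished images can't, which is the distinction being made above.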
3) Apparently, Apple still owns the Shake technology.
4) Apple may still employ the creative people who created Shake.
Some of the Shake team left to work at The Foundry on Nuke, which looks exactly like Shake now but they do own the Shake code and have reworked some of it into Motion. Not sure if it has Keylight yet though as that may be owned by The Foundry.
Maybe it's just dreaming -- but you appear to have similar thoughts.
Definitely, I think the tool needs to be simple for new users to pick it up with self-training. I think all apps should be. But they should never compromise on power/flexibility to get there. I think if simplicity has to go initially then it's fine, it just means people have to do a bit more work but at no point will they drop the app for something else because it doesn't do the job.
I think adding in multi-touch gestures will actually help somewhat. Navigating through certain UIs can be done so much quicker with touch input. Timeline navigation can be made so much better with it. Pinch-zoom and pan on a node view too. It's been a while since a major overhaul happened, so it'll be good to see what's happened.
I'd quite like to see Motion merged with FCP. I think it's a bit redundant having them as separate apps. The main timeline would essentially have comps and you'd click on one and it would load the layer view in a separate pane. This way you get the multi-comp function of AE but even better as you are in an NLE so no intermediates. All real-time as it's GPU accelerated but can be pre-rendered for complex effects.
Given that a Quartz Composer block could be added, that could take care of some node functionality.
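Purely as a speculative sketch of that merged-timeline idea (all names invented, nothing here reflects any announced product), the data model might be a timeline of clips where some clips are whole comps that open their own layer stack instead of bouncing to an intermediate file.

```python
# Hypothetical data model for an NLE timeline whose clips can be comps.

class Clip:
    def __init__(self, name, duration):
        self.name, self.duration = name, duration

class Comp(Clip):
    def __init__(self, name, duration, layers):
        super().__init__(name, duration)
        self.layers = layers                 # the Motion-style layer stack inside

timeline = [
    Clip("interview A", 12.0),
    Comp("lower third", 4.0, layers=["background plate", "title text", "glow"]),
    Clip("b-roll", 6.0),
]

for item in timeline:
    if isinstance(item, Comp):
        print(f"{item.name}: open layer pane -> {item.layers}")
    else:
        print(f"{item.name}: plain clip, {item.duration}s")
```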
I'm amazed the tech community are so naive in business matters.
It's not just the tech community. How many have actually run their own businesses?
Also there is a bit of selfishness and greed - who doesn't want something for nothing? What a deal!