Originally posted by kim kap sol: "Trust me...I don't know what your big brother and little brother are about either."
I only have a slight understanding of these concepts. I'm still trying to understand a couple of Hiro's explanations.
OK how about an example diagram?
Big-O describes an upper bound on performance as n approaches infinity: f(n) is O(g(n)) when f grows no faster than g, up to a constant factor. Little-o is the strict version, where the bound g grows strictly faster than f.
Big-Omega describes a lower bound as n approaches infinity: f(n) is Omega(g(n)) when f grows at least as fast as g, up to a constant factor. Little-omega is the strict version, where the bound g grows strictly slower than f.
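A rough numeric sketch of those definitions (not a proof, and the function f here is just a made-up example): look at the ratio f(n)/g(n) as n grows. If it settles near a constant, g is a tight bound (both O and Omega); if it tends to 0, f is o(g); if it blows up, f is omega(g).

```python
# Example function: f(n) = 2n^2 + 3n.
# Ratio f(n)/g(n) as n grows tells us how the bound g relates to f:
#   -> constant: g is a tight bound (f is Theta(g), hence O(g) and Omega(g))
#   -> 0:        g grows strictly faster (f is o(g))
#   -> infinity: g grows strictly slower (f is omega(g))

def f(n):
    return 2 * n**2 + 3 * n

candidates = [("n^2", lambda n: n**2),
              ("n^3", lambda n: n**3),
              ("n",   lambda n: n)]

for label, g in candidates:
    ratios = [f(n) / g(n) for n in (10, 1000, 100000)]
    print(label, ["%.5f" % r for r in ratios])
```

Running it shows the n^2 ratios flattening out near 2, the n^3 ratios shrinking toward 0, and the n ratios growing without bound.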
The Big-O and Big-Omega bounds may cross for small values of n, because constant factors dominate there; the order of an algorithm only describes how it behaves as n gets large.
When Big-O and Big-Omega have exactly the same shape, differing only by constant coefficients, we drop the coefficients and call that big-Theta [ O( c1*f(n) ) && Omega( c2*f(n) ) then Theta( f(n) ) ].