Oh crap. Look at this massive can of worms I opened! I hope I can explain it better for the likes of Tenobell and anantksundaram. Unfortunately, I fear that this is unlikely, since the issues are highly complex and to explain it all from the basics upwards requires a book, not a forum post.
First let me try and give a simple overview of the video playback process, and then I'll review the exchanges I had with Tenobell.
Video tracks have several parameters; four important ones are:
- Resolution
- Frame rate
- Bit rate
- Codec
I'm going to have to assume that you understand the basics of the first three, as I don't have time to go over them in depth. Let's talk about codecs.
Codecs are all about reducing the amount of space required to store (or bandwidth required to transmit) a video. Uncompressed video can be quite huge. Take a 24 bits per pixel (that's 8 bits each for red, green and blue), 1280 x 720 resolution video at 24 frames per second as an example. How many bits per second are required to represent this?
Each frame has 1280 x 720 = 921,600 pixels.
Each pixel has 24 bits of colour information = 22,118,400 bits per frame
There are 24 frames per second = 530,841,600 bits per second, i.e. approximately 531 Mbps, or over 13 times the maximum data rate allowed on a Blu-ray disc, to put it into some sort of context. 100 minutes (a common length for a movie) of such uncompressed data would require 371 GiB of storage!
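If you want to sanity-check those numbers, here's the same arithmetic as a few lines of Python (just my own back-of-the-envelope script, nothing more):

```python
# Uncompressed video maths for the example above.
width, height = 1280, 720
bits_per_pixel = 24        # 8 bits each for red, green and blue
fps = 24

bits_per_frame = width * height * bits_per_pixel   # 22,118,400
bits_per_second = bits_per_frame * fps             # 530,841,600 (~531 Mbps)

movie_seconds = 100 * 60
storage_gib = bits_per_second * movie_seconds / 8 / 2**30
print(f"{bits_per_second / 1e6:.0f} Mbps, {storage_gib:.0f} GiB per 100 min")
# -> 531 Mbps, 371 GiB
```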
So, for practical purposes, we really need to radically reduce the amount of space required to store this file. Using lossless compression - whereby redundant data is removed for storage, but can be re-calculated from what's left such that the original file can be re-created perfectly, bit for bit - we can roughly halve the amount of data required. That isn't nearly enough; we need a radical reduction, and that's what lossy codecs such as MPEG-2 and H.264 are for. The AppleTV will support H.264 at a maximum of 5 Mbps. Hopefully you can appreciate that achieving a compression ratio of over 100:1 (531 ÷ 5 ≈ 106) while still maintaining a high-quality picture is not at all trivial. The codec specifies how information should be removed from the original file, giving us a resultant H.264 stream, and it also specifies how that data should then be decompressed to give us the picture information displayed on screen.
Now, here's an important bit: both compression and decompression are extremely complex. Even though the codec standards specify how a stream should be structured, how such compressed streams can be generated from uncompressed ones, and how compressed streams should be decoded, there is still a lot of "wiggle-room" in implementations. On the decode side especially, there's a whole host of post-decompression filtering that you can perform - but don't have to - to try and improve the final picture quality. In other words, not all H.264 implementations are identical. On the encode side they'll all give you H.264-compliant streams, and on the decode side they'll all decode H.264-compliant streams, but they won't all give you the same picture quality.
Next up is interlacing: interlacing is the process of dividing the picture up into odd and even lines. An interlacing camera captures the even lines, then captures the odd lines a fraction of a second later - so the two halves of a frame represent slightly different moments in time. The discussion of why you might want to do this is not important here. Years ago, cameras were interlacing and TV sets were interlacing, so capture and playback methods matched up to one another.
Most modern forms of capturing and playing back content don't use interlacing.
If you've got some interlaced content and need to display it on an LCD, plasma or DLP projector, you need to de-interlace it first. Simple, you say: just take one set of even lines and the immediately following set of odd lines, and display them at the same time. Yes, you can do that, but it'll look rubbish. Why? Because the even and odd lines weren't captured at the same time, so if the subject and camera were moving relative to one another, there will be motion-related problems. You can, however, use loads of wicked-clever maths to overcome this problem (motion-compensated de-interlacing) and deliver a pretty damn good picture.
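To make the difference concrete, here's a minimal sketch of the two dumbest approaches - my own illustration, assuming greyscale fields as 2-D numpy arrays, and nowhere near what a real motion-compensated de-interlacer does:

```python
import numpy as np

def weave(even_field, odd_field):
    """Interleave two fields captured at different instants.
    Combing artefacts appear wherever there was motion between them."""
    frame = np.empty((even_field.shape[0] + odd_field.shape[0],
                      even_field.shape[1]), dtype=even_field.dtype)
    frame[0::2] = even_field   # lines 0, 2, 4, ...
    frame[1::2] = odd_field    # lines 1, 3, 5, ...
    return frame

def bob(field):
    """Build a full frame from a single field by interpolating the
    missing lines. No combing, but half the vertical detail is guessed."""
    frame = np.repeat(field, 2, axis=0).astype(np.float32)
    frame[1:-1:2] = (frame[0:-2:2] + frame[2::2]) / 2  # average neighbours
    return frame.astype(field.dtype)
```

A motion-compensated de-interlacer effectively chooses between these (and smarter) strategies per region: weave where nothing moved, interpolation where something did.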
Next up, scaling: imagine you've got a 640 x 360 pixel iTunes movie file, and your TV has 1360 x 768 pixels (a very common resolution for LCD TVs). How do you get your video to fill the screen completely without having to crop it? On the horizontal axis you've got 2.125 screen pixels for each pixel in the file, and on the vertical axis you've got 2.133 screen pixels for each pixel in the file. Consider a single pixel in the very top left of the source file; let's say it's coloured red. Which pixels on the screen should be coloured red? What's .125 of a pixel? .133 of a pixel? Again, this is where some clever maths is required to achieve optimal picture quality.
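The standard answer to "what's .125 of a pixel?" is to blend neighbouring source pixels. Here's a minimal bilinear-scaling sketch - again my own illustration, for a single greyscale plane; real scalers use fancier filters (bicubic, Lanczos and the like):

```python
import numpy as np

def bilinear_scale(src, out_h, out_w):
    """For each output pixel, map back to fractional source coordinates
    and blend the four nearest source pixels."""
    in_h, in_w = src.shape
    ys = np.linspace(0, in_h - 1, out_h)       # fractional source rows
    xs = np.linspace(0, in_w - 1, out_w)       # fractional source columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                    # vertical blend weights
    wx = (xs - x0)[None, :]                    # horizontal blend weights
    top = src[y0][:, x0] * (1 - wx) + src[y0][:, x1] * wx
    bottom = src[y1][:, x0] * (1 - wx) + src[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy

# e.g. scale a 360x640 plane up to 768x1360:
# big = bilinear_scale(frame.astype(np.float32), 768, 1360)
```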
Now, let's review the process of playing back a file that contains an H.264 compressed video track - a stream of bits in an H.264-compliant structure that represents moving pictures (there's a toy sketch of the whole pipeline after the list):
- Decode H.264
- De-interlace (if the source is interlaced)
- Scale (if source resolution and display resolution don't match)
- Send data to screen
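Chained together, the stages look something like this - purely illustrative, with hypothetical helper names (the bob/bilinear sketches above could stand in for stages two and three):

```python
def play(compressed_stream, display_resolution):
    for frame in decode_h264(compressed_stream):      # 1. decode H.264
        if frame.interlaced:
            frame = deinterlace(frame)                # 2. de-interlace
        if frame.resolution != display_resolution:
            frame = scale(frame, display_resolution)  # 3. scale
        send_to_screen(frame)                         # 4. send data to screen
```

Each of those stages can be done badly or done well, in hardware or in software, and that's exactly where the AppleTV and the Mac Mini differ.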
The AppleTV can do all these things, and each stage can contribute to the final picture quality achieved. The GeForce Go 7300 can do all of them in hardware, and it does them very well.
With the Mac Mini, the GMA950 can do the de-interlacing and scaling in hardware, but it does them very poorly. You can do de-interlacing and scaling in software running on the Core 2 Duo processor, but QuickTime's de-interlacing and scaling isn't that good either.
So, both the AppleTV and the Mac Mini have the potential to deliver top-notch picture quality, but it's much easier to achieve with the AppleTV: the hardware already has all the functionality built in, so it just needs relatively simple software (relative, that is, to a complex software implementation of high-quality motion-compensated de-interlacing and scaling) to support those features.
For MPEG-4 content played on the AppleTV, the GeForce Go's features are used, delivering superior picture quality relative to a Mac Mini. When using QuickTime plugins on the AppleTV, however, all decoding, de-interlacing and scaling is performed on the CPU, so it will deliver performance equal to a Mac Mini's. There is, though, the potential for Apple to deliver MPEG-2 and other codec support using some or all (depending on the codec in question) of the GeForce Go's advanced features.
It is worth noting that most content won't be interlaced, so de-interlacing performance might not matter all that much - I'm sure iTunes Store files aren't interlaced, for example. It is also worth noting that interlaced content can be de-interlaced before being encoded. For example, a cable TV transmission may be interlaced; if you record it with a cable box, assuming the cable box doesn't do anything clever, you'll have an interlaced file. If you want to convert this file to H.264, you can de-interlace it as part of the process, before it's compressed. The quality of the algorithm used to de-interlace will affect the final picture quality of the file when it's played back; any de-interlacer in the playback hardware won't be needed and will therefore have no bearing on the final picture quality (though the H.264 decoder and, if required, the scaler implementation still will).
Having said all that, Tenobell, here is the sequence of events in this thread from my point of view, with further expansion of each of the points I made:
Kip Kap Sol said people hacking the AppleTV might as well use a Mac Mini instead (i.e. connect a Mac Mini to their TV, not an AppleTV).
I suggested that the AppleTV has superior video output quality to the Mac Mini. Whilst the Mac Mini has many advantages over an AppleTV in terms of range of capabilities, the AppleTV has superior video-playback hardware and therefore the potential for higher video output quality.
Then you said something along the lines of decent streaming performance and decent video output quality being mutually exclusive. I replied with a post pointing out that this is nonsense: whether something is being streamed or read from the HDD has no bearing on the quality of the codec decoder, de-interlacer and scaler being used.
As long as the average bit-rate of the video being streamed is below the average throughput of the network over which it is being streamed, and the receiving device has a big enough buffer, there is no problem. Since the AppleTV will only go up to 720p, has 802.11n, and has a 40 GB HDD (i.e. easily enough room for, say, a 20-second buffer), video bit-rate is not a problem in the context of streaming.
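To put numbers on that (illustrative figures only, using the 5 Mbps H.264 maximum mentioned earlier):

```python
# Back-of-the-envelope buffer maths.
stream_mbps = 5.0                # AppleTV's stated H.264 maximum
buffer_seconds = 20
buffer_mb = stream_mbps * buffer_seconds / 8   # megabits -> megabytes
print(f"{buffer_mb:.1f} MB")     # 12.5 MB - trivial next to a 40 GB HDD
```

As long as the network sustains an average of 5 Mbps or more - well within 802.11n's capabilities - the buffer never drains and playback never stutters.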
Then you made a post suggesting that if a Mac Mini is streaming to an AppleTV, the limiting factor will be the Mac Mini. In addition, you suggested that since the Mac Mini has a powerful Core 2 Duo processor, it doesn't matter that it has a GMA950 with rubbish video features.
I replied that your first point makes no sense - the video output quality of a Mac Mini doesn't affect the video output quality of an AppleTV, because the Mac Mini is sending a compressed video stream to the AppleTV without modifying it in any way; it is the AppleTV that decodes the stream and displays the video. Your second point could make sense, but unfortunately Apple don't do, in software, all the things the GMA950 doesn't do in hardware (and the GeForce Go 7300 does do in hardware) that would be needed to deliver maximum-quality pictures.