Originally Posted by caliminius
Originally Posted by Dick Applebaum
Think of the social possibilities:
You and your friends are watching the same TV show and making extemporaneous comments... But you're in different places (rooms, houses, towns)...
Are you talking about live chat which you could accomplish today with the TV and phone you already own?
Or obnoxious text that pops up on the video, which some shows already do via twitter and social media sites? (though it's obviously not restricted to people you know)
I'm not exactly sure why either one of those possibilities should excite me.
This thread is about the small screen, the portable screen... the iPad.
You and your friends could stream the same video (movie, TV show, music video, live event, Vimeo, YouTube, etc.). Each person could display the video full screen or partial screen. The social interaction could be by voice, text, or even voice-to-text à la Siri.
For a friend viewing the video full screen, the text would appear in a semi-transparent HUD overlaying the video; alternatively, the video sound could be muted and the voice chat played over it.
I suspect that most friends would opt for something like a video chat room where the video plays in one area and the chat messages scroll in another *. There are all kinds of variations possible -- multiple mini windows, one for each friend...
* A Windows Surface Tablet's entire screen can be displayed in less than 1/3 of the Retina iPad's display
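Just to make the idea concrete, here's a toy sketch of how shared viewing might stay in sync -- a host broadcasts its playhead and each friend seeks only when drift gets noticeable. The `Viewer` class and the 0.5-second tolerance are my own illustrative assumptions, not any real Apple API:

```python
# Hypothetical sketch of keeping a shared video in sync among friends.
# Nothing here is a real streaming API; the drift/seek logic is
# illustrative only.

class Viewer:
    """One friend's player; tracks its playhead in seconds."""
    def __init__(self, name, playhead=0.0):
        self.name = name
        self.playhead = playhead

    def resync(self, host_playhead, tolerance=0.5):
        # Seek only when drift exceeds the tolerance, so small
        # network jitter doesn't cause constant re-seeking.
        drift = abs(self.playhead - host_playhead)
        if drift > tolerance:
            self.playhead = host_playhead
        return drift

host = Viewer("host", playhead=120.0)
friends = [Viewer("Ana", 119.8), Viewer("Ben", 123.1)]

for f in friends:
    drift = f.resync(host.playhead)
    print(f"{f.name}: drift was {drift:.1f}s, now at {f.playhead:.1f}s")
```

Ana is within tolerance and keeps playing smoothly; Ben has drifted past it and snaps back to the host's position.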
There are some technology enhancements coming that will make this possible such as:
- more powerful iPad hardware -- RAM, CPU, GPU, video encoder/decoder specialty chips
- faster WiFi
- faster cell
- better streaming compression that uses lower bandwidth **
** In a recent writeup of Apple patents, Jack Purcher points to a patent application near the end of the article:
Below is an overview from the patent application. It's a little hard to follow, but it appears to be directly related to preserving image quality and reducing bandwidth when transmitting video. This certainly would apply to video streaming and video chat -- but it could also be used in any collaboration involving video, such as video editing.
 In video coder/decoder systems, a video encoder may code a source video sequence into a coded representation that has a smaller bit rate than does the source video and, thereby may achieve data compression. The encoder may code processed video data according to any of a variety of different coding techniques to achieve bandwidth compression. One common technique for data compression uses predictive coding techniques (e.g., temporal/motion predictive encoding). For example, some frames in a video stream may be coded independently (I-frames) and some other frames (e.g., P-frames or B-frames) may be coded using other frames as reference frames. P-frames may be coded with reference to a single previously coded frame and B-frames may be coded with reference to a pair of previously-coded frames, typically a frame that occurs prior to the B-frame in display order and another frame that occurs subsequently to the B-frame in display order. The resulting compressed sequence (bitstream) may be transmitted to a decoder via a channel. To recover the video data, the bitstream may be decompressed at the decoder, by inverting the coding processes performed by the encoder, yielding a received decoded video sequence. In some circumstances, the decoder may acknowledge received frames and report lost frames.
 Modern coder/decoder systems often operate in processing environments in which the resources available for coding/decoding operations vary dynamically. Modern communications networks provide variable bandwidth channels to connect an encoder to a decoder. Further, processing resources available at an encoder or a decoder may be constrained by hardware limitations or power consumption objectives that limit the complexity of analytical operations that can be performed for coding or decoding operations. Accordingly, many modern coder/decoder systems employ a variety of techniques to constrain bandwidth consumption and/or conserve processing resources.
 Video Resolution Adaptation ("VRA") is one such technique used in video coding/decoding systems to manage bandwidth and/or resource consumption within the coder/decoder system. VRA is a technique that alters the resolution of images prior to coding for bandwidth conservation. For example, a camera may output video data to an encoder at a predetermined resolution (say, 960×720 pixels) but an encoder may reduce this resolution to a lower resolution (e.g., 320×240 pixels) to meet a performance constraint. Reducing the resolution of the image effectively reduces its size for coding and, therefore, contributes to reduced bandwidth when the resulting image is coded. Similarly, a reduced resolution image also is less complex to code than a full resolution image.
 When a decoder receives and decodes such an image, it will generate a recovered image with reduced resolution. If the image is rendered on a display device and expanded to fit a larger display area than the reduced size (e.g., 320×240 pixels to 960×640 pixels), it will appear blurry and will be perceived as having lower quality. Alternatively, it might be displayed at a reduced size -- a 320×240 pixel window on a 960×640 pixel display -- but it also will be perceived as having low quality even though it may appear relatively sharper than the expanded version.
 Accordingly, there is a need in the art for a video coder/decoder system that takes advantage of the bandwidth and resource conservation that VRA techniques can provide while still providing high image quality. There is a need in the art for a video coder/decoder system that allows images to be coded as low-resolution images and displayed at a decoder as if they were high-resolution images.
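The bandwidth side of the VRA argument is simple arithmetic, and it's worth seeing why the savings are so dramatic. The sketch below uses the exact resolutions from the patent text; the assumption that coded bit rate scales roughly with pixel count at a fixed quality level is mine, not the patent's -- real codecs won't scale perfectly linearly:

```python
# Toy illustration of Video Resolution Adaptation (VRA) as described
# in the patent excerpt above. Assumption (mine, not the patent's):
# coded bit rate scales roughly with pixel count at fixed quality.

def downscale_savings(src, dst):
    """Fraction of bandwidth saved by coding at dst resolution
    instead of src, under the pixels-to-bits assumption."""
    src_pixels = src[0] * src[1]
    dst_pixels = dst[0] * dst[1]
    return 1.0 - dst_pixels / src_pixels

# The example resolutions from the patent text:
saving = downscale_savings((960, 720), (320, 240))
print(f"Coding at 320x240 instead of 960x720 saves ~{saving:.0%} of the bits")
```

That's roughly a nine-to-one reduction in pixels, which is why the patent then worries about the blurriness of upscaling the recovered image back to display size.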
Finally, I am 73 (next Wed) and have 3 teenage grandkids... all have had their own iPads for 2 years, and the oldest has an iPhone 4 (they'll all have iPhones after the September announcement). This generation uses technology in totally different ways than mine, or their mother's, generation does. They have never known a world without television, personal computers, and cell phones -- and the pre-post-PC world, before the mobile phone and iPad, is to them a fading memory.
One of the things that constantly amazes me is that they can simultaneously watch the HDTV, do homework, and do something on their iPads while talking to or texting friends. Sometimes they use one of the "truck" computers in the home -- but mostly it's iPads and cell phones.
My point... while the capability may not excite you or me, the socially active younger generations will incorporate something like this into their lives without giving it a second thought.
Edit: Finally 2: One of the reasons we're getting an iPhone 4 (or better) for each grandkid is that it eliminates the need for an extra camera or video cam.
Here's something I recently learned: each iPhone (and some cameras) has a unique ID. When you ingest photos or video into Final Cut Pro X (retail $299), the ID (as well as the accurate date and time) accompanies the media as metadata. So, if you and a bunch of family and friends all attend an event (rave, soccer game, picnic, amusement park, etc.) and take pictures and/or videos -- you can easily combine them into a multicam video ***.
Then, with Final Cut Pro X, it is ridiculously easy (automatic) to ingest the media, assemble an angle for each camera, and synchronize those angles based on time. (Using other methods could take days or weeks and involve preplanning, special equipment, etc.)
Once you have all the [camera] angles assembled and synchronized it is fun and easy to view them and switch (cut) between them -- so that the video includes shots from whatever camera has the most interesting content at any given time.
My 15-year-old granddaughter is learning how to do this now -- with an iPhone or iPad as a camera, it is trivial.
***A multicam video is like a music video where several cameras are used -- each focused on a different player. After the shooting, media is selected from various cameras, at various points in time, to create a final video.
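The core of the multicam idea -- one angle per camera, clips lined up on a shared timeline by timestamp -- can be sketched in a few lines. To be clear, this is only an illustration of the concept; it is not Final Cut Pro X's actual algorithm, and the camera IDs and timestamps below are made up:

```python
# Sketch of the multicam concept: group clips by camera ID, then line
# them up on a common timeline using their start timestamps. An
# illustration only -- not Final Cut Pro X's actual algorithm.

from collections import defaultdict

clips = [
    # (camera_id, start_seconds_into_event, duration_seconds)
    ("iphone-ana", 0.0, 30.0),
    ("iphone-ben", 5.0, 40.0),
    ("iphone-ana", 35.0, 20.0),
]

# One "angle" per camera, with clips sorted by time within each angle.
angles = defaultdict(list)
for cam, start, dur in clips:
    angles[cam].append((start, dur))
for cam in angles:
    angles[cam].sort()

for cam, timeline in sorted(angles.items()):
    spans = ", ".join(f"{s:.0f}-{s + d:.0f}s" for s, d in timeline)
    print(f"{cam}: {spans}")
```

Because every clip carries its camera ID and timestamp as metadata, the grouping and alignment need no preplanning -- which is exactly why the FCP X workflow feels automatic.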
Edited by Dick Applebaum - 8/23/12 at 9:57am