Apple unveils redesigned, thinner iPhone 4 with two cameras


Comments

  • Reply 341 of 507
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by success View Post


    Meaning 'Google like' conversation? That's BS. Many people have been requesting this feature for ages. The reason it's needed is that many people use POP accounts. When you first set up a POP account on the iPhone, it imports, say, all the messages that account has, which could easily be 1,000 archived messages. So not only do you have to wait while the iPhone imports those 1,000 messages, but after you're done those messages sit on the iPhone. They aren't needed on the iPhone because you can view them on the desktop/webmail. So many of us manually delete those POP emails because we're just interested in the new messages that come in after we set up the account. Right now you have to go in and manually delete 1,000 messages. There is no delete-all button. There is a delete-all button for the messages in the trash box but...



    I just know what they talked about in the address given today. Don't blame me.
  • Reply 342 of 507
    bowser Posts: 89 member
    Quote:
    Originally Posted by twoslick View Post


    For someone who claims to teach this stuff, it honestly saddens me that you've tried to equate the cell density of the retina with the density of a display device held 10 to 12 inches away. Your nitpicking assumes you smashed the display up against your retina (of course, then you couldn't actually see the whole display). Your vision really does lose "resolution" the farther away an object is. Otherwise, we'd all be able to see the stripes on the American flag on the Moon.



    Maybe you'd like to source some research that says the retina can pick out detail higher than 326 ppi at 10 inches away before making nonsensical arguments.



    You know, I'm sorry... I forgot the old internet forum posting rule warning that when you mention you have an advanced education, it will always provoke a comment from a jealous, insecure hater trying to feel better about himself by insulting and criticizing those who make the post.



    This is all out there; a few minutes of looking and you could have saved yourself from looking like a complete asshole.



    This is one of the first modern studies showing the limits of visual acuity.



    Vernier acuity, crowding, and cortical magnification.

    Levi, Klein, and Aitsebaomo, 1985.

    Vision Research, Vol. 25, Issue 7, pp. 963-977.



    It is considered a seminal paper in the field of vision research on visual acuity.



    And to provide you with a source closer to your level:

    "Eye, Human", in the 2006 Britannica.



    Also, a quick search of Google Scholar will return literally tens of thousands of hits on human visual acuity. In fact, in the top five hits for "Vernier Acuity" is the first paper I mention above, plus others investigating and describing such visual acuity.



    The fact is we are able to distinguish details down to 8 arcminutes at 16 feet. And just so you know, that is 40% of the size of a human cone photoreceptor.



    Now STFU and go back to your troll cave.
  • Reply 343 of 507
    frank777 Posts: 5,839 member
    Sorry if I missed it, but we're on page 9 here.



    Did Apple confirm that iPhone 4 will be able to use the Bluetooth Keyboard?

    I didn't see it anywhere on the specs page.
  • Reply 344 of 507
    dick applebaum Posts: 12,527 member
    Quote:
    Originally Posted by melgross View Post


    They were interesting, but I was having some problems with them, and I don't know if it was from my end or not. They were jerky at times, but not the kind of jerkiness that comes from lack of stabilization.



    Prolly, at my end... Cameraman (me) error. Again, I am usually at midfield... always jerking around looking for the ball and zooming in and out. One problem is the Panny has only an LCD viewfinder that is totally useless in the sun... so I use a tripod and dead reckoning. Several clips were slowed down to sync with the music. I could have spent more time, used FCS (and image stabilization), still frames instead of jerks.... But my goal was to examine 3-4 hours of footage every Sunday and have highlights published on Monday, for Tuesday practice... and to have a little fun.



    Quote:





    At times, they were rock solid. But some looked to be shot in slo mo.




    Yeah, quick and dirty... one video clip, "Zack Takes Control", was about 7 seconds long... but Zack faked out the opposing player, twice, in that seven seconds... The only way I could figure out how to show this was to convert the vid to an image sequence at 30 FPS... then play that as a herky-jerky slide show.



    Quote:



    As for the question about anti-shake, stabilization, or whatever we call it on an iPad: the answer is that I don't know yet. Maybe some of the guys here who are developers and who have seen the new OS may have some idea. But, while there are schemes to post-process video so as to eliminate shake from exposure, it's fairly sophisticated stuff.



    I am a Developer... the 4.0 beta has all this stuff (DSP, etc.) that seems designed for video processing. Unfortunately, you can only use it on the iPhone at this time (iPad support for 4.0 is not available yet, even for developers).



    I am not knowledgeable enough to write code for image stabilization... so I have to depend on others.



    That's why I ask!



    Quote:





    What is done is to crop slightly so that the image can be moved around so as to eliminate the shake. The frame is moved up or down by whatever amount is needed, and sometimes sideways too. But it requires a fair amount of processing power and sophistication from the software. It has to know what's shake, and what's movement, and separate out the differences, and correct for the proper errors. Often you can click on an object that shouldn't be moving, so that the software can compensate for it. But if it moves out of the frame, another has to be used, and there may be a jerk as the software moves from one reference to another.




    There is at least 1 iPhone app that can stitch a series of overlapping still images into a panorama... I am surprised by the results and the performance.



    http://itunes.apple.com/us/app/autos...318944927?mt=8



    It doesn't do realtime, but the processing should be similar to stabilization, No?



    Quote:



    I don't think this stuff will be seen on an iPad for some time.



    It's really much easier to do it in the camera.



    I am interested in why... is it easier to do in hardwired circuitry on the cam?



    TIA



    I am learning something!



    .
  • Reply 345 of 507
    jetz Posts: 1,293 member
    I am curious about FaceTime. Did they say it was an open standard? I read that somewhere. Would that mean it could open up video calling to other phones? Right now, if it's just iPhone 4 to iPhone 4, that'd be of very limited usefulness.
  • Reply 346 of 507
    jetz Posts: 1,293 member
    I am actually liking the design. Almost reminds me of the Prada phone. It's not as "bling" as iPhones past. But I kinda like that. It seems more suitable for business. A more professional look, if you will.



    Kinda looks like this:



    http://i.imgur.com/DHP8p.jpg
  • Reply 347 of 507
    mstone Posts: 11,510 member
    Quote:
    Originally Posted by melgross View Post


    There's no such thing as the "RGB" gamut. RGB can have a small gamut, or a very large gamut. Seeing detail has nothing to do with the gamut per se.



    Explain yourself out of this comment with links, documentation, evidence, or anything relevant, and maybe we'll have a discussion.
  • Reply 348 of 507
    bowser Posts: 89 member
    Quote:
    Originally Posted by MFago View Post


    What is the angular resolution of the human eye, and what linear distance does this angle correspond to at 12" away? I believe Steve was claiming that it is 300 dpi.



    I don't have the time to do a conversion, and it is also distance dependent, but we are capable of perceiving a difference in two lines as small as 8 arcminutes of angle. This is typically measured with the Snellen methodology which is usually at 12, 16, or 20 feet. This size (8 arcmin.) is about 40% the size of a cone receptor at 16 ft.



    Quote:
    Originally Posted by masternav View Post


    yeah - get the receptor thingy BUT how does the visual center process all that data? We are talking perceptivity here not receptivity, processing not physical capacity. Again, don't get distracted by the label - consider the function - unless you are in fact a hopeless pedant. Go back to what the actual Apple website says about the display and actually watch what Steve Jobs says about it - not the translated, transformed, transliterated abomination that passes for the article here.



    The vast majority of what the brain processes is handled unconsciously. We don't have to wake up in the morning and say to ourselves, "I'm going to be extra vigilant today and pay more attention to what my brain is doing". The brain processes everything it takes in. Awareness is another thing entirely. You are making the mistake that many first-time Perception students make, which is assuming we're aware of everything being processed by the brain. We are not; in fact, we are only aware of less than about 1% of all perceptual activity performed by the brain.



    Quote:
    Originally Posted by melgross View Post


    You certainly didn't understand what Jobs was saying. And what you are saying isn't entirely correct.



    Actually I do, and much of what you say here demonstrates that you really don't understand anything about the optics of the eye or the physical relationship between locations in the visual field and how we perceive those objects.



    Quote:
    Originally Posted by melgross View Post


    I've been in the photo industry since 1969, and have degrees in biology, as I assume you do. And 300 dpi, ppi, spi, etc., are standards for good reasons. Our visual acuity is limited. Under VERY special circumstances we can see lines and dots that are finer than we can normally see. But that's under circumstances that are unusual. Under conditions of extreme contrast, we can see details we otherwise can't see. Those conditions don't normally exist for us when reading newspapers, magazines, and books. You might remember that he mentions, specifically, printed matter as the standard there.



    My claim about the degree of human visual acuity is based on the standardized Snellen procedure used by every optometrist in the world. Such acuity is usually measured at distances of 12, 16, or 20 feet.



    Yes, there are things that can be finer than the resolution of our visual acuity. But we can see things that are extremely small. As I replied to other posters above, details less than one half the size of a cone photoreceptor.



    Quote:
    Originally Posted by melgross View Post


    The lower the contrast, the lower our ability to discriminate detail. This is pretty well known and understood.



    See above.



    Quote:
    Originally Posted by melgross View Post


    You are also making a major error in talking about the density of the retina. The retina is small. There's no point in saying that there are millions of rods and cones per inch. Do you know the size of the retina?



    In addition to that small retina size is the fact that the iPhone screen is vastly larger than that retina. Can you figure out how many retinas would be needed to cover the iPhone screen? The point is that the retina has just so many sensing cells, and the iPhone screen has just so many pixels. Talking about the number of cells per inch is a useless statement because that small "sensor" is looking at a much larger screen.



    In addition to that is the fact that we aren't using more than a small part of our retina to image the iPhone screen, so we are just using a fraction of the cells in it. And, in addition to that, there is the other fact that while the screen has the same sensitivity from edge to edge, our eye has very poor vision outside a fairly small area in the middle.



    The situation isn't as clear as you make it out to be.



    I can continue with this, but enough is enough.



    No, I'm not making a mistake. Just look it up. The area of the retina has nothing to do with resolving power. By your argument, eagles, with retinae smaller than ours, should have poorer vision than ours. It is well documented that the visual acuity of birds of prey is superior to that of humans. What you're saying here is that simply having a bigger retina increases acuity. That is simply false.



    The only thing you say here that is even close to correct is that we have better acuity at the fovea than anywhere else in our field of vision. That is true, and my statements above are assuming foveal vision.



    Other than that you do not know what you're talking about here.



    Quote:
    Originally Posted by cgc0202 View Post


    Are you sure you are actually familiar with the biology of vision itself?

    And, for that matter, the psychology -- what people see, and how they perceive what they saw?



    Please do enlighten us with your expertise. Your initial lecture does not suggest you do.



    Equating precision with the size of a retinal cell? But then again, if it cannot display anything, how can it be the arbiter of precision, and more importantly, perception?



    I heard a team of scientists (one of them a professor at Harvard) won the Nobel prize for their seminal work on the biology of vision. You may want to brush up on their work and those who followed on their research. There is also a great body of work on the psychology of vision.



    CGC



    Yes, I am familiar with that work; I earned a PhD in Perceptual Psychology with an emphasis on visual attention, psychophysics, and physiological psychology at UC Santa Cruz. I now teach Perception at another UC campus.



    Sorry if not listing references here doesn't meet with your approval; I have a lot of other things to do in a day. I forgot the natural way people tend to respond on the internet when you say something simple such as "I have an advanced degree in ..."



    I'm not going to waste my time here with these insults, or even with more politely worded posts such as yours. In an earlier response I referenced an excellent Britannica article that backs up my claims. I would also recommend, as starters, "Sensation and Perception", 5th ed., by Harvey Richard Schiffman; "Vision Science: Photons to Phenomenology" by Stephen Palmer; "Foundations of Vision" by Brian A. Wandell; and "Visual Perception: A Clinical Orientation" by Steven H. Schwartz. Any of these works, especially the first two, will be helpful. The latter two are much more technical, assume a background in psychology and neurophysiology, and also require first-year calculus to understand the mathematics of the computer modeling discussed.



    I point other posters to these references, and the ones made in my previous reply here.
  • Reply 349 of 507
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by Dick Applebaum View Post


    Prolly, at my end... Cameraman (me) error. Again, I am usually at midfield... always jerking around looking for the ball and zooming in and out. One problem is the Panny has only an LCD viewfinder that is totally useless in the sun... so I use a tripod and dead reckoning. Several clips were slowed down to sync with the music. I could have spent more time, used FCS (and image stabilization), still frames instead of jerks.... But my goal was to examine 3-4 hours of footage every Sunday and have highlights published on Monday, for Tuesday practice... and to have a little fun.







    Yeah, quick and dirty... one video clip, "Zack Takes Control", was about 7 seconds long... but Zack faked out the opposing player, twice, in that seven seconds... The only way I could figure out how to show this was to convert the vid to an image sequence at 30 FPS... then play that as a herky-jerky slide show.



    Ok, that's good, what I saw was what you were doing.



    Quote:

    I am a Developer... the 4.0 beta has all this stuff (DSP, etc.) that seems designed for video processing. Unfortunately, you can only use it on the iPhone at this time (iPad support for 4.0 is not available yet, even for developers).



    I am not knowledgeable enough to write code for image stabilization... so I have to depend on others.



    That's why I ask!







    There is at least 1 iPhone app that can stitch a series of overlapping still images into a panorama... I am surprised by the results and the performance.



    http://itunes.apple.com/us/app/autos...318944927?mt=8



    It doesn't do realtime, but the processing should be similar to stabilization, No?







    I am interested in why... is it easier to do in hardwired circuitry on the cam?



    TIA



    I am learning something!



    .



    Stitching does do something similar to what stabilization software would do in post. It must straighten out the images so that they can be combined. That's some of it.



    I can use an example from the real world. A magnet has field lines: force that is there because of the properties of the elements in the magnet. They just are. But in order to calculate those force lines, we must devise, and then use, the math. That's more complex. When we fall, gravity pulls us down at a steady acceleration that depends on the mass of the planet. It just happens. It takes calculus to figure out the actual speeds that occur.



    With in-camera image stabilization, the sensor tells the software where the movement is, and the software can move the image around to compensate fairly easily within the larger sensor area.



    When we already have the shaky video, it's different. Then the object that must remain positioned in the same place, say, the left upright of the goal, must usually be selected. A point there must be measured by the software from one side and the top or bottom of the frame. Then the frame must be cropped so as to keep that point in the same place. If the camera points downward, then the top must be cropped. If the camera points upward, then the bottom must be cropped. Same thing with side to side shake. Each frame would have a different amount cropped out. This must be done field by field, or frame by frame. That takes a bit of work that doesn't have to be done in-camera.



    One of the problems is cropping the right amount, as every frame must be the same size. I've never seen software do this in a way that I was happy with. It doesn't work well for a lot of shake.
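


    To make that concrete, here is a minimal sketch of that crop-and-shift idea in Python (numpy only). It is not Apple's code or any shipping app's; the hard part, actually tracking the reference point in every frame, is assumed to have been done already, and the shifts here are rounded to whole pixels.

    import numpy as np

    def stabilize(frames, ref_points, margin):
        """Shift each frame so the tracked point stays where it was in frame 0,
        then crop a fixed border so every output frame is the same size.
        frames: list of H x W (or H x W x 3) arrays
        ref_points: (row, col) of the tracked reference point in each frame
        margin: largest shift expected, in pixels (also the crop border)"""
        r0, c0 = ref_points[0]
        out = []
        for frame, (r, c) in zip(frames, ref_points):
            dr, dc = int(round(r0 - r)), int(round(c0 - c))  # shift that re-centers the point
            # np.roll wraps content around the edges; the crop below hides that
            # as long as the shift never exceeds the margin
            shifted = np.roll(np.roll(frame, dr, axis=0), dc, axis=1)
            out.append(shifted[margin:-margin, margin:-margin])  # uniform crop
        return out

    # Toy usage: three random "frames" whose reference point wobbles a few pixels.
    frames = [np.random.rand(120, 160) for _ in range(3)]
    points = [(60, 80), (62, 79), (58, 83)]
    stable = stabilize(frames, points, margin=8)
    print([f.shape for f in stable])  # every output frame is (104, 144)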
  • Reply 350 of 507
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by mstone View Post


    Explain yourself out of this comment with links, documentation, evidence, or anything relevant, and maybe we'll have a discussion.



    You're the one who has to do that here. I already explained the situation. You've explained nothing. I don't even know what you disagree with, as you've declined the opportunity to explain it.



    Are you disagreeing with what a pixel is? Are you disagreeing with what a sub-pixel is? I don't know.



    Are you disagreeing that you can't simply say "RGB gamut", as opposed to the sRGB gamut, the Adobe RGB (1998) gamut, or the ProPhoto RGB gamut, to give a few examples?



    I don't know. You refuse to explain yourself.
  • Reply 351 of 507
    gregoriusm Posts: 513 member
    I haven't read a single post in this thread, but will go back and do so, so this may already have been mentioned.



    On the Apple.com website, it reads:



    "

    While most phones have only one microphone, iPhone 4 has two. The main mic, located on the bottom next to the speakers, is for phone and FaceTime calls, voice commands, and memos. The second mic, built into the top near the headphone jack, is for making your phone and video calls better. It works with the main mic to suppress unwanted and distracting background sounds, such as music and loud conversations. This dual-mic noise suppression helps make every conversation a quiet one."



    Notice it says the main mic is placed near the "speakers".



    Does anyone know if another speaker has been added, either near where the mic is, or internally similar to the way the iPod touch has it?



    Also, has anyone found out how much RAM it has?



    Cheers!



    Greg
  • Reply 352 of 507
    mstone Posts: 11,510 member
    Quote:
    Originally Posted by melgross View Post


    You're the one who has to do that here. I already explained the situation. You've explained nothing. I don't even know what you disagree with, as you've declined the opportunity to explain it.



    Are you disagreeing with what a pixel is? Are you disagreeing with what a sub-pixel is? I don't know.



    Are you disagreeing that you can't simply say "RGB gamut", as opposed to the sRGB gamut, the Adobe RGB (1998) gamut, or the ProPhoto RGB gamut, to give a few examples?



    I don't know. You refuse to explain yourself.



    it really doesn't have to be any more complicated than 255^3
  • Reply 353 of 507
    solipsism Posts: 25,726 member
    Am I the last person to realize the iPhone 4 has 5.8 Mbps HSUPA? How did I miss this all day?
  • Reply 354 of 507
    dick applebaum Posts: 12,527 member
    Quote:
    Originally Posted by melgross View Post


    Ok, that's good, what I saw was what you were doing.







    Stitching does do something similar to what stabilization software would do in post. It must straighten out the images so that they can be combined. That's some of it.



    I can use an example from the real world. A magnet has field lines: force that is there because of the properties of the elements in the magnet. They just are. But in order to calculate those force lines, we must devise, and then use, the math. That's more complex. When we fall, gravity pulls us down at a steady acceleration that depends on the mass of the planet. It just happens. It takes calculus to figure out the actual speeds that occur.



    With in-camera image stabilization, the sensor tells the software where the movement is, and the software can move the image around to compensate fairly easily within the larger sensor area.



    When we already have the shaky video, it's different. Then the object that must remain positioned in the same place, say, the left upright of the goal, must usually be selected. A point there must be measured by the software from one side and the top or bottom of the frame. Then the frame must be cropped so as to keep that point in the same place. If the camera points downward, then the top must be cropped. If the camera points upward, then the bottom must be cropped. Same thing with side to side shake. Each frame would have a different amount cropped out. This must be done field by field, or frame by frame. That takes a bit of work that doesn't have to be done in-camera.



    One of the problems is cropping the right amount, as every frame must be the same size. I've never seen software do this in a way that I was happy with. It doesn't work well for a lot of shake.





    Thank you! I understand... "steady as you go" is a lot easier than first defining what "steady" and "go" mean, then trying to adjust some frames to match those definitions.



    I am installing the SDK with the 4.0 GM.



    It's past my bedtime here in CA... must be way past yours.



    Long day... let's see what tomorrow brings!



    I appreciate the discussion on this thread.



    .
  • Reply 355 of 507
    mstone Posts: 11,510 member
    This whole argument/discussion started because SJ said that the new iPhone's display equaled human vision, as if nothing can ever surpass the quality of the iPhone in practical terms, which was clearly just marketing speak. Now we are simply debating the technical accuracy of that statement, which is irrelevant to the actual quality of the screen compared to anything else currently on the market. iPhone wins for now. No argument there.
  • Reply 356 of 507
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by Bowser View Post


    Actually I do, and much of what you say here demonstrates that you really don't understand anything about the optics of the eye or the physical relationship between locations in the visual field and how we perceive those objects.



    I have to differ. You can discuss perception, and that's fine. But what conditions are you relating this to? What contrast levels are you using for your statements? I do understand this, and I know very well that at about 12 inches and 300 dpi, or ppi, the normal human eye can't distinguish individual lines or dots unless the contrast is very high. It's generally accepted that 20/20 vision allows a line about one arcminute wide to be observed in a high-contrast line pair.



    Are you saying differently?



    Quote:

    My claim about the degree of human visual acuity is based on the standardized Snellen procedure used by every optometrist in the world. Such acuity is usually measured at distances of 12, 16, or 20 feet.



    I think we are all familiar with the Snellen charts. At 20 feet it's effectively at optical infinity, and we can see (if 20/20) about a 1.75 mm line.



    I understand that 20/20 isn't really the "normal" eye, just the standard Snellen set up.



    But even the Snellen charts aren't always accurate. Different lighting levels in the office can affect how well patients see them. My doctor agrees with that. In addition, they now very often use computers projecting letters onto an LCD. Some have higher contrast than others. We also went through this at my doctor's office, where he has several.



    Quote:

    Yes, there are things that can be finer than the resolution of our visual acuity. But we can see things that are extremely small. As I replied to other posters above, details less than one half the size of a cone photoreceptor.



    Theta is 1/60th of a degree as I've said. We can see a single line of a line pair at 1/2 theta. It's the line pairs that are normally spoken about. From what I remember, at about 12" we can see black and white lines just a bit bigger than 0.0035 inch. Correct me if that's wrong.



    Quote:

    No, I'm not making a mistake. Just look it up. The area of the retina has nothing to do with resolving power. By your argument, eagles, with retinae smaller than ours, should have poorer vision than ours. It is well documented that the visual acuity of birds of prey is superior to that of humans. What you're saying here is that simply having a bigger retina increases acuity. That is simply false.



    You misunderstood what I was saying. I'm not saying that having a bigger retina would give us better vision. I didn't say that once. I'm also not comparing our retinas with those of an eagle, or a fish, for that matter. What I'm saying is that the image of the screen will take up just a small part of the image on the retina, depending on the distance. It will just be using a portion of those rods and cones. The fact that there are so many per inch is fine, but it will be just a fraction of an inch on the retina that the screen will be impinging upon. Compare that with the size of the screen itself and the number of pixels, and just as importantly, the number of sub-pixels, and we get to a point where the eye can't resolve more information than what is on the screen. Don't forget that each pixel is made up of three sub-pixels, one of each color. Those sub-pixels are smaller than the entire pixel. So as we consider that the screen is 326 ppi, that's for entire pixels.
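


    As a quick sanity check on those numbers (plain visual-angle geometry using the one-arcminute, 20/20 assumption above; only the 326 ppi figure is Apple's, and the sub-pixel line assumes an RGB stripe layout):

    import math

    def line_width_in(distance_in, arcmin=1.0):
        """Linear size subtended by `arcmin` minutes of arc at `distance_in` inches."""
        return distance_in * math.tan(math.radians(arcmin / 60.0))

    print(line_width_in(12))              # ~0.0035 in  -- the figure quoted above for 12 inches
    print(line_width_in(20 * 12) * 25.4)  # ~1.77 mm    -- close to the ~1.75 mm Snellen line at 20 feet
    print(1 / 326.0)                      # ~0.00307 in -- full-pixel pitch of a 326 ppi screen
    print(1 / (326 * 3.0))                # ~0.00102 in -- sub-pixel pitch, assuming an RGB stripe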



    Quote:

    The only thing you say here that is even close to correct is that we have better acuity at the fovea than anywhere else in our field of vision. That is true, and my statements above are assuming foveal vision.



    Other than that you do not know what you're talking about here.



    Well, I admit that I got off the point somewhat, and didn't state it as well as I should have. But the point is that we are only using a small part of the retina, and only a fraction of the rods and cones. Considering that we lose acuity quickly as contrast goes down (even the paper by Levi shows that), we have to know what contrast levels we are talking about. This is well known photographically, which is why lenses are characterized at differing contrast levels.



    You're making it out to seem as though we can see a particular amount of detail under any condition. That's one of the things I disagree with.
  • Reply 357 of 507
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by mstone View Post


    it really doesn't have to be any more complicated than 255^3



    Yes, but you know that that's too general. And you're talking about 8 bit, if you're indicating what I think you are.
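


    Just to pin the arithmetic down: 8 bits per channel means 256 levels (0 through 255) per channel, so the count of distinct values is 256^3 rather than 255^3, and neither number says anything about which gamut those values map into.

    print(256 ** 3)  # 16,777,216 distinct 8-bit RGB triplets
    print(255 ** 3)  # 16,581,375 -- what 255^3 actually works out to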
  • Reply 358 of 507
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by mstone View Post


    This whole argument/discussion started because SJ said that the new iPhone's display equaled human vision, as if nothing can ever surpass the quality of the iPhone in practical terms, which was clearly just marketing speak. Now we are simply debating the technical accuracy of that statement, which is irrelevant to the actual quality of the screen compared to anything else currently on the market. iPhone wins for now. No argument there.



    I really don't think Jobs meant that nothing could ever surpass this in terms of usefulness (the screen, that is).



    But several of us are having an argument that seems to revolve around the idea that we see only black and white lines, or dots. But that's not true. Even text on a screen is continuous tone, because there is bleed from one pixel to the next. In continuous-tone images, particularly color images, it's much more difficult to pick out a line at the edge of visual acuity.



    While one person here seems to think that the phone's resolution is too coarse for the human eye to perceive it as continuous tone, standards in the photo industry, which certainly has plenty of experience in this area, say otherwise for the average person. I can second that from my own experiences in that industry since 1969.
  • Reply 359 of 507
    dick applebaum Posts: 12,527 member
    Quote:
    Originally Posted by mstone View Post


    This whole argument/discussion started because SJ said that the new iPhone's display equaled human vision, as if nothing can ever surpass the quality of the iPhone in practical terms, which was clearly just marketing speak. Now we are simply debating the technical accuracy of that statement, which is irrelevant to the actual quality of the screen compared to anything else currently on the market. iPhone wins for now. No argument there.



    Yes! I was watching a live blog (the keynote video is not yet available).



    SJ said something like: the iPhone 4 screen had a higher resolution than the human eye could detect (under "normal" conditions)... whatever those are.



    So, in typical market-speak, they coined a phrase to describe the capability: "Retina Display", or some such...



    We can't take these things too literally! After all, Ringling Bros., Barnum & Bailey wasn't really "the Greatest Show On Earth!"



    Give the Ringmaster his due!



    .
  • Reply 360 of 507
    mstone Posts: 11,510 member
    Quote:
    Originally Posted by melgross View Post


    I can second that from my own experiences in that industry since 1969.



    You got me there. I wasn't able to watch Sky King because there was too much snow on the TV set, and the humming noise was either audio feedback or the sound of the engines; it was hard to tell.