Why Apple is betting on Light Peak with Intel: a love story

Comments

  • Reply 61 of 113
    x38 Posts: 97 member
    Several articles have mentioned that Light Peak is full duplex, supports hot plugging, and carries DC power, but I'd like to know if it also has some of the other features that make FireWire so much nicer than USB, such as offloading the signal & protocol processing from the CPU, peer-to-peer connections, the ability to operate in asynchronous or isochronous modes, daisy-chain connections, and higher packet efficiency.



    The story unfolding sounds like Light Peak is capable of emulating any other standard, so I'm guessing it must have, or at least be able to emulate, all of those features except the packet efficiency. I don't want to take it for granted though.



    I would think that with all that emulating ability the packet efficiency would be relatively low, but with the absolute speeds so incredibly high, maybe the actual data throughput would still be higher and the efficiency just wouldn't matter? (Some rough numbers at the end of this post illustrate the idea.)



    I'm hoping you smart folks can educate us ignorant types on these points. I'm starting to get excited about this thing, but after the USB 2.0/3.0 & DVI/HDMI kludges that Intel has forced on the market in place of FW, I'd like to have a better understanding of this standard before I get my hopes up.



    Finally, I really like the idea of it being truly universal, and I can imagine how this might even migrate down to the keyboard & mouse level, but will it really be able to go as far as providing an interface for all those tiny memory sticks & flash cards that currently use USB?
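
    To put some rough numbers behind the efficiency question above - and to be clear, the efficiency percentages here are just assumptions for the sake of argument, not published Light Peak or FireWire figures - even a fairly inefficient protocol at 10 Gbps would leave FireWire 800 far behind:

        # Hypothetical payload-throughput comparison (Python); efficiency figures are guesses.
        light_peak_raw_gbps = 10.0     # Intel's announced initial signaling rate
        firewire800_raw_gbps = 0.786   # FireWire 800 raw signaling rate

        assumed_lp_efficiency = 0.70   # assume heavy protocol/emulation overhead
        assumed_fw_efficiency = 0.97   # FireWire is known for very efficient packets

        print(light_peak_raw_gbps * assumed_lp_efficiency)    # 7.0 Gbps of payload
        print(firewire800_raw_gbps * assumed_fw_efficiency)   # ~0.76 Gbps of payload
        # Even with much lower packet efficiency, the raw speed dominates.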
  • Reply 62 of 113
    x38 Posts: 97 member
    Quote:



    Thanks for the interesting link.

    In the pictures and video it looks like they have gotten the optical encoder & decoder (lasers & photodetectors) so small and cheap that they will just be integrated directly into the plug at each end of the optical cable. If so, then the plug that the user connects and disconnects would just use traditional copper connections.



    Fascinating if true, and possibly the fundamental breakthrough that makes optical technology finally practical on a consumer scale. It would certainly be an interesting solution to the typical optical fiber problems of expensive connectors that are difficult to mate correctly and highly susceptible to failures caused by getting dirty.



    Anybody know if this is really the case or not?
  • Reply 63 of 113
    Apple historically always likes to change CPUs every 10 years; they don't stay with one CPU. They do this because you can't keep using your old software: just like when they moved from PowerPC to Intel, all the software needed to be upgraded. After you own the new software they will change to another type of CPU, and then all your software needs to be upgraded again.



    Now that they've bought the new CPU factory, that means they will produce CPUs for the iPhone and iPod; later they will get rid of Intel and produce CPUs for their MacBooks and desktops.



    After 5 years Apple will switch OS again, maybe going back to OS 9 or using BeOS.
  • Reply 64 of 113
    Quote:
    Originally Posted by newuser1980 View Post


    Apple historically always likes to change CPUs every 10 years; they don't stay with one CPU. They do this because you can't keep using your old software...



    SNIP



    After 5 years Apple will switch OS again, maybe going back to OS 9 or using BeOS.



    Funny, I thought the same thing. The problem with this picture is that Apple has nowhere else to go for desktop chips. Power is unsuitable. AMD aspires to match Intel. Alpha is dead. At the same time, Intel's Tick-Tock approach has been very successful and, so long as they can continue to innovate at the current rate, unmatchable by anyone else due to their sales volume. Elsewhere, all the action is in the MID / phone market and I strongly doubt that there will be clarity in 5 years...or even 10.
  • Reply 65 of 113
    Quote:
    Originally Posted by melgross View Post


    She's beautiful, but I hope you're not insinuating that Apple's fortunes are sagging.



    I would help keep Apple's fortunes from sagging anyway, since it makes me feel like this
  • Reply 66 of 113
    Hey Dan, not to nitpick too much, but Apple had built-in 10BASE-T Ethernet in the PowerMac 7200 in 1996.



    (To be fair, a lot of PC manufacturers also had standard Ethernet by that time as well. I had a Dell 486 with a 3Com 10/100 card preconfigured in it in 1994. Was it on the motherboard? No, but still, in the interest of being fair.)



    Apple *did* have 1000BASE-T ports in desktops far before anyone else. I always thought that was really weird, since no mortal user even had a switch that could handle the bandwidth (at home, anyway), but I guess edit houses using the "brand new" (at the time) Final Cut probably loved it.





    meh
  • Reply 67 of 113
    Quote:
    Originally Posted by huntercr View Post


    Hey Dan, not to nitpick too much, but Apple had built-in 10BASE-T Ethernet in the PowerMac 7200 in 1996.



    (To be fair, a lot of PC manufacturers also had standard Ethernet by that time as well. I had a Dell 486 with a 3Com 10/100 card preconfigured in it in 1994. Was it on the motherboard? No, but still, in the interest of being fair.)



    Apple *did* have 1000BASE-T ports in desktops far before anyone else. I always thought that was really weird, since no mortal user even had a switch that could handle the bandwidth (at home, anyway), but I guess edit houses using the "brand new" (at the time) Final Cut probably loved it.





    meh



    NeXT's NeXTstation (1990)



    Input/Output: SCSI internal connector, SCSI2 external port, DSP, video output, proprietary port for NeXT laser printer, two RS-423 serial ports, 10Base-T and 10Base-2 Ethernet
  • Reply 68 of 113
    Quote:
    Originally Posted by tombeck View Post


    I wouldn't associate the name "Daniel Eran Dilger" with the word "truth" - at least not in the context of technology reporting.



    I've read most of his stuff, and while he loves to stretch things with his analysis, I happened to be around for the '80s and '90s and his recollection of history matches mine. I've used an Apple IIe, and I also happen to have saved a lot of those PC magazines from the '90s saying how much Win95 sucked and how NT was too much for computers to run. I also remember, in my yearbook room in 1995 (it was all Mac, because PCs sucked for desktop publishing), the joke showing a Mac from 1984 and a computer with Win95 on it, both saying "Easy! Just point and click!".



    I've used the original Apple II, an Apple III, the Macintosh, Win 3.0 and 3.1 through 95 and 98, skipped ME for Win2k, used Linux, and remember the IE4/Win98 browser bundling fiasco.



    Where were you when Microsoft was building its monopoly?
  • Reply 69 of 113
    jeffdm Posts: 12,951 member
    Quote:
    Originally Posted by al_bundy View Post


    With USB and PCI it took Intel a long time to get their technology into computers. With Apple's record of shipping computers without "legacy" ports, they found a partner that will ship millions of units with their port.



    PCI was adopted very quickly, as I remember; it seems like it only took one motherboard generation. USB was a different matter: I think the biggest hurdle was OS support.



    Quote:
    Originally Posted by fyngyrz View Post


    Cables? CABLES?



    The future is wireless. Get the damned cables off my desk. All of them. I don't want to see anything more than power cords, and I'm not all that happy with them, either.



    ...



    Sometimes I think the world just gets stuck. Cables are out. RF is in.



    Wires contain the signal and limit interference, and are easier to design for. For every wireless standard, you'll find a wired counterpart that's faster. Also, the higher the frequency, the more line of sight matters. Wireless still requires power, which a wired cable usually supplies by default; even this Light Peak idea includes power connections. Not only that, wireless is less efficient with its signal: the signal goes in all directions, while a wire sends most of its signal down a controlled path. (A rough path-loss calculation at the end of this post shows how quickly things get worse as the frequency goes up.)



    Quote:
    Originally Posted by mdriftmeyer View Post


    Apple invented FireWire, created miniDVI/miniDisplayPort, and made many other modifications to standards, all subsequently adopted by the industry.



    Who else has adopted miniDVI? Has anyone else adopted miniDP yet?



    Quote:
    Originally Posted by Joe The Dragon View Post


    But can this do PCIe over it?



    Video cards need to be on the PCIe bus, not a super-USB bus with high CPU load.



    Video cards don't have to be on PCIe. That's just the current favored standard. If that changes, the card makers will adapt and switch to a new connector like they have in the past with ISA->PCI->AGP->PCIe transitions.



    Quote:
    Originally Posted by X38 View Post


    Systems like Verizon FiOS have already solved the "last mile" problem and easily beat cable in cities lucky enough to have it. Even FiOS is bottlenecked by copper once it gets inside the home, though. Maybe we should call that the "last foot" problem. I've heard they have been working on a solution, but haven't made it economical yet.

    This Light Peak looks like it might quickly end up being the de facto solution to the "last foot" problem. Just imagine - a high-speed optical connection all the way from the phone company trunk lines to the motherboard of your computer.



    It's not a problem for the immediate future: FiOS doesn't saturate 100Mbit Ethernet. Then there's gigabit, and there are 10-gigabit Ethernet standards that use copper too. When you're talking about shorter distances, optical doesn't have quite the same advantage, though optical is probably more reliable at 10 gig anyway.



    Quote:
    Originally Posted by huntercr View Post


    Apple *did* have 1000BASE-T ports in desktops far before anyone else. I always thought that was really weird, since no mortal user even had a switch that could handle the bandwidth (at home, anyway), but I guess edit houses using the "brand new" (at the time) Final Cut probably loved it.



    The chart has a double standard anyway, comparing the high-end Mac platform against mainstream PCs. Not only that, if a high-end Mac gets a port, then the entire Mac platform is counted, but the Wintel platform isn't counted unless most machines get it. Gigabit Ethernet is pretty common on PCs now; it has been standard on workstations and business desktops for maybe five years, and is starting to encroach on even the budget machines.



    I still need to get a reliable GigE switch. I have one, but about once a week it would quit working properly, requiring a power cycle to recover, so I just quit using it. This is despite my having several PCs and Macs, and as of just this week, even a printer, with GigE.
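
    Going back to the wireless point above, a rough free-space path-loss calculation illustrates why higher frequencies depend so much more on line of sight. The distances and frequencies below are just example values, and real rooms add walls, fading and antenna effects on top of this:

        # Friis free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
        import math

        def fspl_db(distance_m, freq_hz):
            c = 3.0e8  # speed of light in m/s
            return (20 * math.log10(distance_m)
                    + 20 * math.log10(freq_hz)
                    + 20 * math.log10(4 * math.pi / c))

        print(round(fspl_db(2, 2.4e9), 1))   # ~46.1 dB over 2 m at 2.4 GHz (Wi-Fi band)
        print(round(fspl_db(2, 60e9), 1))    # ~74.0 dB over 2 m at 60 GHz, about 28 dB worse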
  • Reply 70 of 113
    Quote:
    Originally Posted by sprockkets View Post


    I've read most of his stuff, and while he loves to stretch things with his analysis, I happened to be around for the '80s and '90s and his recollection of history matches mine. I've used an Apple IIe, and I also happen to have saved a lot of those PC magazines from the '90s saying how much Win95 sucked and how NT was too much for computers to run. I also remember, in my yearbook room in 1995 (it was all Mac, because PCs sucked for desktop publishing), the joke showing a Mac from 1984 and a computer with Win95 on it, both saying "Easy! Just point and click!".



    I've used the original Apple II, an Apple III, the Macintosh, Win 3.0 and 3.1 through 95 and 98, skipped ME for Win2k, used Linux, and remember the IE4/Win98 browser bundling fiasco.



    Where were you when Microsoft was building its monopoly?



    No one wants to hear your life story...this article isn't about you. Why are you making it about you?
  • Reply 71 of 113
    kunde Posts: 2 member
    Apple and Intel working together on high-speed communication ports is not new. In fact, Intel was involved in the development of the FireWire standard as well. The development of FireWire was initiated and perhaps led by Apple, but there were many people and organizations involved, including Intel. Why Intel and other members of the IEEE P1394 Working Group ultimately dropped FireWire is thought to be the result of disagreements over high licensing costs.



    I recommend reading some first-hand facts about the history of FireWire. It is perhaps not as romantic, but much more interesting:

    http://www.teener.com/firewire_FAQ/#...ented_Firewire



    So what does the CPU development have to do with Light Peak? The way the facts are connected into a story in this article seems almost funny. I guess the title does justice to this AppleInsider article after all: the facts are true, but the story line is fictitious. Everybody who likes to read Apple soap operas & love stories, read on.



    PS: Here is an excerpt from the above mentioned webpage: "After Steve Jobs came back to Apple, he was somehow convinced that Apple should change the game midstream and ask for $1 per port for the Apple patents (his argument was that it was consistent with the MPEG patent fees). I left Apple before Steve came back, so I have no idea how this really happened. This annoyed everyone (including yours truly) immensely ... particularly Intel which had sunk a lot of effort into 1394 (the improved 1394a-2000 and 1394b-2002 standards are partly based on Intel work). The faction of Intel that doesn't like open standards like 1394 used this as an excuse to drop 1394 support and bring out USB 2 instead."
  • Reply 72 of 113
    kunde Posts: 2 member
    Quote:
    Originally Posted by Gazoobee View Post


    It's called "context" and sometimes "backgrounding." It's what's missing from 95% or more of tech news stories and is usually welcomed by anyone with half a brain who's actually interested in the truth.



    If you just want to hear the latest unsupported opinions and alternately either nod your head or shake your fist, you'd be better off reading another site.







    I would agree that the background is interesting, but the context is quite a stretch. Anyhow, recommending that someone go somewhere else is a very constructive way to deal with an opposing opinion. Well done, Gazoobee!
  • Reply 73 of 113
    al_bundy Posts: 1,525 member
    Quote:
    Originally Posted by JeffDM View Post


    PCI was adopted very quickly, as I remember; it seems like it only took one motherboard generation. USB was a different matter: I think the biggest hurdle was OS support.







    Wires contain the signal and limit interference, and are easier to design for. For every wireless standard, you'll find a wired counterpart that's faster. Also, the higher the frequency, the more line of sight matters. Wireless still requires power, which a wired cable usually supplies by default; even this Light Peak idea includes power connections. Not only that, wireless is less efficient with its signal: the signal goes in all directions, while a wire sends most of its signal down a controlled path.







    Who else has adopted miniDVI? Has anyone else adopted miniDP yet?







    Video cards don't have to be on PCIe. That's just the current favored standard. If that changes, the card makers will adapt and switch to a new connector like they have in the past with ISA->PCI->AGP->PCIe transitions.







    It's not a problem for the immediate future: FiOS doesn't saturate 100Mbit Ethernet. Then there's gigabit, and there are 10-gigabit Ethernet standards that use copper too. When you're talking about shorter distances, optical doesn't have quite the same advantage, though optical is probably more reliable at 10 gig anyway.







    The chart has a double standard anyway, comparing the high-end Mac platform against mainstream PCs. Not only that, if a high-end Mac gets a port, then the entire Mac platform is counted, but the Wintel platform isn't counted unless most machines get it. Gigabit Ethernet is pretty common on PCs now; it has been standard on workstations and business desktops for maybe five years, and is starting to encroach on even the budget machines.



    I still need to get a reliable GigE switch. I have one, but about once a week it would quit working properly, requiring a power cycle to recover, so I just quit using it. This is despite my having several PCs and Macs, and as of just this week, even a printer, with GigE.



    Motherboards with PCI came out right away, just as they did with USB, but it took years to switch over. My first home build in 1998 had ISA slots, and they were found on PCs for a few more years. Even VLB gave Intel some competition for a few years, and some people said it was faster.



    Compare that to Apple shipping a computer one year with no "legacy" ports at all. I bet they will do the same thing next year; it will give accessory makers a financial reason to build devices for Light Peak. I remember that even in 2000 there were a lot of devices on the market for legacy ports and no USB.
  • Reply 74 of 113
    jeffdm Posts: 12,951 member
    Quote:
    Originally Posted by al_bundy View Post


    Motherboards with PCI came out right away, just as they did with USB, but it took years to switch over. My first home build in 1998 had ISA slots, and they were found on PCs for a few more years. Even VLB gave Intel some competition for a few years, and some people said it was faster.



    I still don't think the PCI/VLB transition was quite like that. VLB was 486-only; I don't think any Pentium systems had VLB, and 486 boards with PCI were pretty scarce (I've seen exactly one). So the transition did happen, and it was necessary because VLB didn't work very well. It took a while for 486 systems to stop being sold, because the industry doesn't just abandon microarchitectures the instant the new one is available. That's still true: i7 being available doesn't mean everyone quits buying Core 2; it just drops down a price bracket.
  • Reply 75 of 113
    I always shudder when I see a technical article on Insider, because they are almost always wrong. At some point I always conclude that the author doesn't really understand the technology they're talking about. The analysis articles, like this one, are equally wrongheaded.



    After THREE PAGES of bogus filler brushing over ancient history, the author finally gets around to actually talking about Light Peak. And what does he say? Nothing that wasn't in the briefs sent out on the wire. Ugggg.



    Apple is giving Light Peak to Intel because Intel builds EVERY DESKTOP PLATFORM. AMD and others are still based on the same platform. If you want to make a desktop standard you get it into Intel's roadmap. The counterexamples are extremely rare.



    There, three pages of cruft reduced to a single para.



    Maury
  • Reply 76 of 113
    > if Light Peak's economies of scale drive down the price of optical cabling as expected

    > doesn't this effectively solve the telcos' "last mile" bandwidth problem?



    Sadly, no. Light Peak is a different kind of fibre, and the kind of fibre the telcos/cablecos use is already pretty much free.



    The cost in the last mile has nothing to do with the cost of the fibre and everything to do with digging up the last mile. You have to send out real humans to run the stuff, and they cost a fortune. Imagine the cost of sending someone to install a single CFL light bulb in every house in the USA - the cost would be essentially the same whether the bulbs cost $1 or $0. Additionally, running fibre means you have to replace all sorts of equipment at the local head-end office, which isn't cheap either.



    As to the cable that Light Peak uses, it is a real breakthrough. Optical cabling has always had serious advantages over copper, but it also has three major downsides. One is that it doesn't supply power, but that's easy to solve with some copper running beside the fibre. Another is that the connectors are bulky AND fragile, which is purely a design issue. But the main problem for the consumer space is that the fibres are surprisingly inflexible and need large-radius curves. They're fine in the back office, but you couldn't expect to use them to plug in your iPod or run them to your keyboard.



    And that's where Corning comes in. This new fibre they've developed is extremely flexible. Not as flexible as thin copper, but definitely more flexible than larger wires like the one hooking up my monitor. Flexible enough that you could seriously consider using it to replace copper in pretty much everything other than the mouse cable, which needs to be REALLY bendy. Thank you Corning!



    Maury
  • Reply 77 of 113
    jasenj1 Posts: 923 member
    If I understand correctly, the big tech magic here is miniaturizing the optical-to-electrical bridge and making it cheap. From the video at Intel's site, it looks like every cable will have this hardware on each end, so end users won't be dealing with high-precision optical connections, but "normal" metal-to-metal electrical connections.



    The connectors shown at Intel's site still don't look like trivial plug & unplug cables like USB. This looks intended for relatively long-lived connections - like a monitor or docking station. Maybe Apple will finally make a laptop with a docking station using this tech. Current docking ports have so many pins and are so bulky.



    Interesting stuff.



    - Jasen.



    P.S. I agree with those complaining that the article rambled and covered a lot of unnecessary side history.
  • Reply 78 of 113
    It sounds like a great way to communicate data, but what about power? Today's USB and FireWire also provide power over the cable for things like portable hard drives, etc.



    I did not see it mentioned anywhere, but does this tech also supply power over copper alongside the data over optical?
  • Reply 79 of 113
    I would normally agree... until you need to go to init 1 on an iMac with a wireless keyboard (which needs init 3) <smile>
  • Reply 80 of 113
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by X38 View Post


    Thanks for the interesting link.

    In the pictures and video it looks like they have gotten the optical encoder & decoder (lasers & photodetectors) so small and cheap that they will just be integrated directly into the plug at each end of the optical cable. If so, then the plug that the user connects and disconnects would just use traditional copper connections.



    Fascinating if true, and possibly the fundamental breakthrough that makes optical technology finally practical on a consumer scale. It would certainly be an interesting solution to the typical optical fiber problems of expensive connectors that are difficult to mate correctly and highly susceptible to failures caused by getting dirty.



    Anybody know if this is really the case or not?



    This is an interconnect protocol and a physical system. It doesn't matter what information moves over it.



    Remember that when you are talking about digital, it's all 1s and 0s. That's really about it.



    What this does is provide a very fast way of moving those 1s and 0s across a cable to somewhere else.



    As for the protocols of whatever is being moved, they will remain the same.



    USB will still be USB at the bus level, and so will FireWire and SATA.



    That's the beauty of this.



    If another protocol is invented meanwhile, this can be used to transport it. That's never been the case before.



    So if at first only USB and FireWire are moving through this, but later SATA is desired, it can be done, as can any other protocol in use.



    There are several levels to all transport types. One level is the very low-level software protocol used for basic interfacing with whatever it connects to, such as the software in the computer bus. Then there is the level of the standard itself, whatever that may be: FireWire, USB, etc. Then there is the physical level, which is the actual components and their interfaces with the hardware.



    This appears to have the lowest level and the physical level, but leaves the middle level to be used by the various standards.
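
    A minimal sketch of that layering in Python may help; the frame fields, protocol IDs and function names below are purely hypothetical illustrations, not anything Intel or Apple has published:

        # Hypothetical illustration: a generic transport frame that carries any
        # higher-level protocol's packet unchanged, leaving the "middle level"
        # (USB, FireWire, SATA, ...) untouched.
        from dataclasses import dataclass
        from enum import Enum

        class ProtocolId(Enum):
            USB = 1
            FIREWIRE = 2
            SATA = 3
            DISPLAYPORT = 4

        @dataclass
        class TransportFrame:
            protocol: ProtocolId   # tells the far end which existing stack gets the payload
            payload: bytes         # the encapsulated protocol's own packet, untouched

        def encapsulate(protocol: ProtocolId, packet: bytes) -> TransportFrame:
            # The transport only moves bits; it never interprets the payload.
            return TransportFrame(protocol, packet)

        def deliver(frame: TransportFrame) -> None:
            # On the receiving end, the frame is handed to the matching protocol stack.
            print(f"handing {len(frame.payload)} bytes to the {frame.protocol.name} stack")

        deliver(encapsulate(ProtocolId.USB, b"example USB packet"))

    The point is simply that the transport never has to know what USB or FireWire mean; it just labels the bytes and moves them.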