AppleInsider › Forums › Mac Hardware › Future Apple Hardware › Why Apple is betting on Light Peak with Intel: a love story

Why Apple is betting on Light Peak with Intel: a love story - Page 2

post #41 of 114
Quote:
Originally Posted by diskimage View Post

One thing I would like to see that uses this Light Peak cable is some type of plug in graphics card.
Is this even possible?

It all depends on what you expect from the graphics card, but you can find USB-interfaced displays of limited size right now. With an optical connection like this, though, you don't have to have the display processor at the end of the cable; you could instead just transmit the frame information over the Light Peak cable.

Either way you could effectively put a display 100 meters away and still get pristine results.
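As a rough sanity check (the resolution and refresh rate below are illustrative assumptions, not anything Intel has specified), the bandwidth needed to push raw frames to a remote display is easy to estimate:

```python
# Back-of-the-envelope: bandwidth to send uncompressed frames to a remote
# display over an optical link. All numbers are illustrative assumptions.

def display_bandwidth_gbps(width, height, bits_per_pixel, refresh_hz):
    """Raw (uncompressed) video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * refresh_hz / 1e9

# A 1920x1200 panel, 24-bit color, 60 Hz refresh:
needed = display_bandwidth_gbps(1920, 1200, 24, 60)
print(f"{needed:.2f} Gb/s")  # ~3.32 Gb/s, well under Light Peak's initial 10 Gb/s
```

So even without compression, one first-generation Light Peak link would have headroom to spare for a single display.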

Dave
post #42 of 114
Quote:
Originally Posted by melgross View Post

She's beautiful, but I hope you're not insinuating that Apple's fortunes are sagging.

Quite the contrary.

Malo periculosam, libertatem quam quietam servitutem.

(I prefer the tumult of liberty to the quiet of servitude.)

post #43 of 114
Quote:
Originally Posted by fyngyrz View Post

Cables? CABLES?

The future is wireless. Get the damned cables off my desk. All of them. I don't want to see anything more than power cords, and I'm not all that happy with them, either.

Sorry, but you don't know what you are talking about. There is only limited RF bandwidth to dedicate to computer communications. At best you will get more reliable interfaces for short-range device communication, but that is about it.
Quote:

Optical cables are like optical LP pickup heads. Great idea, time is past, lets get on with the real deal, pure digital media. Only pseudo audiophiles try to claim LPs have merit over high definition digital formats (and they're still completely wrong.) Adding an optical pickup to an LP makes it better, but it still can't compete with 48-bit high bandwidth audio.

Now you reinforce the fact that you don't know what you are talking about and can't find decent analogies.
Quote:

WRT computer system connectivity, we're already seeing cable-free terabyte-class NAS, keyboards, mice, internet, cameras, HD video... Why in the *world* would you encourage Apple to put more CABLES on the desk?

Reliability, security and speed. Not to mention the limited space for RF communications in the first place.
Quote:

Sometimes I think the world just gets stuck. Cables are out. RF is in.

Three strikes and you're out.


Dave
post #44 of 114
I've been wondering: if Light Peak's economies of scale drive down the price of optical cabling as expected, doesn't this effectively solve the telcos' "last mile" bandwidth problem?

Wouldn't this mean Bell and the others will be able to match cable speeds very soon?
The evil that we fight is but the shadow of the evil that we do.
post #45 of 114
Quote:
Originally Posted by Joe The Dragon View Post

but can this do pci-e over it?

video cards need to be on the pci-e bus and not a super usb bus with high cpu load.

I don't have confirmation, but I do believe that PCI Express is possible over the port. Honestly, though, you wouldn't want to do it that way; rather, I'd go for a networking protocol of some sort. It would be a great way to distribute a bunch of monitors around a large area - airports, for example.

I have the feeling that most systems currently use video signaling for this. Having more smarts out at the monitor would provide for new capabilities. Sounds like a business opportunity.


Dave
post #46 of 114
Quote:
Originally Posted by Frank777 View Post

I've been wondering: if Light Peak's economies of scale drive down the price of optical cabling as expected, doesn't this effectively solve the telcos' "last mile" bandwidth problem?

Wouldn't this mean Bell and the others will be able to match cable speeds very soon?

The reason it won't is simple: optical cabling is already produced in huge quantities, so Light Peak won't move its price much. What Apple and Intel can attack is the cost of the interconnects, the I/O chips and the other components outside of the fiber.

Dave
post #47 of 114
Quote:
Originally Posted by Gazoobee View Post

It's called "context" and sometimes "backgrounding." It's what's missing from 95% or more of tech news stories and is usually welcomed by anyone with half a brain who's actually interested in the truth.

I wouldn't associate the name "Daniel Eran Dilger" with the word "truth" - at least not in the context of technology reporting.
post #48 of 114
A ton of writing and comments, but what about power? The big bonus of USB, FireWire and ADB was that they have power lines built in, so you don't need a jillion transformers for every hub or device. If we have spaghetti optical fiber everywhere, we will still need some copper to provide the 5 V, or this is doomed from the get-go. Also, my TOSLINK cables are a pain sometimes - they get dirty and are sensitive to alignment - I hope they can make better connectors...

That said, I like the idea of one good standard. Maybe it can replace wired Ethernet while it's at it...
post #49 of 114
Quote:
Originally Posted by melgross View Post

She's beautiful, but I hope you're not insinuating that Apple's fortunes are sagging.

And . . .


Google


post #50 of 114
Quote:
Originally Posted by 21yr_mac_user View Post

A ton of writing and comments, but what about power?

That said, I like the idea of one good standard. Maybe it can replace wired Ethernet while it's at it...

It's supposed to have power, according to earlier articles. Bring it on! LP is going to be much bigger than FireWire.
turtles all the way up and turtles all the way down... infinite context means infinite possibility
post #51 of 114
Quote:
Originally Posted by tombeck View Post

I wouldn't associate the name "Daniel Eran Dilger" with the word "truth" - at least not in the context of technology reporting.

I used the word "truth" to contrast it with "consensus," which is what most tech sites traffic in.

Anyway, insults aside, if you really want to get at the truth of any subject one of the best ways to get there is by reading about the context and history of the item in question. Sometimes "biased" points of view are more valuable than those that purport to be facts because of course everyone has a point of view.

Even if I agreed with your insult about Daniel being overly biased, one is still better off reading his stuff with the appropriate grains of salt than reading the average tech story. Most tech sites just regurgitate the rumour of the day word for word (Engadget), publish industry talking points verbatim (PC World, CNET) or, worse, just make up crap that sounds plausible (Gizmodo).

My point was only that in a world where everyone is just repeating the daily he said/she said stuff, a site that's interested in points of view, history, context, and most of all analysis is to be preferred. One of the reasons all the stuff written by all the writers on this site is so worthwhile is that they do that very thing, and they do it fairly consistently.
post #52 of 114
Quote:
Originally Posted by Joe The Dragon View Post

but can this do pci-e over it?

video cards need to be on the pci-e bus and not a super usb bus with high cpu load.

The concept, as I've been reading it, is that anything can be sent over this - any standard. As PCI-e is a serial bus, it should mesh well.

And don't forget that Intel said that this would scale up to 100 Gb/s over the decade. I wouldn't be surprised if by then they can get it higher with multimode. You can get thousands of channels over that, all at the highest data rates.

There has been a breakthrough on optical switching as well, which would help that tremendously.

Researchers at Yale have found a way to open and close switches with light, using positive and negative pressure. This is considered to be really major.
post #53 of 114
Quote:
Originally Posted by Joe The Dragon View Post

but can this do pci-e over it?

video cards need to be on the pci-e bus and not a super usb bus with high cpu load.

One of the articles on this (sorry, I forget which one or where) quoted Intel as saying that the demo was running PCI Express over the Light Peak connection.
Reminds me of the rumors, around the time of the NuBus-to-PCI switch, that Apple was developing a serial bus capable of replacing the motherboard's parallel buses, at least for peripherals. Maybe they finally got there.

If true, I wonder if that will be their answer for all these years of user requests for a mini tower Mac?
post #54 of 114
Quote:
Originally Posted by Frank777 View Post

I've been wondering, if Light Peak's economies of scale drives down the price of optical cabling as expected, doesn't this effectively solve the Telcos "last mile" bandwidth problem?

Wouldn't this means Bell and the others will be able to match cable speeds very soon?

Overall, optical cable can be cheap. We might not need glass for this; plastic optical cable would be fine for shorter runs, up to a few tens of feet at least.

It's the connectors that have been expensive. I have an optical splicing and cable connector kit that I've used for that; it's more complex than cutting wires, sticking them into a connector and crimping. Also, the optical-to-electrical converters have been expensive compared to their electrical equivalents.

But technology marches on.

Verizon is slowly but surely wiring the country with optical. The amount of cable they are using is so great that whatever is done here wouldn't even be noticed. AT&T is following.

This is actually a byproduct of what the telcos and ISPs are doing. Without them, the price of these parts would be too high.
post #55 of 114
Quote:
Originally Posted by 21yr_mac_user View Post

A ton of writing and comments, but what about power? The big bonus of USB, FireWire and ADB was that they have power lines built in, so you don't need a jillion transformers for every hub or device. If we have spaghetti optical fiber everywhere, we will still need some copper to provide the 5 V, or this is doomed from the get-go. Also, my TOSLINK cables are a pain sometimes - they get dirty and are sensitive to alignment - I hope they can make better connectors...

That said, I like the idea of one good standard. Maybe it can replace wired Ethernet while it's at it...

Power is easy. They just have to include two power lines in the connectors and cabling.
post #56 of 114
Quote:
Originally Posted by Quadra 610 View Post

And . . .

Google

I don't know how to respond to this one, other than: "Hi cutie!"
post #57 of 114
Quote:
Originally Posted by Frank777 View Post

I've been wondering, if Light Peak's economies of scale drives down the price of optical cabling as expected, doesn't this effectively solve the Telcos "last mile" bandwidth problem?

Wouldn't this means Bell and the others will be able to match cable speeds very soon?

Systems like Verizon FIOS have already solved the "last mile" problem and easily beat cable in cities lucky enough to have it. Even FIOS is bottlenecked by copper once it gets inside the home, though. Maybe we should call that the "last foot" problem. I've heard they have been working on a solution, but haven't made it economical yet.
This Light Peak looks like it might quickly end up being the de facto solution to the "last foot" problem. Just imagine: a high-speed optical connection all the way from the phone company trunk lines to the motherboard of your computer.
post #58 of 114
Quote:
Originally Posted by melgross View Post

Power is easy. They just have to include two power lines in the connectors and cabling.

Several articles have mentioned that the Light Peak cables and connectors do exactly as you suggest.
post #59 of 114
Quote:
Originally Posted by melgross View Post

Quote:
Originally Posted by Quadra 610 View Post

And . . .

Google

I don't know how to respond to this one, other than: "Hi cutie!"

She's evil.
post #60 of 114
Quote:
Originally Posted by ranson View Post

Agreed. While it was a nice biography of computing buses, the assertion that somehow Intel thinks Atom will get more attention from Apple thanks to Light Peak is an absolute stretch and definitely not newsworthy, considering it's only supported by the author's opinion.

I do have to agree that the link between Atom and Light Peak was puzzling at best when I read it. I think that should have been expanded on more. I could loan out my tin foil hat if need be.

Good article besides that though. I enjoyed the read!
Hard-Core.
post #61 of 114
Apple may be pushing for this, but it isn't Apple's innovation; it's Intel's, and Sony is also on board according to this article on Intel's website. It doesn't mention Apple.

http://techresearch.intel.com/articles/None/1813.htm
post #62 of 114
Several articles have mentioned that Light Peak is full duplex, hot-pluggable and carries DC power, but I'd like to know if it also has some of the other features that make FireWire so much nicer than USB, such as offloading the signal and protocol processing from the CPU, peer-to-peer connections, the ability to operate in asynchronous or isochronous modes, daisy-chain connections, and higher packet efficiency.

The story unfolding sounds like Light Peak is capable of emulating any other standard, so I'm guessing it must have, or at least be able to emulate, all of those features except the packet efficiency. I don't want to take it for granted, though.

I would think that with all that emulating ability the packet efficiency would be relatively low, but with the absolute speeds so incredibly high, maybe the actual data throughput would still be higher and the efficiency just wouldn't matter?
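To put numbers on that intuition (the efficiency figures below are invented assumptions for illustration, not measured values), even a fairly inefficient protocol over a 10 Gb/s link would beat an efficient one over FireWire 800:

```python
# Effective throughput = raw link rate x protocol/packet efficiency.
# Both efficiency figures are illustrative assumptions.

def effective_throughput(raw_gbps, efficiency):
    """Usable data rate in Gb/s after protocol overhead."""
    return raw_gbps * efficiency

light_peak = effective_throughput(10.0, 0.5)   # assume only 50% efficient
firewire_800 = effective_throughput(0.8, 0.9)  # assume 90% efficient

print(light_peak, firewire_800)  # 5.0 vs ~0.72 Gb/s
```

In other words, the raw rate advantage is so large that packet efficiency would have to be catastrophically bad before it mattered.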

I'm hoping you smart folks can educate us ignorant types on these points. I'm starting to get excited about this thing, but after the USB 2.0/3.0 & DVI/HDMI kludges that Intel has forced on the market in place of FW, I'd like to have a better understanding of this standard before I get my hopes up.

Finally, I really like the idea of being truly universal and I can imagine how this might even migrate down to the keyboard & mouse level, but will it really be able to go as far as providing an interface for all those tiny memory sticks & flash cards that currently use USB?
post #63 of 114

Thanks for the interesting link.
In the pictures and video, it looks like they have gotten the optical encoder and decoder (lasers and photodetectors) so small and cheap that they will just be integrated directly into the plug at each end of the optical cable. If so, then the connector that the user plugs and unplugs would just be making traditional copper connections.

Fascinating if true, and possibly the fundamental breakthrough that makes optical technology finally practical on a consumer scale. It would certainly be an interesting solution to the typical optical fiber problems of expensive connectors that are difficult to mate correctly and highly susceptible to failures caused by getting dirty.

Anybody know if this is really the case or not?
post #64 of 114
Historically, Apple likes to change CPUs every 10 years or so; they don't stay with one CPU. They do this because you can't keep using your old software. Just like when they moved from PowerPC to Intel, all the software needed to be upgraded; after you own the new software they will change to another type of CPU, and then all your software needs to be upgraded again.

Now they bought the new CPU factory. That means they will produce CPUs for the iPhone and iPod, and later they will get rid of Intel and produce CPUs for their MacBooks and desktops.

After 5 years Apple will switch OS, maybe go back to OS 9 or use BeOS.
post #65 of 114
Quote:
Originally Posted by newuser1980 View Post

Historically, Apple likes to change CPUs every 10 years or so; they don't stay with one CPU. They do this because you can't keep using your old software...

SNIP

After 5 years Apple will switch OS, maybe go back to OS 9 or use BeOS.

Funny, I thought the same thing. The problem with this picture is that Apple has nowhere else to go for desktop chips: POWER is unsuitable, AMD merely aspires to match Intel, and Alpha is dead. At the same time, Intel's Tick-Tock approach has been very successful and, so long as they can continue to innovate at the current rate, unmatchable by anyone else thanks to their sales volume. Elsewhere, all the action is in the MID/phone market, and I strongly doubt that there will be clarity in 5 years... or even 10.
post #66 of 114
Quote:
Originally Posted by melgross View Post

She's beautiful, but I hope you're not insinuating that Apple's fortunes are sagging.

I would help keep Apple's fortunes from sagging anyway, since it makes me feel like this
post #67 of 114
Hey Dan, not to nitpick too much, but Apple had built-in 10BASE-T Ethernet in the Power Mac 7200 in 1996.

(To be fair, a lot of PC manufacturers also had standard Ethernet by that time. I had a Dell 486 with a 3Com 10/100 card preconfigured in it in 1994. Was it on the motherboard? No... but still, in the interest of being fair.)

Apple *did* have 1000BASE-T ports in desktops far before anyone else. I always thought that was really weird, since no mortal user even had a switch that could handle the bandwidth (at home, anyway), but I guess edit houses using the "brand new" (at the time) Final Cut probably loved it.


meh
post #68 of 114
Quote:
Originally Posted by huntercr View Post

Hey Dan, not to nitpick too much, but Apple had built-in 10BASE-T Ethernet in the Power Mac 7200 in 1996.

(To be fair, a lot of PC manufacturers also had standard Ethernet by that time. I had a Dell 486 with a 3Com 10/100 card preconfigured in it in 1994. Was it on the motherboard? No... but still, in the interest of being fair.)

Apple *did* have 1000BASE-T ports in desktops far before anyone else. I always thought that was really weird, since no mortal user even had a switch that could handle the bandwidth (at home, anyway), but I guess edit houses using the "brand new" (at the time) Final Cut probably loved it.


meh

NeXT's NeXTStation (1990)

Input/Output: SCSI internal connector, SCSI2 external port, DSP, video output, proprietary port for NeXT laser printer, two RS-423 serial ports, 10Base-T and 10Base-2 Ethernet
post #69 of 114
Quote:
Originally Posted by tombeck View Post

I wouldn't associate the name "Daniel Eran Dilger" with the word "truth" - at least not in the context of technology reporting.

I've read most of his stuff, and while he loves to extrapolate a lot with his analysis, I happened to be around for the '80s and '90s, and his recollection of history matches mine. I've used an Apple IIe, and I also happen to have saved a lot of those PC magazines from the '90s saying how much Win95 sucked and how NT was too much for computers to run. I also remember, in my yearbook room in 1995 (it was all Mac, because PCs sucked for desktop publishing), a joke showing a Mac from 1984 and a computer with Win95 on it, both saying "Easy! Just point and click!".

I've used the original Apple II, an Apple III, the Macintosh, Win 3.0 and 3.1 through 95 and 98, skipped ME for Win2k, used Linux, and remember the IE4/Win98 browser-bundling fiasco.

Where were you when Microsoft was building its monopoly?
post #70 of 114
Quote:
Originally Posted by al_bundy View Post

with USB and PCI it took Intel a long time to get their technology into computers. with Apple's record of shipping computers without "legacy" ports they found a partner that will ship millions of units with their port.

PCI was very quick to be adopted, from what I remember. It seems like it only took one motherboard generation. USB was a different matter; I think the biggest hurdle was OS support.

Quote:
Originally Posted by fyngyrz View Post

Cables? CABLES?

The future is wireless. Get the damned cables off my desk. All of them. I don't want to see anything more than power cords, and I'm not all that happy with them, either.

...

Sometimes I think the world just gets stuck. Cables are out. RF is in.

Wires contain the signal and limit interference, and are easier to design for. For every wireless standard, you'll find a wired counterpart that's faster. Also, the higher the frequency, the more line of sight matters. Wireless still requires power that a wired cable usually supplies by default; even this Light Peak idea offers power conductors. Not only that, wireless is less efficient with its signal: the signal goes in all directions, while a wire sends most of its signal down a controlled path.

Quote:
Originally Posted by mdriftmeyer View Post

Apple invented Firewire, created miniDVI/miniDisplayPort and many other modifications to standards all subsequently added to standards used by the Industry.

Who else has adopted miniDVI? Has anyone else adopted miniDP yet?

Quote:
Originally Posted by Joe The Dragon View Post

but can this do pci-e over it?

video cards need to be on the pci-e bus and not a super usb bus with high cpu load.

Video cards don't have to be on PCIe. That's just the current favored standard. If that changes, the card makers will adapt and switch to a new connector like they have in the past with ISA->PCI->AGP->PCIe transitions.

Quote:
Originally Posted by X38 View Post

Systems like Verizon FIOS have already solved the "last mile" problem and easily beat cable in cities lucky enough to have it. Even FIOS is bottlenecked by copper once it gets inside the home, though. Maybe we should call that the "last foot" problem. I've heard they have been working on a solution, but haven't made it economical yet.
This Light Peak looks like it might quickly end up being the de facto solution to the "last foot" problem. Just imagine: a high-speed optical connection all the way from the phone company trunk lines to the motherboard of your computer.

It's not a problem for the immediate future; FIOS doesn't saturate 100 Mbit Ethernet. Then there's gigabit, and there are 10-gigabit Ethernet standards that use copper too. When you're talking about shorter distances, optical doesn't have quite the same advantage, though optical is probably more reliable at 10 gig anyway.
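To make the headroom concrete (the service tiers below are illustrative examples, not actual FIOS or Ethernet measurements), here's how long a typical CD-sized file takes at each rung of the ladder:

```python
# How long to move a 700 MB file at various link speeds. The listed tiers
# are illustrative examples, not actual FIOS/Ethernet measurements.

def transfer_seconds(size_mb, link_mbps):
    """Seconds to transfer size_mb megabytes over a link_mbps link."""
    return size_mb * 8 / link_mbps

for name, mbps in [("50 Mb/s FIOS tier", 50),
                   ("100 Mb/s Ethernet", 100),
                   ("Gigabit Ethernet", 1000)]:
    print(f"{name}: {transfer_seconds(700, mbps):.1f} s")
```

Even a fast fibre-to-the-home tier sits well below what common copper Ethernet already handles inside the house.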

Quote:
Originally Posted by huntercr View Post

Apple *did* have 1000BASE-T ports in desktops far before anyone else. I always thought that was really weird, since no mortal user even had a switch that could handle the bandwidth (at home, anyway), but I guess edit houses using the "brand new" (at the time) Final Cut probably loved it.

The chart has a double standard anyway, comparing the high-end Mac platform against mainstream PCs. Not only that, if a high-end Mac gets a port, then the entire Mac platform is counted, but it wasn't counted on the Wintel platform unless most machines got it. Gigabit Ethernet is pretty common on PCs now; it's been standard on workstations and business desktops for maybe five years, and is starting to encroach on even the budget machines.

I still need to get a reliable GigE switch. I have one, but about once a week it quit working properly, requiring a power cycle, so I just quit using it. This is despite my having several PCs and Macs - and, just this week, even a printer - with GigE.
post #71 of 114
Quote:
Originally Posted by sprockkets View Post

I've read most of his stuff, and while he loves to extend a lot with his analysis, I happened to be around for the 80s and 90s and his recollection of history confirms with mine. I've used an Apple IIe, and I also happen to have saved a lot of those PC magazines during the 90s saying how much Win95 sucked and how NT was too much for computers to run, and I also remembered in my yearbook room (it was all mac, because computers sucked for desktop publishing) in 1995 the joke showing a mac from 1984 and a computer with win95 on it, both saying "Easy! Just point and click!".

I've used the original Apple II, an Apple III, the Macintosh, Win 3.0 and 3.1 to 95, 98, skipped ME for Win2k, used LInux, and remembered the IE4/Win98 browser bundling fiasco.

Where were you when Microsoft was building its monopoly?

No one wants to hear your life story...this article isn't about you. Why are you making it about you?
post #72 of 114
Apple and Intel working together on high-speed communication ports is not new. In fact, Intel was involved in the development of the FireWire standard as well. The development of FireWire was initiated and perhaps led by Apple, but there were many people and organizations involved, including Intel. Why Intel and other members of the IEEE P1394 Working Group ultimately dropped FireWire is thought to be the result of disagreements over high licensing costs.

I recommend reading some first-hand facts about the history of FireWire. It is perhaps not as romantic, but much more interesting:
http://www.teener.com/firewire_FAQ/#...ented_Firewire

So what does CPU development have to do with Light Peak? The way the facts are connected into a story in this article seems almost funny. I guess the title does justice to this AppleInsider article after all. The facts are true, but the story line is fictitious. Everybody who likes to read Apple soap & love stories, read on.

PS: Here is an excerpt from the above mentioned webpage: "After Steve Jobs came back to Apple, he was somehow convinced that Apple should change the game midstream and ask for $1 per port for the Apple patents (his argument was that it was consistent with the MPEG patent fees). I left Apple before Steve came back, so I have no idea how this really happened. This annoyed everyone (including yours truly) immensely ... particularly Intel which had sunk a lot of effort into 1394 (the improved 1394a-2000 and 1394b-2002 standards are partly based on Intel work). The faction of Intel that doesn't like open standards like 1394 used this as an excuse to drop 1394 support and bring out USB 2 instead."
post #73 of 114
Quote:
Originally Posted by Gazoobee View Post

It's called "context" and sometimes "backgrounding." It's what's missing from 95% or more of tech news stories and is usually welcomed by anyone with half a brain who's actually interested in the truth.

If you just want to hear the latest unsupported opinions and alternately either nod your head or shake your fist, you'd be better off reading another site.


I would agree that the background is interesting, but the context is quite a stretch. Anyhow, recommending someone to go somewhere else is a very constructive way to deal with an opposing opinion. Well done, Gazoobee!
post #74 of 114
Quote:
Originally Posted by JeffDM View Post

PCI was very quick to be adopted, from what I remember. It seems like it only took one motherboard generation. USB was a different matter; I think the biggest hurdle was OS support.



Wires contain the signal and limit interference, and are easier to design for. For every wireless standard, you'll find a wired counterpart that's faster. Also, the higher the frequency, the more line of sight matters. Wireless still requires power that a wired cable usually supplies by default; even this Light Peak idea offers power conductors. Not only that, wireless is less efficient with its signal: the signal goes in all directions, while a wire sends most of its signal down a controlled path.



Who else has adopted miniDVI? Has anyone else adopted miniDP yet?



Video cards don't have to be on PCIe. That's just the current favored standard. If that changes, the card makers will adapt and switch to a new connector like they have in the past with ISA->PCI->AGP->PCIe transitions.



It's not a problem for the immediate future; FIOS doesn't saturate 100 Mbit Ethernet. Then there's gigabit, and there are 10-gigabit Ethernet standards that use copper too. When you're talking about shorter distances, optical doesn't have quite the same advantage, though optical is probably more reliable at 10 gig anyway.



The chart has a double standard anyway, comparing the high-end Mac platform against mainstream PCs. Not only that, if a high-end Mac gets a port, then the entire Mac platform is counted, but it wasn't counted on the Wintel platform unless most machines got it. Gigabit Ethernet is pretty common on PCs now; it's been standard on workstations and business desktops for maybe five years, and is starting to encroach on even the budget machines.

I still need to get a reliable GigE switch. I have one, but about once a week it quit working properly, requiring a power cycle, so I just quit using it. This is despite my having several PCs and Macs - and, just this week, even a printer - with GigE.

Motherboards with PCI came out right away, just as they did with USB, but it took years to switch over. My first home build in 1998 had ISA slots, and they were found on PCs for a few more years. Even VLB gave Intel some competition for a few years, and some people said it was faster.

Compare that to Apple shipping a computer one year with no "legacy" ports at all. I bet they will do the same thing next year; it will give accessory makers a financial reason to build devices for Light Peak. I remember that even in 2000 there were a lot of devices on the market with legacy ports and no USB.
post #75 of 114
Quote:
Originally Posted by al_bundy View Post

motherboards with PCI came out right away just as they did with USB, but it took years to switch over. my first home build in 1998 had ISA slots and they were found on PC's for a few more years. Even VLB gave Intel some competition for a few years and some people said it was faster.

I still don't think PCI/VLB was quite like that. VLB was 486-only; I don't think any Pentium systems had VLB. 486 boards with PCI were pretty scarce - I've seen exactly one. So the transition did happen; it was necessary because VLB couldn't scale well. It took a while for 486 systems to stop being sold, because the industry doesn't just abandon microarchitectures the instant the new one is available. That's still true: for example, the i7 being available doesn't mean everyone quits buying Core 2; it just goes down a price bracket.
post #76 of 114
I always shudder when I see a technical article on AppleInsider, because they are almost always wrong. At some point I always conclude that the author doesn't really understand the technology they're talking about. The analysis articles, like this one, are equally wrongheaded.

After THREE PAGES of bogus filler brushing over ancient history, the author finally gets around to actually talking about Light Peak. And what does he say? Nothing that wasn't in the briefs sent out on the wire. Ugggg.

Apple is giving Light Peak to Intel because Intel builds EVERY DESKTOP PLATFORM. AMD and others are still based on the same platform. If you want to make a desktop standard you get it into Intel's roadmap. The counterexamples are extremely rare.

There, three pages of cruft reduced to a single para.

Maury
post #77 of 114
> if Light Peak's economies of scale drives down the price of optical cabling as expected
> doesn't this effectively solve the Telcos "last mile" bandwidth problem?

Sadly, no. Light Peak is a different kind of fibre, and the kind of fibre the telcos/cablecos use is already pretty much free.

The cost in the last mile has nothing to do with the cost of the fibre and everything to do with digging up the last mile. You have to send out real humans to run the stuff, and they cost a fortune. Imagine the cost of sending someone to install a single CFL light bulb in every house in the USA - the cost would be essentially the same whether the bulbs cost $1 or $0. Additionally, running fibre means you have to replace all sorts of equipment at the local head-end office, which isn't cheap either.
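The point about fibre price being a rounding error can be made concrete with a toy cost model (every dollar figure here is an invented assumption, purely to illustrate that labor dominates):

```python
# Toy model of last-mile build-out cost per home. All numbers are invented
# assumptions; the point is only that labor swamps the material cost.

def cost_per_home(labor_usd, fibre_m, fibre_usd_per_m):
    """Total install cost per home: one truck roll plus the fibre run."""
    return labor_usd + fibre_m * fibre_usd_per_m

expensive_fibre = cost_per_home(500.0, 100, 0.50)  # $0.50/m fibre
free_fibre = cost_per_home(500.0, 100, 0.0)        # fibre costs nothing

# Making the fibre literally free changes the total by under 10%:
savings = (expensive_fibre - free_fibre) / expensive_fibre
print(f"{savings:.1%}")
```

Under these assumptions, even driving the fibre price to zero barely moves the per-home total, which is why cheap Light Peak cabling doesn't change the last-mile economics.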

As to the cable that Light Peak uses, it is a real breakthrough. Optical cabling has always had serious advantages over copper, but it also has three major downsides. One is that it doesn't supply power, but that's easy to solve with some copper running beside the fibre. Another is that the connectors are bulky AND fragile, which is purely a design issue. But the main problem for the consumer space is that the fibres are surprisingly inflexible, and need large radius curves. They're fine in the back-office but you couldn't expect to plug in your iPod or run it to your keyboard.

And that's where Corning comes in. This new fibre they've developed is extremely flexible. Not as flexible as thin copper, but definitely more flexible than larger wires like the one hooking up my monitor. Flexible enough that you could seriously consider using it to replace copper in pretty much everything other than the mouse cable, which needs to be REALLY bendy. Thank you Corning!

Maury
post #78 of 114
If I understand correctly, the big tech magic here is miniaturizing, and making cheap, the optical-to-electrical bridge. From the video at Intel, it looks like every cable will have this hardware at each end, so end users won't be dealing with high-precision optical connections, but "normal" metal-to-metal electrical connections.

The connectors shown at Intel's site still don't look like trivial plug-and-unplug connectors like USB. This looks intended for relatively long-lived connections, like a monitor or docking station. Maybe Apple will finally make a laptop with a docking station using this tech; current docking ports have so many pins and are so bulky.

Interesting stuff.

- Jasen.

P.S. I agree with those complaining that the article rambled and covered a lot of unnecessary side history.
post #79 of 114
It sounds like a great way to communicate data, but what about power? Today's USB and FireWire also provide power over the cable for things like portable hard drives.

I didn't see it anywhere, but does this tech also supply power over copper alongside the data over optical?
post #80 of 114
I would normally agree... until you need to go to init 1 on an iMac with a wireless keyboard (which needs init 3) <smile>