Macs may go even longer between revamps as Intel kills tick-tock

1246 Comments

  • Reply 61 of 105
    TonyI Posts: 2 member
    Contrary to what the article indicates, Apple has incorporated Skylake processors into its latest iMacs: http://arstechnica.com/apple/2015/10/apple-goes-all-retina-for-its-27-inch-skylake-imac-refresh/ Just not (yet) into Mac notebooks.
  • Reply 62 of 105
    schlack said:
    this makes so much sense...they're probably leaving a lot of room on the table for improvement with each tick-tock cycle. plus it buys them some time to fight physics...as they start to hit walls that need breaking down.
    Intel is having difficulty competing with the aggressive advancements being delivered by Samsung and TSMC as chip foundries. 

    The industry pundits were lauding Intel's 14 nm process as being superior to the other foundries'. And the observations were technically correct. However, no one ever mentioned the difficulties that Intel could encounter with their aggressive design. 

    That is exactly what happened. Intel couldn't produce the chips in large volumes. It isn't entirely due to physics. 

    TSMC chose a less aggressive but manageable approach. Their 16 nm FF process may not be as elegant as Intel's but yields are not an issue. And TSMC is getting ready to add Integrated Fan Out to the manufacturing process allowing for even thinner chips with better efficiency still. 

    As Nvidia and AMD manufacture their GPUs at TSMC's foundries, we are about to witness substantial improvement in the performance of discrete GPUs to boot. 

    Over on semiwiki, there is a lot of excitement regarding TSMC's upcoming 10 and 7 nm process nodes. In short, Intel has essentially lost its lead in manufacturing technology. They never were really good with processor design and AMD showed that back in the mid 2000s. Apple is showing it now also with the A9X performing at the level of a low end Core i5 at one third the energy consumption. 

    Apple will have no choice but to move the MacBook line over to the A series of CPUs. The A10/A10X will be released this fall. And Intel still seems to be having difficulty getting enough high end Skylake chips into Apple's hands to build high end MacBook Pros at the volumes Apple needs. 

    The A10X may outperform Intel's Skylake Core i7. The A11X most likely will. And Apple will have a problem. An iPad Pro will outperform the flagship MacBook Pro at half the cost and with much better graphics performance. 

    Intel's dominance is now officially over. TSMC is now the leader in CPU manufacturing and Apple in CPU design. 

    And I can't wait to get my hands on an A series powered MacBook. 
  • Reply 63 of 105
    MacPro Posts: 19,718 member
    brs165 said:
    I think this is all the more reason that Apple will at some point move away from intel x86 to Apple ARM Ax chips.
    I would agree that at least the lower end MacBook could soon be running on something like an A10X. I bet Apple have in-house prototypes running OS X already.  At some point even the high end Macs may move that way and perhaps offer a BTO additional Intel chip for those wanting dual booting.
  • Reply 64 of 105
    MacPro Posts: 19,718 member

    melgross said:
    bdkennedy said:
    Considering Apple underclocks all of their Ax chips, I'm willing to bet if Apple threw a couple of normal clocked A10X chips in a MacBook they would be good to go.
    Why does everybody oversimplify this?
    Many of us have lived through Apple changing the core CPU in Macs several times since 1984.  Is this not quite feasible, albeit more complicated than those of us who are not chip experts realize?  The most recent change, obviously, was the switch to Intel, which was pretty dramatic for users.  Apparently Apple had prototypes running OS X on Intel for several years, if I recall correctly.
  • Reply 65 of 105
    As for Mel Gross' post regarding the complexity of porting OS X to the A series, it is a complicated issue. But Apple has ported the Mac OS from 68k to PPC and moved from Mac OS to OS X. They then moved OS X from PPC over to x86. And they did it as a vastly smaller company with minuscule resources compared to what they have now. 

    The real question is whether porting OS X to the A series is worth the effort. And it probably is not. Development on OS X is paltry in comparison to iOS. 

    Hence, the real issue is when Apple adds additional functionality into iOS that eliminates the need for a conventional laptop altogether. 

    Intel is losing its ability to compete. And that trend will accelerate. It has far more to do with economics than physics. 

    The sales of conventional desktops and laptops are falling off substantially. The public does most of their computing on mobile devices. Intel no longer has the capital to continue investing in its own manufacturing process. The laws of physics haven't prevented TSMC's development of the Integrated Fan Out process. And they aren't preventing them from achieving 10 nm before Intel. It's making Intel's acquisition of Altera look foolish. Xilinx is planning on manufacturing their FPGAs on TSMC's 7 nm process. And the first one out with a 7 nm product is virtually assured of achieving dominant market share. 

    Apple may have a problem when an iPad or even an iPhone outperforms Intel's best portable CPU computationally. Especially for far lower cost. Likely it will be Intel left with egg on its face. 

    Intel would seem safe on the desktop for now. But Apple has plans to build out their own cloud. Designing a high performance ARM chip would seem to be in the works. 

    Over at semiwiki, they have written about TSMC developing two separate 7 nm nodes. One for mobile CPUs and the other for high performance CPUs. 

    There would seem to be someone out there wanting to build high performance CPUs on the 7 nm process node and at high enough volumes to justify TSMC investing into developing the process. I could be wrong, but it would seem a strong indication that Apple may be planning on building a high performance ARM CPU using that process. And if that's true, Intel is in deep trouble. 

    Mr. Gross seems to think that Intel's position is secure, but all the evidence is pointing to a massive shift away from x86 and over to the ARM ISA. All being led by Apple. 

    And to think that Otellini turned down Jobs for the opportunity of building the CPU for the original iPhone. 
  • Reply 66 of 105
    mattinoz Posts: 2,299 member
    wizard69 said:

    pmz said:
    I hope Apple does go to ARM and custom designed silicon for Macs. We'd get better Mac products in the end. Apple controlling the Mac's destiny, rather than waiting on Intel.
    There is another option: custom silicon in cooperation with Intel.    Intel has been doing a lot of custom CPUs, mostly Xeons, of late, but there is nothing to keep them from doing custom Apple hardware.   

    I can see Apple doing custom I/O, GPUs and camera processors, all sitting on a die managed by Intel technology.
    In the case of Intel allowing joint ventures, couldn't they license DMI bus access to Apple?
    Take a big.LITTLE approach: Apple builds their own platform hub using the A-series. It can already drive a 12-inch screen, handle networking, and run day-to-day productivity apps. Treat the x86 as a co-processor for higher demand loads or apps that aren't optimised.

    Sure, the DMI bandwidth might be too high for the A-series to handle, but it's not like Apple don't have people who are really good at PCIe switching on the team.
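    To make the split concrete, here is a minimal C sketch of that kind of dispatch, assuming a purely hypothetical cut-over threshold and two stand-in workers (nothing here is a real Apple or Intel interface):

        /* Toy sketch of the big.LITTLE-style split described above: route light
         * work to an efficient path (the "A-series" stand-in) and heavy work to
         * a fast path (the "x86 co-processor" stand-in). The threshold and both
         * handlers are invented placeholders for illustration only. */
        #include <stdio.h>

        typedef long long (*worker_fn)(long long n);

        static long long little_core(long long n) {   /* efficient path: simple loop */
            long long sum = 0;
            for (long long i = 0; i < n; i++) sum += i;
            return sum;
        }

        static long long big_core(long long n) {      /* fast path: closed-form result */
            return n * (n - 1) / 2;
        }

        static long long dispatch_job(long long n) {
            const long long THRESHOLD = 1000000;      /* hypothetical cut-over point */
            worker_fn w = (n < THRESHOLD) ? little_core : big_core;
            return w(n);
        }

        int main(void) {
            printf("small job: %lld\n", dispatch_job(1000));
            printf("large job: %lld\n", dispatch_job(10000000));
            return 0;
        }

    In a real design the "big" path would be the x86 co-processor sitting behind the chipset link rather than a closed-form shortcut; the point is only that the routing decision happens above the ISA.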
  • Reply 67 of 105
    rezwits Posts: 878 member
    I am not ready for ARM, and "hybrid" applications. Been there too many times. If Apple went to ARM, that would probably be the end of my "getting new macs, all the time, days", except to just buy a laptop to install Xcode for compiling, especially with almost ALL software going to subscription based pricing. I liked the days when you could buy software and it was yours. I mean in the future I could see a whole setup where a developer or designer would have to pay $50-$100 to $200 a month in software "tax" just to be able to use and have his ONE SYSTEM per month. This is getting ridiculous... And all these updates every month and per year are just a joke, I am going to stick with older, even retro GOODS. These new machines aren't getting faster, they are getting slower with fewer ports, less capability, and they are more expensive and of cheaper quality. They are like soda cans that open the "latest" OS and the "latest" App, then you throw them away and get a new one to install the "new" OS and yada yada yada. I have about 10-15 machines at home that just work, all with paid software and not too many subscriptions. And I am OUT! I'll just have my one LATEST and GREATEST machine, which will be whatever MacBook Pro-ish for sitting on the couch, and all the workhorses will be the older machines. Processing code, video, audio, and images...
  • Reply 68 of 105
    staticx57 Posts: 405 member
    sockrolid said:
    brs165 said:
    I think this is all the more reason that Apple will at some point move away from intel x86 to Apple ARM Ax chips.
    Because RISC > CISC.

    You do know Intel uses RISC cores right?
  • Reply 69 of 105
    knowitall Posts: 1,648 member
    melgross said:
    brs165 said:
    I think this is all the more reason that Apple will at some point move away from intel x86 to Apple ARM Ax chips.

    Ah, that's a whole different matter. Apple's ARM chips would need to become several times faster.
    What nonsense, A processors are more than fast enough.
  • Reply 70 of 105
    knowitall Posts: 1,648 member
    schlack said:
    this makes so much sense...they're probably leaving a lot of room on the table for improvement with each tick-tock cycle. plus it buys them some time to fight physics...as they start to hit walls that need breaking down.
    Intel is having difficulty competing with the aggressive advancements being delivered by Samsung and TSMC as chip foundries. 

    The industry pundits were lauding Intel's 14 nm process as being superior to the other foundries'. And the observations were technically correct. However, no one ever mentioned the difficulties that Intel could encounter with their aggressive design. 

    That is exactly what happened. Intel couldn't produce the chips in large volumes. It isn't entirely due to physics. 

    TSMC chose a less aggressive but manageable approach. Their 16 nm FF process may not be as elegant as Intel's but yields are not an issue. And TSMC is getting ready to add Integrated Fan Out to the manufacturing process allowing for even thinner chips with better efficiency still. 

    As Nvidia and AMD manufacture their GPUs at TSMC's foundries, we are about to witness substantial improvement in the performance of discrete GPUs to boot. 

    Over on semiwiki, there is a lot of excitement regarding TSMC's upcoming 10 and 7 nm process nodes. In short, Intel has essentially lost its lead in manufacturing technology. They never were really good with processor design and AMD showed that back in the mid 2000s. Apple is showing it now also with the A9X performing at the level of a low end Core i5 at one third the energy consumption. 

    Apple will have no choice but to move the MacBook line over to the A series of CPUs. The A10/A10X will be released this fall. And Intel still seems to be having difficulty getting enough high end Skylake chips into Apple's hands to build high end MacBook Pros at the volumes Apple needs. 

    The A10X may outperform Intel's Skylake Core i7. The A11X most likely will. And Apple will have a problem. An iPad Pro will outperform the flagship MacBook Pro at half the cost and with much better graphics performance. 

    Intel's dominance is now officially over. TSMC is now the leader in CPU manufacturing and Apple in CPU design. 

    And I can't wait to get my hands on an A series powered MacBook. 
    I second that, you're well informed!

    I didn't know Nvidia uses TSMC to manufacture its GPUs; this is important because it's state of the art on several fronts.
    Funny that so many people doubt this is the reality in the near future: ARM-based processors will dominate the server and desktop space. It's actually quite simple, cost and performance per watt determine it all.
    Also, other players like Nvidia already produce feature-complete boards with similar advanced capabilities for the cost of a single Intel processor; it's only a matter of time before this forms the base of most new desktop systems.
  • Reply 71 of 105
    knowitall Posts: 1,648 member

    staticx57 said:
    sockrolid said:
    Because RISC > CISC.

    You do know Intel uses RISC cores right?
    That's bloatware: it's a translation layer to make up for the x86 instruction crap.
    ARM processors do not need such a layer and are inherently more efficient.
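    For illustration only, here is a toy decode step in C that splits one CISC-style "add [mem], reg" into RISC-like micro-ops. It is a teaching sketch of the translation-layer idea being argued about above, not Intel's actual front end; the opcodes and register numbers are invented:

        /* Conceptual illustration: one CISC-style instruction decoded into
         * three simpler micro-ops (load, add, store). */
        #include <stdio.h>

        typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

        typedef struct { uop_kind kind; int dst; int src; } uop;

        /* Decode "ADD [addr], reg" into micro-ops written to out[]; returns count.
         * Register 100 stands in for an internal temporary. */
        static int decode_add_mem_reg(int addr, int reg, uop out[]) {
            out[0] = (uop){ UOP_LOAD,  100,  addr };   /* tmp   <- mem[addr]   */
            out[1] = (uop){ UOP_ADD,   100,  reg  };   /* tmp   <- tmp + reg   */
            out[2] = (uop){ UOP_STORE, addr, 100  };   /* mem[addr] <- tmp     */
            return 3;
        }

        int main(void) {
            uop uops[3];
            int n = decode_add_mem_reg(0x2000, 7, uops);
            for (int i = 0; i < n; i++)
                printf("uop %d: kind=%d dst=%d src=%d\n",
                       i, uops[i].kind, uops[i].dst, uops[i].src);
            return 0;
        }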
  • Reply 72 of 105
    knowitall Posts: 1,648 member

    As for Mel Gross' post regarding the complexity of porting OS X to the A series, it is a complicated issue. But Apple has ported the Mac OS from 68k to PPC and moved from Mac OS to OS X. They then moved OS X from PPC over to x86. And they did it as a vastly smaller company with minuscule resources compared to what they have now. 

    The real question is whether porting OS X to the A series is worth the effort. And it probably is not. Development on OS X is paltry in comparison to iOS. 

    Hence, the real issue is when Apple adds additional functionality into iOS that eliminates the need for a conventional laptop altogether. 

    Intel is losing its ability to compete. And that trend will accelerate. It has far more to do with economics than physics. 

    The sales of conventional desktops and laptops are falling off substantially. The public does most of their computing on mobile devices. Intel no longer has the capital to continue investing in its own manufacturing process. The laws of physics haven't prevented TSMC's development of the Integrated Fan Out process. And they aren't preventing them from achieving 10 nm before Intel. It's making Intel's acquisition of Altera look foolish. Xilinx is planning on manufacturing their FPGAs on TSMC's 7 nm process. And the first one out with a 7 nm product is virtually assured of achieving dominant market share. 

    Apple may have a problem when an iPad or even an iPhone outperforms Intel's best portable CPU computationally. Especially for far lower cost. Likely it will be Intel left with egg on its face. 

    Intel would seem safe on the desktop for now. But Apple has plans to build out their own cloud. Designing a high performance ARM chip would seem to be in the works. 

    Over at semiwiki, they have written about TSMC developing two separate 7 nm nodes. One for mobile CPUs and the other for high performance CPUs. 

    There would seem to be someone out there wanting to build high performance CPUs on the 7 nm process node and at high enough volumes to justify TSMC investing into developing the process. I could be wrong, but it would seem a strong indication that Apple may be planning on building a high performance ARM CPU using that process. And if that's true, Intel is in deep trouble. 

    Mr. Gross seems to think that Intel's position is secure, but all the evidence is pointing to a massive shift away from x86 and over to the ARM ISA. All being led by Apple. 

    And to think that Otellini turned down Jobs for the opportunity of building the CPU for the original iPhone. 
    Ok, very good!
    Please comment some more.
  • Reply 73 of 105
    Marvin Posts: 15,309 moderator
    The Economist magazine just had an article on the death of Moore's Law, and mentioned exactly this change by Intel as one of their points. Not sure if ARM chips are going to be able to escape the fundamental issues of physics and technological hurdles that need to be met to keep on making processors ever better at the same pace.
    All of the chip manufacturers will run into problems the smaller they go. Intel is planning to move away from using Silicon:

    http://arstechnica.com/gadgets/2015/02/intel-forges-ahead-to-10nm-will-move-away-from-silicon-at-7nm/

    IBM has managed a 7nm test with some new processes:

    http://www.pcworld.com/article/2946124/ibm-reveals-worlds-first-working-7nm-processor.html

    There are other things they can do like use different materials to boost clock speed over transistor count. If they converted parts to optical components, they might be able to reduce the heat the chips generate:

    http://news.berkeley.edu/2015/12/23/electronic-photonic-microprocessor-chip/

    There may eventually be a way to create a virtual transistor instead of a physical one without any complications of quantum computing. Perhaps a transistor can be represented using an electrical or photonic signal that just has its state maintained. Like a light wave in a loop. At any given time, a processor maintains a binary state. They have 1-2 billion transistors just now. An optical fiber has managed to send 255 trillion bits of information in a second. They need to perform computing operations like add, subtract, multiply, divide, shift, copy etc. They would produce a second signal that does the required adjustments to the sustained state and merge them. If that kind of thing could exist, it wouldn't need physical chip architectures, any chip architecture (ARM, PPC, X86) could be virtualized and software could run against whatever virtual chip architecture was required. We already have this kind of thing with emulators where they create software versions of hardware chips.
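    The emulator point is easy to see in code. Here is a minimal sketch of a "chip" that exists only as software, with a tiny instruction set invented purely for illustration (it is not ARM, PPC, or x86):

        /* A minimal software "virtual chip": a few opcodes interpreted entirely
         * in software, the same idea an emulator uses at much larger scale. */
        #include <stdio.h>
        #include <stdint.h>

        enum { OP_LOADI, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

        typedef struct { uint8_t op, a, b, c; } insn;  /* opcode, dest, src1, src2 */

        static void run(const insn *prog) {
            int64_t reg[8] = {0};                       /* the "registers" */
            for (const insn *p = prog; p->op != OP_HALT; p++) {
                switch (p->op) {
                case OP_LOADI: reg[p->a] = p->b;                  break;
                case OP_ADD:   reg[p->a] = reg[p->b] + reg[p->c]; break;
                case OP_MUL:   reg[p->a] = reg[p->b] * reg[p->c]; break;
                case OP_PRINT: printf("r%d = %lld\n", p->a, (long long)reg[p->a]); break;
                }
            }
        }

        int main(void) {
            insn prog[] = {
                { OP_LOADI, 0, 6, 0 },   /* r0 = 6            */
                { OP_LOADI, 1, 7, 0 },   /* r1 = 7            */
                { OP_MUL,   2, 0, 1 },   /* r2 = r0 * r1      */
                { OP_PRINT, 2, 0, 0 },   /* prints r2 = 42    */
                { OP_HALT,  0, 0, 0 },
            };
            run(prog);
            return 0;
        }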

    Eventually people are going to stop caring about the performance improvements the way we do with other products like cars, appliances, TVs. On the following page of Mac CPU scores, they have only gone up about 50-60% over the last 5 years but 400% in the 5 years before:

    https://browser.primatelabs.com/mac-benchmarks

    The power draw has gone down making chips run cooler but the performance demand is slowing down. Mobile device sales are growing, PC sales are falling. Power budgets are being put into GPUs instead of CPUs (GPUs have gone up 400% in the last 5 years). Some quad-core i7 mobile chips today are slower than their counterparts from 4 years ago because the IGPs are taking up so much of the power budget.

    If they were able to switch back and forth between CPU and GPU cores dynamically then this would be less of a problem. You wouldn't for example have a fixed quad-core CPU plus 40-core GPU, the GPU cores could be reconfigured to behave like CPU cores but more than one of them per CPU core. They are doing the same basic computations but in a different way.

    The need for really high performance computing will be satisfied by the cloud and the cost of that will come down. For the most part, GPUs will take over high performance personal computing tasks and they are still advancing for now. We only need the high performance for a few particular tasks like scientific computing, 3D/image processing, encoding and there will be a plateau that is good enough for personal computing and beyond this, people just get a cloud service or buy a bunch of personal computers.
    edited March 2016
  • Reply 74 of 105
    knowitall Posts: 1,648 member
    sockrolid said:

    kkerst said:
    If the future is with ARM and to achieve the type of processing power needed, the solution will probably be distributed processing with numerous custom Apple ARMs used in one design. I can see a platform existing where an A10 is used with an Apple GPU (ARM based). If a DSP is needed, then I can see a custom sound DSP processor used either off chip or within the A10 (Ax or whatever). The only problem with all this is not the hardware, it's what it does to the install base. Essentially, backward compatibility would be a problem for software. That's never stopped Apple before.
    The obvious solution is the same one that Apple has used before: fat binaries with both RISC and CISC executables.
    Yes, the executable part of the "fat" app bundle will be roughly 2x the size of that of an Intel-only app.
    But we're living in the era of 3TB hard drives.  A few extra MB per app won't even be noticed.

    There would be a transition period during which Apple will allow developers to ship fat binaries for backward compatibility.
    Then, eventually, as the older Intel systems become deprecated, Apple would plant the stake and require ARM-only binaries.

    Apple has made two processor transitions already.  68K -> PowerPC and PowerPC -> Intel.
    And I have heard rumors on AI that there is an ARM port of OS X running right now.
    Just waiting for the right moment to be released on an ARM iMac or MacBook.
    (At least 3 or 4 more years I'm randomly guessing, judging by how fast Apple is developing their Ax chips.)
    Exactly, and fat binaries don't have to be fat while downloading (only on Apple's servers), because the target hardware is known, so the server can strip the unused part.
    A similar technique is already used for iOS downloads (in that case stripping the parts meant for different classes of iOS hardware, like iPhone and iPad).
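    For anyone curious what a fat binary actually looks like on disk, here is a small C sketch that lists the architecture slices of a Mach-O file. The struct layouts mirror the documented on-disk format from <mach-o/fat.h> (all fields are stored big-endian); the cputype values noted in the comment are the standard Mach-O constants:

        /* List the slices of a fat (multi-architecture) Mach-O file.
         * This only inspects the header; it does not strip anything. */
        #include <stdio.h>
        #include <stdint.h>
        #include <arpa/inet.h>   /* ntohl() */

        #define FAT_MAGIC_BE 0xCAFEBABEu

        struct fat_header_raw { uint32_t magic, nfat_arch; };
        struct fat_arch_raw   { uint32_t cputype, cpusubtype, offset, size, align; };

        int main(int argc, char *argv[]) {
            if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
            FILE *f = fopen(argv[1], "rb");
            if (!f) { perror("fopen"); return 1; }

            struct fat_header_raw h;
            if (fread(&h, sizeof h, 1, f) != 1 || ntohl(h.magic) != FAT_MAGIC_BE) {
                printf("%s is not a fat binary (single-architecture or not Mach-O)\n", argv[1]);
                fclose(f);
                return 0;
            }

            uint32_t n = ntohl(h.nfat_arch);
            printf("%s contains %u architecture slice(s):\n", argv[1], n);
            for (uint32_t i = 0; i < n; i++) {
                struct fat_arch_raw a;
                if (fread(&a, sizeof a, 1, f) != 1) break;
                /* 0x01000007 = x86_64, 0x0100000C = arm64 in Mach-O cputype terms */
                printf("  slice %u: cputype=0x%08X  size=%u bytes\n",
                       i, ntohl(a.cputype), ntohl(a.size));
            }
            fclose(f);
            return 0;
        }

    Server-side thinning as described would then amount to something like "lipo MyApp -thin arm64 -output MyApp-arm64" before delivery (MyApp is a placeholder name, and an arm64 Mac slice is still hypothetical at this point).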
  • Reply 75 of 105
    volcan said:
    Those boxes use Freescale ARM SoCs.

    I suspect that Apple Ax chips could be used at a fraction of the cost.  And Apple would have secure 1st-party servers, gain economies of manufacturing scale, ability to tailor software and hardware for specific needs.

    Yeah I know, just joking about how inexpensive hardware can be load-balanced together, but to be really fast a server farm would need fiber optics, so those little boxes wouldn't be ideal anyway.


    What about this from 2012 -- should be doable today (emphasis mine):


    IBM creates first cheap, commercially viable, electronic-photonic integrated chip

    IBM's silicon nanophotonic modulator/photodetector chip, with integrated electrical components

    IBM has become the first company to integrate electrical and optical components on the same chip, using a standard 90nm semiconductor process. These integrated, monolithic chips will allow for cheap chip-to-chip and computer-to-computer interconnects that are thousands of times faster than current state-of-the-art copper and optical networks. Where current interconnects are generally measured in gigabits per second, IBM’s new chip is already capable of shuttling data around at terabits per second, and should scale to peta- and exabit speeds.

    After more than a decade of research, and a proof of concept in 2010, IBM Research has finally cracked silicon nanophotonics (or CMOS-integrated nanophotonics, CINP, to give its full name). IBM has proven that it can produce these chips on a commercial process, and they could be on the market within a couple of years. This is primarily big news for supercomputing and the cloud, where the limited bandwidth between servers is a major bottleneck.

    There are two key breakthroughs here. First, IBM has managed to build a monolithic silicon chip that integrates both electrical (transistors, capacitors, resistors) and optical (modulators, photodetectors, waveguides) components. Monolithic means that the entire chip is fabricated from a single crystal of silicon, on a single production line; i.e. the optical components are produced at the same time as the electrical components, using the same process. There aren’t two separate regions on the chip that each deal with different signals; the optical and electrical components are all mixed up together to form an integrated nanophotonic circuit.

    Second, and perhaps more importantly, IBM has manufactured these chips on its 90nm SOI process — the same process that was used to produce the original Xbox 360, PS3, and Wii CPUs. According to Solomon Assefa, a nanophotonics scientist at IBM Research who worked on this breakthrough, this was a very difficult step. It’s one thing to produce a nanophotonic device in a standalone laboratory environment — but another thing entirely to finagle an existing, commercial 90nm process into creating something it was never designed to do. It sounds like IBM spent most of the last two years trying to get it to work.

    The payoff makes all the hard work worthwhile, though. IBM now has a cheap chip that can provide a truly mammoth speed boost to computers. It’s not too hyperbolic to say that this advancement will single-handedly allow for the continuation of Moore’s law for the foreseeable future.


    IBM’s silicon nanophotonic chip, showing optical waveguides (blue) and copper wires (yellow)

    In these chips, there are optical modulators and germanium photodetectors that can send and receive data at 25 gigabits-per-second (Gbps), using four-channel wave-division multiplexing (WDM). In the picture at the top of the story, you see a single modulator/photodetector transceiver, with copper wiring (yellow), and transistors (red dots on the far right side). Assefa tells us that this single block is 0.5×0.5mm, and that IBM has successfully built a 5x5mm die with 50 transceivers. Connect two of these dies together with a fiber channel and you have an interconnect with 1.2 terabits of bandwidth.

    Compare this to existing fiber-optic interconnects, which are generally very bulky and expensive, and you can see why IBM is so excited. While we couldn’t even get a ballpark figure out of IBM, the use of a standard 90nm process means that these chips probably cost no more than a few dollars to produce. IBM is targeting super and cloud computing first, where bandwidth between nodes is a serious bottleneck — but there’s no reason that these chips won’t eventually find their way into consumer hands.

    Ultimately, we are talking about a standard computer chip that could be integrated into any electronic device, without significantly impacting the price. Assefa tells us that this nanophotonic tech could, in theory, be integrated into future CPUs or SoCs. This is the chip that could power the next-generation optical interconnect between your desktop’s CPU, GPU, and RAM. This is the chip that could directly wire your PC into your ISP’s fiber-optic network, potentially unleashing terabit-or-higher download speeds. This chip is a big deal.

    http://www.extremetech.com/computing/142881-ibm-creates-first-cheap-commercially-viable-silicon-nanophotonic-chip

    edited March 2016
  • Reply 76 of 105
    knowitall Posts: 1,648 member
    On porting Mac OS X to ARM.

    Traditionally, porting an OS is difficult because of drivers, written (in part) in assembly.
    Nowadays this is a non-issue for two reasons: drivers have a higher abstraction layer and use common buses (that is, USB and Bluetooth). So USB and Bluetooth driver classes and specific implementations are written in C and can be ported with a recompile.
    Graphics drivers are a bit less easy to port, but they also use common layers like OpenGL and have well defined direct hardware interfaces that are relatively limited in code size.
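    As a minimal sketch of the "port by recompile" point, assuming nothing more than a generic C toolchain (the routine and its wire format are invented for illustration, not real Apple driver code):

        /* The same C source builds unchanged for either architecture; only the
         * compiler invocation differs (e.g. clang -arch x86_64 vs. clang -arch
         * arm64 with an Apple toolchain). No assembly, nothing to port. */
        #include <stdio.h>
        #include <stdint.h>

        /* An architecture-neutral routine of the kind a bus-class driver might
         * contain: pack a request into a fixed 32-bit wire format. */
        static uint32_t pack_request(uint8_t endpoint, uint8_t flags, uint16_t length) {
            return ((uint32_t)endpoint << 24) | ((uint32_t)flags << 16) | length;
        }

        int main(void) {
        #if defined(__arm64__) || defined(__aarch64__)
            const char *arch = "arm64";
        #elif defined(__x86_64__)
            const char *arch = "x86_64";
        #else
            const char *arch = "other";
        #endif
            printf("built for %s, request word = 0x%08X\n", arch, pack_request(2, 0x01, 512));
            return 0;
        }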

    In Apple's case a port is already done for all classes of drivers mentioned above, because iOS uses Darwin as its OS layer (similar to OS X) and iOS devices already use WiFi, graphics, USB and Bluetooth extensively.
    Compared to traditional notebooks, iOS also supports cellular radio stacks, so you could say that device support is even better than on OS X.

    I suspect Apple won't transition its MacBook line to ARM before it's embarrassingly clear that A-series processors outperform Intel's, but the A10 may be that point. So we might expect to see a new A10 MacBook this year (you know, as a replacement for the current very weak implementation).
    edited March 2016
  • Reply 77 of 105


    Ha!

    That sounds a lot like a computer storage technique used in the 1950s-1960s -- magnetostrictive delay lines.

    They used coils of wire to store bits.  To maintain state the wires were slightly twisted (torqued) to send a ripple down the wire -- at the other end it was recognized and re-torqued.

    http://www.sciencedirect.com/science/article/pii/0029554X71905301


    Marvin said:
    There may eventually be a way to create a virtual transistor instead of a physical one without any complications of quantum computing. Perhaps a transistor can be represented using an electrical or photonic signal that just has its state maintained. Like a light wave in a loop. At any given time, a processor maintains a binary state. They have 1-2 billion transistors just now. An optical fiber has managed to send 255 trillion bits of information in a second. They need to perform computing operations like add, subtract, multiply, divide, shift, copy etc. They would produce a second signal that does the required adjustments to the sustained state and merge them. If that kind of thing could exist, it wouldn't need physical chip architectures, any chip architecture (ARM, PPC, X86) could be virtualized and software could run against whatever virtual chip architecture was required. We already have this kind of thing with emulators where they create software versions of hardware chips.


    edited March 2016
  • Reply 78 of 105
    MacPro Posts: 19,718 member
    volcan said:
    Yeah I know, just joking about how inexpensive hardware can be load-balanced together, but to be really fast a server farm would need fiber optics, so those little boxes wouldn't be ideal anyway.


    What about this from 2012 -- should be doable today (emphasis mine):


    IBM creates first cheap, commercially viable, electronic-photonic integrated chip

    IBM's silicon nanophotonic modulator/photodetector chip, with integrated electrical components

    IBM has become the first company to integrate electrical and optical components on the same chip, using a standard 90nm semiconductor process. These integrated, monolithic chips will allow for cheap chip-to-chip and computer-to-computer interconnects that are thousands of times faster than current state-of-the-art copper and optical networks. Where current interconnects are generally measured in gigabits per second, IBM’s new chip is already capable of shuttling data around at terabits per second, and should scale to peta- and exabit speeds.

    After more than a decade of research, and a proof of concept in 2010, IBM Research has finally cracked silicon nanophotonics (or CMOS-integrated nanophotonics, CINP, to give its full name). IBM has proven that it can produce these chips on a commercial process, and they could be on the market within a couple of years. This is primarily big news for supercomputing and the cloud, where the limited bandwidth between servers is a major bottleneck.

    There are two key breakthroughs here. First, IBM has managed to build a monolithic silicon chip that integrates both electrical (transistors, capacitors, resistors) and optical (modulators, photodetectors, waveguides) components. Monolithic means that the entire chip is fabricated from a single crystal of silicon, on a single production line; i.e. the optical components are produced at the same time as the electrical components, using the same process. There aren’t two separate regions on the chip that each deal with different signals; the optical and electrical components are all mixed up together to form an integrated nanophotonic circuit.

    Second, and perhaps more importantly, IBM has manufactured these chips on its 90nm SOI process — the same process that was used to produce the original Xbox 360, PS3, and Wii CPUs. According to Solomon Assefa, a nanophotonics scientist at IBM Research who worked on this breakthrough, this was a very difficult step. It’s one thing to produce a nanophotonic device in a standalone laboratory environment — but another thing entirely to finagle an existing, commercial 90nm process into creating something it was never designed to do. It sounds like IBM spent most of the last two years trying to get it to work.

    The payoff makes all the hard work worthwhile, though. IBM now has a cheap chip that can provide a truly mammoth speed boost to computers. It’s not too hyperbolic to say that this advancement will single-handedly allow for the continuation of Moore’s law for the foreseeable future.


    IBM’s silicon nanophotonic chip, showing optical waveguides (blue) and copper wires (yellow)

    In these chips, there are optical modulators and germanium photodetectors that can send and receive data at 25 gigabits-per-second (Gbps), using four-channel wave-division multiplexing (WDM). In the picture at the top of the story, you see a single modulator/photodetector transceiver, with copper wiring (yellow), and transistors (red dots on the far right side). Assefa tells us that this single block is 0.5×0.5mm, and that IBM has successfully built a 5x5mm die with 50 transceivers. Connect two of these dies together with a fiber channel and you have an interconnect with 1.2 terabits of bandwidth.

    Compare this to existing fiber-optic interconnects, which are generally very bulky and expensive, and you can see why IBM is so excited. While we couldn’t even get a ballpark figure out of IBM, the use of a standard 90nm process means that these chips probably cost no more than a few dollars to produce. IBM is targeting super and cloud computing first, where bandwidth between nodes is a serious bottleneck — but there’s no reason that these chips won’t eventually find their way into consumer hands.

    Ultimately, we are talking about a standard computer chip that could be integrated into any electronic device, without significantly impacting the price. Assefa tells us that this nanophotonic tech could, in theory, be integrated into future CPUs or SoCs. This is the chip that could power the next-generation optical interconnect between your desktop’s CPU, GPU, and RAM. This is the chip that could directly wire your PC into your ISP’s fiber-optic network, potentially unleashing terabit-or-higher download speeds. This chip is a big deal.

    http://www.extremetech.com/computing/142881-ibm-creates-first-cheap-commercially-viable-silicon-nanophotonic-chip

    Wow.  Of course Apple and IBM are pretty tight (again) these days, so that bodes well for us Apple folks.  Talking of Apple and IBM, I wonder if they have discussed letting Siri crib off Watson's results yet?  For non-British-English speakers, to crib: (transitive) (informal) to steal (another's writings or thoughts).

    edited March 2016
  • Reply 79 of 105
    Marvin Posts: 15,309 moderator

    Ha!

    That sounds a lot like a computer storage technique used in the 1950s-1960s -- magnetostrictive delay lines.

    They used coils of wire to store bits.  To maintain state the wires were slightly twisted (torqued) to send a ripple down the wire -- at the other end it was recognized and re-torqued.

    http://www.sciencedirect.com/science/article/pii/0029554X71905301

    That's really interesting. That's pretty much the idea.

    https://en.wikipedia.org/wiki/Delay_line_memory

    With optics, the bandwidth would be much higher. It's sequential access but it doesn't matter because it's so fast. The state could be maintained in an optical fiber or a mirroring system. Maybe it's not practical but there's probably some technique that gets around the problem of always having to shrink and cool physical transistors. Quantum computing is one method to get around it by simultaneously maintaining far more states per unit but it's not reliable enough.
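    A toy software model of that recirculating idea, purely illustrative (no claim about optical or magnetostrictive hardware): bits travel round a fixed-length line, are read as they emerge, and are either re-injected or overwritten at the other end.

        /* Simulated delay-line memory: state persists only because each bit is
         * re-injected as it emerges, like the torque pulse in the wire. */
        #include <stdio.h>

        #define LINE_LEN 8

        typedef struct {
            unsigned char line[LINE_LEN];  /* bits in flight */
            int head;                      /* position about to emerge */
        } delay_line;

        /* One propagation tick: the emerging bit is returned and recirculated,
         * unless overwrite is 0 or 1, in which case that value replaces it. */
        static int tick(delay_line *d, int overwrite) {
            int out = d->line[d->head];
            d->line[d->head] = (overwrite < 0) ? (unsigned char)out : (unsigned char)overwrite;
            d->head = (d->head + 1) % LINE_LEN;
            return out;
        }

        int main(void) {
            delay_line d = { {1,0,1,1,0,0,1,0}, 0 };   /* the stored word */
            /* Read: let the word circulate twice without touching it. */
            for (int t = 0; t < 2 * LINE_LEN; t++)
                printf("%d", tick(&d, -1));
            printf("\n");
            /* Write: overwrite the first four emerging bits with 1s, then
             * let the rest of the word pass through unchanged. */
            for (int t = 0; t < 4; t++) tick(&d, 1);
            for (int t = 0; t < LINE_LEN - 4; t++) tick(&d, -1);
            for (int t = 0; t < LINE_LEN; t++) printf("%d", tick(&d, -1));
            printf("\n");
            return 0;
        }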
  • Reply 80 of 105
    knowitall Posts: 1,648 member
    On porting Mac OS X applications to ARM.

    Apple made porting applications to a new architecture a lot easier because they created APIs and operating system support for parallelism and abstracted processor cores, so applications written using OpenCL and/or Grand Central Dispatch can be converted with one recompile and run optimally on an arbitrary mix of CPUs and GPUs.
    Games can also be converted easily because of OpenGL and Metal, so this requires no effort at all.
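    A small Grand Central Dispatch sketch of that point, assuming only the stock libdispatch C API (the chunk count and file name are illustrative): the source names no core count and no architecture, so the same file recompiles unchanged for x86_64 or arm64 and libdispatch spreads the work across whatever cores the machine has.

        /* Parallel sum split into chunks; GCD decides how many run at once.
         * Builds on a Mac with something like: clang gcd_sum.c -o gcd_sum */
        #include <dispatch/dispatch.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define CHUNKS 8   /* number of work chunks, not tied to the core count */

        int main(void) {
            long long *partial = calloc(CHUNKS, sizeof *partial);
            dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

            /* Each chunk sums its own stride of 0..9,999,999. */
            dispatch_apply(CHUNKS, q, ^(size_t i) {
                long long s = 0;
                for (long long v = (long long)i; v < 10000000; v += CHUNKS) s += v;
                partial[i] = s;
            });

            long long total = 0;
            for (int i = 0; i < CHUNKS; i++) total += partial[i];
            printf("total = %lld\n", total);   /* 49999995000000 */
            free(partial);
            return 0;
        }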
    How about x86 stuff? People are whining about Windows support and how that would be impossible when running on ARM, but they are wrong (of course) and should stop whining and start using it.
    Wine that is.
    Of course wine doesn't run it all; I couldn't get Tacx software running on my Mac, for example, but it's just a matter of time before it will. Several 'hardcore' gamers I know play without the OpenGL lag, with DirectWhatever supported by wine, and most games run right out of the 'Windows' box.
    Some official wine distributions exist, so you can ask to get your software running if it doesn't already.
    Of course, by far the best option would be to ask the company that made the software to port it to the Mac, because MS software will go the way of the dodo ...
    edited March 2016