Intel's "Platformization"

Posted:
in Future Apple Hardware edited January 2014
There's been a lot of discussion on why Apple didn't choose AMD as a partner. I think AMD definitely has the right product and performance, and the future looks good as well, but where Intel likely won the deal is in the ability to sell a platform. They design and offer such a breadth of technology that I'm sure Apple was hard pressed to forgo the benefits here. I've highlighted some things that are either shipping now or coming from Intel.



EM64T - This is Intel's name for 64-bit support in its microprocessors. It gives OS X hardware:



Quote:



64-bit flat virtual address space



64-bit pointers



64-bit wide general purpose registers



64-bit integer support



Up to 1 terabyte (TB) of platform address space




Pentium 4s are moving to EM64T support; laptops will get there eventually.
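

To make that list concrete, here's a tiny C sketch (my own illustration, not Intel code) of what 64-bit pointers and integers look like to a program built for EM64T:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* On an EM64T (LP64) build, pointers and longs are 8 bytes wide. */
    printf("pointer: %u bytes, long: %u bytes\n",
           (unsigned)sizeof(void *), (unsigned)sizeof(long));

    /* Native 64-bit integer math, no library tricks needed:
       2^40 bytes is the 1 TB platform address space noted above. */
    uint64_t one_tb = (uint64_t)1 << 40;
    printf("1 TB = %llu bytes\n", (unsigned long long)one_tb);
    return 0;
}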



Vanderpool - Also known as VT, this is Intel's hardware support for CPU virtualization, being added to some microprocessors in 2006 (Yonah supports VT). Virtualization apps like VMware will be able to complement VT to offer more robust virtualization. Itanium and Intel workstation Xeons will need this technology as companies look to consolidate servers and virtualize the NOS. Apple could make use of it but probably won't initially. AMD's virtualization technology is called Pacifica.
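

For the curious, you can sniff for VT support with CPUID: leaf 1 returns the processor feature flags, and VMX (the instruction set behind Vanderpool) is reported in bit 5 of ECX. A rough C sketch, assuming an x86 machine and a GCC-style compiler:

#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: processor feature flags. VMX support is ECX bit 5. */
    __asm__ volatile ("cpuid"
                      : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                      : "a" (1));

    printf("VT/VMX: %s\n", (ecx & (1 << 5)) ? "supported" : "not supported");
    return 0;
}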



I/O AT - Intel I/O Acceleration Technology. Intel has scrapped plans to move to TOE NICs and instead will build I/OAT into workstation and server networking chipsets. They claim it's more consistent at removing the I/O burden from the CPU and more reliable because it performs error checking. It separates protocol and payload processing and computes them in parallel. Application latency and CPU utilization are said to decrease by up to 33%.



Intel AMT - Intel's Active Management Technology allows for remote management of Intel-based computers. Systems can be restored or healed by taking advantage of auxiliary power, even when the system is shut down. It has built-in alerts and the ability to prevent the disabling of important software like anti-virus and remote control. A natural Apple application to support this, and the subsequent version 2 of the technology, is Apple Remote Desktop.
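

AMT is a much richer out-of-band protocol than plain Wake-on-LAN, but for a taste of remote power control, here's a minimal WoL magic-packet sender in C (this is ordinary WoL rather than AMT's own management interface, and the MAC address is a made-up placeholder):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* Placeholder MAC address of the machine to wake. */
    unsigned char mac[6] = { 0x00, 0x0d, 0x93, 0x12, 0x34, 0x56 };
    unsigned char pkt[102];
    int i;

    /* Magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times. */
    memset(pkt, 0xff, 6);
    for (i = 0; i < 16; i++)
        memcpy(pkt + 6 + i * 6, mac, 6);

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

    struct sockaddr_in to;
    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_port = htons(9);                 /* discard port, by convention */
    to.sin_addr.s_addr = inet_addr("255.255.255.255");

    sendto(s, pkt, sizeof(pkt), 0, (struct sockaddr *)&to, sizeof(to));
    close(s);
    return 0;
}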



CSI - The Common System Interface bus is Intel's direct competitor to HyperTransport, due to ship on Xeon and Itanium platforms in 2007. Not much is known, but it will likely be used to connect multicore CPUs and the north and south bridges together. Intel is assumed to be moving the 2007 Itanium/Xeon processors to on-die memory controllers.



I'm sure Intel has more up their sleeves but some of their technology just might be beneficial to Mac users.

Comments

  • Reply 1 of 56
    jms698 Posts: 102, member
    This doesn't sound very impressive.



    IBM and AMD already have 64-bit.



    Virtualizing another operating system might be useful, but, in practice, most people won't use such a feature.



    Fast I/O is good, but hardly revolutionary (unless I'm misunderstanding something here).



    Remotely powering on a computer is certainly nothing revolutionary, and OS X doesn't need antivirus software.



    No idea what the strange CSI Bus is supposed to do.



    So, what is the big deal? IMHO, the mobile side of things is the area where the big Intel benefits come rolling in.
  • Reply 2 of 56
    Also, wouldn't it be cool to say we have a 5GHz PowerBook instead of an Athlon XP 6000 (which is probably where they'll be up to by then)? To be honest, their numbering system sucks. My chip is overclocked in my PC, and it doesn't tell me the CPU speed at bootup; it tells me the chip name, which is wrong: it says it's a 3200 XP when it's only a 2500 XP, though that could be because of the overclock.
  • Reply 3 of 56
    hmurchison Posts: 12,419, member
    Quote:

    IBM and AMD already have 64-bit.



    True. Intel is late to the party because they were pushing IA64 and had to regroup after AMD forced the issue. Chalk that victory up to AMD; it benefits us all.



    Quote:

    Virtualizing another operating system might be useful, but, in practice, most people won't use such a feature.



    Really? So Microsoft bought Connectix for nothing, right? I guess that means VMware's rising sales are a total fluke. No, virtualization makes sense; it's just a top-down technology that won't infiltrate the home for another 5 years or so. Businesses are eyeballing virtualization as a way to enhance server consolidation in heterogeneous networks. Intel is moving to virtualization in a big way with VT and the next revisions to PCI Express, which will virtualize your cards. Does it make sense for the home user now? No, we don't have the processing power, but in 5 years, when the typical home computer has 16-32 processors (physical and virtual), it makes sense to have that computer leverage the processing power. Dual boot is not a viable solution.



    Quote:

    Fast I/O is good, but hardly revolutionary (unless I'm misunderstanding something here).



    Note: I never said it was revolutionary. However, efficient I/O processing is beneficial to computers. TCP/IP has some serious overhead that is beginning to rear its head now that we have gigabit and 10G Ethernet coming. I don't know what Intel's solution will cost, but it'll surely be lower than the $800 TOE HBAs of today. Since it's on server and workstation motherboards, I expect Xserves to benefit primarily.



    Quote:

    Remotely powering on a computer is certainly nothing revolutionary, and OS X doesn't need antivirus software.



    The point of this thread is to examine why Apple may have chosen Intel over AMD. The revolution will have to come later. Also, I think it would be daft to look at AMT as just remotely powering a workstation. Intel hasn't discussed the iAMT2 version due to ship in 2006; I've just written a bit about what's currently shipping. More improvements and features should be forthcoming.



    Quote:

    No idea what the strange CSI Bus is supposed to do.



    Just like HyperTransport, it links chips on the motherboard together over a very fast serial connection. Intel will likely use this for forthcoming chips that have 4 or more cores. Not earth-shattering, but welcome.



    AMD has their hand in similar technologies, but they don't like doing motherboard chipset designs. I think Apple likes the idea of using Intel's R&D here and focusing on software.



    Intel's good stuff doesn't start hitting the desktop until late 2006 or early 2007. Apple will be ready with good stuff.
  • Reply 4 of 56
    tht Posts: 5,420, member
    Quote:

    Originally posted by jms698

    IBM and AMD already have 64-bit.



    Intel has been shipping EM64T Pentium 4s and Xeons since 2004. It's not new for Intel either.



    Quote:

    No idea what the strange CSI Bus is supposed to do.



    CSI is just a chip-to-chip bus or interconnect in the same vein as Hypertransport. It will supposedly allow better multiprocessor performance, where multiprocessor means 4, 8, 16, 32, to "n" processors, not really 2.



    Quote:

    So, what is the big deal? IMHO, the mobile side of things is the area where the big Intel benefits come rolling in.



    Not sure what you mean by big deal, but platformization is just Intel-speak for developing technologies that allow Intel to own every nook and cranny of computing hardware. They want all of the chips on a motherboard to be an Intel product. It's a necessary consequence of a gigantic corporation wanting to grow; well, and of "Moore's Law".



    In days gone by, motherboards consisted of discrete chips performing one function or another: audio chips, serial bus chips, I/O chips, networking chips, etc. Apple has WiFi and Bluetooth add-on chips (in the form of daughterboards). These different chips often came from different vendors.



    Intel's Centrino "platform" is an early instance of platformization. Intel created an "integrated" set of chips that performs all of the functions (pretty much) that a laptop motherboard would have. In doing so, they were able to drive the entire system towards low power in an integrated way while adding features (WiFi was the important one) for "free" for all systems that use it.



    Moore's Law: more transistors can be added to a chip every fab generation. So with more transistors available, more functionality can be added to a system's chipsets. Centrino's chipsets have virtually all of a computer's functionality integrated into 2 or 3 Intel-made chips.



    hmurchison's post talking about EM64T, Vanderpool Tech (essentially hardware-based virtual machines, Virtual PC in hardware), I/O Acceleration Tech (advanced networking), Active Management Tech (advanced hardware management support), and the CSI bus (new multiprocessor bus tech) - and he didn't mention LaGrande Tech (essentially hardware DRM), HyperThreading, and multicore - is all about the platformization of the super-profitable enterprise hardware market, in the same vein as what Centrino did for laptops.



    Intel is going to try to own all of the server, mainframe, HPC, "enterprise" hardware and they are going to provide it in one integrated solution to make it easy for IT managers. The big deal I guess is that Intel wants to own this market, and they are going to do it in an integrated way (sort of what Apple does) that will be hard for IT managers to refuse. AMD's nascent server business and IBM's mature mainframe business better watch out.



    Not much to do with Apple, but there will be a platformization of the desktop computing market which Apple will use, where Intel will produce a set of chips that comprise all the functionality that goes into the computer short of anything not on the motherboard. It may just be an outgrowth of the Yonah and Merom based platforms, who knows. All Apple will do is take the chips and package them in their own unique way.
  • Reply 5 of 56
    jms698 Posts: 102, member
    Quote:

    Originally posted by hmurchison

    Really? So Microsoft bought Connectix for nothing, right? I guess that means VMware's rising sales are a total fluke. No, virtualization makes sense; it's just a top-down technology that won't infiltrate the home for another 5 years or so. Businesses are eyeballing virtualization as a way to enhance server consolidation in heterogeneous networks. Intel is moving to virtualization in a big way with VT and the next revisions to PCI Express, which will virtualize your cards. Does it make sense for the home user now? No, we don't have the processing power, but in 5 years, when the typical home computer has 16-32 processors (physical and virtual), it makes sense to have that computer leverage the processing power. Dual boot is not a viable solution.



    I wonder if the average computer user would want to run multiple operating systems. Sure, VMware sales might be rising, but that doesn't mean that consumers will ever be interested in running Linux, OS X, and Windows on the same machine. Surely, if you like one of those operating systems, you will just run that OS and not bother with the others?



    However, as for choosing Intel over AMD, I agree: Apple is all about offering a combined platform solution where everything works together seamlessly. Intel plays right into that philosophy. AMD is not big enough to create a huge portfolio of interlinking technologies and has to rely on third-party vendors.
  • Reply 6 of 56
    hmurchison Posts: 12,419, member
    I could see someone running two different OSes. Three would really be pushing it.





    I don't necessarily see virtualization as being a technology that is just for running different OSes. It may also benefit families that want to run distinct OS X installs simultaneously. Thus my dream setup may be:



    a 16-core Power Mac with OS X virtualized on 4 processors each for a family of four, with Longhorn running in tandem with my OS X on my partition.



    My dream for the future is having the computer accessed over a network. The "guts" of the computer will reside in a closet in the basement, so the casing can be bulky and well cooled. On each desktop will be an LCD and I/O for devices. Nice, clean, and quiet! Serializing everything should make this a distinct possibility. All computers boot over the network and share the bank of CPUs, hard drives, and GPUs in the rugged case.
  • Reply 7 of 56
    The way I see things is that after power consumption, speed of execution, multi-core, and prices, virtualization could be one of the most important vectors of growth for the Mac platform. The Mac mini is trying to convince people to buy a Mac as an accessory to their PC (as a second computer of sorts). But next, what Apple can do with the Intel switch is convince more people to just buy a Mac to have two platforms in one. A Mactel computer will be the only one on which you could run Mac OS X AND Windows.
  • Reply 8 of 56
    maddan Posts: 75, member
    Apple chose Intel because it's mutually beneficial. Intel has a huge R&D budget, and Apple isn't stuck on legacy hardware the way PC manufacturers are. Where would USB be today without Apple?
  • Reply 9 of 56
    cubist Posts: 954, member
    With good virtualization the OS could create a program's environment on the fly, so, for example, you could run Windows software, even games, transparently. You could run programs that only work in Mac OS 8.6, or OS/2, or even Apple II programs. With a really good implementation, you could run any software at all - just plug in the virtual machine for the environment required.
  • Reply 10 of 56
    cwestpha Posts: 48, member
    A few notes from someone who gets the Intel "secret" plan every 6 months:



    EM64T:

    It won't be in laptops for a few more years, since the only laptops that can truly make use of this technology are DTRs (desktop replacements), and those usually use desktop processors in a luggable form factor. Also, adding 64 bits to Sonoma adds a few more watts of dissipation and slightly ups power consumption.



    VT:

    VT has allowed Intel to emulate a G5 processor at about 70% real speed in lab conditions. It was neat until you realized the processor needed water cooling and IBM could sue 'em.

    The real benefit with Vanderpool would be for Mac-on-Mac, Windows-on-Mac, or Linux-on-Mac implementations. You could do a Xen-like system without the need to alter system code, AND the OSes could get almost direct system access without asking a host OS for permission. The real end result of Vanderpool is to make the system completely modular. Imagine each process on a computer having its own direct access to a hardware level that then integrates with an OS kernel. If an application crashes or freezes (don't pretend it doesn't happen, Apple and Linux people), you can kill the application and it will be completely independent of the OS and other applications.



    I/O AT:

    Hardware-based networking (with little processing overhead) and the possibility of an nForce-esque hardware firewall.



    Intel AMT:

    This also has a modular component in BIOS 2.0 that would allow at-boot remote assistance. I have seen the future and it is IT-friendly.



    CSI:

    CSI is Intel's answer to their problematic bus architecture as multi-core becomes the norm. Expect integrated memory controllers, HyperTransport 2.0 (theoretical) speeds, and other features to be the norm. The only problem is that, like PCI Express, CSI doesn't look like it is going to shape up to be an energy-efficient design.



    Anything else you want to know that I am not under NDA about? I am under very few Yonah NDAs right now (at least until the next conference and lunch I have with Intel reps).



    P.S. The 4GHz lab P4 running G5 OS X at 70% full speed was the sweetest demo I have ever seen. Too bad Intel isn't going to release their lab Vanderpool; they are too afraid of lawsuits.
  • Reply 11 of 56
    1337_5l4xx0r Posts: 1,558, member
    Okay, I've never grasped this one: what is HARDWARE virtualization? What does it do that software doesn't? Software can abstract and isolate processes... examples include "honeypot" Linux setups (aka chroot), Virtual PC for Macs, etc. What can transistors in a CPU do that adds to this?!



    Moreover, on a similar topic, I have a similar question about threading... Intel has 'hyperthreading', which allows for execution of multiple process threads in parallel. But doesn't software do this as well? I mean, does a P4 do something that a G5 doesn't? I thought Unix inherently did multiple threads?



    Hardware I/O processing appeals... networking does indeed affect system responsiveness. The rest of those upcoming features don't get me hot and bothered.



    Oh yeah: doesn't a P4 running G5-compiled OS X have to do with Rosetta technology and not virtualization? I mean, the P4 is clearly not running native binaries.
  • Reply 12 of 56
    wmf Posts: 1,164, member
    Quote:

    Originally posted by cwestpha

    EM64T:

    It won't be in laptops for a few more years, since the only laptops that can truly make use of this technology are DTRs...




    Given that EM64T gives a performance boost to most code, I think it's useful in all machines. But since Apple is embracing 64-bit relatively slowly, I don't think it's a big concern.



    Quote:

    VT:

    VT has allowed Intel to emulate a G5 processor...




    No it doesn't. Virtualization isn't emulation.
  • Reply 13 of 56
    cwestpha Posts: 48, member
    Quote:

    Okay, I've never grasped this one: What is HARDWARE virtualization? What does it do that software doesn't? Software can abstract and isolate processes... examples include "honeypot" linux setups (aka chroot), Virtual PC for Macs, etc. What can transistors in a CPU do that adds to this?!



    With software it is as follows:

    guest OS -> application -> host OS -> hardware



    With hardware virtualization it's:

    guest OS -> hardware



    Basically it allows more direct access to hardware with less latency. Hardware acts as both the application and the host OS.

    For example, Xen is a combination of the host OS and application intertwined together.



    Quote:

    Moreover, on a similar topic, I have a similar question about threading... inetel has 'hyperthreading' which allows for execution of multiple process threads in parallel. But doesn't software do this as well? I mean, does a P4 do something that a G5 doesn't? I thought unix inherently did multiple threads?



    Yes and no. Hyperthreading allows more than one thread to be processed by a core in a cycle. This is done by what can basically be considered a stripped-down version of dual-core theory. Basically, in HT there is only one store unit in the processor with two divergent processing paths. As long as both threads aren't trying to store or use the same hardware processing unit, you get a little speed benefit.

    HT has little to do with software. The G5 does not have it, but the Xbox 360 does have hyperthreading technology in it. So IBM does have it.

    HT requires OS support to have more than one thread executing at once, and compiler support to help achieve optimal scheduling. Do remember that the processor can do out-of-order execution, but it's much faster to have something compiled to take advantage of the different processing paths.

    (Yes, I know this is a simplistic view of hyperthreading.)
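
    One practical consequence: the OS just sees extra logical processors. On Mac OS X you can ask the kernel how many it found; a quick sketch (hw.ncpu counts logical CPUs, so a hyperthreaded chip reports double):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int ncpu = 0;
        size_t len = sizeof(ncpu);

        /* hw.ncpu reports logical processors, so a single HT core
           shows up here as two CPUs. */
        if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == 0)
            printf("logical CPUs: %d\n", ncpu);
        return 0;
    }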



    Quote:

    Hardware IO processing appeals... networking does indeed affect system responsiveness. The rest of those upcoming features don't get me hot and bothered.



    Most of those are future-looking technologies that consumers won't have a need for, for five to ten years. Most of these technologies are initially aimed at the Xeon (server) line.



    Quote:

    Oh yeah: doesn't a P4 running G5-compiled OS X have to do with Rosetta technology and not virtualization? I mean, the P4 is clearly not running native binaries.



    This is a common misconception. In order to have multiple OSes think they have full control of the hardware at the same time, there is some platform translation/emulation taking place. Oh, and Rosetta was made by Apple; Intel didn't have much to do with its development.



    Quote:

    Given that EM64T gives a performance boost to most code, I think it's useful in all machines. But since Apple is embracing 64-bit relatively slowly, I don't think it's a big concern.



    It has a small benefit if you aren't doing mathematically complex work that requires large matrices or address space. Gaming shows some performance increases, with database, web services, and scientific applications showing the biggest.

    Most of the speed benefit over non-64-bit systems comes from better compiling and the chance to remove some obsolete code. Also, 64-bit processors tend to be a bit more streamlined because they are a newer and premium technology.



    Quote:

    VT has allowed Intel to emulate a G5 processor...



    That is incorrect... see above.



    Do I need to scan in and show you people my ICC attendance certificate for you to believe me? I know more about these technologies than the media does because Intel wants me to SELL these technologies to people. I get two plastic bags full of marketing material, NDA roadmaps, and technology breakdowns every 6 months. It's quite fun; then again, I have been selling more AMD stuff recently (currently they are in the lead for everything but the mobile platforms).



    Oh, and if the last ICC was any indication... Itanium is dead. Both Microsoft and Intel were pretending it didn't exist last meeting. It was funny watching them try not to answer an Itanium question during the Q&A.
  • Reply 14 of 56
    telomar Posts: 1,804, member
    Quote:

    Originally posted by 1337_5L4Xx0R

    Okay, I've never grasped this one: what is HARDWARE virtualization? What does it do that software doesn't? Software can abstract and isolate processes... examples include "honeypot" Linux setups (aka chroot), Virtual PC for Macs, etc. What can transistors in a CPU do that adds to this?!



    Essentially it amounts to a very basic OS that runs on the processor. So rather than an OS communicating directly with hardware, it communicates with another OS, and basically each OS becomes like an application on the hardware. Very useful in server farms where multiple OSes might need to be run.



    I'm yet to be really convinced it has much of a use for consumers, although they are going to try and push it for home servers. The only problem I see will be interconnection between different manufacturers, but time will tell.



    Quote:

    Originally posted by cwestpha

    Most of the speed benefit over non-64-bit systems comes from better compiling and the chance to remove some obsolete code. Also, 64-bit processors tend to be a bit more streamlined because they are a newer and premium technology.



    What a load of crap. EM64T gets the majority of its performance boost because it increases the number of general-purpose registers available from 8 to 16, allowing the compiler greater flexibility.
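
    You can see it with any toy function that keeps a pile of values live at once. Compile something like this with "gcc -m32 -O2 -S" and again with "gcc -m64 -O2 -S" and compare the assembly (a throwaway example of mine, nothing special about it): the 32-bit build should keep going back to the stack, while the 64-bit build can hold everything in registers.

    /* With this many values live at once, 32-bit x86 (8 GPRs, several
       reserved) is forced into stack traffic; x86-64's 16 GPRs can
       keep the whole computation in registers. */
    long mix(long a, long b, long c, long d, long e, long f)
    {
        long t1 = a * b + c;
        long t2 = d * e + f;
        long t3 = a * d + e;
        long t4 = b * f + c;
        return (t1 * t2) + (t3 * t4) + (t1 ^ t3);
    }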
  • Reply 15 of 56
    programmer Posts: 3,457, member
    Quote:

    Originally posted by 1337_5L4Xx0R

    Moreover, on a similar topic, I have a similar question about threading... inetel has 'hyperthreading' which allows for execution of multiple process threads in parallel. But doesn't software do this as well? I mean, does a P4 do something that a G5 doesn't? I thought unix inherently did multiple threads?



    Operating system threads achieve the appearance of running multiple threads at the same time by rapidly time-slicing between them. Typically a timer or other external interrupt causes the currently executing thread to be suspended and another to be run. At any given moment in time, however, only a single thread's instructions are being executed and switching between threads only happens at something like millisecond (or slower) granularity.



    With hyperthreaded/SMT or multi-core hardware there is more than one stream of instructions being consumed by the hardware simultaneously. In the case of HT/SMT the threads typically take turns each cycle (i.e. thread A on one cycle, thread B on the next) although this depends on the implementation. In a multi-core processor each core is consuming instructions each cycle.



    Typically an OS will time-slice the existing software threads onto the available hardware threads just like it would on a single threaded processor... except there is more of the scarce hardware resource to go around.
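
    To make the software side concrete, here's a trivial pthread sketch (my own, nothing exotic). The source is identical whether the machine time-slices the two threads onto one core or runs them truly simultaneously on SMT or multi-core hardware; only what happens underneath changes.

    #include <stdio.h>
    #include <pthread.h>

    /* Two software threads. On a single-core, non-SMT CPU the OS
       time-slices them, so only one executes at any instant; on
       HT/SMT or multi-core hardware they can genuinely overlap. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (int i = 0; i < 3; i++)
            printf("thread %ld: step %d\n", id, i);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }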
  • Reply 16 of 56
    snoopy Posts: 1,901, member
    Quote:

    Originally posted by Programmer

    Operating system threads achieve the appearance of running multiple threads at the same time by rapidly time-slicing between them. Typically a timer or other external interrupt causes the currently executing thread to be suspended and another to be run. At any given moment in time, however, only a single thread's instructions are being executed and switching between threads only happens at something like millisecond (or slower) granularity.



    With hyperthreaded/SMT or multi-core hardware there is more than one stream of instructions being consumed by the hardware simultaneously. In the case of HT/SMT the threads typically take turns each cycle (i.e. thread A on one cycle, thread B on the next) although this depends on the implementation. In a multi-core processor each core is consuming instructions each cycle.



    Typically an OS will time-slice the existing software threads onto the available hardware threads just like it would on a single threaded processor... except there is more of the scarce hardware resource to go around.






    You shattered my uneducated and naive picture of SMT. I had visions of two threads going through the pipeline at the same time. But allowing threads to execute for a millisecond or more is a long time, and suggests there is no intermixing of threads. This makes SMT a lot easier than I thought it was. True?



    It also brings up something interesting. SMT is no advantage when executing many short threads. Rather, SMT would have its greatest advantage when there is one very long thread executing and many short threads are waiting. No?
  • Reply 17 of 56
    cwestpha Posts: 48, member
    Quote:

    What a load of crap. EM64T gets the majority of its performance boost because it increases the number of general-purpose registers available from 8 to 16, allowing the compiler greater flexibility.



    Yes, if the software and OS are compiled for the x86-64 implementation. I was talking about most of the speed benefits on a 32-bit OS (like OS X 10.4.1 for Intel). Usually the number of registers used is dictated by the compiler. This would require Apple to recompile their software to support 64 bits. Something tells me Apple is going to wait on an x86 64-bit recompile until Intel laptops support 64-bit addressing too. I think they want to be able to plaster 64-bit over all of their Mac line before doing another recompile.

    I suspect Apple is going to add x86-64 (yes, EM64T and x86-64 are essentially identical technologies, since AMD and Intel have legal agreements letting them use each other's technologies) with 10.5; this should be about the time everything Intel has will be 64-bit. They don't want to make this transition even nominally more complex by confusing people with what is and is not 64-bit.
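
    The recompile really is the whole trick, since one source tree can carry both widths. A small sketch assuming GCC, which defines __LP64__ on 64-bit targets (build with -m32 or -m64 to pick):

    #include <stdio.h>

    int main(void)
    {
    /* __LP64__ is defined by GCC when building for a 64-bit (LP64)
       target, so the same source adapts at compile time. */
    #ifdef __LP64__
        printf("64-bit build: long is %u bytes\n", (unsigned)sizeof(long));
    #else
        printf("32-bit build: long is %u bytes\n", (unsigned)sizeof(long));
    #endif
        return 0;
    }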
  • Reply 18 of 56
    hiro Posts: 2,663, member
    Quote:

    Originally posted by Programmer

    Operating system threads achieve the appearance of running multiple threads at the same time by rapidly time-slicing between them. Typically a timer or other external interrupt causes the currently executing thread to be suspended and another to be run. At any given moment in time, however, only a single thread's instructions are being executed and switching between threads only happens at something like millisecond (or slower) granularity.



    With hyperthreaded/SMT or multi-core hardware there is more than one stream of instructions being consumed by the hardware simultaneously. In the case of HT/SMT the threads typically take turns each cycle (i.e. thread A on one cycle, thread B on the next) although this depends on the implementation. In a multi-core processor each core is consuming instructions each cycle.



    Typically an OS will time-slice the existing software threads onto the available hardware threads just like it would on a single threaded processor... except there is more of the scarce hardware resource to go around.




    This is the case as IBM has implemented the PPE core in the Cell and Xbox 360 processors, but it is not how Intel has implemented its version of SMT, or how I would think IBM would implement SMT in POWER5 cores. Intel actually calls the switching-type architecture time-slice multi-threading and notes it is good for memory latency reduction but does almost nothing for increasing execution resource usage. There is also a related version called switch-on-event multi-threading, which is also not what Intel considers SMT (hyperthreading).



    Intel committed more transistor resources to optimizing how the SMT threads are policed and scheduled, as I am sure IBM has in the POWER5 SMT lines. Branch prediction, speculative pre-computation, out-of-order execution, and dynamic prediction to limit stalls to only one of the two executing threads are all additions over what a strict alternation scheme does, and will have a significantly different performance curve than Cell-style PPEs.





    Intel Hyperthreading docs
  • Reply 19 of 56
    mmmpie Posts: 628, member
    As far as technical reasons go, I think that while AMD offers higher performance right now, Intel will match them. There are a lot of very smart people at both companies.



    But this isn't a technology decision. IBM has good technology as well, and Apple isn't ditching them because PPC is too slow now or has no future. There is no way that IBM can design CPUs for Apple and have them hit a price point that is competitive with Wintel hardware; there just isn't the volume left in the market. The only reason, IMHO, that IBM might support Apple is as a crown-jewel example of PPC development, and with the 3 gaming consoles coming onboard, Apple just doesn't shine like they used to.



    So to get hardware that will be affordable in the future, Apple has had to move to a platform that doesn't require huge investments from them, hence x86. I think it is a smart business move.



    Now, for choosing between Intel and AMD.

    Intel's free cashflow is 9 billion dollars.

    AMD's free cashflow is -300 million.



    I'm not saying that AMD is going out of business, but if you had to choose a long-term partner for your business, who would you choose?

    I think that Intel came to the party in a big way, not only with R&D support but with actual cash support for marketing, and a roadmap that will once again leapfrog AMD.
  • Reply 20 of 56
    welshdog Posts: 1,897, member
    Quote:

    Originally posted by jms698

    This doesn't sound very impressive.



    IBM and AMD already have 64-bit.







    So what? Apple's implementation of 64-bit is not exactly setting the world on fire.



    Read the part about 64-bit in this blog:



    http://www.drunkenblog.com/drunkenbl...es/000555.html



    With Intel maybe OS X will do 64-bit in a way that more developers and customers can actually use.