4 GB of RAM can still be handled by a 32-bit processor; we won't need a 64-bit processor until the average machine has a lot more memory than that. I don't know enough about the facts to say much about it, but what does sound good to me is a dual 32-bit processor. Someone was talking about a 64-bit processor that can actually split the data up into two 32-bit processes, so it would nearly be a dual-processor computer with only one processor. Beautiful.
<strong>64-bit processors will be necessary if you want more than 4GB of RAM in a desktop.</strong><hr></blockquote>
This is complete disinformation. I really wish people knew what they were talking about when they state 'facts.'
32-bit processors generally CAN handle more than 4GB of memory. The real limit is 4GB PER PROCESS, and that's not such a big deal right now.
OS X (and all other modern OS's) use a virtual memory system. That, in conjunction with the memory management unit built into modern processors, allows a larger total memory space than the 'bitness' of the processor would otherwise allow.
The G4, IIRC, has a 36 bit physical address bus, allowing for 64 GB of total RAM.
That said, I agree that even 64 GB will not be enough for very long. However, it could easily suffice for a year or so if need be. :-)
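To put rough numbers on the per-process vs. physical distinction (this is just my own back-of-the-envelope C, not anything from the posts above): a 32-bit virtual address covers 4 GB per process, while a 36-bit physical bus like the one described for the G4 reaches 64 GB of RAM.
[code]
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Gigabytes addressable with `bits` address bits (for bits >= 30): 2^(bits - 30). */
static uint64_t addressable_gb(unsigned bits)
{
    return (uint64_t)1 << (bits - 30);
}

int main(void)
{
    /* 32-bit virtual addresses: the 4 GB per-process ceiling mentioned above. */
    printf("32-bit virtual space: %" PRIu64 " GB per process\n", addressable_gb(32));

    /* 36-bit physical bus (as claimed for the G4): total RAM the CPU can reach. */
    printf("36-bit physical bus:  %" PRIu64 " GB of RAM\n", addressable_gb(36));

    /* Full 64-bit addressing, for scale. */
    printf("64-bit space:         %" PRIu64 " GB\n", addressable_gb(64));
    return 0;
}
[/code]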
Something else that should be pointed out is that Dell is NOT releasing a machine based on the new Itanium. From what Dell has said they don't believe they would sell enough to justify the R&D.
Also, people are asking about using these systems as a home machine...I haven't seen one yet that is approved for home use...
This device has not been approved by the Federal Communications Commission for use in a residential environment. This device is not, and may not be, offered for sale or lease, or sold or leased for use in a residential environment until the approval of the FCC has been obtained.
Going to 64 bit is also no small task.
[quote]IA-64 may be target for Enterprise now, but do you think Intel will really let AMD brag for long about having the only 64-bit desktop processor or let x86-64 win acceptance over IA-64?<hr></blockquote>
Well, the AMD chip is targeted at servers as well. Besides, I believe it would be wise for both AMD and Intel to charge a premium for these chips: $2000 - $6000 each depending on cache size.
<strong>Something else that should be pointed out is that Dell is NOT releasing a machine based on the new Itanium. From what Dell has said they don't believe they would sell enough to justify the R&D.</strong><hr></blockquote>
What R&D?
Shiver.
Lemon Bon Bon
[quote]McKinley should perform much better than Merced...basically Intel swallowed up the PA-RISC and Alpha folks and sent them to work on Itanium. This is the result.<hr></blockquote>
The McKinley had been in development since before Intel swallowed up Compaq's Alpha-related business (patents, engineers) and HP's PA-RISC chipset division. It's gonna take a while for the fruits of those purchases to take effect.
<strong>What R&D?</strong><hr></blockquote>
Exactly. I saw some show about Michael Dell, and they were talking about how at one point Dell cut its R&D to basically nothing and relied on its competitors to do the R&D (or something like that).
I have a feeling this is going to bite Dell in the ass one day (here's to hoping!)
Ben
Benchmarks in, McKinley girds for marathon
By Rick Merritt, EE Times, May 31, 2002
SAN MATEO, Calif. - Intel Corp. claimed victory in the speed race for server microprocessors last week, but competitors and analysts said the Itanium 2 performance metrics the company released only marked a warmup lap in a marathon competition ahead.
Intel estimates the Itanium 2, formerly called McKinley, will exceed 730 in base SPECint and 1,400 in base SPECfp benchmarks, compared with 690 and 1,098 for the dual-core IBM Power4 chip and 610 and 827 for the Sun Ultrasparc III. Analysts and competitors credit Itanium 2's 3-Mbyte on-board Level 3 cache (up from 1 Mbyte in the first-generation Itanium) as a key element in the performance specs. As few as 2,000 first-generation Itanium systems shipped last year, in part because the CPU's integer performance was lagging.
Chip-level performance is just one of the hurdles Intel's 64-bit architecture faces. Among the chief challenges, Intel must rally a broad software base behind the new architecture, get OEMs to deliver a wide array of systems and provide building blocks that will scale from single- to 16-processor systems and beyond. Intel hopes to clear most of those hurdles this year, setting Itanium 2 up for broader market acceptance sometime next year when it releases two third-generation Itanium parts.
"They have their ducks in a row now," said Kevin Krewell, an analyst at In-Stat/MDR. "I don't think Intel counts on a lot of volume this year. Their hope is IT spending picks up next year as they roll toward the Madison and Deerfield versions; 2003 is where the knee of the curve kicks in for them."
On the software front, as many as four 64-bit versions of Linux now support Itanium, but a production version of a crucial piece of software - Microsoft Corp.'s .Net operating system - is not expected to become available until late this year. Production versions of key database and application programs, including Oracle 9i, SAP, SQL Server and DB2, will require qualification on .Net and thus lag its arrival.
Several server makers, including Hewlett-Packard, IBM, NEC and Unisys, are expected to voice support for Itanium 2 when it is formally launched this summer. They will likely announce two- and four-way systems initially. Some are crafting in-house chip sets for 16- and 32-way systems expected to ship late this year or early in 2003.
Intel will roll its own chip set, the E8870, which can be used to construct four-, eight-, 12- or 16-way systems using a scalability port and a switch that links as many as 4 four-way server nodes into a single system. That chip set will support the PCI-X bus and will be announced this summer.
Analyst Krewell said Itanium may not scale as well as other architectures because the processor does not build in a memory controller as Sun did with its Ultrasparc III and Advanced Micro Devices will with its upcoming Opteron CPUs. And use of the separate scalability switches to link four-way nodes could introduce additional latency, he added.
Intel will stick with its strategy of large caches and no integrated memory controller through three generations of Itanium chips, all of which will use a common 611-pin grid array cartridge socket. That socket stability is expected to help make OEMs confident enough to roll a generation of card-upgradable systems. "Through 2004 we will be using the same 6.4-Gbyte/second processor bus," said Jason Waxman, a marketing manager for server processors at Intel.
Indeed, next year Intel will apply its 0.13-micron technology to the 0.18-micron Itanium 2 core to deliver Madison, a version with a whopping 6-Mbyte L3 cache, and a lower-power variant dubbed Deerfield. Those chips will be followed by the Montecito, an enhanced microarchitecture built in 90-nanometer (0.09-micron) technology in 2004.
AMD is launching the Opteron, its 64-bit version of the traditional X86 architecture, later this year, at first in single- and dual-processor versions, which are expected to carry a 156-kbyte cache and to use the HyperTransport interface. Four- and eight-way versions are to follow in mid-2003.
For its part, Sun Microsystems is not saying when it will upgrade its 1-GHz Ultrasparc III, which was released last November. A version with an upgraded memory controller is next on tap.
Sue Kunz, director of marketing for processor products at Sun, said Intel's advantage in chip benchmarks could evaporate when systems-level benchmarks on real application code become available next year. As Intel focuses on excellence in instruction-level parallelism, Sun is doing more to enhance its thread-level parallelism, in part as an effort to gain ground in applications performance.
"These SPEC benchmarks are not a good yardstick. I'd like to see numbers on SAP, Peoplesoft and Notes ? applications real people run," Kunz said.
At 75 watts, the Ultrasparc III also provides lower power consumption than the hot Itanium 2, which is expected to hit 130 W. The high heat dissipation means some OEMs will not use Itanium 2 in growing market areas such as dense rack-mounted systems for telco central offices and Internet data centers. Intel is aiming Itanium at high-end IBM and Sun servers, a portion of the market that is now stagnating under tight IT budgets. "In tight economic times we can deliver greater performance at lower prices," said Intel's Waxman. "This lets IT managers consolidate on common platforms using Windows in the data tier."
<strong>At 75 watts, the Ultrasparc III also provides lower power consumption than the hot Itanium 2, which is expected to hit 130 W.</strong><hr></blockquote>
130 Watts!!!! Too bad it doesn't emit light or you could use it to replace lights AND heaters in whatever place it was installed.
Intel estimates the Itanium 2, formerly called McKinley, will exceed 730 in base SPECint and 1,400 in base SPECfp benchmarks, compared with 690 and 1,098 for the dual-core IBM Power4 chip and 610 and 827 for the Sun Ultrasparc III.<hr></blockquote>
I may be nitpicking, but that sentence is a little bit misleading because the POWER4's SPEC scores were achieved with the second core disabled (SPEC CPU only measures single processors).
<strong>The clients that this processor targets couldn't care less about MHz. This type of client understands processor performance and the simple reality that faster MHz != faster processing performance.
I just ordered a machine to replace a dual PIII 933 machine. The machine has PIV Xeons running at 2.4GHz with DDR. Believe it or not, this machine is a stepping stone to an 8-way PIII Xeon machine...probably PIII 900's.</strong><hr></blockquote>
Mike,
I'm a little confused here, as I'm unaware of the P4 being available in a Xeon variant: I believe that its nearest functional equivalent is the Xeon MP, but only up to 1.6 GHz (and even then at one heck of a price in a server format; see IBM's X440 and look at the price difference between the 1.5 and the 1.6 GHz versions - Ouch is a word that springs to mind).
Perhaps you could fill in some of the blanks here.
[quote] Again from Mike:
Something else I should point out about Xeons and 64-bit processors. Generally, you don't see a HUGE performance increase. For example...
Let's say I have a dual PIII 933 database server that can only safely handle an average of 100 queries per second. A dual PIV Xeon 2.4 could probably handle an average of 600 queries per second. However, each query would not see a 6x increase in speed...the machine can just handle more, and you would probably see only a 2.5 to 3x speed improvement per query.
I honestly believe that we are close to the maximum machine needed for desktop use. Most home users won't load a machine the way you would a database server.
Also, the move to 64 bit is a HUGE move. Unless the OS is doing some behind the scenes magic most applications will need to be re-written.
<hr></blockquote>
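Mike's point that queries-per-second can go up 6x while each individual query only gets 2.5-3x faster is basically just throughput vs. latency (Little's law: throughput = concurrent requests / time per request). A quick illustrative sketch in C, with made-up numbers chosen only to mirror his example:
[code]
#include <stdio.h>

/* Little's law: sustained throughput = requests in flight / time per request. */
static double throughput_qps(double concurrency, double seconds_per_query)
{
    return concurrency / seconds_per_query;
}

int main(void)
{
    /* Hypothetical old box: 2 queries in flight, 20 ms each -> 100 qps. */
    printf("dual PIII 933:  %.0f qps\n", throughput_qps(2.0, 0.020));

    /* Hypothetical new box: each query only ~2.5x faster (8 ms), but more
       concurrency (hyper-threading, memory bandwidth) -> ~600 qps total. */
    printf("dual Xeon 2.4:  %.0f qps\n", throughput_qps(4.8, 0.008));

    /* Total throughput grows ~6x even though no single query runs 6x faster. */
    return 0;
}
[/code]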
I think another thing to bear in mind in a 64-bit environment is not just the whole RAM thing, whether on a server or a desktop.
A key point is also whether the 64-bithood is used to develop a more efficient set of instructions, which - and I'd be looking to Moki for confirmation here - I believe was the point behind the VLIW (Very Long Instruction Word) movement and its attachment to 64-bithood as an enabling technology.
As for huge memories in desktops or home systems (not Mike's point, I think; but it does turn up elsewhere in the thread), I cannot help but feel that a) the likelihood is that mainstream business desktops will probably get to 8GB of physical RAM in around two hardware generations (2006-8 is my hunch) and b) that Itanium-style 64-bithood will take around the same time to get to the mainstream desktop.
I don't have much to back this up, other than the fact that when 32-bit minicomputer technology took root in the late 70's (DEC VAX being the milestone I'm using), it took around 8 years by my reckoning for 32-bithood to reach the desktop with the Intel 80386 (PS/2s and the like, which was 1986 by my memory) and Motorola's 68020 (the venerable Macintosh II [87/88??]).
Even assuming that the pace of change has accelerated, the complexity of the engineering in McKinley and its successors will predicate against its application in mainstream business or consumer systems both on cost and packaging/engineering grounds.
Also, Intel has too much money (and brand image) entrenched in the Pentium architecture for it to switch wholesale.
That said, 64-bithood has the opportunity to act as a leveller in the marketplace where Apple would get the opportunity to re-fight some customer perception battles that it lost at the first time of asking.
As has been noted, shipping McKinley at 1.0GHz - albeit with the mother of all caches - means that if Apple were to ship a "G5" system next year at 1.6 or 1.8 GHz , the megahertz myth gets shown up for the crock we all know it to be.
I personally believe a "G5" mobo + a 64-bit clean version of OS X (which I sometimes refer to in postings as OS XI) will appear at MWSF 04, to coincide with the launch of a "G5" Powerbook, running between 1.4 and 1.6 GHz.
Thus the whole of Apple's "pro" line would be 64-bit capable sometime in 2004, with the consumer line (iMac) following at MWSF 05 and the iBook (probably at 1.2 - 1.4 GHz and low voltage) probably appearing at an earlier than usual MWNY 05, or possibly MWTK 05.
All this between 6 months to 2 years before the WinXXX world can claim 64-bit as standard across all marketplaces.
I do have another question - which is probably best answered by a Programmer or Moki type - which goes something like:
If the Itanium architecture does lead to a component with a 6MB L3 cache, how scalable is this before cache management becomes an issue in an SMP environment?
Ooops, you're right...it's a Xeon...don't know why I was saying PIV
The Xeon machine I just ordered uses the 2.4GHz Xeon with hyper-threading, which can cause each processor to appear as two processors to some applications. Because of this our database server will actually see 4 processors instead of just two.
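For what it's worth, the reason the database "sees" four processors is just that the OS enumerates logical processors, and hyper-threading presents each physical Xeon as two of them. A minimal sketch of how a program might ask, assuming a Linux-style system where sysconf(_SC_NPROCESSORS_ONLN) is available (the database does something equivalent internally):
[code]
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Logical processors the OS will schedule on. With hyper-threading
       enabled, two physical Xeons typically show up here as four. */
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    if (logical < 1) {
        perror("sysconf");
        return 1;
    }

    printf("OS reports %ld logical processors\n", logical);

    /* A thread pool sized from this number would start four workers
       on the dual 2.4 GHz Xeon box described above. */
    return 0;
}
[/code]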
[quote]Originally posted by Mark- Card Carrying FanaticRealist:
<strong>
I think another thing to bear in mind in a 64-bit environment is not just the whole RAM thing, whether on a server or a desktop.
A key point is also whether the 64-bithood is used to develop a more efficient set of instructions, which - and I'd be looking to Moki for confirmation here - I believe was the point behind the VLIW (Very Long Instruction Word) movement and its attachment to 64-bithood as an enabling technology.
</strong><hr></blockquote>
This is where I think we have MILES to go...but I also have to admit that OS X has come so very far when you consider the underpinnings. 64-bit must be taken advantage of, and I strongly believe that responsibility is going to fall mostly on the OS of the future. Application developers aren't going to put $$ into their apps just to make them 64-bit. Now, if Apple could come along with a 64-bit processor and a 64-bit OS X that also has the ability to run 32-bit apps, that would be compelling.
[quote]<strong>
As for huge memories in desktops or home systems (not Mike's point, I think; but it does turn up elsewhere in the thread), I cannot help but feel that a) the likelihood is that mainstream business desktops will probably get to 8GB of physical RAM in around two hardware generations (2006-8 is my hunch) and b) that Itanium-style 64-bithood will take around the same time to get to the mainstream desktop.
</strong><hr></blockquote>
I just don't see this happening. Unless you are handling HUGE amounts of data, or the applications are VERY poorly written, there is no need for this. We don't even have a server with more than 4GB of RAM now that we are no longer running Windows Server.
[quote]<strong>
I don't have much to back this up, other than the fact that when 32-bit minicomputer technology took root in the late 70's (DEC VAX being the milestone I'm using), it took around 8 years by my reckoning for 32-bithood to reach the desktop with the Intel 80386 (PS/2s and the like, which was 1986 by my memory) and Motorola's 68020 (the venerable Macintosh II [87/88??]).
</strong><hr></blockquote>
DEC VAX <img src="graemlins/smokin.gif" border="0" alt="[Chilling]" /> What's funny is that my dad and I were just talking last weekend, comparing their pre-VAX servers to what I just ordered. I remember when the VAX was state-of-the-art.
[quote]<strong>
Even assuming that the pace of change has accelerated, the complexity of the engineering in McKinley and its successors will predicate against its application in mainstream business or consumer systems both on cost and packaging/engineering grounds.
Also, Intel has too much money (and brand image) entrenched in the Pentium architecture for it to switch wholesale.
That said, 64-bithood has the opportunity to act as a leveller in the marketplace where Apple would get the opportunity to re-fight some customer perception battles that it lost at the first time of asking.
As has been noted, shipping McKinley at 1.0GHz - albeit with the mother of all caches - means that if Apple were to ship a "G5" system next year at 1.6 or 1.8 GHz , the megahertz myth gets shown up for the crock we all know it to be.
I personally believe a "G5" mobo + a 64-bit clean version of OS X (which I sometimes refer to in postings as OS XI) will appear at MWSF 04, to coincide with the launch of a "G5" Powerbook, running between 1.4 and 1.6 GHz.
Thus the whole of Apple's "pro" line would be 64-bit capable sometime in 2004, with the consumer line (iMac) following at MWSF 05 and the iBook (probably at 1.2 - 1.4 GHz and low voltage) probably appearing at an earlier than usual MWNY 05, or possibly MWTK 05.
All this between 6 months to 2 years before the WinXXX world can claim 64-bit as standard across all marketplaces.
I do have another question - which is probably best answered by a Programmer or Moki type - which goes something like:
If the Itanium architecture does lead to a component with a 6MB L3 cache, how scalable is this before cache management becomes an issue in an SMP environment?</strong><hr></blockquote>
I honestly believe that Apple does not need the G5 right now, or 64-bit. I think many people are going to be amazed at the performance of the new Xserve machines. What Apple is currently lacking is memory throughput. I'm keeping my eye on the new Apple servers right now. In about 6 months I'll be purchasing some new servers for a web cluster (tier 2), and I honestly think we will see some decent performance from these machines.
Also, don't forget that the Itanium 2 does require a complete application re-write, from what I understand. How many companies are willing to re-write applications given the current economic situation? This is the real question regarding the Itanium 2.
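On the "complete application re-write" question: from what I've seen, a lot of the porting pain is less about rewriting logic and more about flushing out code that quietly assumes pointers, longs and ints are all 32 bits. A contrived example of the sort of thing that works on a 32-bit build and silently truncates on a 64-bit one (my own illustration, not Itanium-specific):
[code]
#include <stdio.h>
#include <stddef.h>

/* Classic 32-bit-era habit: stashing a pointer in an int "handle".
   On a 32-bit build int and void* are both 4 bytes, so this round-trips.
   On a 64-bit build the cast to int throws away the top half of the address. */
static int make_handle(void *p)
{
    return (int)(size_t)p;   /* truncates when pointers are 8 bytes */
}

int main(void)
{
    int x = 42;
    int handle = make_handle(&x);

    printf("sizeof(int) = %zu, sizeof(void *) = %zu\n",
           sizeof(int), sizeof(void *));
    printf("handle = %d (only safe to convert back when those sizes match)\n",
           handle);
    return 0;
}
[/code]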
<strong>SpecFP and SpecINT numbers for McKinley actually put it ahead of the POWER4...but we all know Spec is an awful real-world benchmark...</strong><hr></blockquote>
...and no doubt the compiler used has had all the appropriate tweaks to be sure of its score. Given how difficult it is to write/compile optimized code for VLIW chips, I doubt we'll see many "optimal" apps for this chip for quite some time. It is apparent that Intel is following a fascinating hardware course, but I wonder if it will ever really see fruition. The horrendous difficulty in designing compilers for these beasts gives AMD and PPC chips an automatic advantage. It'll be interesting to see how things play out over the next few years.
A note about the high prices. The cost helps keep these things from cannibalizing the PIV market but if Intel sees an advantage I would bet that prices could drop very quickly. I wonder how well Itanium could code-morph IA-32 instructions?
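On the compiler point: a VLIW/EPIC design like Itanium only pays off when the compiler can find enough independent operations to fill each issue group, and a serial dependence chain leaves most of the slots empty no matter how clever the hardware is. A contrived C illustration of the two shapes the compiler has to tell apart (just the idea, not Itanium code):
[code]
#include <stdio.h>

#define N 1024

int main(void)
{
    static double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* Independent iterations: each c[i] needs only a[i] and b[i], so a
       VLIW/EPIC compiler can pack several of these operations per bundle. */
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i] + 1.0;

    /* Dependence chain: every step needs the previous result, so there is
       almost nothing to schedule in parallel however wide the machine is. */
    double acc = 0.0;
    for (int i = 0; i < N; i++)
        acc = acc * 1.0001 + c[i];

    printf("acc = %f\n", acc);
    return 0;
}
[/code]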
<strong>I may be nitpicking, but that sentence is a little bit misleading because the POWER4's SPEC scores were achieved with the second core disabled (SPEC CPU only measures single processors).</strong><hr></blockquote>
No it wasn't. What was disabled were the other cores in the module. (Power 4 comes as a module that contains 4 processors and 128 MB of cache. The 4 processors are all dual core)
A dual core processor is not 2 separate processors, it's still one processor.
DEC VAX What's funny is that my dad and I were just talking last weekend, comparing their pre-VAX servers to what I just ordered. I remember when the VAX was state-of-the-art <hr></blockquote></strong>
And can I thank you for helping me to feel even older, just as my 40th birthday comes up on the 90-day radar.
Serious reply forthcoming, once my brain has woken up.
[quote]Originally posted by Mark- Card Carrying FanaticRealist:
<strong>[/qb]
And can I thank you for helping me to feel even older, just as my 40th birthday comes up on the 90-day radar.
Serious reply forthcoming, once my brain has woken up.</strong><hr></blockquote>
Uh-oh...the BIG 40. I just had my BIG 3-0. I know of companies that are still running the VAX system for their old ERP systems! Those machines sure could run!
A dual core processor is not 2 separate processors, it's still one processor.</strong><hr></blockquote>
From Ace's Hardware:
IBM Introduces Entry-Level POWER4 Servers (HARDWARE)
By Brian Neal
Wednesday, June 26, 2002 1:23 PM EDT
IBM has introduced a new entry-level UNIX server based on the POWER4, according to this article from The Register. The pSeries 630 server line, codenamed "Regatta-LE," will ship in 1 to 4-way configurations beginning late in August. This effectively translates into 1-2 CPU modules per system, as POWER4 is a dual-core (on-chip multiprocessing) design, with one core disabled (like the HPC systems) at the low-end and four cores running at the high-end of the 630.
Getting IA-64 or any 64-bit processor to market will not fly with the masses. There is not exactly a "need" for 64-bit architectures on desktops just yet. We went to 32-bit (and beyond w/ the G4) for the sole purpose of meeting the vast demand for video editing, sound encoding, etc...there was a consumer need for those processors. (Steve didn't have to brag that this was a 32-bit processor and not a 16-bit processor, etc.)
It made business sense to get the G4 out when it came out (even if it was a little late).
Right now, what good is a 64-bit processor on the desktop, other than...."run those old apps faster than ever before", which people won't fall for.
So I believe that the G5s will come out when we see Internet2, or when other groundbreaking technologies and needs arise. If the G5 came out now, we the Apple faithful would jump onto the same bandwagon and find that, once again, we are the only ones on it. And that does not make business sense, since the cost of production of the chip will vastly outweigh the profit margins, given that demand won't be high enough.
Hmmm...The G4 has 36-bit addressing? I guess the Athlon and P4 only support 32-bit, so that means Wintel will need to move to 64 bits before our platform does.
<strong>
well, for 3.5k I'd be able to get an It2 PC....wouldn't I?</strong><hr></blockquote>
Uh...no.
An HP ZX2000 with a 900 MHz Itanium 2, ATi Radeon 7000, 1 GB of DDR-SDRAM, and a 40 GB IDE HDD, and no monitor will set you back $5834.
EDIT: SpecFP and SpecINT numbers for McKinley actually put it ahead of the POWER4...but we all know Spec is an awful real-world benchmark...
[ 07-09-2002: Message edited by: Eugene ]</p>
Great! I can see it now! The BIT MYTH!
Update: I just read this <a href="http://www.aceshardware.com/read.jsp?id=50000241" target="_blank">http://www.aceshardware.com/read.jsp?id=50000241</a> and it looks like using 36-bit addressing (which is Intel's answer to the 4GB limit) causes problems in Windows.
Would this be true too if Apple wants to offer more than 4GB on a PowerMac? How much RAM does OS X support? Can it understand 36-bit addresses?
[ 07-12-2002: Message edited by: Kecksy ]</p>