Intel unleashes Mac-bound "Woodcrest" server chip - Page 6

post #201 of 566
Quote:
Originally posted by kim kap sol
Ok...that's a given. Limitations come from the hardware. I haven't seen something other than a 32-bit precision GPU in 3 years though.

Perhaps we're at the tipping point then that GPU maths is good enough to rely on. Apple seem to think so.
post #202 of 566
Quote:
Originally posted by aegisdesign
Perhaps we're at the tipping point then that GPU maths is good enough to rely on. Apple seem to think so.

GPU maths in Core Image is, as Chucker mentioned, implemented via OpenGL. Is there no other way?

That's the thing: Apple seems to think (based on earlier quotes such as "It allows for manipulation of deep bit images with incredible accuracy and color fidelity") that Core Image can deliver, let's say, high accuracy. If that goes through OpenGL, then Apple has a lot of confidence in an OpenGL-based Core Image delivering high accuracy when it comes to 2D and 3D effects.

Apple's claims are just a bit confusing. Remember that the ATI 9550 was compatible with GPU-accelerated Core Image, so GPU maths at that level was presumably already good enough by then???
post #203 of 566
Core Image GPU acceleration is determined by whether the GPU implements ARB_fragment_program, not by how fast or precise it is.
post #204 of 566
Thread Starter 
Quote:
Originally posted by Chucker
Core Image GPU acceleration is determined by whether the GPU implements ARB_fragment_program, not by how fast or precise it is.

Looks like the first part is from the documentation, but the second part seems unrelated. What is ARB_fragment_program, a call?
post #205 of 566
ARB is the OpenGL Architecture Review Board. Thus, functions prefixed ARB_ are "official" OpenGL extensions, whereas, for instance, functions prefixed NV_ were created by nVidia. (ATI_fragment_shader and NV_fragment_program interact with ARB_fragment_program, for example.)

"Fragment program" is OpenGL's term for what is otherwise known as a pixel shader.
post #206 of 566
more on GPU(er)s...

At Colleges, Women Are Leaving Men in the Dust

Quote:
[Jen] Smyers, also at American, said she recently ended a relationship with another student, in part out of frustration over his playing video games four hours a day...."That's my litmus test now: I won't date anyone who plays video games. It means they're choosing to do something that wastes their time and sucks the life out of them."

....In the Dickinson cafeteria on a spring afternoon, the byplay between two men and two women could provide a text on gender differences. The men...talked about playing "Madden," a football video game, six hours a day, about how they did not spend much time on homework.

....Some professors and administrators have begun to notice a similar withdrawal among men who arrive on campus with deficient social skills. Each year, there are several who mostly stay in their rooms, talk to no one, play video games into the wee hours and miss classes until they withdraw or flunk out.

This spring, Rebecca Hammell, dean of freshman and sophomores, counseled one such young man to withdraw. "He was in academic trouble from the start," Ms. Hammell said. "He was playing games till 3, 4, 5 in the morning, in an almost compulsive way."
post #207 of 566
Thread Starter 
Quote:
Originally posted by Chucker
ARB is the OpenGL Architecture Review Board. Thus, functions prefixed ARB_ are "official" OpenGL extensions, whereas, for instance, functions prefixed NV_ were created by nVidia. (ATI_fragment_shader and NV_fragment_program interact with ARB_fragment_program, for example.)

"Fragment program" is OpenGL's term for what is otherwise known as a pixel shader.

So if your code makes a call to ARB_fragment_program in xxx.lib, your app will automatically use Core Image? To do what? Move processing off to the GPU?

Oh, for chrissake. You're quoting Wikipedia. I thought you were serious.

Apple's developer documentation is where you want to get answers about this stuff.

Here:
Quote:
Quoted from Apple's developer documentation
Up until now OpenGL, the industry standard for high-performance 2D and 3D graphics, has been the primary gateway to the graphics processing unit (GPU). If you wanted to use the GPU for image processing, you needed to know OpenGL Shading Language. Core Image changes all that. With Core Image, you don't need to know the details of OpenGL to harness the power of the GPU for image processing. Core Image handles OpenGL buffers and state management for you automatically. If for some reason a GPU is not available, Core Image uses a CPU fallback to ensure that your application runs. Core Image operations are opaque to you; your software just works.

Core Image hides the details of low-level graphics processing by providing an easy-to-use application programming interface (API) implemented in the Objective-C language. The Core Image API is part of the Quartz Core framework (QuartzCore.framework). You can use Core Image from the Cocoa and Carbon frameworks by linking to this framework.
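
To give a feel for how little of that machinery the API exposes, here's a minimal sketch of applying a built-in filter (shown in present-day Swift for readability; the shipping API is Objective-C as the docs say, and the input path is a placeholder):

Code:
import Foundation
import CoreImage

// Minimal sketch: a Gaussian blur through Core Image.
// No OpenGL in sight; buffer and state management are handled for us.
let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/input.jpg"))!  // placeholder path

let blur = CIFilter(name: "CIGaussianBlur")!
blur.setValue(input, forKey: kCIInputImageKey)
blur.setValue(10.0, forKey: kCIInputRadiusKey)

// Rendering goes through a CIContext, which uses the GPU when it can
// and silently falls back to the CPU otherwise.
let context = CIContext()
let rendered = context.createCGImage(blur.outputImage!, from: input.extent)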
post #208 of 566
Quote:
Originally posted by wwwork
more on GPU(er)s...

At Colleges, Women Are Leaving Men in the Dust

Heh!

Then again, I can bore the knickers off the opposite sex with 'No, that's not black, it's black WITH cyan'.
post #209 of 566
Quote:
Originally posted by wwwork
more on GPU(er)s...

At Colleges, Women Are Leaving Men in the Dust

"You play video games? Oh wait, wait...do you use a Mac or a PC?"

"Mac."

"Oh, whew, you're silly, you can't play games on a Mac...how's Saturday night sound to you?"

"Uh...good."

Mac:1 PC:0
post #210 of 566
Quote:
Originally posted by onlooker
So if your code makes a call to ARB_fragment_program in xxx.lib, your app will automatically use Core Image? To do what? Move processing off to the GPU?

No. I never claimed so.
post #211 of 566
Quote:
Originally posted by a_greer
I bet that they will just call the damn thing "sex". It may attract the Alienware buyers, they have never had "sex"!

That is soooo funny. Truly funny.
post #212 of 566
Thread Starter 
Quote:
Originally posted by Chucker
No. I never claimed so.

I wasn't saying you were. I was asking what you were saying, and as I did I looked at your quoted words, saw a link to an online, user-creatable, user-editable, user-writable, user-definable encyclopedia, and didn't bother taking it seriously after that, so I looked it up myself in an appropriately credible place of reference. No worries.
post #213 of 566
Quote:
Originally posted by onlooker
So if your code makes a call to ARB_fragment_program in xxx.lib, your app will automatically use Core Image? To do what? Move processing off to the GPU?

It's not merely a matter of making a call to the GPU; your filters, effects, and whatever graphics work you want accelerated have to be written as a pixel shader.

(Yes, there have been a few hacks to get the GPU to work as an extra floating-point processor for non-video tasks, but this isn't exactly orthodox or in widespread use.)
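
To make that concrete: a custom Core Image filter really is just a small pixel shader that you hand to Core Image in its GLSL-like kernel language. A hedged sketch (present-day Swift around the kernel source; the kernel and its darkening factor are made up, and actually applying it to an image is omitted):

Code:
import CoreImage

// The kernel source is Core Image's GLSL-like kernel language --
// effectively a pixel shader run once per output pixel.
let kernelSource = """
kernel vec4 darken(sampler image, float amount)
{
    vec4 p = sample(image, samplerCoord(image));
    return vec4(p.rgb * amount, p.a);
}
"""

// CIKernel compiles the shader; nil means the source didn't compile
// (or string-based kernels aren't available on this system).
let kernel = CIKernel(source: kernelSource)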
post #214 of 566
I think Chucker has more to say, but he's keeping it simple and quiet for now. Chucker, why didn't you just say earlier that ARB_fragment_program is pixel shaders?

So we know we're down to pixel shaders being used for general-purpose computing tasks.

Here is an interesting wikipedia article:
http://en.wikipedia.org/wiki/GPGPU

Here's an interesting passage that talks about accuracy depending on the GPU:

"NVIDIA GPUs currently support 32 bit values through the entire pipeline. ATI cards currently support 24 bit values throughout the pipeline, although their new X1000 series supports 32 bits. The implementations of floating point on GPUs are generally not IEEE compliant, and generally do not match across vendors. This has implications for correctness which are considered important to some scientific applications."

"General-Purpose Computing on Graphics Processing Units (GPGPU, also referred to as GPGP and to a lesser extent GP^2) is a recent trend in computer science that uses the Graphics Processing Unit to perform the computations rather than the CPU. The addition of programmable stages and higher precision arithmetic to the GPU rendering pipeline have allowed software developers to use the GPU for non graphics related applications"

"The following are some of the non-graphics areas where GPUs have been used for general purpose computing:

* Physically based simulation - Game of life, Cloth simulation, Incompressible fluid flow by solution of Navier-Stokes equations
* Segmentation - 2D and 3D
* Level-set methods
* CT reconstruction
* Fast Fourier Transform
* Tone mapping
* Sound Effects Processing
* Image/Video Processing
* Raytracing
* Global Illumination - Photon Mapping, Radiosity, Subsurface Scattering
* Geometric Computing - Constructive Solid Geometry (CSG), Distance Fields, Collision Detection, Transparency Computation, Shadow Generation
* Neural Networks
* Database operations
* Lattice Boltzmann Method
* Cryptography
"
post #215 of 566
Quote:
Originally posted by Placebo
It's not merely a matter of making a call to the GPU; your filters, effects, and whatever graphics work you want accelerated have to be written as a pixel shader... (Yes, there have been a few hacks to get the GPU to work as an extra floating-point processor for non-video tasks, but this isn't exactly orthodox or in widespread use.)

Excellent. We're getting down to things now. At the lower level, Core Image does its GPU-driven 2D manipulations as pixel shaders.

The GPGPU wiki article I posted above doesn't specifically discuss pixel shaders; maybe it is talking more about using the GPU as an extra floating-point processor rather than as a pixel shader.
post #216 of 566
Quote:
Originally posted by sunilraman
I think chucker has more to say, but he's keeping it simple/ quiet for now. Chucker, why didn't you just say earlier ARB_fragment_program was pixel shaders?

I did.

Quote:
Originally posted by Chucker
"Fragment program" is OpenGL's term for what is otherwise known as a pixel shader.
post #217 of 566
Yeah, I meant as in why didn't you say that like, 1 or 2 pages back. Us GPU fanbois recognise "pixel shader" more than "ARB_Fragment_Program".
post #218 of 566
Heh. For the record, yeah when I was in college and playing a lot of computer games (a few hours each day) sex was a mystery. However once I left college there have been more interesting encounters with the opposite (and a bit of the same) sex along with some drugs, rock and roll, and trance musik along the way...... The halcyon days of 2000-2004. 2005-this year more chilled/ quiet. I think I've earned back my "right" to playing games, mostly Half Life 2!! now with HDR!! OMFG!!!111oneone!!!11
post #219 of 566
Quote:
Originally posted by sunilraman
Yeah, I meant as in why didn't you say that like, 1 or 2 pages back. Us GPU fanbois recognise "pixel shader" more than "ARB_Fragment_Program".

I wasn't deliberately trying to mislead people, if that's what you're thinking.

For Core Image to be GPU-accelerated, it relies on GPUs supporting pixel shaders, and on making those pixel shaders accessible through OpenGL via the ARB_fragment_program extension. That's all there is to it, really.
post #220 of 566
Back to the topic (well, the off-topic topic) of pixel shaders: I would then like to see implementation examples of Core Image filters across different GPUs.

I think that would be informative and help settle the matter. Take Core Image's Image Units and see "how accurate" image filters can be*, using Photoshop's CPU-based image filters as the benchmark. We take the output of these Image Units and compare it not only against Photoshop but also against the output of the different GPUs that ship with, or are offered with, the Mac.

*I don't know about coding for Core Image and Image Units, but in the case above we'd push for "as accurate as possible" filtering and output. E.g. a Gaussian blur on imagery of a defined resolution (say 1-6 megapixels).
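
If anyone wants to actually try this, Core Image makes the GPU-vs-CPU half of the comparison fairly easy, because you can force the software renderer and diff the two outputs. A rough sketch (present-day Swift; the file path, radius, and the byte-for-byte comparison are all placeholders for a proper tolerance-based diff):

Code:
import Foundation
import CoreImage

// Render the same blur through a GPU-backed context and a forced-CPU
// context, then compare the resulting pixel data.
func renderBlur(of image: CIImage, radius: Double, using context: CIContext) -> Data? {
    let blur = CIFilter(name: "CIGaussianBlur")!
    blur.setValue(image, forKey: kCIInputImageKey)
    blur.setValue(radius, forKey: kCIInputRadiusKey)
    guard let out = blur.outputImage,
          let cg = context.createCGImage(out, from: image.extent),
          let cf = cg.dataProvider?.data else { return nil }
    return Data(referencing: cf as NSData)
}

let image = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/test.jpg"))!  // placeholder path

let gpuContext = CIContext()                                        // GPU if available
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])   // force the CPU fallback

let gpuPixels = renderBlur(of: image, radius: 8, using: gpuContext)
let cpuPixels = renderBlur(of: image, radius: 8, using: cpuContext)
print("byte-identical:", gpuPixels == cpuPixels)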
post #221 of 566
Quote:
Originally posted by Chucker
I wasn't deliberately trying to mislead people, if that's what you're thinking. For Core Image to be GPU-accelerated, it relies on GPUs supporting pixel shaders, and on making those pixel shaders accessible through OpenGL via the ARB_fragment_program extension. That's all there is to it, really.

Yes, no worries. I think it's great that in that one paragraph above we've got a nice, compact description of Core Image. It's now a good starting point for the "accuracy of output across GPUs, and as compared to Photoshop" discussion.
post #222 of 566
Quote:
Originally posted by Placebo
It's properly designed 2D apps that can benefit heftily from these cards. Have you ever played around in Core Image Funhouse with a 9800 or better graphics card? Compared to performing the same blurs and distortions in Photoshop or Fireworks, it's ridiculously fast. Literally as fast as you can drag the blur slider.

Sure. I have 9800s in my older G4 towers. But those cards are really slow by today's standards.
post #223 of 566
Quote:
Originally posted by a_greer
unless there is a chance of the infamous 23 inch pro-ish iMac surfacing...

Infamous is right. At that size, I would expect it to topple over.
post #224 of 566
Quote:
Originally posted by a_greer
I bet that they will just call the damn thing "sex". It may attract the Alienware buyers, they have never had "sex"!

post #225 of 566
Well, someone's in a good mood
post #226 of 566
Quote:
Originally posted by sunilraman
Well, someone's in a good mood

Yeah. I've been working on a big project for a while in my shops downstairs, and it's in the living room, finally. That's why I've been here erratically lately. So, I'm feeling less stressed, except for the audio component that just went south.
post #227 of 566
Quote:
Originally posted by melgross
Sure. I have 9800s in my older G4 towers. But those cards are really slow by today's standards.

Therefore, Core Image acceleration has even more potential now with a 7900GTX than it did with a 9800 Pro.
post #228 of 566
Quote:
Originally posted by Placebo
Therefore, Core Image acceleration has even more potential now with a 7900GTX than it did with a 9800 Pro.

But surely a 9800 is better than a 7900?!?!?

Man, these stupid names make even the 'Model year' sound sensible.

David
post #229 of 566
Quote:
Originally posted by Placebo
Therefore, Core Image acceleration has even more potential now with a 7900GTX than it did with a 9800 Pro.

Definitely more potential, but I thought the debate was about the ability to tap that potential. That unused potential makes the difference between the two cards moot for 2D tasks.
post #230 of 566
Quote:
Originally posted by iMacfan
But surely a 9800 is better than a 7900?!?!?

Man, these stupid names make even the 'Model year' sound sensible.

David

Yeah, that struck me as I was writing that post. And now ATI has significantly crippled their products by reducing their mighty 9800 to an X1800! 1800 is smaller than 9800! 9800 must be fast fast fast!
post #231 of 566
Quote:
Originally posted by melgross
Infamous is right. At that size, I would expect it to topple over.

Heh... having switched to dual 24" monitors, it's painful going back to dual 20" (or a single 19" at home).

A 23" iMac Conroe probably handles most power desktop users (non-video). Those folks that need more HD space can buy external multi-TB drives or SANs to park under their desks...

Currently the 20" is just too small. Given the real estate the 2x24"s take up on my desk, the next upgrade is a single 30", which I have but am not using at the moment. I could go 1x30" + 1x21WS" in portrait mode, but the iMac won't do portrait, will it?

I don't really need expansion for the most part. I do need a decent vid card but decent...not top of the line. Just needs to drive the 30".

Vinea
post #232 of 566
Quote:
Originally posted by vinea
Currently the 20" is just too small. Given the real estate the 2x24"s take up on my desk, the next upgrade is a single 30", which I have but am not using at the moment. I could go 1x30" + 1x21WS" in portrait mode, but the iMac won't do portrait, will it?

That depends on what you mean. Unless Apple changed it, the iMac's built-in display can't be told to operate in rotated mode, but that would be awkward unless you had a custom mount - the iSight iMacs don't have an available VESA mount. It might work on the external display port, but I don't have an iSight iMac (or an iMac anymore) so I don't know.

Quote:
I don't really need expansion for the most part. I do need a decent vid card but decent...not top of the line. Just needs to drive the 30".

Unfortunately, I think dual-link DVI is one feature that Apple would try to reserve for the pro units, though I can understand that; it's definitely not something I'd expect from a consumer computer just yet.
post #233 of 566
Quote:
Originally posted by iMacfan
But surely a 9800 is better than a 7900?!?!? ... Man, these stupid names make even the 'Model year' sound sensible... David

GPUs have almost a cult-like following in the computer gaming and hardware enthusiast scene. Placebo was being a smartass; it's useful to ignore him sometimes.

GPU 101:

1. Intel Integrated Graphics is fine for almost any task, except for playing 3D games, regardless of whatever marketing terms and numbers Intel attaches to it. Puzzle and Flash-type games are no problem, but forget about any of the current, year-old, or upcoming 3D games, Windows or Mac.

2. nVidia GPUs. These are quite easy to understand. There is the 6 series (http://www.nvidia.com/page/geforce6.html) and the 7 series (http://www.nvidia.com/page/geforce7.html). 7 series is newer than the 6 series. For each series, generally the higher number models are better. Eg. in the 6 series a 6600 is better than the 6150. Further reading is required for letters after the numbers, eg. 6600GT is better than 6600 which is better than 6600LE. Now for the 7 series it is similar, in that 7600 is better than 7300; 7950 is better than the 7900 which is better than the 7600 and 7300. Making sense so far? Now again, the letter designations are also important, the 7900GTX is better than the 7900GT.

3. nVidia GPUs. Overlap between 6 series and 7 series: Note that at some level the high-6 series can outperform the lower-7 series. This requires further reading. Personally, for example, I think the nVidia 7300 is a total waste of money - it has some newer "features" but is not fast enough to make the most of these features. AFAIK you are better off with a 6600GT.

4. Okay. ATI GPUs. The old designations made sense in that higher numbers are better than lower numbers. radeon 9800 is better than 9600 which is better than the now hopelessly outdated 9200. Actually, these old designations are all outdated now, in general. Don't even worry about them. ATI has now moved on to the X1000 series designations. Similarly, X1900 is better than X1800 is better than X1600 is better than X1300. Now, for a given number, further reading is required. From top down, for the X1900, we have X1900 XTX, X1900 XT, X1900 GT.

5. ATI vs nVidia. This is an area that DEFINITELY requires further reading to make sense. For example, the ATI X1600 comes in around the nVidia 6600/6800 range. At the top end, the nVidia 7900 battles it out with the ATI X1900.

6. GPU clocking. Clock speeds also matter within a certain range of cards. The iMac and MacBook Pro sport an ATI X1600 and ATI X1600 Mobility respectively. These run at slower clock speeds, which results in poorer performance in 3DMark05, a highly regarded GPU benchmarking tool, compared to the ATI X1600 at its normal factory settings:

[3DMark05 chart: the Apple-clocked X1600 vs. the X1600 at factory clocks]

7. Various sites such as tomshardware.com, anandtech.com and GPU enthusiast sites and forums will have further info on a whole range of benchmarks for different games and the quality of visuals you get with different cards. For example, a new feature is HDR (High Dynamic Range) lighting, which is no doubt commonly discussed in recent GPU forums.

8. What Apple has chosen. The Mac mini and MacBook have Intel Integrated Graphics. The iMac and MacBook Pro have the ATI X1600 and X1600 Mobility, as per the chart above. The Power Mac G5 comes with the nVidia 6600LE or 6600, which, ironically, perform roughly around the level of the ATI X1600 in the iMac and MacBook Pro, depending on how you adjust the clock speed of the X1600 when running Windows under Boot Camp. Thankfully, the Power Mac G5 has the option of an nVidia 7800GT, which is a pretty decent card. The nVidia 7800GT is roughly 1.8x faster than the iMac Core Duo's ATI X1600. If the ATI X1600 is clocked at factory settings under Boot Camp, then the nVidia 7800GT is about 1.5x faster than the iMac Core Duo. These estimates are based on 3DMark05.

9. nVidia Quadro - These are not so much for gaming; they are targeted towards people who do 3D graphics work.
post #234 of 566
In our discussion of leveraging the power of the GPU to make GPU and CPU work together more smoothly, it is interesting to consider nVidia's Gelato:
http://www.nvidia.com/page/gelato.html

Check out this render made with Gelato enabled:

[Gelato render image]

"NVIDIA® Gelato® rendering software brings high-quality rendering to the masses. A hardware-accelerated, non-real-time renderer, Gelato allows you to use your NVIDIA GPU to create stunning images fast."

"Gelato is a software program that leverages the NVIDIA GPU as a floating point math processor. This allows Gelato to render images faster than comparable renderers, but without the quality limitations traditionally associated with real-time graphics processing on the GPU."

It is interesting that the system requirements go as low as an nVidia 5200 all the way up to a Quadro.

System Requirements
* NVIDIA GPU, one of the following:
* NVIDIA Quadro FX
* NVIDIA GeForce® 5200 or higher
* Microsoft® Windows® XP or
* Linux 2.4 kernel or later
* 1 GB RAM (recommended)

......................................

In the Mac world, I think it is interesting how Core Image leverages the GPU, in this case via pixel shaders. I feel the promise of nVidia's Gelato shows that quite interesting results can come from really maximising the GPU and then "handing off" to the CPU for clean-up and further accuracy.

For developers working with Core Image and Image Units in their applications, I think it will be interesting to see how much accuracy they can get via GPU-driven pixel shaders and how this is best "handed off" to the CPU for final renders, at both the 2D and 3D levels.
post #235 of 566
Finally, let's review the nVidia cards out there as compared to the iMac Core Duo's ATI X1600 as clocked by Apple. Estimates based on 3DMark05:

7900GTX: 2.1x faster
7900GT: 1.9x faster
7800GT: 1.8x faster
7600GT: 1.8x faster
7600GS: 1.1x faster
6600GT: 0.9x faster

IMO this leads me to believe that MacPros will ship with 7600GS on the low-end, 7600GT on the mid and high end, with 7900GTX and Quadros as BTO options.
post #236 of 566
Thread Starter 
Quote:
Originally posted by sunilraman
Finally, let's review the nVidia cards out there as compared to the iMac Core Duo's ATI X1600 as clocked by Apple. Estimates based on 3DMark05:

7900GTX: 2.1x faster
7900GT: 1.9x faster
7800GT: 1.8x faster
7600GT: 1.8x faster
7600GS: 1.1x faster
6600GT: 0.9x faster

IMO this leads me to believe that MacPros will ship with 7600GS on the low-end, 7600GT on the mid and high end, with 7900GTX and Quadros as BTO options.

They usually don't offer that many, but I think you're close. The two high-end ones should be there.
post #237 of 566
Let's look at prices:

$600: 7950GX2
$500: 7900GTX
$400: 7900GT
$300: 7800GT
$200: 7600GT
$150: 7600GS

I think Apple will be looking for a $200-300 card in a $2k-ish dual-Woodcrest, so I'm going to say the low-end will come with a 7600GT

Mid and High Ends could start with a 7800GT or a 7900GT with one of the top two as BTO, and a Quadro BTO.
post #238 of 566
Quote:
Originally posted by ZachPruckowski
Let's look at prices:

$600: 7950GX2
$500: 7900GTX
$400: 7900GT
$300: 7800GT
$200: 7600GT
$150: 7600GS

I think Apple will be looking for a $200-300 card in a $2k-ish dual-Woodcrest, so I'm going to say the low-end will come with a 7600GT

Mid and High Ends could start with a 7800GT or a 7900GT with one of the top two as BTO, and a Quadro BTO.

Given Sunilraman's numbers and yours, it looks to me that an individual getting anything other than the 7600GT is basically signing up for the sucker tax. $200 more for 5% faster, or $300 more for 10% faster is hard to justify.
post #239 of 566
Quote:
Originally posted by JeffDM
Given Sunilraman's numbers and yours, it looks to me that an individual getting anything other than the 7600GT is basically signing up for the sucker tax. $200 more for 5% faster, or $300 more for 10% faster is hard to justify.

Assuming my Newegg-based numbers are right, people who buy the high-end cards do so because it's that 5% or 10% of performance that gets you the highest settings in games. A 7600GT is gonna get you the high-res stuff, but with the cool smoothing and anti-aliasing features off and a lower framerate in the ultra-new games, whereas a 7900GTX in theory gets you the high res, the AA, the HDR, and everything else at a decent framerate. So the theory is that if you're gonna buy these $60 games and spend hours on them on a $1500-2000 system, you may as well have all the special effects turned up.
post #240 of 566
Quote:
Originally posted by ZachPruckowski
Assuming my Newegg-based numbers are right, people who buy the high-end cards do so because it's that 5% or 10% of performance that gets you the highest settings in games. A 7600GT is gonna get you the high-res stuff, but with the cool smoothing and anti-aliasing features off and a lower framerate in the ultra-new games, whereas a 7900GTX in theory gets you the high res, the AA, the HDR, and everything else at a decent framerate. So the theory is that if you're gonna buy these $60 games and spend hours on them on a $1500-2000 system, you may as well have all the special effects turned up.

That doesn't make sense, though, unless the 7600GT isn't capable of those features.