
OS X 10.8.3 beta supports AMD Radeon 7000 drivers, hinting at Apple's new Mac Pro - Page 2

post #41 of 201
This is so sad; you sound like a 90-year-old wondering why they don't make cars like they used to.
Quote:
Originally Posted by oomu View Post

I don't want a "cosmetic redesign" of the case just because some geeks are bored...
Of course not.

The reason to want a new case design is to save the Mac Pro from going the way of the XServe. To put it simply, if they build a new Mac Pro in the same case with the same pricing structure, the machine will die off. The simple reality is that an expensive tower is not the modern answer to high-performance computing for the next decade. The answer is more modular hardware that consolidates what is important for a high-performance computing machine.
Quote:
the Mac Pro already has a GREAT case: beautiful, solid, easy to tweak inside, good air cooling, good power supply.
I agree that the current case is great, but that means nothing, as great cases are simply an engineering effort. You seem to think it is impossible to make a great case for future machines, which, if you think about it, is ridiculous. I've just barely started to read this thread and have already noticed a fixation on the Mac Pro's case which is totally unjustified.
Quote:
But if you tell me about an internal redesign, with new features, for example hot-plug SSD+HD units, more connectors, and better ideas to improve my workflow and expandability, of course I buy that.
Let me say this first: it is pretty hard to have a Pro computer without internal PCI-Express expandability. However, much of what is currently inside the Mac Pro simply doesn't belong in a modern replacement. For example, the drive bays need to go; they should be replaced with slots for high-performance SSD cards. Drive bays are a holdover from the days of highly mechanical systems. Yes, I understand the need for bulk storage, but that is really better done in an external box where the storage system is optimized for the task at hand.
Quote:
I don't care about a redesign just for the novelty sake, only about work and utilitarian improvements.
I care about the Mac Pro, or better stated a high-performance computer, being around in two or three years and not seeing Apple say screw it. This focus on making a Mac Pro that looks like past machines will kill the product. And by looks I'm not just talking about the case but the entire architecture of the machine.
Quote:
Apple never change a design just because fashion, only if they think it's better.

Well, "better" in this case has to mean a machine that strengthens sales. My biggest fear is that the wishes of many in this thread will come true; that is, the new Mac Pro will be a huge, expensive tower that doesn't sell any better than the current model. That would lead to the death of the product line. Nothing is more important than a sign that Apple is again innovating on its desktop machines. One important factor here is that they need a box capable of delivering far more performance at a reasonable price than the current solution.
post #42 of 201
Quote:
I agree that the current case is great, but that means nothing, as great cases are simply an engineering effort. You seem to think it is impossible to make a great case for future machines, which, if you think about it, is ridiculous. I've just barely started to read this thread and have already noticed a fixation on the Mac Pro's case which is totally unjustified.

The thing is, every time someone talks about a redesign, they're taking away all the features that make it a pro machine. A new case would be nice (especially if it were lighter), but the iToy crowd just talks about how "antiquated" the Mac Pro is, and all they want to do is take away functionality. I know optical drives are over, but RAM? PCI-E? Multiple HDDs? Proper air flow and heat dissipation?

post #43 of 201
Quote:
Originally Posted by Tallest Skil 
What happens to the workstation crowd?

Die of old age? It depends on who exactly we are talking about.

There are people who have money and want to spend it on the best that money can buy for the sake of it. This isn't an important market to satisfy.
There are people who buy the Mac Pro because they see it as a superior form factor but could quite happily use a quad-i7 iMac. Again not an important market.
There are people who genuinely push the Mac Pro to the limit and build their businesses around them. This is the only important market for the Mac Pro.

The latter might need the most power they can get for compositing/rendering/transcoding. They might have a multitude of high-end peripherals to connect.

To satisfy the highest power needs, performance has to scale linearly and be good value. It can't be a case of waiting 2 years for a 30% increase in speed. A better solution is being able to plug in another Mac Pro, without requiring the skillset to manage a computer farm, and scale up performance. If the money is there and the need is there, it works. It's what the big boys do:

http://news.cnet.com/8301-13772_3-20068109-52/new-technology-revs-up-pixars-cars-2/

"Pixar had to triple its size, and today, the render farm features 12,500 cores on Dell render blades."

No matter how big you make it, one box isn't going to do that so the workstation has to be as good as possible for real-time tasks, allowing it to scale like a farm so that smaller businesses don't have to invest in server blades and offering the best value so they don't just go for HP or Dell. If they can add a special ARM co-processor, even better.
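The linear-scaling argument above can be sketched with a toy model. All of the numbers here are hypothetical (box price, render rates, and the 30% figure are illustrative, not benchmarks):

```python
# Hypothetical illustration: adding identical boxes scales render
# throughput linearly, unlike waiting for a single faster machine.

def farm_throughput(boxes: int, frames_per_hour_per_box: float) -> float:
    """Aggregate throughput of N identical render nodes (assumes a
    perfectly parallel workload with no coordination overhead)."""
    return boxes * frames_per_hour_per_box

def cost_per_frame(box_price: float, boxes: int, hours: float,
                   frames_per_hour_per_box: float) -> float:
    """Hardware cost divided by total frames rendered in the period."""
    frames = farm_throughput(boxes, frames_per_hour_per_box) * hours
    return (box_price * boxes) / frames

# One box that is 30% faster after a 2-year wait, versus simply
# plugging in a second box of the current generation today:
single_upgraded = farm_throughput(1, 1.3)
two_current = farm_throughput(2, 1.0)
assert two_current > single_upgraded
```

The point of the sketch is only that throughput is linear in box count when the workload parallelizes cleanly, which is why render farms buy nodes instead of waiting for faster single machines.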

For peripherals, external is always better because you aren't limiting the form factor of the peripheral, you aren't dictating the power requirements, you don't have to force the computer owner to install it and risk damaging a very expensive machine, you don't have to have the peripheral active when you aren't using it and you aren't forcing it to be used exclusively with one type of machine.

You can connect a simple Firewire adaptor or an entire server rack with the same tiny plug.

This could well alienate people who are used to the tower form factor never changing since computers came into existence but it won't be a case that they can't do the same things with the new machine, they just have to do them differently.

Having a more integrated, unchanging core design like the other machines in the lineup will make the Pro more reliable.

Eventually, once everyone is down the route of external peripherals and the performance of the iMac is 30x higher or more in 10 years, everyone will see it as a workstation, just like the Mini and laptops. From here, we can see where the puck is going and we know the path it has to take to get there.
Quote:
Originally Posted by Tallest Skil 
Are we to be stuck with 3 RAM slots?

4 slots (the Mac Pro has 4 per CPU), allowing up to 64GB RAM with 16GB DIMMs.

No one will ever need more than 64GB of RAM. You heard it here first.
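The slot arithmetic in that post works out as follows. The helper is hypothetical; the slot counts and 16GB DIMM size come from the post itself:

```python
# Maximum RAM given slot count per CPU, CPU count, and largest DIMM.
def max_ram_gb(slots_per_cpu: int, cpus: int, dimm_gb: int) -> int:
    return slots_per_cpu * cpus * dimm_gb

assert max_ram_gb(4, 1, 16) == 64    # single-socket model: 4 x 16GB
assert max_ram_gb(4, 2, 16) == 128   # dual-socket Mac Pro: 8 slots
```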
post #44 of 201
Quote:
Originally Posted by Marvin View Post

It depends on who exactly we are talking about.

Your mockup appears more like an xMac than something designed to serve the needs of the Mac Pro's market right now.

If the true market is, as you say, those that push it to its limit, cutting its limit in half doesn't seem like the best idea.

Quote:
Originally Posted by Marvin View Post

4 slots (the Mac Pro has 4 per CPU), allowing up to 64GB RAM with 16GB DIMMs.
No one will ever need more than 64GB of RAM. You heard it here first.

I'm just… I'm losing it. You're right; it's four now, eight in the dual-chip, so did that mean Nehalem/Westmere can have 6 per processor normally? I guess all that I remember is they shorted us on the maximum number of slots. But that's probably up for grabs now, too…

Originally Posted by Marvin

The only thing more insecure than Android’s OS is its userbase.
post #45 of 201
Quote:
Originally Posted by PhilBoogie View Post

Quote:
Originally Posted by Tallest Skil View Post

Quote:
Originally Posted by PhilBoogie View Post

Will look great next to all my older MP's; becoming more like a museum.

Hmm. Is that really what you want from what is supposed to be a bleeding-edge computer?

Of course not; the internals are the important factor. Give me 16x PCIe on all 4 slots, et cetera. The request for a smaller MP I don't get, which is why I'd rather have the same case design.

The request for the smaller case is being made for multiple reasons, but the big one is that we don't want to see the Mac Pro go the way of the XServe. Frankly, from what I can see it was rather close to doing that, as I have my doubts that sales have been very strong at all. Everyone has seen the sales percentages of laptops versus desktops at Apple, and most of the desktop sales are iMacs, so it isn't unreasonable to think that Mac Pro sales might be in the low tens of thousands per quarter. That is close to what the XServe was selling when it got canned.

Mind you, it isn't so much a request for a smaller case as for a more economical platform that can realistically spur sales. The huge case is just one element that makes the Mac Pro rather expensive for what you get. Frankly, it is a big box that offers all sorts of capability that not all power users need. In a nutshell, this is the big problem: many people can justify a high-performance computer, but not everybody is willing to pay for all of the extras that come with the Mac Pro to get it. So here is the challenge: distill what is needed for a high-performance workstation down into a reasonably sized and priced box. Those that need extra capability can tack on additional boxes as needed. The idea is to get the core down to a minimal box that can be offered at a rational price.

In the end you are right about internals being important, but what is more important is coming up with the right mix of internal components that effectively allow Apple to deliver a high-performance compute module. One example here is the row of disk drive bays in the Mac Pro; frankly, they don't belong in a Mac Pro replacement. Now I know some out there will rebel at the thought, but even a few minutes of thinking will show that it makes sense to drop them in favor of other solutions. The first rationale is that very few users make use of all of those bays; those that do would be better served by an external disk array. I know there are many "yeah buts" about to be offered up, but remember you are not the average Mac Pro user. Getting rid of all of the drive bays saves significant case space while cutting power supply needs.

In any event, think of how refactoring is done in software. A new Mac Pro refactored into a much smaller box would be an attempt to solve high-performance computing in a different way. The computer simply becomes a module that is tailored to specific user needs.

*******************************************

By the way, guys, those AMD Radeons are OpenCL powerhouses delivering some impressive performance specs. By the time this machine ships there might even be lower-power variants. The point is, if this is coupled with just one of Intel's coming many-core chips, we could see some rather impressive performance from a one-socket machine. It is just another reason not to focus on the past when thinking about what is right for the future.
post #46 of 201

Oh hellzzzz nooo! I don't want my Mac Pro to look like an octopus full of cables. Going that route means one cable to the PCI-X/GPU box, another for the optical drive, another for external storage plus all the other wires for the displays and peripherals. I prefer to have everything internal if possible.

I wouldn't mind one optical drive on top and the second slot for a card reader like this: http://www.sonnettech.com/product/qio.html as an additional option, but removing the optical drive bay would be a bad idea!

In the case of the Mac Pro, BIGGER is better.

As for Gartner, they are full of baloney.

Quote:
Originally Posted by Marvin View Post


They might have gone the route of putting a single Xeon in the iMac but the model number MP60 listed in the Bootcamp plist suggests that won't be the case. It also provides good evidence that it won't have an optical drive, which indicates a redesign because that 5.25" unit takes up a lot of space. They could move the hard drives into that space and just lower the height or maybe have a different shaped power supply but I think they should go beyond that.
We get the same comments cycling round every time there's a Mac Pro reference such as putting in more PCI slots and an i7 because it's cheaper.
The entry Mac Pro uses a $294 processor, which costs exactly the same as an i7. The 1000W power supply and motherboard will be more expensive but we know that they upped the Mac Pro price by $300 after the first Mac Pro model so they obviously weren't selling enough to justify the margins they had. The lower the volume, the higher the price.
The parts that go into the Mac Pro can be bought for about $1200-1400 so they are easily at 40% profit margins.
When it comes to Thunderbolt, there's still the issue of how they get it to work with a dedicated GPU. The spec they are required to follow in order to call it Thunderbolt is that it has PCI and DisplayPort on the same connection, no compromise. So they either have PCI slots and no Thunderbolt, or Thunderbolt and no user-installed GPUs, because if you put in another GPU, the machine can't know how to route the graphics out the TB ports. If you put in a non-standard GPU, it breaks the TB spec.
There's also the issue about the machine having 40 PCI lanes. If you have 4 slots, you can't give them all 16 lanes and if you max out the lanes on the slots, there's nothing to allocate to Thunderbolt. I think it's very much an either/or situation.
When you consider that the Mac Pro slots only have a 300W power allocation, you can only have multiple low-power GPUs or a single high-end one. The simpler option is the single high-end one.
Once you've decided on the GPU, Thunderbolt can take care of expansion. It would be better if Apple managed to get the 20Gbps Falcon Ridge controller, though. This prevents the scenario where MacBook Pro/Air/iMac/Mini professionals are buying Thunderbolt peripherals while Mac Pro professionals are buying PCI cards. They all buy the same peripherals.
The single GPU would still be upgradeable but only from Apple as it has to work with Thunderbolt.
As far as the CPU goes, they can stick with allowing 2 CPUs but Ivy Bridge will bring 10-core chips, maybe 12. These will be expensive chips.
Right now, the highest-end MP uses 2x $1440 CPUs = $2880 but the performance is only about 20% faster than the $1885 single CPU 8-core E5-2687W. The equivalent Ivy Bridge chip will likely be 20% faster so they could offer the same performance as the current $6200 Mac Pro for:
$2499 - $294 + $1885 = ~$3999
While they could still offer a faster dual processor model at $6200, if few people are buying those, the better option would be to offer the best value to the highest volume of customers.
The entry model could do with a 6-core CPU and then have an 8-core in between.
By taking out the optical, the PCI slots and 2nd CPU, they can cut the power consumption down so the PSU can drop to 500-600W.
If they can fit this into a Cube, that would be great but I think they'd struggle with that. They can at least manage the following size as it's just a reworking of what they have already:

If they can put in functionality to allow zero-config connections over Thunderbolt, even better. You could buy as many $3999 models as you need and just plug a TB cable between 1 and 2, then 2 and 3, etc.
Sure the complaints will come in about not being able to access PCI cards but for high-end tasks, wouldn't you rather spend $3999 on another MP and run any task natively on a dedicated 10-core Xeon than spend $4750 on a Red Rocket PCI card that only does one thing? There's always the backup of having an external PCI box anyway.
If they can figure out how to make PCI slots and Thunderbolt work together in a Xeon box, all the better I suppose but they still need to allocate 40 lanes between them so they won't have more than 4 slots.
Ultimately, just like FCPX they have to design this box for the next 10 years, not for the last 10 years and make it appeal to the widest Mac Pro audience. If leaving the design largely unchanged and leaving out Thunderbolt accomplishes this, so be it but I don't think it does. I think the USP should be performance-per-dollar, not expansion - make it more than twice as fast as the iMac for less than twice the price.
Remember what the original Macintosh said:
http://www.youtube.com/watch?feature=player_detailpage&v=2B-XwPjn9YY#t=224s
These big, heavy workstation form factors are becoming unnecessary for workstation use just like the mainframes. Same for servers. One day, so few people will buy them that they will be dropped:
http://www.gartner.com/it/page.jsp?id=2079015
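The 40-lane budget described a few paragraphs up can be sanity-checked with a small sketch. The x4 width for a Thunderbolt uplink is an assumption for illustration, and the slot widths are examples, not a real Mac Pro layout:

```python
# Sanity-check a PCIe slot layout against a CPU's 40-lane budget.
TOTAL_LANES = 40
THUNDERBOLT_LANES = 4  # assumed x4 uplink for a Thunderbolt controller

def fits(slot_widths, thunderbolt=False):
    """True if the requested slot widths (plus an optional Thunderbolt
    uplink) fit inside the total lane budget."""
    used = sum(slot_widths) + (THUNDERBOLT_LANES if thunderbolt else 0)
    return used <= TOTAL_LANES

# Four x16 slots alone already blow the budget:
assert not fits([16, 16, 16, 16])
# Maxing out the slots leaves nothing for Thunderbolt:
assert fits([16, 16, 8]) and not fits([16, 16, 8], thunderbolt=True)
# A single x16 GPU plus Thunderbolt fits comfortably:
assert fits([16], thunderbolt=True)
```

This is the either/or situation the post describes: with a fixed lane budget, full-width slots and Thunderbolt compete for the same lanes.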
post #47 of 201
Originally Posted by wizard69 View Post
By the way, guys, those AMD Radeons are OpenCL powerhouses delivering some impressive performance specs. By the time this machine ships there might even be lower-power variants. The point is, if this is coupled with just one of Intel's coming many-core chips, we could see some rather impressive performance from a one-socket machine. It is just another reason not to focus on the past when thinking about what is right for the future.

 

Just hope one of them works with my Early 2009…

Originally Posted by Marvin

The only thing more insecure than Android’s OS is its userbase.
post #48 of 201
Quote:
Originally Posted by Conrail View Post

Quote:
I agree that the current case is great, but that means nothing, as great cases are simply an engineering effort. You seem to think it is impossible to make a great case for future machines, which, if you think about it, is ridiculous. I've just barely started to read this thread and have already noticed a fixation on the Mac Pro's case which is totally unjustified.
The thing is, every time someone talks about a redesign, they're taking away all the features that make it a pro machine.
Is the cup half full or half empty? I'm certain if we took a poll we would get a whole lot of differing descriptions as to what a Pro machine is. The fact is, the vast majority of Pro users are using Apple MacBook Pros instead of desktop machines. You can't even argue with that unless you demand that your personal definition of what a Pro is be used.

Now we could focus the discussion about what is a "pro" machine just on Mac Pro users but do you think the definition of a Pro machine would still be homogeneous? I'm certain it wouldn't be and that we would see a wide array of Mac Pro users represented.

In the end we are talking about distilling the Pro down to a base high performance computing module, nothing more and nothing less. These days it is easy to provide the niche capabilities via plug in boxes. So for the few that still use an optical they can plug one in. Likewise for those that need a disk array.
Quote:
A new case would be nice (especially if it were lighter), but the iToy crowd just talks about how "antiquated" the Mac Pro is, and all they want to do is take away functionality.
Don't be so bullheaded. How much of that functionality does the average Pro user really use? Personally, I'd much rather have an affordable high-performance computing module than pay current Mac Pro prices for what is a crap machine for my needs. In the end, what Apple would actually do is improve core functionality. Niche users can then slap whatever they want onto the computer module.
Quote:
I know optical drives are over, but RAM?  PCI-E?  Multiple HDDs?
When did anybody say anything about RAM? However, it might not be RAM as we know it today, as there are options and physical realities that will come into play in the near future. For example, the latest spec for DRAM includes performance options for RAM soldered onto the motherboard. Then you have the 3D memory technology that Micron and Intel have been working on. The simple reality is that if one wants to improve the performance of RAM arrays, the approaches of the past will not work, so sooner or later your Mac Pro will implement new RAM technology.

As to PCI-Express, again, who has said anything about that? Slots will be needed for the foreseeable future. However, the GPU might not be plugged into a slot; it all depends upon how they resolve the TB issues.

With respect to multiple hard drive bays, yes, they need to go. Disk arrays can easily be implemented as external devices these days, leaving the user to implement what is best for his needs. The bigger issue is why anyone would even bother with SATA or other legacy disk ports in a modern, forward-looking computer. Any machine calling itself Pro will need a solid-state storage system sitting on a high-performance PCI-Express port.
Quote:
 Proper air flow and heat dissipation?  
Contrary to popular opinion this is something that a new enclosure could improve upon significantly.
post #49 of 201
Quote:
Originally Posted by Marvin View Post

Quote:
Originally Posted by Tallest Skil 
What happens to the workstation crowd?

Die of old age? It depends on who exactly we are talking about.

There are people who have money and want to spend it on the best that money can buy for the sake of it. This isn't an important market to satisfy.
There are people who buy the Mac Pro because they see it as a superior form factor but could quite happily use a quad-i7 iMac. Again not an important market.
There are people who genuinely push the Mac Pro to the limit and build their businesses around them. This is the only important market for the Mac Pro.
Even then, that part of the market is highly varied. One of the problems with Mac Pro discussions is that these people only see their own point of view as to what entails professional usage.
Quote:
The latter might need the most power they can get for compositing/rendering/transcoding. They might have a multitude of high-end peripherals to connect.

To satisfy the highest power needs, the power needs to scale linearly and be good value. It can't be a case of waiting 2 years for a 30% increase in speed. A better solution is to be able to plug in another Mac Pro without requiring the skillset to manage a computer farm and scale up performance. If the money is there and the need is there, it works. It's what the big boys do:
The issue of performance increases is interesting, especially in the case of the Mac Pro, which gets ignored update-wise. One thing a distilled Mac Pro offers is the POTENTIAL for more regular updates. That is, if the machine is focused on a minimal design, validating updates becomes much easier. This might eliminate going for years without a performance update at all. Of course, the chip supplier plays a role here.
Quote:
http://news.cnet.com/8301-13772_3-20068109-52/new-technology-revs-up-pixars-cars-2/

"Pixar had to triple its size, and today, the render farm features 12,500 cores on Dell render blades."

No matter how big you make it, one box isn't going to do that so the workstation has to be as good as possible for real-time tasks, allowing it to scale like a farm so that smaller businesses don't have to invest in server blades and offering the best value so they don't just go for HP or Dell. If they can add a special ARM co-processor, even better.
In this realm ARM has nothing (yet).

However, I'm in complete agreement with the idea of a modular system. I'm actually hoping that this machine ships with the Phi chip that is expected to come with supercomputing networking hardware built in. This would allow for clustering in a simplified way and keep TB focused on what it does best. It is interesting that Intel has been rather quiet about supercomputing networking being built into some of their coming many-core chips.
Quote:
For peripherals, external is always better because you aren't limiting the form factor of the peripheral,
Actually that is baloney.
Quote:
you aren't dictating the power requirements, you don't have to force the computer owner to install it and risk damaging a very expensive machine, you don't have to have the peripheral active when you aren't using it and you aren't forcing it to be used exclusively with one type of machine.

You can connect a simple Firewire adaptor or an entire server rack with the same tiny plug.

This could well alienate people who are used to the tower form factor never changing since computers came into existence but it won't be a case that they can't do the same things with the new machine, they just have to do them differently.
If you look through this thread you will see a huge resistance to change. This is rather sad to see, really, as it would seem people think the computing industry is a mature field. In reality it is anything but mature and is changing rapidly.

So how does Apple deal with stubborn people who can't see another way to address a problem? Compelling hardware is the best approach. That is, the new Mac Pro must be refactored in such a way that it causes people to rethink their preconceptions of what a Pro computer should be. Frankly, the only way to really do that is to throw a bunch of new tech at the machine.
Quote:
Having a more integrated, unchanging core design like the other machines in the lineup will make the Pro more reliable.
Well, hopefully. There is still the question of how they solve the TB/GPU issue. However, in general I agree that a solid core machine is more reliable and, frankly, more like a module.
Quote:
Eventually, once everyone is down the route of external peripherals and the performance of the iMac is 30x higher or more in 10 years, everyone will see it as a workstation, just like the Mini and laptops. From here, we can see where the puck is going and we know the path it has to take to get there.
This is a real threat to the big tower designs. Even the near future promises some really interesting iMacs and Minis. However I really don't see the need for a performance machine ever going away completely. It does become a question of Apple bothering to build that performance machine though.
Quote:
Quote:
Originally Posted by Tallest Skil 
Are we to be stuck with 3 RAM slots?

4 slots (the Mac Pro has 4 per CPU), allowing up to 64GB RAM with 16GB DIMMs.

No one will ever need more than 64GB of RAM. You heard it here first.

This whining about RAM is bothersome. First, people don't seem to grasp that RAM is, or will be, performance-limited by sitting in slots. I'm talking about the very near future, too; high-performance RAM will have to be soldered to the motherboard. Further, other technologies like multi-chip 3D RAM mean that RAM will take up very little space on the motherboard. It will be rather easy to put 64GB into a Mini in the not-too-distant future, soldered right on the motherboard. In the end, the discussions about RAM are just like the discussions about case size: too much focus on the past! People need to be open to the idea that motherboards will look dramatically different in the future.
post #50 of 201
Originally Posted by wizard69 View Post
This whining about RAM is bothersome. First, people don't seem to grasp that RAM is, or will be, performance-limited by sitting in slots. I'm talking about the very near future, too; high-performance RAM will have to be soldered to the motherboard. Further, other technologies like multi-chip 3D RAM mean that RAM will take up very little space on the motherboard. It will be rather easy to put 64GB into a Mini in the not-too-distant future, soldered right on the motherboard. In the end, the discussions about RAM are just like the discussions about case size: too much focus on the past! People need to be open to the idea that motherboards will look dramatically different in the future.

 

Imagine how much controversy there will be when the Mac Pro comes with soldered RAM like half of their laptop lineup right now…

And price? I can see the threads now. 😱

Originally Posted by Marvin

The only thing more insecure than Android’s OS is its userbase.
post #51 of 201
Quote:
Originally Posted by z3r0 View Post

Oh hellzzzz nooo!
Tough luck.
Quote:
I don't want my Mac Pro to look like an octopus full of cables.
It won't, mainly because the things you think you need today you won't need in the future.
Quote:
Going that route means one cable to the PCI-X/GPU box,
Why does this nonsense keep getting repeated? There is no way that external PCI-Express/GPU boxes will replace internal slots and GPUs. It is simple engineering: you can't get the required performance. You may see the GPU soldered onto the motherboard, though.
Quote:
another for the optical drive, another for external storage plus all the other wires for the displays and peripherals. I prefer to have everything internal if possible.
External storage just makes sense if you want to serve a wide array of users at a reasonable cost. How ugly it looks is up to the designers.

By the way, I've been a dues-paying member of the internal-storage crowd for a very long time but have seen the light, so to speak. Mainly because there are far more options for bulk storage, some of which can't be implemented internally. In the end, the big deal for me is shrinking the case and lowering the price. Further, you are in the minority when it comes to needing a lot of bulk storage inside the Mac Pro.
Quote:
I wouldn't mind one optical drive on top and the second slot for a card reader like this: http://www.sonnettech.com/product/qio.html as an additional option, but removing the optical drive bay would be a bad idea!
Anybody that is so hung up on optical drives really needs to get with the program.
Quote:
In the case of the Mac Pro, BIGGER is better.
Nope! The Mac Pro's size is a serious detriment to sales; it has to shrink.
Quote:
As for Gartner, they are full of baloney.
You may or may not be full of baloney yourself, but one thing is for sure: you are living in the past. Apple needs to come up with an architecture that can serve high-performance users for the next several years while controlling machine costs. The current Mac Pro architecture is a non-starter in this regard.
post #52 of 201
Quote:
Originally Posted by Tallest Skil View Post

Quote:
Originally Posted by wizard69 View Post

This whining about RAM is bothersome. First, people don't seem to grasp that RAM is, or will be, performance-limited by sitting in slots. I'm talking about the very near future, too; high-performance RAM will have to be soldered to the motherboard. Further, other technologies like multi-chip 3D RAM mean that RAM will take up very little space on the motherboard. It will be rather easy to put 64GB into a Mini in the not-too-distant future, soldered right on the motherboard. In the end, the discussions about RAM are just like the discussions about case size: too much focus on the past! People need to be open to the idea that motherboards will look dramatically different in the future.

Imagine how much controversy there will be when the Mac Pro comes with soldered RAM like half of their laptop lineup right now…
And price? I can see the threads now. 😱

Yep, I can see it coming. But again, that will be the way of the future unless the industry comes up with some amazing socket that eliminates current issues. In the end, if the Mac Pro is to be a performance machine in the future, it will have to drop RAM sockets for the simple reason that it isn't technically possible to run the high-speed signals over current socket technology. When that will happen is anybody's guess, but the wheels are already in motion.

The other thing that people seem to miss in this discussion is that even Intel is moving towards SoC technology. Machines will be smaller or more capable in the future simply because there is higher integration and fewer parts needed to realize a system. This will happen to both mainstream and performance processors, though what gets integrated will vary. We are not far at all from a Mini that can match today's Mac Pro in performance and capability.
post #53 of 201
Quote:
Originally Posted by Tallest Skil 
Your mockup appears more like an xMac than something designed to serve the needs of the Mac Pro's market right now.

If the true market is, as you say, those that push it to its limit, cutting its limit in half doesn't seem like the best idea.

It's not quite cutting it in half. Apple doesn't use the fastest CPUs. They use two lower performance processors. This design is just using one of the best single Ivy Bridge processors (10/12-core) instead of two lower performance ones and it means they can put the same parts in every model. It also means nobody can criticise them for not using the best parts.

An xMac would be something like this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16883227438

except the entry 27" iMac is pretty good value compared to that and looks a lot nicer. The xMac that's been described over the years never had more than quad-core processors.
Quote:
did that mean Nehalem/Westmere can have 6 per processor normally? I guess all that I remember is they shorted us on the maximum number of slots.

I think Intel has a document somewhere showing support for 6+ RAM slots. It is more than Apple puts in. IBM has layouts here for more:

ftp://public.dhe.ibm.com/common/ssi/ecm/en/xsw03075usen/XSW03075USEN.PDF

but there are performance hits in certain setups:

"The number and type of DIMMs and the channels in which they reside will also determine the speed at which memory will be clocked."
Quote:
Originally Posted by wizard69 
The request for the smaller case is being made for multiple reasons, but the big one is that we don't want to see the Mac Pro go the way of XServe.

Yes, it's like the FCPX face-lift. It doesn't matter much to people who ignore that but it broadens its appeal.
Quote:
Originally Posted by z3r0 
I don't want my Mac Pro to look like an Octopus full of cables. Going that route means one cable to the PCI-X/GPU box, another for the optical drive, another for external storage plus all the other wires for the displays and peripherals. I prefer to have everything internal if possible.

It still has an internal GPU (the best one you could buy, in fact - likely the Radeon 8970 by the time it arrives). Having 3 drives instead of 4 doesn't mean external storage is required, but if it's to be a RAID setup, it's better off external as hardware RAID. You get bus-powered optical drives, so it's one small cable and the drive can sit in a drawer 364 days of the year.
Quote:
Originally Posted by z3r0 
In the case of the Mac Pro BIGGER is better.

What they should do is make it 4x the size, support 4 processors, 8 PCI slots, 16 RAM slots, 8 HDDs, quad Blu-Ray drives. But then colour it polka-dot pink with plastic windows and flashing neon internals and see how many people ignore the aesthetics.

Apple can really do whatever they want, to be honest. At the last potential refresh, people were posting the usual comments about abandoning Apple, and telling all their friends in high places to do the same and bring Apple to its knees, if it didn't turn out how they wanted. Look what Apple did. What was the outcome? Customers have no power over Apple at all in this market segment. By the time it comes out it will have been nearly 3 years since a proper update. For all the noise that goes on about the Mac Pro, that's enough evidence of how ineffective it is.
Quote:
Originally Posted by wizard69 
I really don't see the need for a performance machine ever going away completely

The need for performance won't, but what are the tasks that need to be done? Think of how fast GPUs will be in 10 years. 3D computer graphics will all be real-time and photoreal. Compute power will be insane. I think this will satisfy the needs of the individual, and anything more will live in server-space.

edit: in terms of size, new advances in cooling should help. The heatsink in the MP is huge like the following:

http://www.dynatron-corp.com/en/product_detail_2.aspx?cv=&id=184&in=0

The new Sandia CPU heatsink is supposed to be able to do a much better job:

http://www.tomshardware.com/news/cpu-cooler-sandia-heatsink-fan,16100.html

They note 7x more efficiency in mass-produced models, and dirt cheap. If it's as good as that, it should definitely make its way into the MP, and Apple could probably manufacture it to a higher grade than most.
post #54 of 201
Quote:
Originally Posted by Marvin View Post


If they can put in functionality to allow zero-config connections over Thunderbolt, even better. You could buy as many $3999 models and just plug a TB cable between 1 and 2 then 2 and 3 etc.
Sure the complaints will come in about not being able to access PCI cards but for high-end tasks, wouldn't you rather spend $3999 on another MP and run any task natively on a dedicated 10-core Xeon than spend $4750 on a Red Rocket PCI card that only does one thing? There's always the backup of having an external PCI box anyway.
If they can figure out how to make PCI slots and Thunderbolt work together in a Xeon box, all the better I suppose but they still need to allocate 40 lanes between them so they won't have more than 4 slots.
 

I'm tired today. I really wanted to reference Highlander after you said something about people dying of old age (old movie), but I've got nothing other than the obvious joke about consolidating the lineup.

Distributed computing has been around for a while, but I don't see this as a good option at a smaller scale like a 1-5 man shop. It might be better in a larger shop where some machines are under-utilized, but GPGPU computation is likely the future for this stuff. OpenCL and CUDA just need to support a wider range of functions. I'd suggest that in a couple of years many of these customers will be better served by multiple GPU cards than by an ever increasing number of standard x86 Xeons, from a performance-per-dollar standpoint.

The other problem is that starting at $4000 limits your market. Right now the $2500 option and the 12-core really cater to different markets. Trying to homogenize them at the $4000 mark means the iMac has to pick up a lot of slack at the low end of that.

Regarding Thunderbolt and PCI slots, the problem was the lack of integrated graphics. It had nothing to do with PCI slots. The chip depended on specific logic board placement, and the certification requirements made integrated graphics the way to go. Right now we lack that on Xeons outside of the E3s. If that changes, you could have integrated graphics + PCI slots containing other cards.

 

As much as I enjoy the links and references, you miss many things. The last point is on single vs. dual package. If you're starved for bandwidth, you can go with dual CPU packages for this reason. If it's a minor bottleneck, they can do what they've done up to this point. The current machine is slightly over-subscribed, and Apple has done this with many things. Some of the iMacs did this with FireWire using a single bridge chip, so plugging in FW400 + FW800 meant they'd both run at 400. If the integrated graphics thing isn't so much of an issue, you can lose the lower x4 slot in favor of Thunderbolt. Allocate 4 lanes to that, alongside an x8 and an x16 slot. It's not really ideal, but you could run GPUs off both if they aren't oversized or thermally constrained. That basically leaves you with 12 lanes to cover the rest of the ports. It's cutting it pretty close, but it could work.
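The lane arithmetic above can be sanity-checked with a quick sketch (assuming the 40 processor lanes of a Sandy Bridge-E/EP Xeon and the speculative x16 + x8 + Thunderbolt split described in the post, not any real Apple design):

```python
# Hypothetical PCIe lane budget for a single-socket Xeon board.
# 40 lanes is the Sandy Bridge-E/EP CPU figure; the split below is
# the speculative layout from the post.
TOTAL_LANES = 40

allocation = {
    "x16 slot (GPU)": 16,
    "x8 slot": 8,
    "Thunderbolt controller": 4,
}

used = sum(allocation.values())
remaining = TOTAL_LANES - used
print(f"used={used}, remaining={remaining}")  # used=28, remaining=12
```

The 12 leftover lanes are what would have to feed SATA, USB, Ethernet and any other onboard controllers, which is why it's "cutting it pretty close".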

 

If Apple goes the route of just trying to chain x86 cores in boxes together, I see it as a product line with little remaining life. They need significant gains in performance per dollar to retain relevance. 20% faster means work gets done a bit quicker, but relying partially on GPU computation can actually alter workflows and enable things that weren't possible or feasible on older hardware. It was actually one of your links that showed how much differentiation there is between a mobile GPU and a desktop GPU in the $500-1000, sub-200W range.

post #55 of 201
Quote:
Originally Posted by Marvin View Post



Yes, it's like the FCPX face-lift. It doesn't matter much to people who ignore that but it broadens its appeal.
 

I forgot to reference this point. If you read over the most common complaints, separated out from the general noise, it came down to the lack of licensing options for the old version after such a significant point of departure, and concerns that certain must-have features were missing. Multi-cam support was the most commonly listed gripe, and that's one I can understand; its name should be self-explanatory. Apple remedied it with an update later, but the people who migrated were most likely already disappointed with Apple. My point remains that an increase in x86 cores may not be the way forward for much longer. I'd also suggest you'll retain some performance segment much longer. As we've seen outside of that, the focus has trended heavily in favor of mobile form factors with lower power requirements. If the power isn't being used, they'll optimize in a different direction.

post #56 of 201
Quote:
Originally Posted by jragosta View Post


And this is based on what? What makes you more of an expert in computer design than Apple?

 

You never used a thin Mac for anything other than net surfing, right?

 

1- Pick some music that has some dBA level. Either heavy metal, punk, or Aqua's Barbie Girl will work fine (http://en.wikipedia.org/wiki/Barbie_Girl)

 

2- Use big, full, earphones, and listen to such music at high volume (but take care of your eardrums, please).

 

3- Start "The Sims 3" in your brand-new ultra-thin Mac (if you pretend you're not a gamer, you can say you run a render job in Maya if you prefer).

 

4- While "The Sims 3" starts a new game, notice the subtle details in Aqua's Barbie Girl.

 

5- No, the wind effect behind the song's drums isn't a new mix of the song: it's your ultra-thin Mac spinning up its fans to keep the CPU from melting.

 

6- You can now turn off the music, quit "The Sims 3", and continue using your ultra-thin Mac for net surfing. Yes, it was designed for that.

post #57 of 201

*GASP*
This (hopefully) means I can shove a Radeon 7XXX series card in my 2012 Mac Pro and reap all the nerdy delights a high-powered graphics card brings!

I have the 6-core uni-processor model, and that CPU is on par, performance-wise, with the current 6-core E5-XXXX Sandy Bridge processors (albeit running at a higher clock frequency), so it's by no means slow. A 7XXX series graphics card would be the cherry on top.

 

Quote:
Originally Posted by rob53 View Post

Case size is important. Just ask all those enterprise technicians who have to move the old Mac Pros around. For me, a Mac Pro case needs to be just large enough to hold a motherboard with several CPUs, plenty of RAM, and a few specialized PCIe cards. Standard I/O ports don't really take up much room. As for optical drives, I wonder if not having them would be better, letting me attach my specialized and constantly changing drives to whichever Mac Pro I want instead of having to settle for older internal drives.

The amount of space for internal disk/SSD drives is something that I'd have to think about. There needs to be enough disk room for individual users, but for groups of users, having SSD boot drives and external shared drives might be a better option. This would mean a much smaller case design. Think about a stack of Mac minis. Add better CPUs with heat sinks (a tall-mini, the size of two minis) and something on the order of four tall-minis would only be a foot tall. Redesign to get rid of the optical drives (two quad- or six-core CPUs could fit in the bottom two or three tall-minis) and you could put disks and I/O cards in the top tall-mini space to get four to six CPUs with disks and I/O cards for a nice new Mac Pro. All in a much smaller design and (maybe) with redundant power supplies.

Actually, Apple could have fun and build the new Mac Pro as a modular computer: redundant power module, dual CPU module, disk module, I/O module. Just stack them to plug everything together. Start with a single CPU module but allow 2-4 (or more) to be stacked. Same with disk and I/O modules. This ends up looking like a blade server and could provide the same functionality (maybe even as a reborn OS X Server).

Why the hell would you do that? Why would you have it all in external modules when it's already modular inside a single case? You mustn't know much about professional hardware, because those Xeon CPUs, even the bleeding edge chips, need a lot of room, big fans and large heatsinks to keep cool. I have yet to see a small (1U/2U) rack server with more than two CPU sockets, and I have never seen a tower with more than two CPU sockets as small as the Mac Pro. People who require that much horsepower should turn to computer clusters. You would need a LOT more space than the footprint of a Mini (even one 3x as tall) for even a single high-power Xeon chip.

 

Then you have the issue of cost; connecting all those external modules together would require very expensive, high bandwidth, enterprise supercomputer/cluster grade fibre optic cable. Making drives external is just... dumb... Why would you take your disks out of your system, away from the SATA or SCSI backplane, and connect them externally? Thunderbolt is far too expensive, and internal disks offer a direct connection to the disk controller without going through PCIe first as with Thunderbolt. It's also more secure being inside a locked case, it's better for cooling (pro systems and servers have fans for the hard disk bays), and external units would end up being gigantic in order to furnish a fan and the interface ports.

 

You're suggesting all of this to make the system smaller, when realistically all you're doing is introducing more parts to potentially fail, dangerous interface cables, racking up the price, and actually making the system unfeasibly large.

 

 

Quote:
Originally Posted by ecs View Post

Finally, an interesting rumor!!! (I was quite bored of iOS rumors)

 

I'd like to have a new Mac Pro with as few mechanical parts as possible (no HD, fan-less PSU... and if the whole computer can be designed with just one fan, better than two).

Even the lowest-power desktop PCs have fans in the PSU. Those things get HOT. It's not a wall-wart or power brick connecting to a battery in a laptop; it's a big brick of a thing that has to power several high performance devices.

 

Also, one fan? Why would that be a good idea in a high performance system? High performance servers and workstations are designed to stay below 60°C. The Mac Pro has a fan on the GPU, a fan in the PSU, a fan for the HDDs, a fan for the daughter card holding the processor and RAM, a fan inside the processor heatsink itself, and an exhaust fan. It needs all those fans; you can't just chop them down to one and expect it to function. The only fan I can see dropped in the near future is the HDD fan, once HDDs become obsolete in everything except specialist systems requiring large amounts of high-density storage.

 

 

 

 

It's comments like the above two that really infuriate me, because they are completely unrealistic, with little or no thought towards reality and the laws of physics.

... at night.

post #58 of 201
Quote:

 

(text cut out because WALL-O-TEXT!)

What you have proposed here is quite possibly one of the dumbest things ever for a professional workstation machine.

 

You've removed the second processor, crippling the machine in the high-end market; you've removed the PCIe slots, meaning it can no longer be upgraded with extra expansion boards; you've removed a drive bay, leaving only three (making almost all RAID configurations useless if three drives are employed); and you've removed the top of the machine above the drive bays that held both the ODDs and the PSU, meaning no space left for extras such as a card reader. Your choice of putting the PSU behind the processor-and-RAM daughter board means that cooling has been compromised, as there is now no exhaust fan at the back of the machine for said daughter board (with the heat from the PSU also affecting the airflow around the CPU). Smaller fans would have to be employed for the middle of the tower, running at a higher RPM and causing lots of excess noise, and there is less space around the CPU heatsink for it to "breathe", causing other components around it to heat up from being in such close proximity, needing either lower performance parts or faster, louder fans.

 

All you've done is cripple the machine and make it useless in the high-end market. If these are the omissions you'd make to the Mac Pro, then you are clearly not its target market.

 

Machines of this type from any manufacturer have all the features you omitted because they are required by the high-end user. These machines need to run cool, perform mind-bending calculations quickly, be rated for 24/7 operation with pinpoint precision and accuracy, and have 99.9% uptime at minimum. A consumer PC (home, office, gaming, etc.) is none of these things. The Mac Pro and other such workstations should not be treated and thought of as consumer desktop computers.

post #59 of 201
Quote:
Originally Posted by z3r0 View Post

192GB+ RAM

Lion only sees 96GB (don't know if ML will). Strangely, Windows uses all 128GB under Bootcamp.

OWC 16GB Memory Modules for 2009/2010 Mac Pro — 48GB / 96GB in Mac Pro


Quote:
Originally Posted by Marvin View Post

No one will ever need more than 64GB of RAM. You heard it here first.



Photographers disagree. Read this piece on a person using Photoshop to the max, so to speak. Easily needing 80GB, wishing for more.

I really enjoy this thread. I don't have time to discuss everyone's take on the subject, but I do have one thing to say:

I think pros use whatever tool is available to get the job done. You'd think someone like Phil Collins would sound different if he played on a different drum kit? No, it's not the tool that makes the sound; it's the artist creating whatever he wants. A photographer doesn't blame his gear if the picture doesn't look good to him. There are people creating far better pictures with their cellphones than others do with a (D)SLR.

Which, for example, means that a Pro won't care that much if there aren't any drivebays in the next model; they'll get external (TB) storage if needed.

Thanks to everyone for their great posts, especially wizard69 and Marvin.

If they do make a smaller box and some folks need to hook up external devices, that might create desktop clutter.

I’d rather have a better product than a better price.
post #60 of 201
Quote:
Originally Posted by benanderson89 View Post

[...]

Even the lowest power desktop PCs have fans in the PSU. Those things get HOT. Its not a wall-wart or power brick connecting to a battery in a laptop, its a big brick of a thing that has to power several high performance devices.

 

Also, one fan? Why would that be a good idea in a high performance system? High performance servers and workstations are designed to stay below at least 60C. The Mac Pro has a fan on the GPU, fan in the PSU, fan for the HDDs, a fan for the daughter card holding the processor and ram, a fan inside the processer heatsinc itself and an exhaust fan. It needs all those fans - you can't just chop them down to one and expect it to function. [...]

 

Yes, one fan, because I don't really need a Mac Pro elephant. What I actually want is an iMac without the display and with proper cooling. Put the most complete iMac configuration into a moderately sized cube, and you won't need more than a big high-quality silent fan to keep it cool at intense CPU/GPU work.

 

Problem is that Apple, with the sole exception of the Mac Pro, doesn't seem interested in cool (I mean thermally cool) machines, maybe because if they run hot, they don't last as long, and you buy machines more frequently. Put that together with beautiful aesthetics, and all the factors are there to increase sales.

 

This forces you to consider the Mac Pro even when you don't need one. But I don't want an elephant; that's why I'm asking for just a simple machine that can be used for intense CPU/GPU work, with as few mechanical parts as possible, while keeping the chips cool. It's not hard to achieve that, although I don't see Apple doing it, for the reasons above.

post #61 of 201
Quote:
Originally Posted by ecs View Post

 

Yes, one fan, because I don't really need a Mac Pro elephant. What I actually want is an iMac without the display and with proper cooling. Put the most complete iMac configuration into a moderately sized cube, and you won't need more than a big high-quality silent fan to keep it cool at intense CPU/GPU work.

 

Trying to engineer in favor of smaller sizes can actually drive up costs. Right now what most people fail to realize is that Apple is already leveraging a portion of the xMac crowd to keep Mac Pro sales at an acceptable level. The xMac machine people want aligns quite well with the specs of the base Mac Pro. Adding in a few drive bays and PCI options has little impact on pricing. They're using high markups for a line with little growth and low volume relative to their other products. I realize you dislike this, but that is how they've chosen to address this market up to this point. Windows is also showing terrible growth on desktops, so I can't see any big changes coming there.

Quote:
Originally Posted by PhilBoogie View Post


Lion only sees 96GB (don't know if ML will). Strangely, Windows uses all 128GB under Bootcamp.
OWC 16GB Memory Modules for 2009/2010 Mac Pro — 48GB / 96GB in Mac Pro


Photographers disagree. Read this piece on a person using Photoshop to the max, so to speak. Easily needing 80GB, wishing for more.
I really enjoy this thread. Don't have time to discuss everyones' take on the subject, but I do have one thing to say:
 

Most photographers will never use that much. You'd have to be stitching some enormous files with a lot of layers. Adobe does recommend 64GB for optimal performance with After Effects on a 16-core machine; they basically allocate 2GB per logical core. I don't think the ever-increasing core count will keep going, but dual package models still provide twice the bandwidth. Whether Apple sells them is another matter. This represents a small portion of their total users, and I'd expect Windows to represent a significantly larger chunk of it. The thing is, it doesn't matter for most of these applications. When you're in them, they're 90% the same. People usually just continue with whatever operating system they already use unless hardware or software requirements force a change.
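That 2GB-per-logical-core rule of thumb works out to the quoted 64GB like this (a sketch assuming Hyper-Threading doubles the logical core count):

```python
# After Effects RAM sizing per Adobe's rough guideline:
# about 2GB per logical core, with Hyper-Threading giving
# two logical cores per physical core.
physical_cores = 16                  # e.g. a dual 8-core Xeon workstation
logical_cores = physical_cores * 2   # Hyper-Threading
gb_per_logical_core = 2

recommended_gb = logical_cores * gb_per_logical_core
print(recommended_gb)  # 64
```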

post #62 of 201
Quote:
Originally Posted by ecs View Post

 

Yes, one fan, because I don't really need a Mac Pro elephant. What I actually want is an iMac without the display and with proper cooling. Put the most complete iMac configuration into a moderately sized cube, and you won't need more than a big high-quality silent fan to keep it cool at intense CPU/GPU work.

 

Problem is that Apple, with the only exception of the Mac Pro, doesn't seem interested in cool (I mean cool thermal-wise) machines, maybe because if they run hot, they last less, and you buy machines more frequently. Put it together with beautiful aesthetics, and all factors are put there to increase sales.

 

This forces you to consider the Mac Pro even when you don't need one. But I don't want an elephant, that's why I'm asking for just a simple machine that can be used for intense CPU/GPU, with as little mechanical parts as possible, while keeping the chips cool. It's not hard to achieve that, although I don't see Apple doing it, for the reasons above.

Machines running hot does affect reliability, but Apple machines are consistently some of the most reliable on the market; so much for your "they make them hot so they break" theory. The problem is that you don't seem to understand the requirements of professional IT equipment. Apple clearly states that the Mac Pro is a workstation and not a consumer-oriented desktop computer, and that it will therefore cater to workstation users. Workstation users want power, versatility, accuracy and reliability. The way to make the Mac Pro powerful is to use high performance parts; high performance equipment runs hot, so large cooling systems are needed. The system needs to be versatile and ready for any situation, hence the massive expansion capabilities of the Mac Pro, right down to removable daughter boards. They need to be accurate and reliable, again pointing towards the performance and cooling, but that can also mean the ruggedness of the system, its components, chassis and motherboard.

 

The Mac Pro is no smaller than any other professional workstation; in fact it's quite tiny in comparison with others in the same market segment (ProAm, Medium and Enterprise). Dell's new T5600 workstation is touted as having a "Compact Chassis": it's 41cm high, 17cm wide and 47cm deep.

The Mac Pro is around 40cm high (sans the large handles), 20cm wide and 47cm deep.

The Mac Pro is 3cm wider, but it has 4 hard disk bays vs. Dell's two. The Mac Pro case is on par with what other manufacturers are calling "compact"; cases for this class of machine can get much, much larger and much, much heavier. Your typical gaming computer has a larger case than the Mac Pro.

 

If you think it's easy to keep high-performance chips cool, then you really need to take another look at the heatsinks on the market and those installed in other professional workstations; they are gigantic for a reason. To get a desktop computer to stay at the heat levels attained by the Mac Pro, you'd better be prepared to shove one of those giant tower heatsinks on your motherboard, plus high performance fans.

Let's not forget that a single fan will not be enough to also cool a dedicated graphics card as well as the power supply unit. Even the cheapest desktop computer with a modest GPU card has a minimum of three fans.

 

 

Quote:

Originally Posted by hmm View Post

Most photographers will never use that much. You'd have to be stitching some enormous files with a lot of layers. Adobe does recommend 64 GB for optimal performance with After Effects on a 16 core machine. They basically allocate 2GB per logical core. I don't think the ever increasing core count will keep going, but dual package models still provide twice the bandwidth. Whether Apple sells them is another matter. This represents a small portion of their total users. I'd expect Windows to represent a significantly larger chunk of this. The thing is it doesn't matter for most of these applications. When you're in them, they're 90% the same. People usually just continue with whatever operating system they already use unless hardware or software requirements force a change.

Given the sheer pixel density of modern cameras, it's very possible that a photographer could use that much RAM in a heartbeat. I draw colour comics in Photoshop, and I have no trouble reaching the 32GB+ mark of Real Memory on my Mac Pro after several solid hours toiling away over a Wacom tablet. If I was doing professional grade work with CMYK and/or print-ready files, that would easily top 64GB+. A CMYK or print-ready file (300DPI+, 16-bit colour or better, and/or very high resolution) can be hundreds of megabytes in size, some even a gigabyte, and this is just the file on disk!
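A back-of-envelope calculation bears out the "hundreds of megabytes" figure; the 17x11 inch page size here is an illustrative assumption, not from the post:

```python
# Uncompressed size of a hypothetical 17x11 inch CMYK page
# at 300 DPI with 16 bits per channel.
dpi = 300
width_px = 17 * dpi              # 5100 px
height_px = 11 * dpi             # 3300 px
channels = 4                     # C, M, Y, K
bytes_per_channel = 2            # 16-bit colour

size_mb = width_px * height_px * channels * bytes_per_channel / 1024**2
print(f"{size_mb:.0f} MB")       # 128 MB, before any layers or alpha channels
```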

post #63 of 201
Thank you benanderson89 for elaborating on that. I looked that workstation up at Dell.com and found the picture to be funny (with the SF movie on the screen).



All fun aside, yes, there are definitely people out there maxing their RAM. Some wish for OS X to go beyond 96GB, for good reason. Sounds crazy, but then again, we have crazy cameras nowadays, like the D800 that shoots 36MP, resulting in an uncompressed raw of 75MB. A lossless compressed 14-bit raw is around 40MB.

If you're gonna work on these in Photoshop, you might hit that 96GB boundary.
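Those D800 numbers line up with simple arithmetic on the raw sensor data (36.3MP is the D800's nominal resolution; container overhead for previews and metadata pushes real uncompressed NEF files up toward the quoted 75MB):

```python
# Raw sensor payload of a Nikon D800 frame: roughly 36.3 million
# photosites at 14 bits each, before any container overhead.
pixels = 36.3e6
bits_per_pixel = 14

payload_mb = pixels * bits_per_pixel / 8 / 1e6
print(f"~{payload_mb:.0f} MB")   # ~64 MB of pure sensor data
```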
post #64 of 201
Quote:
Originally Posted by hmm 
Distributed computing has been around for a while

I see it being less like that and more like a co-processor. In much the same way you'd add external GPUs to a Mac Pro using a PCI extender but whichever works best.

This wouldn't in any way be the normal setup, it would be the exception. A single 10/12-core Ivy Bridge Xeon is going to be pretty fast on its own and decent enough value for $4000. Instead of the best value starting around $4000, that's where it ends.
Quote:
Originally Posted by hmm 
Regarding thunderbolt and PCI slots, the problem was the lack of integrated graphics. It had nothing to do with PCI slots. The chip depended on specific logic board placement, and the certification requirements made integrated graphics the way to go.

Surely they can connect a desktop GPU directly to the TB controller though, it just has to be a more restrictive design, which is what I'd suggest. They could add a separate GPU onto the motherboard I suppose but it's not going to be used for much unless they ship entry MPs without add-on GPUs at a lower price.
Quote:
Originally Posted by benanderson89 
You've removed the second processor, crippling the machine in the high-end market

The single Ivy Bridge CPU it uses could have the same number of cores they have now.
Quote:
Originally Posted by benanderson89 
you've removed the PCIe slots, meaning it can no longer be upgraded with extra expansions boards

You can get an external PCI box but the preferred route would be Thunderbolt peripherals.
Quote:
Originally Posted by benanderson89 
you're removed a drive bay leaving only three (making almost all RAID configurations useless if three drives are employed)

The OS has to go on one of them anyway. If you have RAID 0+1 or 10, your OS is on a RAID 0 set, which isn't a good idea. RAID 5 is supported with 3 drives. Ideally they'll ship these with SSD blades/Fusion drives too, so you'd still technically have 4 drives.
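For what it's worth, three bays still allow a redundant setup; the usable capacities of the common levels can be sketched like this (the 4TB drive size is just an example):

```python
def raid_usable_tb(level, drives, drive_tb):
    """Simplified usable capacity for a few common RAID levels."""
    if level == 0:
        return drives * drive_tb        # striping, no redundancy
    if level == 1:
        return drive_tb                 # mirroring
    if level == 5:
        return (drives - 1) * drive_tb  # one drive's worth of parity
    raise ValueError(f"unsupported level: {level}")

# Three 4TB drives:
print(raid_usable_tb(5, 3, 4))  # 8  TB usable, survives one drive failure
print(raid_usable_tb(0, 3, 4))  # 12 TB usable, no redundancy at all
```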
Quote:
Originally Posted by benanderson89 
you've removed the top of the machine above the drive bays that held both the ODDs and the PSU, meaning no space left for extras such as a card reader.

SD card readers are tiny, it would go next to the USB ports on the front.
Quote:
Originally Posted by benanderson89 
Your choice of putting the PSU behind the processor and ram daughter board means that cooling has been compromised as there is now no exhaust fan at the back of the machine

The PSU doesn't take up the full width (or depth from this view) of the machine - it's not a 1kW PSU any more, I didn't show that very well in the image. The air would flow past the gap in front of it. You can see the available depth when they pull out the giant heatsinks here:

http://www.youtube.com/watch?v=SQWnrXt5XFQ&feature=player_detailpage#t=154s

Those massive heatsinks shouldn't be needed with the new Sandia heatsink design mentioned above either.
Quote:
Originally Posted by benanderson89 
Smaller fans would have to be employed for the middle of the tower running at a higher RPM

Possibly if they leave the design like that. The GPU doesn't have to be like that though. I wouldn't expect them to maintain the layout like I've done, that was just to show roughly what a reworking of the internals can do.
Quote:
Originally Posted by PhilBoogie 
Photographers disagree. Read this piece on a person using Photoshop to the max, so to speak. Easily needing 80GB, wishing for more.

He didn't quite max out 64GB but a few extra layers might do it. Working with multiple 16bpc 22MPixel images and saving a 24GB PSD file isn't representative of a widespread need though - he even said it was for comparisons, so presumably he loaded a whole stack of images in as layers to see the differences. Photoshop isn't meant for that. But one processor supports 6 slots anyway, so if they went this route, Apple could support 96GB. Photoshop shouldn't use that much RAM for doing this. Either they need to figure out how to keep layers compressed in RAM or use an intelligent proxy system.
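To put rough numbers on that (a back-of-the-envelope sketch assuming uncompressed 16-bits-per-channel RGB layers; real PSD layer storage compresses and adds overhead):

```python
def layer_bytes(megapixels, channels=3, bits_per_channel=16):
    """Uncompressed in-memory size of one image layer."""
    return megapixels * 1e6 * channels * (bits_per_channel // 8)

per_layer = layer_bytes(22)              # one 22MP, 16bpc RGB layer
print(per_layer / 2**20)                 # ~126 MiB per layer
layers_to_fill = 64 * 2**30 / per_layer  # layers needed to exhaust 64GB
print(round(layers_to_fill))             # 521 — hundreds of layers before 64GB fills
```

Which is why "a few extra layers" only tips you over 64GB if you're already stacking hundreds of them, or if the application keeps multiple uncompressed copies around.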

Photoshop has various caching features; they even use cache tiles:

http://helpx.adobe.com/photoshop/kb/optimize-performance-photoshop-cs4-cs5.html

but when you look at Google Maps, it's a set of photos of the entire world and it runs in a web browser. You aren't doing any filters there, but Photoshop should be able to load only as much as it needs for the zoom level you're at and let you work with effectively infinite-resolution images. Any processing should be done directly to disk.
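The Google Maps comparison is essentially a tile pyramid: keep the image at successively halved resolutions and only load the tiles covering the current viewport at the current zoom. A minimal sketch of that bookkeeping (the 256px tile size and this function are illustrative assumptions, not Photoshop's actual cache scheme):

```python
import math

TILE = 256  # tile edge in pixels (a common choice; hypothetical here)

def tiles_needed(image_w, image_h, view_w, view_h, zoom):
    """Number of tiles to load for a viewport at a given zoom level.

    zoom=0 is full resolution; each level halves both dimensions."""
    level_w = max(1, image_w >> zoom)
    level_h = max(1, image_h >> zoom)
    # The viewport can never need more tiles than the level contains.
    cols = min(math.ceil(level_w / TILE), math.ceil(view_w / TILE) + 1)
    rows = min(math.ceil(level_h / TILE), math.ceil(view_h / TILE) + 1)
    return cols * rows

# A 20000x15000 image viewed in a 1920x1080 window:
print(tiles_needed(20000, 15000, 1920, 1080, 0))  # 54 (just the viewport)
print(tiles_needed(20000, 15000, 1920, 1080, 5))  # 6 (the whole zoomed-out level)
```

Either way the working set is dozens of tiles, not the full image, which is the point being made about RAM.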

As you rightly say, people work with what they've got, as they've done over the years, even on machines with less power than modern mobile phones. The other thing to remember is that Apple isn't out to sell customers a machine that satisfies their needs once and for all. They still want to sell more machines.

They'd be better off selling Cubes so that people will want a new one the following year. It creates growth. Sure some people might switch to Windows workstations but they can do that now. Apple doesn't sell dual E5-2687W workstations.

Think about the following spec:

- 10-core Ivy Bridge 3.1GHz
- 4/6 RAM slots up to 64/96GB RAM
- 3GB Radeon 8970 GPU, possibly fixed design
- 4/6 20Gbps TB ports
- Fusion drive option with up to 12.7TB total storage over 3 drives
- 8" Cube best case, worst case 8"x14.5"x14.5"

$3999

I think that's a pretty good workstation machine. If you need slots, buy an $800 PCI box or get Thunderbolt equivalents. It would have the glossy black Apple logo on the side but smaller.
post #65 of 201
Quote:
Originally Posted by benanderson89 View Post


 

Given the sheer pixel density of modern cameras, it's very possible that a photographer could use that much RAM in a heartbeat. I draw colour comics in Photoshop, and I have no trouble reaching the 32GB+ mark of Real Memory on my MacPro after several solid hours toiling away over a Wacom tablet. If I was doing professional grade work with CMYK and/or print-ready files, that would easily top 64GB+. A CMYK or print-ready file (300DPI+, 16-bit colour or better and/or very high resolution) can be hundreds of megabytes in size, some even a gigabyte, and this is just the file on disk!

Scratch disks always built up over a number of hours. With adequate RAM you're just storing the information directly in memory. Anyway, I've dealt with CMYK files in the past. I'm not sure why you'd feel the need to retain 16bpc there; CMYK spaces tend to be pretty locked down, so you shouldn't run into banding issues. Hundreds of megabytes in size is nothing. You can deal with 2GB (on disk) files comfortably on modern hardware. It was possible a decade ago; it's just that you wouldn't have used 16bpc modes, you didn't have CPU-intensive things like smart objects, and you didn't assemble large spherical HDRI imagery directly in PS (it didn't even support the Radiance format).

Digidlloyd talks things up a bit at times, but he does provide file sizes for reference and lists the actions applied. When you look at a 16bpc 15-20k image and run a set of intense tasks, it really lets you see stratification within the lineup, and it is nice to deal with things in real time without lag. It's just silly to suggest it's otherwise unworkable. I mentioned that Adobe suggests 48GB of RAM for a 12-core machine or 64GB for a 16-core if you want to retain maximum performance, especially during rendering. After Effects is the most memory-intensive application they publish. Photoshop isn't as bad, even in CMYK or indexed color with 10k files.
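The "hundreds of megabytes" on disk checks out with simple arithmetic (a sketch for a flattened, uncompressed raster; real PSDs compress and carry layer overhead, so actual sizes vary):

```python
def print_file_bytes(width_in, height_in, dpi=300, channels=4, bpc=16):
    """Flattened, uncompressed raster size for a print-ready image.

    channels=4 models CMYK; bpc is bits per channel."""
    w_px = width_in * dpi
    h_px = height_in * dpi
    return w_px * h_px * channels * (bpc // 8)

# A 24x36 inch poster at 300DPI, 16-bit CMYK:
size = print_file_bytes(24, 36)
print(size / 2**20)  # ~593 MiB uncompressed
```

So a single flattened 16-bit CMYK poster sits in the hundreds-of-megabytes range before any layers are added, which is large but nowhere near exhausting a 64GB machine on its own.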

 

Quote:
Originally Posted by Marvin View Post


He didn't quite max out 64GB but a few extra layers might do it. Working with multiple 16bpc 22MPixel images and saving a 24GB PSD file isn't representative of a widespread need though - he even said it was for comparisons, so presumably he loaded a whole stack of images in as layers to see the differences. Photoshop isn't meant for that. But one processor supports 6 slots anyway, so if they went this route, Apple could support 96GB. Photoshop shouldn't use that much RAM for doing this. Either they need to figure out how to keep layers compressed in RAM or use an intelligent proxy system.

 

 

You can't save a 24GB PSD file. I don't know how much of that bulk is layers or if it's compressed, but PS has a limit on size. Once you go over 2GB on disk, you have to save in .PSB (large document format) anyway. My primary use these days would be stitching spherical HDR images. You could do a lot of these things all the way back in the G4/G5 days, but it was a lot slower. At that time 16bpc was also uncommon in PS. It's overrated anyway; 32bpc is useful if you need to work with linear data.

Quote:

Photoshop has various caching features, they even use cache tiles:
http://helpx.adobe.com/photoshop/kb/optimize-performance-photoshop-cs4-cs5.html
but when you look at Google Maps, it's a set of photos of the entire world and it runs in a web browser. You aren't doing any filters but Photoshop should be able to only load as much as it needs for the zoom level you are at and let you work with infinite resolution images. Any processing should be done directly to disk.
As you rightly say, people work with what they've got as they've done over the years, even with machines with less power than modern mobile phones and the other thing to remember is that Apple isn't out to sell customers a machine to satisfy their needs. They still want to sell more machines.
They'd be better off selling Cubes so that people will want a new one the following year. It creates growth. Sure some people might switch to Windows workstations but they can do that now. Apple doesn't sell dual E5-2687W workstations.

 

I was going to avoid that topic, but Photoshop can run in a lot of ways. If you load it with RAM and set its memory allocation high, it will use it. People still worked on huge files for movie posters and the like prior to 64-bit Photoshop. It's just that today it's practical to let it cache more data to RAM rather than scratch disks if the resources are available. It's not the leanest application out there, but memory prices are low. For anyone dealing with large files, I'd just say max out the RAM and, if necessary, add an SSD after that.

 

 

Quote:
Think about the following spec:
- 10-core Ivy Bridge 3.1GHz
- 4/6 RAM slots up to 64/96GB RAM
- 3GB Radeon 8970 GPU, possibly fixed design
- 4/6 20Gbps TB ports
- Fusion drive option with up to 12.7TB total storage over 3 drives
- 8" Cube best case, worst case 8"x14.5"x14.5"
$3999

I'm not sure there's a good way to implement more than a single Thunderbolt chip, and each chip supports 2 ports according to everything I can find. In the past Apple has limited the number of "specialty" ports; they never had more than 3 FireWire ports. This assumes we'll see Thunderbolt right now. It could skip a generation, as it would have little impact on the overall shape of the case. The best way to implement it remains integrated graphics, which you wouldn't have on the E5 you mentioned.

You're still stuck on engineering it to be smaller. At a $4000 price point, I don't see how that could possibly matter unless you're going the Lenovo C20 route and trying to fit more in a server rack. It's actually more expensive than some of the others. When you engineer specifically for size, it costs money. The cost of materials to build a large aluminum case is paltry compared to the cost of making something as compact as possible while retaining performance parts. I think you just like to imagine this stuff. You told me you like new/innovative solutions before. I get that, but I think your abstract concepts are misaligned with the priorities of such a machine.

I'm also skeptical of the Radeon drivers when the rest of the lineup has moved to NVidia for now. The ability to leverage OpenCL and CUDA is important when it comes to reaching the widest market possible. If you're offering a fixed graphics solution that is limited to AMD, you've limited your market yet again.

 

 

Quote:
I think that's a pretty good workstation machine. If you need slots, buy an $800 PCI box or get Thunderbolt equivalents. It would have the glossy black Apple logo on the side but smaller.

 

You knew this was nonsense logic when you typed it. I'm really puzzled by this. At $4000, the price is a large determining factor in your potential market. You're looking for users with either bleeding-edge requirements or complex needs, and this solution does basically nothing to drive anything forward. If you're looking for a design that must last for a number of years, GPU computation must be part of the core design, not something allocated to third-party generic boxes and a limited set of cards with hit-or-miss OS X support. Apple really needs the widest market possible. If they're limiting it to a high price point, the worst thing they could do would be to drive away anyone who can afford it. I think that kind of pricing plus artificially imposed limitations would be enough to finally kill the line through negative growth.

 

I should add that most of the people who want a smaller case assume that this would directly lower the price. It's a false assumption in terms of product positioning. Apple could lower the price if they wanted to, and they would sell more. Using thick aluminum doesn't contribute more than a few dollars to the material cost, and any initial setup costs should have been covered long ago given the age of the outer shell design.

post #66 of 201
Quote:
Originally Posted by PhilBoogie View Post

Quote:
Originally Posted by z3r0 View Post

192GB+ RAM

Lion only sees 96GB (don't know if ML will). Strangely, Windows uses all 128GB under Boot Camp.
Last I knew the limit was still 96 GB. It is not something I have to worry about, though it's obviously important to others.
Quote:
OWC 16GB Memory Modules for 2009/2010 Mac Pro — 48GB / 96GB in Mac Pro


Quote:
Originally Posted by Marvin View Post

No one will ever need more than 64GB of RAM. You heard it here first.



Photographers disagree. Read this piece on a person using Photoshop to the max, so to speak. Easily needing 80GB, wishing for more.
My photographic interests are amateur at best, but I have noticed performance problems due to a lack of RAM. As a side note, the Mac may suffer from a lack of RAM, but on iOS devices the lack of RAM is a critical issue.
Quote:
I really enjoy this thread. Don't have time to discuss everyones' take on the subject, but I do have one thing to say:

I think pros use whatever tool is available to get the job done. You'd think someone like Phil Collins would sound different if he played on a different drum kit?
Actually, yes, he would sound different. You may have picked a poor craft to base your argument on, because musicians can be downright obsessive about their instruments. A good one, anyway, will hear things that fly right past me. At times they will have preferences for instruments based on the song they wish to play.
Quote:
No, it's not the tool that makes the sound; it's the artist creating whatever he wants. A photographer doesn't blame his gear if the picture doesn't look good to him. There are people creating far better pictures taken with their cellphone than others do with a (D)SLR.
That is one-sided; there is a real world of optics and physical effects associated with light that does impact a picture. The wrong lens can mess up a picture as badly as a poor composition. Admittedly a good photographer selects the lens best suited to what he wants to achieve, but even then sometimes what is seen in the viewfinder never makes it to film. Well, at least not as intended.
Quote:
Which, for example, means that a Pro won't care that much if there aren't any drivebays in the next model; they'll get external (TB) storage if needed.
Almost every pro photographer you come across is working with some sort of external storage array, often more than one. Often this is because it ends up being easier to manage external devices rather than internal ones. Plus, external devices can go off-site with a laptop. Generally there are significant advantages associated with external arrays.
Quote:
Thanks to everyone for their great posts, especially wizard69 and Marvin.

If they do make a smaller box and some folks need to hook up their external devices that might create desktop clutter:
Cute.

One point here: for the last few years I've been using a MBP as my primary computer. As such I've had, or have, a bit of a rat's nest of devices plugged into the laptop. Except for one external that's always there, I don't see a desktop reducing the amount of wires significantly no matter the size of the box. Video monitors, audio cables and what have you would still be plugged into the machine. Done right, a disk array would be designed to mate nicely with the compute module and hardly be noticeable.
post #67 of 201
Quote:
Originally Posted by wizard69 View Post

…you call yourself a professional but can't see the importance of these standards, so it makes me wonder just what you are, profession-wise.

I am a recording engineer. In my post, I made several references to audio recording, and included a link to PCI audio hardware made by Mark of the Unicorn. I stand by what I said: USB 3.0 and Thunderbolt are fine, but that is not how we—nor anyone else in the industry—gets multiple channels of 192kHz audio onto a hard drive. We have a frightfully large investment in PCI hardware because that's how this works.

post #68 of 201
Quote:
Originally Posted by hmm 
I'm not sure there's a good way to implement more than a single thunderbolt chip

For 4 ports, I reckon they'd get away with a 4-lane 20Gbps per lane controller. This would let you run 2 displays and have 2x 20Gbps ports free. If they can implement two controllers or have 8 lanes, that would be the preferred setup.

I wonder if the Thunderbolt setup has been the major cause of the current state of the Mac Pro because there was really no other reason to avoid Sandy Bridge.

They can obviously avoid using it in the Mac Pro but they've put so much effort into pushing it as a standard. Leaving it out of their most expensive machine doesn't seem like a sensible idea.
Quote:
Originally Posted by hmm 
You're still stuck on engineering it to be smaller. At a $4000 price point, I don't see how that could possibly matter unless you're going the lenovo C20 route and trying to make more fit in a server rack.

At $4000 the price is a large determining factor in your potential market. You're looking for users with either bleeding edge requirements or complex needs, and this solution does basically nothing to drive anything forward. If you're looking for a design that must last for a number of years, gpu computation must be a part of the core design, not something allocated to third party generic boxes and a limited set of cards with hit or miss OSX support.

Apple really needs the widest market possible. If they're limiting it to a high price point, the worst thing they could do would be to drive away anyone that can afford it. I think that kind of pricing + artificially imposed limitations would be enough to finally kill the line due to negative growth.

The Mac Pro is $6200 at the top-end just now. The new one would be $4000 with a single 10/12-core CPU. It would start at $2499 with a 6-core.

When you say it's not driving anything forward, what exactly does leaving the machine unchanged do? You're not getting any more GPUs in there with a 300W power limit on the slots. Just put one high-end GPU in and computation is covered.

The thing about supporting legacy technology is that if you leave it in, people keep investing in it. Then after a few years, people say 'you can't remove that because I have loads of hardware based on it now'. It's like the edit-to-tape thing in FCPX. If you leave it in, people just stick with the same workflow and then after 10 years, they'll say they've been using it for 10 years so they still can't change it.

Apple can leave the design the same if they want, and what would happen is that it appeals to exactly the same number of people the current one does. The quad-core would still be $2499 and nobody would want it because it's a huge box at a high price for just a quad. The $4000+ models are too expensive.

The 'Cube' would be:

6-core Ivy Bridge $2499
8-core $2999
10-core $3499
12-core $3999

Each with an option for a high-end GPU like the 8970 or GTX 780, each with 4-6 RAM slots, each with 4-6 Thunderbolt ports.

It's more cost-effective because you aren't putting in a huge PSU or optical drives and cabling, you use single CPUs.

This segment is dying whether people like it or not, it may as well go out with style.
post #69 of 201
Quote:
Originally Posted by Marvin View Post


For 4 ports, I reckon they'd get away with a 4-lane 20Gbps per lane controller. This would let you run 2 displays and have 2x 20Gbps ports free. If they can implement two controllers or have 8 lanes, that would be the preferred setup.
I wonder if the Thunderbolt setup has been the major cause of the current state of the Mac Pro because there was really no other reason to avoid Sandy Bridge.
They can obviously avoid using it in the Mac Pro but they've put so much effort into pushing it as a standard. Leaving it out of their most expensive machine doesn't seem like a sensible idea.
The Mac Pro is $6200 at the top-end just now. The new one would be $4000 with a single 10/12-core CPU. It would start at $2499 with a 6-core.
When you say it's not driving anything forward, what exactly does leaving the machine unchanged do? You're not getting any more GPUs in there with a 300W power limit on the slots. Just put one high-end GPU in and computation is covered.

I misinterpreted part of your post. I thought you meant just offer a $4000 model, and I'm not sure that would bring in enough volume to be considered sustainable for Apple. It could be completely different for a smaller vendor.

Apple has supported dual GPUs in their CTO options before, although they weren't as power hungry. If they see this as a growing use case, it would make sense to design a box that could adequately cool a couple of them without tacking additional coolers directly onto the card. Using 2 might require power cables. As you mentioned, the slots have a 300W limit. This is something that is currently better accommodated by some of the PC vendors. They make significant design updates more often, and more of them have been going this route with the current generation of machines.

It makes sense to look at opportunities for growth if they're going to keep making this line. Growth requires real improvements across the line. Right now the higher-end Mac Pros have greatly outpaced the lower ones, which have remained somewhat stagnant.

 

I forgot to include this before. Ivy Bridge E5s don't make it any easier to implement Thunderbolt. What have you read that suggested otherwise? The platform doesn't integrate Thunderbolt at a native level, and it doesn't provide integrated graphics. Consider that they may not have devoted resources to the project, especially after repeated delays from Intel. There was likely no Mac Pro team to work on it. While I still see perfectly valid use cases for such a machine, I thought they'd either update or cancel it. As for Thunderbolt, I don't think Apple cares beyond their own peripherals. It was initially placed where they had a Mini DisplayPort connection. In either case you could plug in a display, and they built out from that. It allowed them to offer integration that wasn't otherwise possible on the notebooks and iMacs, which obviously make up the bulk of Mac sales.

 

Quote:

 

The thing about supporting legacy technology is that if you leave it in, people keep investing in it. Then after a few years, people say 'you can't remove that because I have loads of hardware based on it now'. It's like the edit-to-tape thing in FCPX. If you leave it in, people just stick with the same workflow and then after 10 years, they'll say they've been using it for 10 years so they still can't change it.
Apple can leave the design the same if they want and what would happen is that it appeals to exactly the same amount of people the current one does. The quad-core would still be $2499 and nobody would want it because it's a huge box and price for just a quad. The $4000+ models are too expensive.
The 'Cube' would be:
6-core Ivy Bridge $2499
8-core $2999
10-core $3499
12-core $3999
Each with an option for a high-end GPU like the 8970 or GTX 780, each with 4-6 RAM slots, each with 4-6 Thunderbolt ports.
It's more cost-effective because you aren't putting in a huge PSU or optical drives and cabling, you use single CPUs.
This segment is dying whether people like it or not, it may as well go out with style.

 

If the pricing strategy remains somewhat similar, you'll still attract most of the same people. It needs to attract some kind of growth, especially by driving faster repurchasing cycles. People buying these probably update every 2-4 years, and buying the new one with 15% faster x86 cores doesn't do much. It usually means that anything bound by machine time gets done a bit faster, which doesn't describe the overall picture for most of these users. The other circumstance is that the machine you already own is limiting, so any amount of additional speed is welcome. I've been interested in things like CUDA because they open up newer workflows; After Effects probably would not have implemented a raytracer if they were still bound to x86 cores. I agree that the $2500 model should have shifted to a hex-core to maintain some kind of value. I don't agree with the idea of starting with a form factor and determining what will fit within it when building a workstation, as opposed to determining what should be included and designing around that.


Edited by hmm - 11/30/12 at 1:58pm
post #70 of 201
Quote:
Originally Posted by PhilBoogie View Post


There are quite a few people posting this very request: a smaller MP, in between the iMac and MP. Knowing Apple, they never cease to amaze people. You might get your wish, but I doubt it. Because:
iMac starts at $1299, Mac Pro starts at $2499. Say they want to release a mid-Mac, if you will. I think that price will need to be in between there, $1899.

I would camp outside an Apple store and pay $1500 for a mid-sized Mac that had an i7 processor, some expansion (2 hard drives plus room for an optical drive for folks like me who still use it) and didn't have a built-in monitor.

The Mac Pro is simply overkill in size, price and processor horsepower.

But the mini and the iMac are too restricted. Built in monitor? No way. Not for me. Mini with no expansion? No way. Both are desktop (stationary where a little bit of size and weight isn't that big of a deal) but have been shrunk so much that you can't even get an optical drive in one.

 

$1500 mid sized, some expansion, sans built in monitor Mac and I'm all over it.

post #71 of 201
Quote:
Originally Posted by MacTac View Post

I would camp outside an Apple store and pay $1500 for a mid-sized Mac that had an i7 processor, some expansion (2 hard drives plus room for an optical drive for folks like me who still use it) and didn't have a built-in monitor.

The Mac Pro is simply overkill in size, price and processor horsepower.

But the mini and the iMac are too restricted. Built in monitor? No way. Not for me. Mini with no expansion? No way. Both are desktop (stationary where a little bit of size and weight isn't that big of a deal) but have been shrunk so much that you can't even get an optical drive in one.

 

$1500 mid sized, some expansion, sans built in monitor Mac and I'm all over it.

The time to implement such a machine would have been a little after they moved to Intel, to grab some of that market. It's been in a growth slump, which tends to be a bad thing for new products, especially at a company the size of Apple. I'm not discounting the advantages of the form factor; I just don't expect to see this. The idea of the Mac Pro becoming an xMac and falling to this price level is really unlikely.

The people who criticize the Mac Pro as overkill often ignore that the base model is very xMac-like in its hardware choices. The hardware used there isn't that expensive, and sharing a backplane adds negligible costs. In fact, the daughterboard design was likely a cost-cutting measure to avoid using more expensive dual-package parts in the single-CPU version they split off as of 2009. Really, it wouldn't be much cheaper to build the xMac than the bottom Mac Pro, and the performance would be pretty similar. People just misunderstand the cost of that model. It's there because Apple wanted it there, and they use it to maintain minimum volume for the rest of the line. If you don't believe me, look up the CPU costs at the launch of the $800 quad mini and the Mac Pro at the last "refresh". They drop a $300 CPU option into the $2500 Mac Pro, and they aren't paying the cost of a full dual-package board thanks to the daughterboard design. As for thick aluminum and extra drive bays, they don't add much: it's cheaper aluminum, just thick, and things like drive bays and other random features are commonly found in xMac-like machines on Newegg.

What all of you really want is the base Mac Pro in a smaller case for $1000 less, and this would require a change in philosophy for the machine rather than a reduction in costs.

post #72 of 201
Quote:
Originally Posted by hmm 
Buying the new one with 15% faster x86 cores doesn't do much. It usually means that anything bound by machine time gets done a bit faster, which doesn't describe the overall picture for most of these users. The other circumstance is that the machine you already own is limiting, so any amount of additional speed is welcome.

So you mean they have to get round Intel's slow upgrade cycle somehow like by allowing you to buy multiple machines and easily hook them together to get guaranteed performance scaling with both CPU and GPU. Yeah, that's a good idea. I wonder how they'd connect them together to allow you to control a CPU and GPU in a plug and play manner though.
Quote:
Originally Posted by hmm 
I don't agree with the idea of starting with a form factor and determining what will fit within it when building a workstation as opposed to determining what should be included and designing around that.

It can never be one or the other. They don't design an iMac and then worry if it can just take a ULV processor with integrated graphics.

They can start with an 8" Cube and see what they can fit inside. If it's too small, they try something a bit bigger like an 8.1" Cube.

There are 3 real options:

- similar design and just throw in Ivy Bridge at the same price points after 3 years
- radical redesign with better performance per dollar that will work for the next few years
- no more Mac Pro

Given what's happened, I think they were about to drop it. You can see this trend everywhere. No big company wants to be in the tower market any more.

The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.
post #73 of 201
Quote:
Originally Posted by Marvin View Post


So you mean they have to get round Intel's slow upgrade cycle somehow like by allowing you to buy multiple machines and easily hook them together to get guaranteed performance scaling with both CPU and GPU. Yeah, that's a good idea. I wonder how they'd connect them together to allow you to control a CPU and GPU in a plug and play manner though.

This isn't quite how I see it. I've mentioned several things such as performance per dollar, the potential range of use cases, and what upgrades actually enable. These are aimed at users with either demanding workloads or atypical requirements; I think we can agree on that much. They're not really designed for distributed computing as a primary use. You could try that, and it might work if you were looking to harvest extra cycles from underutilized machines in a larger shop driven by Mac Pros. It's also true that as they become more powerful, they could take on some workloads that were previously dedicated to clusters, assuming those workloads hit flat growth and workstations have now caught up.

I was merely pointing out that the most cost-effective improvement to a wide range of workloads is currently tied to computation allocated to the GPU. The comparisons are generally a CUDA card against a 12-core Mac Pro, and even with the testing biases, it presents strong performance per dollar. Take a look at NVidia's propaganda. What's really interesting is how well it works at various levels. Adobe certainly didn't optimize in favor of CPU computation, but the results are still really impressive. Looking at where things will go moving forward, I see that as a better thing to address at a core level. The gains are likely to be much stronger, and it would be something to drive growth in the Mac Pro as the software matures.

If you're just looking to build a server farm, there are cheaper methods. Do you see Apple as a company that would research ways to daisy-chain Mac Pros at a plug-and-play level if they really were recently considering the line's cancellation? This isn't the kind of thing they've pursued since the Xserve. Given their professed interest in OpenCL and recent migration back to NVidia, I assumed a strong focus on GPGPU to align better with their current path.

 

Quote:
It can never be one or the other. They don't design an iMac and then worry if it can just take a ULV processor with integrated graphics.
They can start with an 8" Cube and see what they can fit inside. If it's too small, they try something a bit bigger like an 8.1" Cube.
There are 3 real options:
- similar design and just throw in Ivy Bridge at the same price points after 3 years
- radical redesign with better performance per dollar that will work for the next few years
- no more Mac Pro

You're leaving out the potential for a late Sandy Bridge E. If you look at Westmere, they still used some Nehalem options; it could be the same thing here, or mixed either way. If they're waiting for Ivy, these drivers will likely never be used: the AMD 8XXX will be out by the time we have Ivy Bridge E5s. NVidia seems like a better option anyway. I think leaving the Mac Pro as the only computer without CUDA options is just a terrible move for its future when it needs to soak up as much volume as possible. No more Mac Pro makes less sense to me unless they were undecided; they could have cancelled it when they announced new products, or sunset it like they did with the Xserve. I only see a redesign if they think they can capture new customers with it. This is the whole point of what I've mentioned. Even in workstations, not everything is a $10k/seat configuration. They've been going kind of cheap on x86 cores with just a quad in the $2500 model. If that is the continued direction, they need something else to prop up its value.

 

 

Quote:

 

Given what's happened, I think they were about to drop it.

 

I agree. Pushing it back to next year likely means that no one was allocated to work on it.


Quote:
You can see this trend everywhere. No big company wants to be in the tower market any more.
The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.

I think the last two lines there are more your imagination, although it does make these discussions more interesting.

post #74 of 201
Quote:
Originally Posted by hmm 
I've mentioned several things such as performance per dollar, potential range of use cases, and what upgrades actually enable. These are aimed at users with either demanding workloads or atypical requirements.

I was merely pointing out that the most cost effective improvement to a wide range of workloads is currently tied to computation allocated to the gpu.

But you seem to be suggesting that if they offer better performance per dollar with multiple GPUs, they'd need to include more double-wide slots and increase the power limit of the slots. They'd have to double that power allocation: bigger PSU, more slots, possibly a bigger case, on top of the standard prices. You would be able to get a quad-core with 2 GTX 780s, which would be more cost-effective for certain tasks than a dual-processor machine but still pricey and not that appealing.
Quote:
Originally Posted by hmm 
They're not really designed for distributed computing as a primary use.

I love the quote "at a price of $5.2m practically anyone can build a supercomputer". Yeah, anyone with $5.2m. In terms of today's machines, 50 little Ivy Bridge cubes would match the performance of that cluster. If you add in GPU computing, it could be as few as 10.

I don't see it as a primary use but still a significant one. Think how many people deal with video ingesting/transcoding. If Apple can work with Adobe to get AE renders done in a distributed way, that's huge.
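That kind of distributed render ultimately comes down to partitioning a frame range across machines. A rough sketch of the scheduling half in Python (the function name and the contiguous-chunk scheme are made up for illustration, not any real Adobe or Apple API):

```python
# Hypothetical sketch: splitting a render job's frame range across worker
# machines, the way a distributed AE render would have to farm out work.

def split_frames(start, end, workers):
    """Partition the inclusive frame range [start, end] into one
    contiguous chunk per worker, spreading any remainder evenly."""
    total = end - start + 1
    base, extra = divmod(total, workers)
    chunks = []
    frame = start
    for i in range(workers):
        count = base + (1 if i < extra else 0)
        if count == 0:
            continue  # more workers than frames
        chunks.append((frame, frame + count - 1))
        frame += count
    return chunks

# Example: 100 frames across 8 machines.
# First chunk is (1, 13), last is (89, 100); every frame covered once.
print(split_frames(1, 100, 8))
```

Each machine then renders only its chunk, and the controller stitches the output back together; the hard parts in practice are shipping assets and licensing, not the math.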
Quote:
Originally Posted by hmm 
Do you see Apple as a company that would research ways to daisy-chain Mac Pros at a plug-and-play level if they really were recently considering its cancellation?

Yes. Look at what happened with the Mini. It went ages without an update and people just assumed that was it but then BAM, they came out with a totally rethought, redesigned model that in itself is a work of art. This says to me that deep down in their darkest workshops, they are retooling everything to build a next-gen Mac Pro. It might not be built the way I suggest but I would be surprised if it remained largely unchanged.
Quote:
You're leaving out the potential for a late Sandy Bridge E.

There's no potential for this. Why would they use Sandy Bridge when Ivy Bridge E is a few months away? Q3 is right around WWDC:

http://www.engadget.com/2012/10/17/intel-roadmap-reveals-10-core-xeon-e5-2600-v2-cpu/

If you're going to make a new machine, you might as well use the newest parts instead of making people wait 3 years for last year's CPUs.
Quote:
Quote:
You can see this trend everywhere. No big company wants to be in the tower market any more.

The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.
I think the last two lines there are more your imagination, although it does make these discussions more interesting.

HP doesn't want to be in the market; they were going to sell off their entire desktop business. Both Dell and HP, the biggest companies in this sector, are severely struggling:

http://www.zdnet.com/dell-hp-and-the-folly-of-the-consumer-pc-business-7000003072/
http://www.zdnet.com/server-sales-slow-but-dell-shows-growth-hp-ibm-tied-for-no-1-7000003427/
http://www.wired.com/wiredenterprise/2012/09/29853/

While they do better in the enterprise, if they cut the consumer market, their business solutions will suffer too because their volumes will shrink to a fraction of what they are now. The server market will go the custom build route possibly with ARM to save millions for big companies and Tesla and similar will take over for compute. Dell and HP are dead in the water.

The workstation market is tiny and over time it will merge into the AIO market. This is one thing HP actually gets right. I know people are going to rattle off the usual 'always needs' like internal RAID, multiple GPUs, internal expansion but just give it another few years.

Custom PCI cards are designed to do things the computer is too slow to do natively, this is going to change. GPUs will reach a point very quickly where you just won't need multiple GPUs for real-time use and the rest will be server-side. For IO expansion, everything is going to standardise around USB 3 and Thunderbolt (or some other form of external PCI).
post #75 of 201
Quote:
Originally Posted by Marvin View Post

The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.

But the Cube failed as well, selling a mere 125,000 units IIRC. I do like your thought on Cubes in the Cube!
I’d rather have a better product than a better price.
post #76 of 201
Quote:
Originally Posted by Marvin View Post


But you seem to be suggesting that if they offer better performance per dollar with multiple GPUs, they'd need to include more double-wide slots and increase the power limit of the slots. They'd have to double that power allocation: bigger PSU, more slots, possibly a bigger case, on top of the standard prices. You would be able to get a quad-core with 2 GTX 780s, which would be more cost-effective for certain tasks than a dual-processor machine but still pricey and not that appealing.

Actually, workstations don't always run the hottest GPUs. That's mostly unique to Apple, though even Apple has allowed two cards in the 150W range on several prior Mac Pros. A lot of workstation GPUs are clocked lower, but they can perform significantly better under certain circumstances. It varies to a degree, which is why a lot of people used GTX 580s for CUDA processing; supposedly the Teslas held up better with double-precision math, and obviously you have more RAM available there. I expect integrated graphics may eventually show up on some E5 variants. The E5 equivalent of Haswell is the earliest I'd expect that if they go that route; Broadwell is more likely, as I don't expect them to go up in core counts indefinitely. So far that strategy hasn't worked perfectly: a lot of these algorithms weren't written to be split among so many threads, and algorithm development tends to be largely academic. If you ever read SIGGRAPH articles, you'll find a lot of PhD dissertations referenced in them.


Anyway, I think you're a bit imaginative in the way you envision these things. In ten years, a lot of things that exist as primary computing devices today may operate more like thin clients, but it's just too early to gauge with 100% accuracy. People claim the cloud will take over in every regard, yet it's useless without the proper infrastructure and back-end programming.


Quote:
I love the quote "at a price of $5.2m practically anyone can build a supercomputer". Yeah, anyone with $5.2m. In terms of today's machines, 50 little Ivy Bridge cubes would match the performance of that cluster. If you add in GPU computing, it could be as few as 10.

I don't see it as a primary use but still a significant one. Think how many people deal with video ingesting/transcoding. If Apple can work with Adobe to get AE renders done in a distributed way, that's huge.

I could see distributed computing in a larger shop if it could be worked out as a way to harvest extra CPU cycles from workstations that are idle or under-utilized at the time. Apple lacks any kind of efficient infrastructure for this though, and they've been moving away from this direction; in the G5 era, Xgrid was supported. There was a reason for the hardware choice in some of these clusters, although I can't remember what it was, and that video doesn't really go much into the logic behind it.


Quote:
Yes. Look at what happened with the Mini. It went ages without an update and people just assumed that was it but then BAM, they came out with a totally rethought, redesigned model that in itself is a work of art. This says to me that deep down in their darkest workshops, they are retooling everything to build a next-gen Mac Pro. It might not be built the way I suggest but I would be surprised if it remained largely unchanged.

I don't necessarily view this the same way. The Mini still aligned well with Apple's future goals. In the case of the Mac Pro, they had plenty of time to redesign for Sandy Bridge E if they wanted to. Even when they released Westmere, Sandy Bridge E was already looking pretty far out; at that time it was scheduled for the first quarter, with E5s late in the third quarter, so the opportunity was there. It's more likely that they simply did not allocate any kind of team to such a project, then later decided they didn't want to cancel it. They have pushed out two redesigns this year, so it could happen. I don't agree with your list of priorities though; the historical price points of this line dictate its markets.


Quote:
There's no potential for this. Why would they use Sandy Bridge when Ivy Bridge E is a few months away? Q3 is right around WWDC:

http://www.engadget.com/2012/10/17/intel-roadmap-reveals-10-core-xeon-e5-2600-v2-cpu/

If you're going to make a new machine, you might as well use the newest parts instead of making people wait 3 years for last year's CPUs.

You're ignoring what I mentioned before: Apple may not have a starter option within Ivy Bridge E. Look at the Mac Pro now; the lowest option is Nehalem from 2009. The low end from Westmere would have been the 2.4, and it was still above their price target, and possibly not compatible with a single-package board. The other thing I would mention is that Intel's release dates are not necessarily accurate. Sandy Bridge E5s officially launched in early March, but most OEMs weren't shipping until early July; typically the supercomputer vendors get first pick in terms of contract purchases. Q3 could easily mean machines shipping in December by Intel's math. Apple likely has access to more information on this matter, which is why I've said the possibility of a late Sandy Bridge E still exists. I'm not denying that would be an incredibly screwed-up release cycle.


Quote:
HP doesn't want to be in the market; they were going to sell off their entire desktop business. Both Dell and HP, the biggest companies in this sector, are severely struggling:

http://www.zdnet.com/dell-hp-and-the-folly-of-the-consumer-pc-business-7000003072/
http://www.zdnet.com/server-sales-slow-but-dell-shows-growth-hp-ibm-tied-for-no-1-7000003427/
http://www.wired.com/wiredenterprise/2012/09/29853/

While they do better in the enterprise, if they cut the consumer market, their business solutions will suffer too because their volumes will shrink to a fraction of what they are now. The server market will go the custom build route possibly with ARM to save millions for big companies and Tesla and similar will take over for compute. Dell and HP are dead in the water.

The workstation market is tiny and over time it will merge into the AIO market. This is one thing HP actually gets right. I know people are going to rattle off the usual 'always needs' like internal RAID, multiple GPUs, internal expansion but just give it another few years.

Custom PCI cards are designed to do things the computer is too slow to do natively, this is going to change. GPUs will reach a point very quickly where you just won't need multiple GPUs for real-time use and the rest will be server-side. For IO expansion, everything is going to standardise around USB 3 and Thunderbolt (or some other form of external PCI).


What they wanted to ditch was their consumer desktop business. The margins on their Z-series workstations are quite high, and they already built the Z1 as an AIO design and priced it pretty aggressively. Well outfitted, it comes out around $5k, and higher if you go with the DreamColor display, which is necessary if you're coming from NEC/Eizo. HP's markups on upgrades have always been extremely high; if I bought one, I'd probably wait for a configuration I like to show up among the standardized configurations because of this.


Regarding the server market, it's not all a trend to ARM.


http://www.wired.com/wiredenterprise/2011/11/server-world-bermuda-triangle/

http://www.wired.com/wiredenterprise/2012/09/29853/


Note the two links. The first is regarding Facebook and others directly negotiating server purchases with ODMs. The second relates to companies like Google building their own servers.


If anything, x86 has grown in the server market. I don't expect them to simply ignore ARM, but viewing this as a foregone conclusion without even examining what ARM gives up in favor of power efficiency is really illogical. The two designs remain far enough apart that I'm not sure how it will end.

post #77 of 201
Quote:
Originally Posted by PhilBoogie 
But the Cube failed as well, selling a mere 125,000 units IIRC.

That's what I was getting at: some things deserve a second chance. People wanted the Cube when it came out, but it was hard to justify the price next to the larger workstation, and the cooling method it used, along with cracks in the plastic case, just didn't sit well. They'd design it properly this time.

The price will put people off just like the Mac Pro but it would be unique. Right now the Mac Pro is like all the other machines out there but bigger, heavier and expensive. It has better cooling but if that power and more could fit into something you could pick up with one hand, it's more impressive.

I don't see the downsides of the smaller design. It can take a 10/12-core chip for a lower price. It can hold a good amount of RAM. Storage might be a bit tight but even with 2 drives + SSD, it can handle ~8TB of storage. It can hold a very fast GPU. All that's missing is PCI expansion but there are external options and it's probably going to have a minimal effect.

If more people buy the Pro for the slots than the cores, then sticking with the slots and ignoring TB is the way to go but I think the Mac Pro's area of emphasis needs to stop being expansion and start being performance per dollar.

The Mini is about being small and entry level, the iMac is simple and has a great display bundled, the Mac Pro needs to be a powerhouse for creative tasks. A quad-core 2009 CPU and 5770 for $2499 is terrible value and a drop-in upgrade would leave it that way.
Quote:
Originally Posted by hmm 
Apple may not have a starter option within Ivy Bridge E.

It would all be Ivy Bridge, none of this selling old CPUs on the low end. It's about performance per dollar. They would design it and price it in a way that lets them use the latest architecture in the whole lineup. The latest 6-cores would be around $500-600, and they can absorb the extra $300 partly from the margins but also from the redesign (no optical, smaller PSU, etc.).
Quote:
Originally Posted by hmm 
the possibility of a late Sandy Bridge E still exists

If that's the case, why not release it now; what are they waiting for? Furthermore, why do it a few months after what they did at WWDC? It doesn't add up. It seems like they couldn't decide on which direction to take it next. They've probably been going over the same discussions we have about how they actually get TB support in there and if they need to bother with it.

It has to be something that stopped them redesigning around Sandy Bridge. They purposely avoided it. It's not as if they had a change of heart because of WWDC, the Mac Pro release coincided with it. For whatever reason, Sandy Bridge just didn't work for them. They might need a better TB controller like Redwood Ridge or Falcon Ridge or if they are going with single CPUs, they need chips that scale to 10/12-cores, which only Ivy Bridge offers.

I think the big reveal will happen at WWDC 2013. That's the audience for it.
post #78 of 201
Quote:
Originally Posted by Marvin View Post


It would all be Ivy Bridge, none of this selling old CPUs on the low end. It's about performance per dollar. They would design it and price it in a way that lets them use the latest architecture in the whole lineup. The latest 6-cores would be around $500-600, and they can absorb the extra $300 partly from the margins but also from the redesign (no optical, smaller PSU, etc.).

We aren't in disagreement about what they could do; it just contradicts their previous behavior. I'm not sure what their strategy is at the moment. Their current pattern is to inch up pricing and cut costs, which is pretty much a product death spiral. It could be that they foresee the iMac cannibalizing enough in the future to phase this out in another generation or two, but I'm not sure. I don't disagree that they could eat the price increase; it's just that Intel cut the price of 6-core CPUs long ago, and they only repriced it at WWDC, to $3000. I'd expect that price point to most likely carry over to Sandy or Ivy.


Quote:

If that's the case, why not release it now; what are they waiting for? Furthermore, why do it a few months after what they did at WWDC? It doesn't add up. It seems like they couldn't decide on which direction to take it next. They've probably been going over the same discussions we have about how they actually get TB support in there and if they need to bother with it.
It has to be something that stopped them redesigning around Sandy Bridge. They purposely avoided it. It's not as if they had a change of heart because of WWDC, the Mac Pro release coincided with it. For whatever reason, Sandy Bridge just didn't work for them. They might need a better TB controller like Redwood Ridge or Falcon Ridge or if they are going with single CPUs, they need chips that scale to 10/12-cores, which only Ivy Bridge offers.
I think the big reveal will happen at WWDC 2013. That's the audience for it.


We discussed the possibility that it was initially slated for cancellation; if that's the case and they had no one working on it, it would make sense. I thought Ivy Bridge only went to 10 cores? Anyway, I've never seen workstation boards morph in between generations like that. The chipsets right now are the same ones they'll have available then, and you get less mileage from them when skipping the first generation. This means they'd have to fabricate a new board for Haswell E5s, and I wonder if we'll see those before 2015. Going with a single-package 10-core system would offer some explanation, but I'm still not seeing it. My best guess is low priority plus a change of plans; at this point they can't be making much off the line. You really think an Ivy machine will be ready by WWDC? Even with delayed shipping dates, that seems unlikely to me given the disparity between Intel's "launch dates" and actual shipping dates from OEMs.


I'm still curious when Apple will support OpenCL 1.2. If only I could get Snow Leopard with OpenCL 1.2: it's much leaner than Lion and ML. The one problem is the inability to leverage newer things.
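For what it's worth, the OpenCL spec defines the CL_PLATFORM_VERSION string as "OpenCL &lt;major&gt;.&lt;minor&gt; &lt;platform-specific information&gt;", so code that wants 1.2-only calls like clEnqueueFillBuffer has to parse that string at runtime. A minimal sketch of the version gate (the helper names are mine, and the example strings are illustrative; only the string format comes from the spec):

```python
# Parse a CL_PLATFORM_VERSION string of the form
# "OpenCL <major>.<minor> <platform-specific information>".

def opencl_version(version_string):
    """Extract (major, minor) from a CL_PLATFORM_VERSION string."""
    parts = version_string.split(" ", 2)
    prefix, number = parts[0], parts[1]
    if prefix != "OpenCL":
        raise ValueError("not an OpenCL version string")
    major, minor = number.split(".")
    return int(major), int(minor)

def supports_1_2(version_string):
    """True if the platform reports OpenCL 1.2 or newer."""
    return opencl_version(version_string) >= (1, 2)

# Illustrative strings; a 1.1-era platform fails the gate, 1.2 passes.
print(supports_1_2("OpenCL 1.1 (Apr  9 2012)"))  # False
print(supports_1_2("OpenCL 1.2 CUDA 4.2.1"))     # True
```

On a real platform you'd feed this the result of clGetPlatformInfo(..., CL_PLATFORM_VERSION, ...) rather than a literal.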

post #79 of 201
Quote:
Originally Posted by Marvin View Post

Quote:
Originally Posted by PhilBoogie 
But the Cube failed as well, selling a mere 125,000 units IIRC.

That's what I was getting at: some things deserve a second chance. People wanted the Cube when it came out, but it was hard to justify the price next to the larger workstation, and the cooling method it used, along with cracks in the plastic case, just didn't sit well. They'd design it properly this time.
The Cube was an interesting, even desirable machine, but Apple really screwed up the pricing. Badly! In fact the Cube was so poorly priced that it turned me off Apple for a very long time; I eventually replaced my Mac Plus with a series of Linux machines because of it. It was the proverbial straw that broke the camel's back, so to speak, as I could never see a price justification for that machine, nor for many that came between the Plus and the Cube.

The Cube wasn't a bad idea though. Something similar today would probably work well, provided it was big enough to house the capability of the Mac Pro. Notably, the Mini is a far better "Cube" than the Cube ever was. So a modern-day Cube would need to be able to house both high-performance CPUs and GPUs, and that at the very least means a bigger box.
Quote:
The price will put people off just like the Mac Pro but it would be unique. Right now the Mac Pro is like all the other machines out there but bigger, heavier and expensive. It has better cooling but if that power and more could fit into something you could pick up with one hand, it's more impressive.
Price does kill the Mac Pro, even though many don't want to admit it. The problem is that the Mac Pro is really only attractive to people who configure it in rather high-performance configurations, which means they are by definition less sensitive to pricing. At the low end, for people looking for really good midrange performance and maybe a decent GPU, the machine is a joke and way overpriced. Thus the terrible sales.
Quote:
I don't see the downsides of the smaller design. It can take a 10/12-core chip for a lower price. It can hold a good amount of RAM. Storage might be a bit tight but even with 2 drives + SSD, it can handle ~8TB of storage. It can hold a very fast GPU. All that's missing is PCI expansion but there are external options and it's probably going to have a minimal effect.
I really don't see the problem either. For the most part, that is for the majority of users, the Mac Pro is one big box of dead air. The only thing to argue about is the need for PCI-Express slots: such a shrunken machine needs at least a couple, as external connections would never be fast enough, nor have the reliability some users want. As for bulk storage, it simply doesn't belong inside a CPU box anymore; there are now multiple ways to interface such hardware and still maintain the performance required.
Quote:
If more people buy the Pro for the slots than the cores, then sticking with the slots and ignoring TB is the way to go but I think the Mac Pro's area of emphasis needs to stop being expansion and start being performance per dollar.
The two aren't mutually exclusive! You can have slots in a small box and at the same time target performance per dollar. The problem is that certain segments of industry will always need some sort of application accelerator that can only really be leveraged in a high-performance slot.
Quote:
The Mini is about being small and entry level, the iMac is simple and has a great display bundled, the Mac Pro needs to be a powerhouse for creative tasks. A quad-core 2009 CPU and 5770 for $2499 is terrible value and a drop-in upgrade would leave it that way.
Terrible isn't the word for it; it is downright highway robbery. We need to look deeper though, to try to determine where those high prices come from and what can be done to reduce them. I still see the first order of business as ripping everything out of the box that can't be justified in a high-performance module. Thus anything SATA related must go; that is almost a third of the box and motherboard right there.
Quote:
Quote:
Originally Posted by hmm 
Apple may not have a starter option within Ivy Bridge E.

It would all be Ivy Bridge, none of this selling old CPUs on the low end. It's about performance per dollar. They would design it and price it in a way that lets them use the latest architecture in the whole lineup. The latest 6-cores would be around $500-600, and they can absorb the extra $300 partly from the margins but also from the redesign (no optical, smaller PSU, etc.).
Apple does have configuration issues that make the low-end machines very unappealing.

As to Ivy Bridge, I just don't see it in a new 2013 Mac Pro. Frankly it really isn't worth waiting for; at least it doesn't justify pissing off your loyal customer base. Instead I see something from the Xeon Phi lineup going into the machine: not so much the accelerator chips already released or announced, but rather a main-CPU Phi of the sort that has been rumored. In other words, a chipset that allows Apple to implement a dramatically different Mac Pro, and something they might see as justifying good margins.

To put it another way, they need something that makes people say wow, something that changes the mindset as to the Mac Pro's value. Frankly, if they rolled out yet another Ivy Bridge based Pro machine in the same mold as the current Pros, I don't see a lot of NEW users rushing to embrace it. New being the key word, as to remain viable the new Pro needs to pull in many more new users.
Quote:
Quote:
Originally Posted by hmm 
the possibility of a late Sandy Bridge E still exists

If that's the case, why not release it now; what are they waiting for? Furthermore, why do it a few months after what they did at WWDC? It doesn't add up. It seems like they couldn't decide on which direction to take it next. They've probably been going over the same discussions we have about how they actually get TB support in there and if they need to bother with it.
The same logic more or less applies to an Ivy Bridge based machine, or it will once a stable of Ivy based Xeons is out. Things like TB and other technologies are really pushing us toward dramatically different Mac Pro architectures. The thing is, what is taking so long? It's a hard question to answer, but nothing available Ivy Bridge wise really seems to justify the long delay in a new architecture.
Quote:
It has to be something that stopped them redesigning around Sandy Bridge. They purposely avoided it. It's not as if they had a change of heart because of WWDC, the Mac Pro release coincided with it. For whatever reason, Sandy Bridge just didn't work for them. They might need a better TB controller like Redwood Ridge or Falcon Ridge or if they are going with single CPUs, they need chips that scale to 10/12-cores, which only Ivy Bridge offers.
To this I agree, but I really can't see anything compelling in Ivy Bridge Xeons either. Think about it: Apple risked many customers by releasing that "bump" machine a few months ago. Does Ivy Bridge justify that? Nope! At least I can't see anything so compelling in Ivy Bridge that I'd risk my customer base waiting for it. This is why I expect something different; who knows, Apple could be partnering with Intel on a Xeon Phi specific to Apple's needs. All I do know is that they must have something compelling up their sleeves to justify all the foot-dragging and non-updates we have gotten.
Quote:
I think the big reveal will happen at WWDC 2013. That's the audience for it.

I was thinking February. I suppose another half year doesn't mean much if you haven't done a real update in four years, but the customer base is getting itchy. In any event, you would think we would be hearing leaks or rumors rather soon.
post #80 of 201
Originally Posted by wizard69 View Post
I was thinking February. I suppose another half year doesn't mean much if you haven't done a real update in four years, but the customer base is getting itchy. In any event, you would think we would be hearing leaks or rumors rather soon.


You don't consider Nehalem -> Westmere a real update?

Originally Posted by Marvin

The only thing more insecure than Android’s OS is its userbase.