That's because by being open source, Linux is extremely flexible.
Linux is flexible, but his post was BS. Many companies won't even consider a Linux-based machine in their shops. Again, this comes back to my point that this board is filled with extremist viewpoints. I use Linux at home, but it isn't even worth suggesting at work, nor would many other companies nearby bother with it.
I kind of wonder how many of them are potential Mac users. I was suggesting that these notions were due to software availability. Shake and Final Cut Pro were, for all practical purposes, Mac-only applications. I know Shake had IRIX and Windows NT support, but Apple bought them out. QuarkXPress started on the Mac. Photoshop started on the Mac. A few other desktop publishing programs started there as well, establishing a major presence. A few turnkey solutions existed prior to applications like those, but they were extremely costly. Autodesk also ported Smoke to OS X from their Linux turnkey solution. I'm just saying these perceptions are from a prior era, as Apple actively catered to these markets.
Apple still does! However, third-party vendors would be foolish to remain Mac-only for many of their products.
Regarding notebook hardware, its percentage gains have outpaced some of the gains at the workstation level in recent years, especially on a fixed budget. A lot of software lacks n-core scaling, even in some of the markets commonly discussed here.
Some software does suffer in that regard. Likewise, some doesn't, but that is hardly the point these days. Multiple cores allow many apps to run at the same time without suffering. That can be huge for many users.
A lot of it hits a wall past 4 cores and still holds onto single-threaded processes in some areas. This is an overall problem for the general health of the workstation market. Even if you can benefit from greater speeds, it doesn't mean you necessarily benefit from ever-increasing core counts.
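The "wall past 4 cores" is basically Amdahl's law: as long as part of the work stays single-threaded, extra cores buy less and less. A minimal sketch (the 75% parallel fraction is purely illustrative, not a measurement of any real app):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only part of the work is parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# An app that is 75% parallel can never exceed 4x, no matter the core count.
for n in (1, 2, 4, 8, 16, 64):
    print(n, round(amdahl_speedup(0.75, n), 2))
```

Run it and you'll see the curve flatten hard after 4 cores, which is exactly the pattern described above.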
It really isn't as huge a problem as people make it out to be.
Sandy Bridge-E is a little better in that the core-count/clock-speed trade-offs aren't as severe at the 8-12 core level, but damn, those CPUs run hot.
The trade-off between clock speed and heat will go on forever. What is interesting is that we are nearing 4 GHz clocks in desktop machines these days. That is almost a full GHz over what was possible a couple of years ago. This means apps bound to a single thread are still seeing speed-ups, which is why I say it isn't an issue now.
In any event, apps like those are yesterday's issues. The interesting apps of tomorrow will be highly threaded. The question is what will grab the community and become successful. Imagine the power of Siri right on your desktop, or modeling and designing the machines of the future in real time, or a genetics workstation app. Lots of cores will make such things possible and economical.
Think about it: pro-level hardware is always at the bleeding edge. Even if the average desktop were fanless and cool to the touch, pro users would be demanding far better performance. With performance comes heat.
I am going to throw this curveball into the equation (and I'm sure Intel is working on it): how long will it take for a pro-level desktop that runs fairly cool to be developed?
Never!
This isn't always true. Xeons are validated for 24/7 use, and the operating temperatures considered within spec for constant use are significantly lower than those that trigger thermal shutdown. It's normal to have a beefy cooling system in place. Workstation GPUs, and compute-oriented parts like the Tesla processors, are often clocked lower; they also draw less power and run much cooler. Gaming GPUs might die if put through those kinds of workloads. Yes, parts tailored for bleeding-edge workloads can run hot, but temperature is relative: you need lower temperatures for reliable service under constant use.
Quote:
Originally Posted by Winter
Thankfully there's smcfancontrol, right? *laughs*
I used to use it, but it's hard to get just right, and I found it difficult to verify whether it altered the normal fan settings after installation, even once uninstalled. People claimed it did, and I meant to test that but never got around to it. I will say there's always aftermarket thermal paste if you're trying to bring temperatures down a few degrees; I'm considering doing that on a couple of older machines. You just need all the tools handy and a good method of keeping the screws organized. Otherwise it could end badly.
This isn't always true. Xeons are validated for 24/7 use, and the operating temperatures considered within spec for constant use are significantly lower than those that trigger thermal shutdown.
I think you missed the point: machines built for pro usage run hotter than lower-performance machines on the same generation of chips. When someone says workstation, it is rational to assume it will run hotter than a run-of-the-mill office PC.
It's normal to have a beefy cooling system in place.
Of course it is, because they run hotter.
Workstation GPUs, and compute-oriented parts like the Tesla processors, are often clocked lower; they also draw less power and run much cooler. Gaming GPUs might die if put through those kinds of workloads. Yes, parts tailored for bleeding-edge workloads can run hot, but temperature is relative: you need lower temperatures for reliable service under constant use.
I have no doubt that running a chip cooler makes it more reliable. However, those big heat sinks on Tesla cards are not there for looks.
I used to use it, but it's hard to get just right, and I found it difficult to verify whether it altered the normal fan settings after installation, even once uninstalled. People claimed it did, and I meant to test that but never got around to it. I will say there's always aftermarket thermal paste if you're trying to bring temperatures down a few degrees; I'm considering doing that on a couple of older machines. You just need all the tools handy and a good method of keeping the screws organized. Otherwise it could end badly.
Aftermarket thermal paste just improves thermal transfer; the power going into the chip stays the same or even increases. Why would it increase? Better thermal transfer means more headroom for the chip to clock up, putting more power into it. It is certainly good to see a chip run cooler, but remember that the same heat still ends up in the room. If you are concerned about power usage, the route to take is lower-power chips.
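The point about paste versus power is just the steady-state thermal equation: junction temperature is ambient plus power times thermal resistance, so better paste lowers the chip's temperature without removing a single watt from the room. A quick sketch (the wattage and resistance values are illustrative, not from any datasheet):

```python
def junction_temp(power_w, r_theta_c_per_w, ambient_c=25.0):
    """Steady-state junction temperature: T_j = T_ambient + P * R_theta."""
    return ambient_c + power_w * r_theta_c_per_w

# Same 95 W chip; only the junction-to-ambient thermal resistance changes.
stock_paste = junction_temp(95, 0.45)   # mediocre factory application
better_paste = junction_temp(95, 0.35)  # aftermarket paste, lower resistance
print(stock_paste, better_paste)
```

The chip reads roughly ten degrees cooler with the better paste, yet the heat dumped into the room (95 W) is identical, which is exactly the argument above.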
Quote:
Originally Posted by wizard69
I think you missed the point: machines built for pro usage run hotter than lower-performance machines on the same generation of chips. When someone says workstation, it is rational to assume it will run hotter than a run-of-the-mill office PC.
Of course it is, because they run hotter.
I have no doubt that running a chip cooler makes it more reliable. However, those big heat sinks on Tesla cards are not there for looks.
Aftermarket thermal paste just improves thermal transfer; the power going into the chip stays the same or even increases. Why would it increase? Better thermal transfer means more headroom for the chip to clock up, putting more power into it. It is certainly good to see a chip run cooler, but remember that the same heat still ends up in the room. If you are concerned about power usage, the route to take is lower-power chips.
Tesla GPUs still don't quite reach the maximum power consumption of some gaming cards, and it's not just the heat sinks: these parts are set up to be run hard for hours or days at a time. Similar use could literally destroy a gaming card, as they can reach insane temperatures with their stock cooling. As for aftermarket paste, I'm aware of what it does; I considered it to see whether it would improve efficiency. Right now the readings for the CPU and its heatsink are quite far apart even though the sensors are close together, so I suspect these machines have the classic bad OEM thermal paste application. Perhaps I should have worded my statement differently, though: performance parts tend to generate more heat, but it's typical to run them at lower absolute temperatures than might be acceptable in something like a gaming machine.
Comments
Quote:
Originally Posted by Winter
70-80 degrees Fahrenheit idle, 100 degrees under heavy load.
That's a pretty small range there… Does the Mac Mini even idle at 100?
The first-gen (1 GHz Intel-based) Apple TV even ran at 104!
The next Mac Pro is Sandy Bridge Xeon
If that were the case, why the wait until 2013?
Quote:
Originally Posted by wizard69
If that were the case, why the wait until 2013?
Gotta hold onto the hope that it'll even exist…