Apple frees Mac OS X Leopard Server to run in virtual machines


Comments

  • Reply 21 of 50
    jeffdm Posts: 12,949 member
    Quote:
    Originally Posted by MiMiC View Post


    Yes, I would like to know this as well. Can someone give a real-world example of why this is needed? If you have to increase the processors/RAM/bandwidth anyway, why would you not just purchase another box? There has to be some overhead running in VM mode?



    There is some overhead. You're running several copies of the OS, so obviously, there's some seemingly unnecessary duplication going on.



    What it's also useful for is allowing you to provision one server into virtual servers, so each client gets their own server, configure the OS how they like and it doesn't alter the configuration of other servers. But if it costs $499 or $999 for each virtual server, I'm not sure how attractive that would be. Sure, it's competitive with Windows, but most of the virtual server host services I've seen offer Linux or BSD - which don't have a per-instance license fee.
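    A back-of-envelope sketch of that trade-off (the $999 unlimited-client license price comes from the discussion; the $6,000 host and $3,000 1U box prices are invented placeholders):

```python
# Compare one virtualization host carrying n licensed VMs against
# n standalone boxes, one license each. All hardware prices are
# illustrative assumptions, not real quotes.

def consolidation_cost(n_instances, license_price, host_price):
    """Total cost of one host running n_instances licensed VMs."""
    return host_price + n_instances * license_price

def separate_boxes_cost(n_instances, license_price, box_price):
    """Total cost of n_instances standalone servers."""
    return n_instances * (box_price + license_price)

print(consolidation_cost(4, 999, 6000))   # 9996
print(separate_boxes_cost(4, 999, 3000))  # 15996
```

    With a per-instance fee, the license cost scales the same either way, so consolidation only wins on the hardware side - which is the point about Linux/BSD hosts having no per-instance fee at all.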
  • Reply 22 of 50
    vinea Posts: 5,585 member
    Quote:
    Originally Posted by mstone View Post


    I can't imagine a scenario where I would want to run multiple copies of OS X on the same box. When I think of virtualization, the notion of having OS X, Windows and Unix all running on the same server might be more appealing.



    In terms of maximizing resource utilization as Mel mentioned, I prefer to have plenty of reserve capacity to handle unpredictable demand rather than trying to squeeze the last bit of performance out of a given machine.



    The new blade server concept is quite attractive because the servers are discrete machines which can be serviced/upgraded without bringing down the entire complex, unlike a virtualization environment. The one downside to IBM's blades is the installation expense for 220V power requirements, but that aside I like the management benefits of the shared chassis consolidation.



    But without more compelling benefits, I would not see our business being significantly enhanced by OS X virtualization.



    We have both blades and virtualization servers and both have benefits and drawbacks.



    The primary advantage of virtualization is the ability to configure a number of VMs on a 4-way or larger machine. Each is independently configured and isolated from the others. A 4U, 4-processor virtualization server can represent a good number of boxes.
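    As a sketch of that consolidation (the VM CPU reservations are made-up examples; the packing is a plain greedy first-fit):

```python
# Greedy first-fit packing of VM CPU reservations onto hosts, showing
# how one 4-processor box can absorb a rack of lightly loaded services.

def pack(vms, host_cpus):
    """Return how many host machines the VM reservations need."""
    hosts = []                      # remaining free CPUs per host
    for need in vms:
        for i, free in enumerate(hosts):
            if free >= need:        # fits on an existing host
                hosts[i] -= need
                break
        else:
            hosts.append(host_cpus - need)  # open a new host
    return len(hosts)

vms = [0.5, 0.25, 1.0, 0.5, 0.75, 0.25, 0.5]  # seven light services
print(pack(vms, host_cpus=4))  # 1 -> all seven fit on one 4-way box
```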



    A blade server gives you higher density but without virtualization each blade may or may not be overkill for any particular application/service. Plus the blade enclosure represents a much higher up front cost than traditional servers...in addition to the 220 power requirements. An IBM BladeCenter enclosure is the cost of several servers by itself.



    If you don't need the compute density then going with virtualization is much less expensive and gives you greater flexibility over blades. For downtime, you can always move the VMs to another machine and fail over to them.



    Multiple instances of OSX Server are useful if you want to have, say, a production environment and a development environment or multiple development environments on the same Xserve.



    And you can't get OSX on a blade anyway.
  • Reply 23 of 50
    hmurchison Posts: 12,294 member
    Quote:
    Originally Posted by vinea View Post




    And you can't get OSX on a blade anyway.



    For now.....muahhahahahahahahahaha



    j/k I don't know if Apple will ever design a Blade Server but I sure do love to

    dream about it. I'd love for them to develop a Media Blade system focused on

    their clients with HPC and Media Production needs.



    I could just imagine a time 5 years from now when Final Cut Studio production would

    consist of a small team all hooked up to a Media Blade system editing and processing

    4k video, multitrack audio and Motion Graphics simultaneously over 10G connections

    and banks of Xserve RAID.
  • Reply 24 of 50
    vinea Posts: 5,585 member
    Quote:
    Originally Posted by hmurchison View Post


    For now.....muahhahahahahahahahaha



    j/k I don't know if Apple will ever design a Blade Server but I sure do love to

    dream about it. I'd love for them to develop a Media Blade system focused on

    their clients with HPC and Media Production needs.



    I could just imagine a time 5 years from now when Final Cut Studio production would

    consist of a small team all hooked up to a Media Blade system editing and processing

    4k video, multitrack audio and Motion Graphics simultaneously over 10G connections

    and banks of Xserve RAID.



    Well you can get that big stack-o-minis that I've seen pictures of...oh here it is:

    http://www.macminicolo.net/

    too bad it really looks like this:

    [image: a rack-mounted stack of Mac minis]

    There is something Star Warsish about that picture...



    For a small team a stack of 1U servers is cheaper than any blade server. How many did you want? You can stack 7 in the space of a blade enclosure. That's only half the density of a blade server but hey...
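    The density arithmetic behind that, using the figures from the post (an IBM BladeCenter enclosure occupies 7U and holds up to 14 blades):

```python
# 1U "pizza box" servers vs. blades in the same 7U of rack space.
enclosure_height_u = 7
blades_per_enclosure = 14          # BladeCenter capacity
pizza_boxes = enclosure_height_u   # one 1U server per rack unit

print(pizza_boxes / blades_per_enclosure)  # 0.5 -> half the density
```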



    How loud is an XServe? We don't have one...
  • Reply 25 of 50
    hazkid
    I don't understand the use of 'first time'. It seems to me that the only restriction was the license agreement. There weren't any technological restrictions. So why is this a big deal? It's not like license agreements hold any importance.
  • Reply 26 of 50
    mdriftmeyer
    Quote:
    Originally Posted by vinea View Post


    We have both blades and virtualization servers and both have benefits and drawbacks.



    The primary advantage of virtualization is the ability to configure a number of VMs on a 4-way or larger machine. Each is independently configured and isolated from the others. A 4U, 4-processor virtualization server can represent a good number of boxes.



    A blade server gives you higher density but without virtualization each blade may or may not be overkill for any particular application/service. Plus the blade enclosure represents a much higher up front cost than traditional servers...in addition to the 220 power requirements. An IBM BladeCenter enclosure is the cost of several servers by itself.



    If you don't need the compute density then going with virtualization is much less expensive and gives you greater flexibility over blades. For downtime, you can always move the VMs to another machine and fail over to them.



    Multiple instances of OSX Server are useful if you want to have, say, a production environment and a development environment or multiple development environments on the same Xserve.



    And you can't get OSX on a blade anyway.



    Really? 220? You mean a business will have to add a separate 220V line draw? Gee. You have to do that if you want a washer/dryer, any useful machining, wood/metal lathes, welding and much more.



    Of course, this would be a non-issue if the US was 220V like 90% of the world.



    It costs me $1,400 to go from a 200A panel to a 400A panel upgrade.



    If I'm buying Blade Servers I'm expecting to generate a ROI. That power upgrade will be an infrastructure expense that I'll write down or slowly disperse back into the cost of my business services.
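    Folding that one-off upgrade into the service cost might look like this (the $1,400 figure is from the post; the five-year write-down period is an assumed example):

```python
# Straight-line write-down of a one-time panel upgrade into a
# per-month infrastructure charge. The service life is an assumption.
upgrade_cost = 1400.0
service_life_months = 5 * 12       # assumed 5-year write-down
monthly_charge = upgrade_cost / service_life_months

print(round(monthly_charge, 2))    # 23.33 per month
```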
  • Reply 27 of 50
    aplnub Posts: 2,596 member
    Quote:
    Originally Posted by hazkid View Post


    I don't understand the use of 'first time'. It seems to me that the only restriction was the license agreement. There weren't any technological restrictions. So why is this a big deal? It's not like license agreements hold any importance.



    Maybe because MS is saying you have to have a licensed copy for every VM you have with their OS on it. Apple says you don't?? I may be reading that all wrong but that is what I get from it. It shows Apple is not squeezing VM owners like MS is. I think...
  • Reply 28 of 50
    mcdave Posts: 1,200 member
    Quote:
    Originally Posted by melgross View Post


    Your term " curry favor" is way out of place.



    The correct interpretation is that they are finally, after several years, delivering a product that medium and large businesses, universities, and government have been telling Apple they need.



    This is supplying a product that they should have been supplying.



    Offering potential customers products they need for their business is simply the proper way to do business.



    If that were Apple's way they would have licensed OSX for general PCs or produced another Windows PC - isn't that what business 'needs'?



    Apple isn't renowned for pandering to popular perceptions of right & wrong in IT - hence no low-end Macs, the bundled iLife software to coerce usage, no expandable mini-towers. While the switch to Intel sounded plausible on the basis of supply, performance-per-watt & technology roadmap, you can't deny a convenient symptom was the hardware-specification-obsessed general public's ability to make a more direct comparison of Macs to current PCs (even to the point of justifying the performance enhancements by publishing generic SPEC benches whilst ignoring SIMD and real-world work). I'm not lamenting the PPC's demise; it's more the commercial mechanics of the switch which hint that Apple has changed tactics and adopted a stance of patronage like everyone else.



    As I say, I'm no server tech but the arguments above still don't seem to hold water. The point that virtualisation gives you the opportunity to build two server instances up front just in case you need to expand later, & the idea that running two OS instances uses fewer resources than one, don't make sense to me. The perceived security benefits are fine when selling hosted services but are they really necessary for a relatively secure & stable platform? Wouldn't standard user-based security be OK? The point about allocating resources seems fine, but in securing resources for each instance do the virtualisation solutions fully utilise all spare/available resources?



    On the topic of scalable web servers, surely Apple would have been better to make these services/apps (in fact most services/apps) Xgrid aware so you could bolt on extra Xserves as you would plug in extra HDs? And maybe an advanced version of process wizard for sys admins to allocate resources across applications/users would negate the need for load balancing virtual machines.



    This isn't intended to sound like judgement, I'm in no way qualified to give it. However, the observation that they appear to be currying favour, albeit to improve much needed credibility in the server space, still stands.



    McD
  • Reply 29 of 50
    melgross Posts: 31,977 member
    Quote:
    Originally Posted by McDave View Post


    If that were Apple's way they would have licensed OSX for general PCs or produced another Windows PC - isn't that what business 'needs'?



    Apple isn't renowned for pandering to popular perceptions of right & wrong in IT - hence no low-end Macs, the bundled iLife software to coerce usage, no expandable mini-towers. While the switch to Intel sounded plausible on the basis of supply, performance-per-watt & technology roadmap, you can't deny a convenient symptom was the hardware-specification-obsessed general public's ability to make a more direct comparison of Macs to current PCs (even to the point of justifying the performance enhancements by publishing generic SPEC benches whilst ignoring SIMD and real-world work). I'm not lamenting the PPC's demise; it's more the commercial mechanics of the switch which hint that Apple has changed tactics and adopted a stance of patronage like everyone else.



    As I say, I'm no server tech but the arguments above still don't seem to hold water. The point that virtualisation gives you the opportunity to build two server instances up front just in case you need to expand later, & the idea that running two OS instances uses fewer resources than one, don't make sense to me. The perceived security benefits are fine when selling hosted services but are they really necessary for a relatively secure & stable platform? Wouldn't standard user-based security be OK? The point about allocating resources seems fine, but in securing resources for each instance do the virtualisation solutions fully utilise all spare/available resources?



    On the topic of scalable web servers, surely Apple would have been better to make these services/apps (in fact most services/apps) Xgrid aware so you could bolt on extra Xserves as you would plug in extra HDs? And maybe an advanced version of process wizard for sys admins to allocate resources across applications/users would negate the need for load balancing virtual machines.



    This isn't intended to sound like judgement, I'm in no way qualified to give it. However, the observation that they appear to be currying favour, albeit to improve much needed credibility in the server space, still stands.



    McD



    First of all, a company has choices as to where they decide their customers are. They can choose to go to a particular market or not.



    Your statements notwithstanding, all companies make these choices. Don't pick on Apple.



    IBM just sold their PC division. Are you going to scold them as well?



    Sun doesn't make home machines. How about them?



    Where should we go with this?



    Apple sees their customers as being outside the medium/large corporations, and also government.



    It's just possible that they are seeing an advantage to slowly moving in that direction. This could be a very cheap move in that direction. If it works out well, Apple could take another asked-for step. We don't know.



    I, for one, am very excited by this development.



    If you don't understand the way larger servers are used, then you should do some research before commenting. What you are saying is incorrect.



    Manufacturers of larger servers charge by the core count. But, these servers aren't always fully utilized. When paying for leasing and service, the cost is too high to allow any of the system to remain idle.
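    A quick sketch of why idle capacity hurts on per-core-priced machines (the per-core fee and core counts are invented for illustration):

```python
# Effective cost per busy core rises as utilization falls, which is
# why virtualization is used to keep big boxes loaded.

def cost_per_busy_core(cores, per_core_fee, utilization):
    """Per-core fee divided across the cores actually doing work."""
    return (cores * per_core_fee) / (cores * utilization)

print(cost_per_busy_core(16, 1000, 1.0))  # 1000.0 at full load
print(cost_per_busy_core(16, 1000, 0.5))  # 2000.0 when half idle
```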



    Larger servers are also upgradable. A larger machine may come with 4 cores, but can be upgraded to 16, or even, sometimes, 32.



    It's much cheaper to upgrade the mainframe of the unit than to buy numerous smaller units. It's also easier to administer.



    This is a complex subject, and others here have offered good explanations of the purpose.



    The fact is that with your own admitted lack of knowledge, you can't really understand what's being said, so your comments are inappropriate.
  • Reply 30 of 50
    mcdave Posts: 1,200 member
    Quote:
    Originally Posted by melgross View Post


    First of all, a company has choices as to where they decide their customers are. They can choose to go to a particular market or not.



    Your statements notwithstanding, all companies make these choices. Don't pick on Apple.



    IBM just sold their PC division. Are you going to scold them as well?



    Sun doesn't make home machines. How about them?



    Where should we go with this?



    Apple sees their customers as being outside the medium/large corporations, and also government.



    It's just possible that they are seeing an advantage to slowly moving in that direction. This could be a very cheap move in that direction. If it works out well, Apple could take another asked-for step. We don't know.



    I, for one, am very excited by this development.



    If you don't understand the way larger servers are used, then you should do some research before commenting. What you are saying is incorrect.



    Manufacturers of larger servers charge by the core count. But, these servers aren't always fully utilized. When paying for leasing and service, the cost is too high to allow any of the system to remain idle.



    Larger servers are also upgradable. A larger machine may come with 4 cores, but can be upgraded to 16, or even, sometimes, 32.



    It's much cheaper to upgrade the mainframe of the unit than to buy numerous smaller units. It's also easier to administer.



    This is a complex subject, and others here have offered good explanations of the purpose.



    The fact is that with your own admitted lack of knowledge, you can't really understand what's being said, so your comments are inappropriate.



    I'm not aware that I said Apple shouldn't go for this or any market, I'm just saying toeing the line isn't characteristic of Apple. Their approach to the consumer market hasn't exactly been one of compliance with popular trends; in fact, they seem to fly in the face of them. My point is there are a few recent instances where they appear to have conceded; the Intel switch, running Windows on a Mac, porting iTunes to Windows to improve popularity and gain new footholds. If this is how they're looking to improve ratings in the server market, fine! But unusual - they don't seem to have a point of difference with this one, they are just another box doing what others do but much later - more of a Microsoft tactic than Apple.



    I suppose a licensing change is a cheaper way to quick gains than any development but maybe that's the server market - less innovation, more stability hence the change of tack.



    McD
  • Reply 31 of 50
    melgross Posts: 31,977 member
    Quote:
    Originally Posted by McDave View Post


    I'm not aware that I said Apple shouldn't go for this or any market, I'm just saying toeing the line isn't characteristic of Apple. Their approach to the consumer market hasn't exactly been one of compliance with popular trends; in fact, they seem to fly in the face of them. My point is there are a few recent instances where they appear to have conceded; the Intel switch, running Windows on a Mac, porting iTunes to Windows to improve popularity and gain new footholds. If this is how they're looking to improve ratings in the server market, fine! But unusual - they don't seem to have a point of difference with this one, they are just another box doing what others do but much later - more of a Microsoft tactic than Apple.



    I suppose a licensing change is a cheaper way to quick gains than any development but maybe that's the server market - less innovation, more stability hence the change of tack.



    McD



    Apple's been "toeing the line" for a bunch of years now. In fact, once Jobs came back and tried to continue moving Apple in the closed fashion he and others did before, he realized that remaining proprietary wasn't working, and he began a big effort to turn the company around by dropping those proprietary standards and products. Once they began to drop their proprietary standards, they began to "toe the line".



    Did you prefer their special cables? Their own software standards no one else would support? HDDs with their own ROMs on board?



    The fact that they gave in to their customers by allowing any old CD/DVD burner to work? That was bad?



    How about OS X, which is filled with, and based on, open software, leaving their old System software behind?



    Where do you want to start with this? And where do you want to end?



    You have to understand good corporate policy. That means looking for a profitable customer base, and then doing what needs to be done to win them over.



    Apple has ignored that 50% of the market too long. It's about time they realize that and do something about it. It's the proper way to go.



    It has nothing to do with "toeing the line".



    Your examples of Apple moving to standards (Intel, etc.) are thrown out as though they are minor changes. They are not. They are intended to move Apple's marketshare up to higher levels, and it's working.



    They do have closed machines, but that's different.



    They still connect to USB, which many Apple fans thought would NEVER happen. They dropped their own serial ports for that. More "toeing the line".



    Companies don't want innovation in the server market. They want known values, and stability. If Apple can offer that to them, then part of the battle will be won.



    The rest, they don't seem to be prepared to give in to yet, though I'm sure hoping they do, and soon!
  • Reply 32 of 50
    mcdave Posts: 1,200 member
    Quote:
    Originally Posted by melgross View Post


    They do have closed machines, but that's different.



    How? At what point do you think compliance becomes significant? I think the adoption of those standards was a good thing, except that being standards-based and standards-compliant are two very different things - a bit like saying missing the bus by 20 seconds is better than by 2 minutes. Macs may use the same technologies but how they are used makes all the difference - they are not 'standard' PCs and that difference hasn't been conceded. Yes, they adopted USB but the wholesale approach and elimination of the old ports was very Apple and very different to the clutter that late-90s PCs had with 9-way, 25-way, PS2 & the odd USB port. Their prescriptive approach may sound harsh but customers ultimately benefited.



    Apple still have their points of difference but server virtualisation isn't an example of them. I don't think for a minute that 're-inventing the server' would go down well in a conservative area of the market but I would have expected them to leverage their tried & tested server technologies such as Xgrid to provide something new. "More clients? Need a faster web server? Plug in, switch on and upgrade with zero configuration". Though I suppose that would go down as well with infrastructure divisions as 'switch to Mac desktops and let half your level 1&2 staff go!' Hardly in keeping with CIO's empire-building!



    Are they learning about how to address markets or are they throwing in the towel on basic principles - have you noticed that silver boxes are really grey boxes with shiny finish?



    McD
  • Reply 33 of 50
    melgross Posts: 31,977 member
    Quote:
    Originally Posted by McDave View Post


    How? At what point do you think compliance becomes significant? I think the adoption of those standards was a good thing, except that being standards-based and standards-compliant are two very different things - a bit like saying missing the bus by 20 seconds is better than by 2 minutes. Macs may use the same technologies but how they are used makes all the difference - they are not 'standard' PCs and that difference hasn't been conceded. Yes, they adopted USB but the wholesale approach and elimination of the old ports was very Apple and very different to the clutter that late-90s PCs had with 9-way, 25-way, PS2 & the odd USB port. Their prescriptive approach may sound harsh but customers ultimately benefited.



    Apple still have their points of difference but server virtualisation isn't an example of them. I don't think for a minute that 're-inventing the server' would go down well in a conservative area of the market but I would have expected them to leverage their tried & tested server technologies such as Xgrid to provide something new. "More clients? Need a faster web server? Plug in, switch on and upgrade with zero configuration". Though I suppose that would go down as well with infrastructure divisions as 'switch to Mac desktops and let half your level 1&2 staff go!' Hardly in keeping with CIO's empire-building!



    Are they learning about how to address markets or are they throwing in the towel on basic principles - have you noticed that silver boxes are really grey boxes with shiny finish?



    McD



    By closed, I mean that you can't add boards, easily change CPUs, GPUs, etc.



    That's different from standards. There are no standards in that area. Other companies offer machines that are closed in that way.



    Being based on standards and being standards-compliant are pretty much the same thing from the user's standpoint.



    What is a "standard PC"?



    Is it one that runs Windows exclusively? Because that's the vast majority of machines out there. It's a de facto "standard". So therefore, you are right, the Mac isn't standard.



    But what about the Mac Pro? What about the fact that almost no portable machines allow easy interchangeability of their internals?



    As far as XGrid goes, that's very specialized. Third party software companies must rewrite their programs to work with that. Scientific programs, which it was designed for, often do support it now, as do some 3D rendering programs.



    What more do you expect? It doesn't work well for web servers, or transactional machines.



    I don't understand your last sentence. What have I been saying?
  • Reply 34 of 50
    mcdave Posts: 1,200 member
    Quote:
    Originally Posted by melgross View Post


    By closed, I mean that you can't add boards, easily change CPUs, GPUs, etc.



    That's different from standards. There are no standards in that area. Other companies offer machines that are closed in that way.



    Being based on standards and being standards-compliant are pretty much the same thing from the user's standpoint.



    What is a "standard PC"?



    Is it one that runs Windows exclusively? Because that's the vast majority of machines out there. It's a de facto "standard". So therefore, you are right, the Mac isn't standard.



    But what about the Mac Pro? What about the fact that almost no portable machines allow easy interchangeability of their internals?



    As far as XGrid goes, that's very specialized. Third party software companies must rewrite their programs to work with that. Scientific programs, which it was designed for, often do support it now, as do some 3D rendering programs.



    What more do you expect? It doesn't work well for web servers, or transactional machines.



    I don't understand your last sentence. What have I been saying?



    I think the opposite on standards-based & standards-compliant. Just because WMV was based on MPEG-4/ASP doesn't help the consumer when MPEG-4 players can't play the content. Being standards based only helps developers in literal terms (aside from reduced development costs if they find their way to the consumer).



    My point about Xgrid is that unspecialising it would have been an example of innovation. Factor it into OSX services & apps and you have the ability to expand a logical OS instance across multiple physical servers in the same way you can across multiple cores within a server - unvirtualisation if you will. Surely this would have answered iGuess's dilemma of expanding requirements more effectively than silo-ing apps in virtual servers up front. Wouldn't that be more like what we expect from Apple? Hence the point about whether they are really becoming another grey box, with a splash of glitter.



    I'd like to see Apple do well in all their markets and bring something to them that I like about Apple in mine but concession is a two-way street and that could bring mediocrity back the other way.



    McD
  • Reply 35 of 50
    melgross Posts: 31,977 member
    Quote:
    Originally Posted by McDave View Post


    I think the opposite on standards-based & standards-compliant. Just because WMV was based on MPEG-4/ASP doesn't help the consumer when MPEG-4 players can't play the content. Being standards based only helps developers in literal terms (aside from reduced development costs if they find their way to the consumer).



    My point about Xgrid is that unspecialising it would have been an example of innovation. Factor it into OSX services & apps and you have the ability to expand a logical OS instance across multiple physical servers in the same way you can across multiple cores within a server - unvirtualisation if you will. Surely this would have answered iGuess's dilemma of expanding requirements more effectively than silo-ing apps in virtual servers up front. Wouldn't that be more like what we expect from Apple? Hence the point about whether they are really becoming another grey box, with a splash of glitter.



    I'd like to see Apple do well in all their markets and bring something to them that I like about Apple in mine but concession is a two-way street and that could bring mediocrity back the other way.



    McD



    XGrid was specifically designed as a rendering tool. It can be used outside that box to a limited extent, but not to the extent you believe.



    I really do think you are missing what is actually going on.
  • Reply 36 of 50
    mcdave Posts: 1,200 member
    Quote:
    Originally Posted by melgross View Post


    XGrid was specifically designed as a rendering tool. It can be used outside that box to a limited extent, but not to the extent you believe.



    I really do think you are missing what is actually going on.



    Story of my life, Mel. I just never had Apple down as perpetuating the status quo, that's all. More reading for me I think.



    Speaking of threads running elsewhere, I've had enough of this one.



    Cheers & good night.



    McD
  • Reply 37 of 50
    vinea Posts: 5,585 member
    Quote:
    Originally Posted by mdriftmeyer View Post


    Really? 220? You mean a business will have to add a separate 220V line draw? Gee. You have to do that if you want a washer/dryer, any useful machining, wood/metal lathes, welding and much more.



    Of course, this would be a non-issue if the US was 220V like 90% of the world.



    It costs me $1,400 to go from a 200A panel to a 400A panel upgrade.



    If I'm buying Blade Servers I'm expecting to generate a ROI. That power upgrade will be an infrastructure expense that I'll write down or slowly disperse back into the cost of my business services.



    Yes, and a few server rooms aren't provisioned for 220 power to all locations in the center, which means electricians to run power to where you want it. Not a big deal but a potential annoyance. Not too many wood lathes or washer/dryers in a server room either.



    If you're buying blade servers you need the density. If you don't need the density you won't get an ROI over other alternatives. That's the point.
  • Reply 38 of 50
    vinea Posts: 5,585 member
    Quote:
    Originally Posted by McDave View Post


    As I say, I'm no server tech but the arguments above still don't seem to hold water.



    Then why do you feel qualified to say it doesn't hold water?



    Quote:

    The point that virtualisation gives you the opportunity to build two server instances up front just in case you need to expand later, & the idea that running two OS instances uses fewer resources than one, don't make sense to me.



    Because it allows you to move things around more easily later, which just about everyone who has been burned once by a monolithic install appreciates.



    Quote:

    The perceived security benefits are fine when selling hosted services but are they really necessary for a relatively secure & stable platform? Wouldn't standard user-based security be OK? The point about allocating resources seems fine, but in securing resources for each instance do the virtualisation solutions fully utilise all spare/available resources?



    Not just security but stability. Do virtualization solutions FULLY utilise all resources? Nope, but they provide finer-grained configuration options.



    Quote:

    This isn't intended to sound like judgement, I'm in no way qualified to give it. However, the observation that they appear to be currying favour, albeit to improve much needed credibility in the server space, still stands.



    McD



    Currying favor by providing a useful feature? That sounds like anyone improving their product line is "currying favor".
  • Reply 39 of 50
    mstone Posts: 11,510 member
    Quote:
    Originally Posted by mdriftmeyer View Post


    Really? 220? You mean a business will have to add a separate 220V line draw? Gee. You have to do that if you want a washer/dryer, any useful machining, wood/metal lathes, welding and much more.



    220V is not normally available in a data center environment, and the expense of bonded, certified data center electricians is manyfold higher than your garden-variety garage/shop example. Furthermore, you need to consider the 220V UPS requirements, which for the most part are nonexistent in the data center.
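    For scale, a back-of-envelope UPS sizing at 220V (the 4,000W load and 0.9 power factor are illustrative assumptions, not a real enclosure spec):

```python
# Rough UPS sizing for a 220V load: UPS capacity is quoted in VA,
# so divide real power by an assumed power factor; feed current
# is watts over volts.
volts = 220
load_watts = 4000          # assumed enclosure draw
power_factor = 0.9         # assumed

required_va = load_watts / power_factor
amps = load_watts / volts

print(round(required_va))  # 4444 VA
print(round(amps, 1))      # 18.2 A
```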



    I just got a single Leopard server up and running last week. I can see that maybe in a situation where a company has a couple of small departments, each with their own webmasters who need complete control of the OS X system, virtualization might make sense. For our organization, a single department controls all web deployments so it is not so much of an advantage.



    I have long been a proponent of multiple small servers over large multi-use deployments, so I still prefer a diverse collection of Linux, Solaris, Windows and now OS X boxes. That, to me, offers the best diversification to handle any unexpected requests for specialized applications.
  • Reply 40 of 50
    jeffdm Posts: 12,949 member
    Quote:
    Originally Posted by mstone View Post


    220V is not normally available in a data center environment, and the expense of bonded, certified data center electricians is manyfold higher than your garden-variety garage/shop example. Furthermore, you need to consider the 220V UPS requirements, which for the most part are nonexistent in the data center.



    For something that's supposedly practically nonexistent, it's easy to configure such a UPS through APC's site. Most servers can take 220 as-is, but I don't think it's necessarily the same as appliance 220. I've known people who operate servers at 220 for their employer.