Apple looking for help building 'next generation' of data centers

Posted in General Discussion, edited January 2014


Even as Apple has broken ground on a new server farm project in Oregon, a job listing posted by the company references plans for the "design and construction implementation of the next generation of data centers."



With a title of "Data Center Construction Project Manager," a successful candidate would "become an important part" of the team building Apple's next-gen data centers, as noted by Wired. The role is fairly broad, involving management of data center projects from an "all levels administrative, cost, and contracting perspective."



It's not immediately clear whether Apple's plans for a "green" data center in Prineville, Oregon, count toward this so-called next generation, but the company has begun work there on a "small, initial phase," according to Prineville city planner Josh Smith.



"It's just kind of a first phase that they've thrown together to get things started on the ground," Smith told The Oregonian.



This first phase appears to cover just 10,000 square feet, a small fraction of the 160 acres that Apple purchased for the new data center. Prineville city engineer Eric Klann told Wired last month that Apple plans to have eight modular data center units installed on the property before eventually building a bigger facility "similar" to a nearby Facebook server farm.



Local officials have said that Facebook "kind of helped recruit" Apple to the area last year by providing representatives with a tour of its facility.





Facebook's data center in Prineville, Oregon | Source: Facebook.







The Cupertino, Calif., company confirmed plans last month to build a data center in Oregon, but it declined to say what would make the project environmentally friendly.



Apple also operates a large-scale server farm in Maiden, N.C. The company is currently in the process of building fuel cell and solar installations to provide green power to the facility, which supports Apple's iTunes and iCloud products.





Apple's server farm in Maiden, N.C.






Comments

  • Reply 1 of 15
    nasseraenasserae Posts: 3,167member
    They are looking for a construction project manager - most likely someone with a civil or architectural engineering degree. These are standard requirements for this type of position. But one criterion might be difficult to fulfill: "Experience managing large Data Center construction projects". There have probably been few "large" data center construction projects in the last few years, and they were most likely designed and built by different consultants and contractors.



    Construction project managers don't have to be experienced in a particular building type; they need to be experienced in the construction process. The other criteria are standard, though. They are probably looking for someone with 10 to 15 years of experience in engineering.



    Seven years ago I was a deputy project manager for two years before I decided to do my masters and PhD. The time I spent on my graduate study is not considered experience. It sucks, but that's how things are!
  • Reply 2 of 15
    macarenamacarena Posts: 365member
    That's where Apple's core competency lies...



    I suggest Apple modify its Apple TV solution to make it the world's best server hardware package.



    - Add a Thunderbolt port, including Power over Thunderbolt, so one cable handles power, data, and everything required. It should support the full Thunderbolt spec of 20 Gbps.



    - 8 GB Flash for local storage, and all units can access Petabytes of data on external SANs via the Thunderbolt port - at faster speeds than they can access local drives.



    - Use the A6 processor, with two options: quad CPU cores with one GPU core for CPU-intensive operations, and a single CPU core with quad GPU cores for graphics rendering, etc.



    - 2 GB DDR3 RAM.



    - Remove the HDMI port and Wifi capability to reduce cost. Retain the LAN port with support for PoE, for cheaper deployments that do not use Thunderbolt.



    - The server should be fanless, with excellent heat dissipation through conduction - it would generate very little heat compared to usual servers.



    - Create 1U form-factor server boxes with built-in Thunderbolt hubs, able to hold 16 of these micro servers in a 4x4 layout, with ample room for ventilation, heat dissipation, etc. It is also possible to design these cases with liquid-cooling channels for even heavier-duty cooling. Because of the height savings, it is also possible to fit four 3.5" SATA/SCSI HDDs in this 1U server, so that there can be local RAID as well. If Apple does not want to use SATA disks at all, they can instead fit 32 Apple micro servers in a 1U case by slightly reducing the height or removing the existing case entirely.



    The overall cost for this server would be about $3000 ($1600 for the 16 Apple servers, $600 for the four 3TB SATA drives, $600 for a Thunderbolt hub supporting 16 ports, and $200 for the case) with a 400-watt power supply.



    And for that $3000, the server would support 64 cores, 32GB RAM, 128GB Flash, 12TB of HDD, and very low power consumption in the SATA configuration. Without SATA, the server would have 128 cores, 64GB RAM, 256GB Flash memory and would cost about $5000.
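    The arithmetic above can be sanity-checked with a quick script (all prices and per-node counts are the hypothetical estimates from this proposal, not real part costs):

```python
# Sanity check of the hypothetical 1U "micro server" bill of materials above.
NODES = 16  # Apple micro servers per 1U case

parts = {
    "micro_servers": NODES * 100,  # $100 per node -> $1600
    "sata_drives": 4 * 150,        # four 3TB drives -> $600
    "thunderbolt_hub": 600,        # 16-port hub
    "case": 200,
}
total = sum(parts.values())

specs = {
    "cores": NODES * 4,     # quad-core option per node
    "ram_gb": NODES * 2,    # 2GB DDR3 per node
    "flash_gb": NODES * 8,  # 8GB flash per node
    "hdd_tb": 4 * 3,        # four 3TB SATA drives
}

print(total)  # 3000
print(specs)  # {'cores': 64, 'ram_gb': 32, 'flash_gb': 128, 'hdd_tb': 12}
```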



    In terms of software, Apple already has Grand Central Dispatch and OpenCL ready to take advantage of this sort of server. Mac OS is already capable of efficiently utilizing multiple processors and multiple cores. Thunderbolt is already a mass-market solution, implemented across Apple's entire product range - extending it to an Apple MicroServer should not be too difficult.



    Thunderbolt is expensive, but removing HDMI should more or less compensate for the extra cost of Thunderbolt. The cost of 2GB RAM should easily be compensated for by dropping the Wifi/Bluetooth module.
  • Reply 3 of 15
    realisticrealistic Posts: 1,154member
    Quote:
    Originally Posted by macarena View Post


    Thats where Apple's core competency lies...

    I suggest Apple modify its Apple TV solution, to make it the world's best server hardware package. [...]



    They have already committed to building the data center. They won't kill the data center or build a niche, low-volume server like you describe.
  • Reply 4 of 15
    Quote:
    Originally Posted by macarena View Post


    Thats where Apple's core competency lies...

    I suggest Apple modify its Apple TV solution, to make it the world's best server hardware package. [...]



    Um, how long did it take you to think/write this? Not that I read the whole thing... I did think it was pretty funny, however.
  • Reply 5 of 15
    kibitzerkibitzer Posts: 1,114member
    A pattern begins to emerge. Huge, widely dispersed server farms located far away from metropolitan areas. What are the strategic iCloud and Apple ecosystem implications?
  • Reply 6 of 15
    icoco3icoco3 Posts: 1,474member
    In other news, the new Samsung data-center will be opening soon....



  • Reply 7 of 15
    Quote:
    Originally Posted by Kibitzer View Post


    A pattern begins to emerge. Huge, widely dispersed server farms located far away from metropolitan areas. What are the strategic iCloud and Apple ecosystem implications?



    Higher infrastructure costs compared to areas with lots of dark fibre, combined with increased latency?



    Is that about right?
  • Reply 8 of 15
    mstonemstone Posts: 11,510member
    Quote:
    Originally Posted by icoco3 View Post


    In other news, the new Samsung data-center will be opening soon....







    Looks like a really nice infrastructure. Can't get much greener than that: geo-sourced cooling water powered by wind.
  • Reply 9 of 15
    macarenamacarena Posts: 365member
    Quote:
    Originally Posted by damage976 View Post


    Um, how long did it take you to think/write this? Not that I read the whole thing... I did think it was pretty funny, however.



    Don't be afraid to dream - it costs nothing, and invariably, it is the most improbable dreams that lead to the most exciting breakthroughs!



    Maybe you don't know the major issues impacting data centers today - raw power is not the issue; performance per watt is. And not just in terms of the power consumed by the server itself, but also other costs like cooling the data center. An ARM-based server that can be clocked at higher speeds would likely solve most of these issues.



    And designing multiple such units in a 1U enclosure would just be a way of making sure this solution can fit in seamlessly with existing data center infrastructure.



    Over the last decade, servers have become more and more powerful on one side, and have been virtualized into smaller machines on the other. Intel knows the writing is on the wall, and it is delaying the inevitable by supporting virtualization technology at the chip level itself. Server consolidation is a disaster waiting to happen: at some point a company will hit a problem on the motherboard, or on some other component common to all the virtualized servers, and will find that dozens of its servers are down at the same time! Why waste money on faster servers, only to spend more money on VMware licenses to create several virtual servers that are a lot less powerful? And the cost of those VMware licenses is quite ridiculous - in some cases, more expensive than the underlying hardware itself!



    Have you heard of Virginia Tech's terascale computing facility, built by tying together several Xserves (initially) and Mac Minis (later)? At one point it was the cheapest supercomputer in the top 100 list of the fastest supercomputers in the world, while being in the top 10 in performance. That was back in 2005, and current-generation ARM processors offer better performance than the processors from back then - so you could effectively create an even faster supercomputer at a fraction of what it cost in 2005.



    There are enough people who expect ARM-based MacBook Airs to be released by Apple sometime soon. If you know your technology, you will realize that ARM actually stands a much better chance of replacing Intel on the server side, because most server-side applications are designed to take advantage of parallelization, multiple compute cores, etc., whereas most desktop applications are not. So an inherently slower processor is a much bigger handicap on a desktop than on a server, where the handicap can be overcome by using multiple slow processors that still consume a lot less power than higher-speed processors. Also, most server-side applications have their biggest bottleneck in network speed, not CPU speed, so a slower processor will not really cost anything in terms of performance.



    BTW - your comment says more about you than it does about me or my post! Maybe I am wasting my time responding - if you did not read the first post, what are the chances you would read my second one!
  • Reply 10 of 15
    mstonemstone Posts: 11,510member
    Quote:
    Originally Posted by macarena View Post


    Not just in terms of the power consumed by the server itself, but also other costs like the cost of cooling the data center, etc.



    Cooling is the largest issue in data center design. In traditional raised-floor data centers, the cooling is either rooftop air conditioning, which cools the entire space very inefficiently, or chilled water running through coils below the floor, which also cools the entire space inefficiently.



    The new way data centers are being built is in modules that completely encapsulate the rack space in very close proximity, dividing the space in front of and behind the racks to separate the hot side from the cold side. The hot air is then recycled back to the air conditioning, where its temperature is measured. If it is hotter than the outside air, it is exhausted and fresh air is used instead. If it is a really warm day outside, the return air is recooled.
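    The return-air handling described above reduces to a simple comparison. A minimal sketch (ignoring humidity, filtration, and the setpoint deadbands a real controller would use):

```python
def cooling_mode(return_air_c: float, outside_air_c: float) -> str:
    """Decide what to do with hot-aisle return air, per the scheme above.

    If the return air is hotter than the outside air, exhaust it and pull
    in fresh outside air; on a really warm day, recirculate and recool it.
    """
    if return_air_c > outside_air_c:
        return "exhaust, use outside air"
    return "recool return air"

print(cooling_mode(35.0, 18.0))  # exhaust, use outside air
print(cooling_mode(30.0, 38.0))  # recool return air
```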



    The main thing we are trying to avoid is recycling air we just cooled before it gets to do any work, and cooling space or building structures that do not need to be cooled.



    In your earlier post you were going on about 1U server configurations, which I personally am getting away from for several reasons. With modern high-density computing you get much more bang for your buck using 2U servers: you can use dual Xeons with larger heatsinks, bigger fans and more efficient case fans rather than those crappy 1U blowers that constantly fail. With lots of RAM and lots of fans you can run virtualized servers with much more computing power per U of rack space. Forget low-power CPUs! Go for as much power as you can fit in a U of rack space and then VM with 5-10 hypervisor machines. Sure, lower power consumption is ideal, but not at the sacrifice of gigaflops per rack unit - you get much better utilization per watt in 2U configurations. Data center performance ultimately is measured in how much application computing you deliver per sq. ft. AND per watt of electricity used.
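    The per-U and per-watt trade-off can be made concrete with a toy model (all gigaflop and wattage figures below are invented for illustration, not measurements of any real server):

```python
# Toy comparison: a 2U dual-Xeon VM host vs. two low-power 1U boxes
# filling the same 2U of rack space.
def density(gflops: float, rack_units: int, watts: float) -> tuple:
    """Return (gflops per U, gflops per watt) for a configuration."""
    return gflops / rack_units, gflops / watts

xeon_2u = density(gflops=400.0, rack_units=2, watts=500.0)
two_1u_low_power = density(gflops=2 * 80.0, rack_units=2, watts=2 * 120.0)

print(xeon_2u)           # (200.0, 0.8)
print(two_1u_low_power)  # lower on both metrics with these numbers
```

With these made-up numbers the dense 2U box wins on both metrics, which is the argument being made; different hardware assumptions could flip the result.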



    One reason that Apple is building in rural areas is that the savings on land more than offset the expense of bringing fiber to it - laying fiber is relatively inexpensive, especially in rural areas.
  • Reply 11 of 15
    kibitzerkibitzer Posts: 1,114member
    Quote:
    Originally Posted by I am a Zither Zather Zuzz View Post


    Higher infrastructure costs compared to areas with lots of dark fibre, combined with increased latency?



    Is that about right?



    Probably right. There are always costs - spend money to make money. What about the strategic role in growing the business?
  • Reply 12 of 15
    aaarrrggghaaarrrgggh Posts: 1,609member
    Quote:
    Originally Posted by NasserAE View Post


    They are looking for a construction project manager. Most likely someone with a civil or architectural engineering degree. [...]



    So the AE does stand for Architectural Engineer?



    I'm surprised (slightly) that Apple wants that type of position internally. Usually companies focus on having an internal group set standards and act as the owner's representative; this description sounds like more of an active design and construction supervision role. There are four or five design firms that dominate the industry, and another 5-10 construction firms. There are also 4-5 electrical contractors that dominate in California who could also be a pool of candidates.



    Data centers are a bit unique compared to general construction, and even to similar specialized construction such as clean rooms, manufacturing, or healthcare. The systems may be similar, but the culture of how you do the work is different.



    I'm qualified, and Apple would probably match my current pay, but it seems hard to justify jumping ship from my own company.
  • Reply 13 of 15
    libdemlibdem Posts: 36member
    Quote:
    Originally Posted by macarena View Post


    Thats where Apple's core competency lies...



    That's an idea.

    However, I feel that is more inclined towards enterprise than end users, don't you think?
  • Reply 14 of 15
    afrodriafrodri Posts: 190member
    Quote:
    Originally Posted by libdem View Post


    That's an idea.

    However, I feel that is more inclined towards enterprise than end users, don't you think?



    Yes, a server would be.



    But there are still some of us who miss our Mac servers - they had a lot of nice things going for them. And an ARM-based server has much to recommend it (lower power, higher density, etc.). Unlike in the consumer space, recompiling for a new architecture (ARM vs. x86) is not as much of an issue.



    The biggest problem for such a server (right now) is that ARM is currently 32-bit, which limits its effectiveness in many areas. This should be changing over the next few years.
  • Reply 15 of 15
    macarenamacarena Posts: 365member
    Quote:
    Originally Posted by afrodri View Post


    The biggest problem for such a server (right now) is that ARM is currently 32-bit, which limits its effectiveness in many areas. This should be changing over the next few years.



    How many tasks currently handled by servers really need 64-bit support? There are a few use cases - like database servers, massive number crunching, etc. - where sheer performance is the only metric that matters. Those use cases will not be suitable for ARM at the moment. But there are dozens of use cases that are extremely suitable for ARM...



    For instance, the biggest deployments of servers today are for supporting the "Cloud". For cloud purposes, there is no point in even considering super-powerful dual/quad Xeon systems (especially if you are going to carve smaller virtual servers out of them!). Most of the bottleneck in the cloud server space is in the network, not the processor. Think of all the web servers and mail servers in the world - how many of them are really compute-intensive? Most of these servers are bottlenecked by the network and possibly by storage. Today's ARM performance is more than adequate to max out storage and network limits, so end users will not even realize that a micro server is serving their requests.



    Most server deployments today are handled by clusters, where your requests are serviced by one of a set of homogeneous servers. Most large server deployments operate this way, and for most of these purposes a cluster of ARM servers should easily be able to replace higher-power options. You get much better performance by adding more nodes to the cluster than by increasing the performance of each node - when the service is network-limited, increasing the performance of the node makes almost no difference in overall throughput.
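    That scaling argument can be sketched as a min() model: each node is capped by the slower of its CPU and its network link, so in a network-bound cluster a faster CPU buys nothing while extra nodes scale throughput linearly. (The requests-per-second numbers are purely illustrative.)

```python
def cluster_throughput(nodes: int, cpu_rps: float, nic_rps: float) -> float:
    """Aggregate requests/sec when each node is limited by the slower
    of its CPU capacity and its network link."""
    return nodes * min(cpu_rps, nic_rps)

# NIC caps every node at 1000 requests/sec (network-bound service).
base      = cluster_throughput(nodes=8,  cpu_rps=1500, nic_rps=1000)
faster    = cluster_throughput(nodes=8,  cpu_rps=6000, nic_rps=1000)  # faster CPUs, no gain
more_node = cluster_throughput(nodes=16, cpu_rps=1500, nic_rps=1000)  # double the nodes

print(base, faster, more_node)  # 8000 8000 16000
```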



    ARM will definitely be more suitable for the server space once 64-bit support is there. But that does not mean ARM is not of much use today.



    If you are an experienced sysadmin, just take a look around you and see how many virtual servers are configured with more than 4GB of RAM. Why bother with 64-bit computing if you are going to create small virtual machines anyway! And even on larger machines, the overhead of 64-bit instructions and data is probably costing you more than the speed bump you get from access to more memory - not many use cases are designed to take advantage of huge amounts of RAM.



    There is one major issue that stops ARM from making headway in data centers, and that is inertia. Server administrators are notorious for not rocking the boat, and can happily sacrifice lower-cost and lower-power options for the sake of stability and continuity. A major shift like moving to an ARM-based architecture is only possible with a brand-new deployment, or when a company has the resources and willingness to look at other technologies, and gives enough importance to lowering power consumption, environmental impact, etc. Apple is possibly the one company that can pull off this sort of shift.



    The only other company that could look at doing something similar is Facebook (and they are already embracing a wimpy-server strategy). Google has way too much baggage to be able to shift strategies.