Maybe not the A9 -- but possibly a custom variant of the A10. Say an A10C for Cloud, an A10D for Distributed, or an A10S for Server.
Several of these SoCs could easily be combined in a server and outperform Intel's best.
"Google is said to be working with Qualcomm to design servers based on ARM processors, which would be a significant endorsement for ARM as it tries to challenge Intel's dominance in data centers."
No surprise there. Unless you're talking about a rendering farm or some other specialized application, "cloud" services are bandwidth-limited, not CPU-limited; so having super-fast, burning-hot CPUs is a total waste, since for the most part they are twiddling their imaginary thumbs while waiting for data to be pushed around the internet or in and out of mass storage.
This should be good for all of us from a performance standpoint. I have 50,000 photos and who knows how many videos. I am not happy right now with the time it takes to sync these across devices. Sometimes the speed is OK; sometimes it takes a day. A whole day!
50,000 photos... really? You may have a hoarding problem.
Hoarding problem? Are you kidding? Even a semi-professional photographer easily shoots a few hundred pictures in a session, and many, for good reason, never delete pictures but simply rank them. A pro photographer easily accumulates that many within a year.
Same with music: a few thousand albums add up quickly if you like music.
Movies: just the Criterion Collection of some of the best films runs to a few hundred; then there are other good movies, plus some flicks for the kids, documentaries, etc., and again you get up there.
That's also why Apple's current offerings suck: first, 1TB isn't enough, and second, it's too expensive. A cloud cache service with selective backup of user-specified critical data would be a lot more useful, even if in a default configuration it acted as it does now. We need something along the lines of PogoPlug, just working better... The problem is, iOS devices are convertibles, and what I'm looking for is a hatchback or station wagon...
I was on Thom Hogan's site (bythom.com) long ago, and I think Thom mentioned an anecdote about his friend Galen Rowell, who had a gig for National Geographic. He shot something like 55,000 color negatives, and when he dropped them off there was concern, as the typical number was closer to 250,000.
I'm thinking that personal iCloud devices with ad hoc networking could be the next big thing, and I can even imagine an AppleTV growing into a personal iCloud media server.
The AppleTV could well be an option if they open up some APIs in tvOS -- or, more likely, a box similar to the AppleTV, with a dedicated OS and more RAM, CPU, storage, and I/O.
Another approach to running servers on ARM is Docker.
Docker is used by IBM in its recently released Kitura Swift web server.
Docker provides platform independence and facilitates distributed systems. It does this using containers, which encapsulate an app (like a web server) and its OS I/O interfaces (Windows, Linux, OS X, etc.). These containers are managed by a small, embedded, performant Linux kernel and VM.
It is quite easy to install Docker/Kitura on OS X: https://github.com/ibm-swift/kitura#installation-os-x
There is currently quite a bit of activity around installing Docker on ARM: http://blog.hypriot.com/post/test-build-and-package-docker-for-arm-the-official-way/
I would be surprised if Apple and IBM don't already have this working on Apple's ARM CPUs.
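To make the container idea concrete, here is a hypothetical sketch of a Dockerfile for a small Swift web server. The base image, paths, and binary name are all illustrative assumptions, not IBM's actual Kitura setup:

```dockerfile
# Hypothetical container recipe for a Swift web server.
# "swift:latest" and the binary name "Server" are assumptions.
FROM swift:latest

WORKDIR /app
COPY . .

# Build inside the container, so the host machine needs
# no Swift toolchain at all -- that's the encapsulation.
RUN swift build -c release

EXPOSE 8080
CMD [".build/release/Server"]
```

The payoff is portability: the same image runs unchanged on a laptop, an x86 rack server, or -- with an ARM base image -- an ARM SoC.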
"We believe that we need to own and control the primary technologies behind the products we make, and participate only in markets where we can make a significant contribution." - Tim Cook, FQ1 2009 earnings call, January 2009. Source: http://www.asymco.com/2011/01/17/the-cook-doctrine/
Maybe Apple can revive the Xserve project too -- but this time with A9 SoCs instead of G4 chipsets, running ARM-native OS X.
That is wishful thinking. A9 SoCs are not built for the heavy-duty data crunching and multitasking that Intel processors are designed for. They won't work for Xserves.
Apple limits the A9 to 2 CPU cores since this is optimal for handset use, balancing speed against energy with a focus on single-threaded tasks.
Intel processors are built for brute force. The Xeon E7, for example, can have up to 18 cores and 36 threads; in a server, a single Xeon kills the A9. The i7 can have up to 8 cores and 16 threads and runs at up to 4 GHz; again, a single i7 kills the A9.
Horses for courses. As Steve Jobs said, some people need trucks (desktops and servers), and some people need cars (iPhones and iPads).
Have you looked at TSMC's roadmap? By 2018 they plan two separate 7 nm node processes: one for mobile and one for high-performance processors. It is outlined on the SemiWiki web site.
This raises the question: why is TSMC moving forward with a high-performance node in addition to one for mobile CPUs? There isn't any firm commitment from anyone but Apple that would justify TSMC spending the capital to produce a high-performance node.
While the A9 and A9X would be trounced by a Xeon, Apple could "easily" design a high-performance CPU using the ARM ISA, built on TSMC's high-performance 7 nm node. The CPU would cost Apple much less than what Intel charges for a Xeon, and Apple could custom-design it for its own workloads.
The Xeon's days are numbered. The big question is whether Apple decides to sell cloud services to third parties. Their cost advantage would overwhelm Google, Microsoft and AWS. It is precisely the reason Microsoft desperately needs to move off of x86, but they have shown no ability to do so. And it is the likely reason that Google has been looking at designing their own ARM based CPU.
I myself bought my last x86-based machine three years ago. Until Apple comes out with an ARM-based MacBook or Mac mini, all my computing needs will be handled by my iPhone, iPad, and current iMac, which I don't plan to upgrade until Apple releases an ARM-based machine.
The iMac is essentially used only as a print server, as my iPad Pro serves nearly all of my computing needs otherwise.
Whatever happens, the 50,000-photo sync is a product of your upload speed, not Apple's backend.
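A back-of-the-envelope calculation bears this out. The photo size and uplink speed below are assumed figures for illustration, not anyone's measured numbers:

```python
def sync_hours(photos, mb_per_photo, uplink_mbps):
    """Hours needed to push a photo library through a home uplink."""
    total_megabits = photos * mb_per_photo * 8  # megabytes -> megabits
    seconds = total_megabits / uplink_mbps
    return seconds / 3600

# 50,000 photos at ~3 MB each over a 10 Mbit/s uplink (assumptions):
print(round(sync_hours(50_000, 3, 10), 1))  # -> 33.3
```

At those numbers a day-long sync is exactly what the uplink alone predicts, before Apple's backend even enters the picture.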
Dick, all of this stuff (including FoundationDB's NoSQL server) is already out there in some form or other. In fact, AWS and Azure certainly use such systems. Look up Hadoop. Look up Couchbase.
I am not convinced that, if Amazon can't handle the download speeds for all those iOS devices, Apple will be able to. Netflix runs flawlessly worldwide on AWS.
It could be Apple's middleware as well, of course. Apple still puts the software on these boxes, nodes, and VMs.
One reason why Apple purchased land within China was because of the demand from the Chinese government that all data from Chinese citizens remain within China.
That, and the closer you are geographically to your users, the fewer the router hops, and thus the faster the connection.
jameskatt2 said: That is wishful thinking. A9 SoCs are not built for the heavy-duty data crunching and multitasking that Intel processors are designed for. They won't work for Xserves.
I am seeing a trend toward lower-powered Atom servers: half-depth, 1U. Easy to deploy, lower power requirements, and easy to maintain. Heavy-duty Xeons in a data center are usually full of VMs. Either way is fine; one is not necessarily better than the other in terms of performance, space, and cooling requirements, but I think the Atom servers are more cost-efficient. ARM would probably be similar in terms of low power, cooling requirements, and cost-effectiveness.
jameskatt2 said: That is wishful thinking. A9 SoCs are not built for the heavy-duty data crunching and multitasking that Intel processors are designed for... In a server, a single Xeon kills the A9.
Only one thing counts, and that's performance per watt, and here (again) a single A9 core kills the Xeon (and all other Intel processors). Also, A-series processors are built for heavy multithreading, and could simply be updated to support many more (hardware) threads in a server version. Who knows what the A team already has running. You lack imagination; look further than you can see in front of you.
True, all this stuff is out there in some form, including some early FoundationDB implementations. Unfortunately, about a month after Apple's acquisition of FoundationDB, all the detailed info was removed from the FoundationDB web site -- then the site was taken down. All that remains are articles and blog posts by third parties.
When the FoundationDB site was available, I spent about a week reviewing its capabilities -- but I wasn't able to download the system.
Here's a video that illustrates a typical FoundationDB setup, with distributed servers combined into a cluster:
Note those minimal servers -- they could certainly be ARM SoCs for many use cases.
In the video, each member of the cluster held a full copy of the DB and maintained sync among the copies. In that use case, there would be little (if any) advantage for download speed. But, because of the FoundationDB ordered key/value store, there would be a significant gain in search time -- the ordered key/value store allows highly efficient range searches.
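The range-search advantage is easy to illustrate. This toy ordered key/value store is only a sketch of the underlying idea (lexicographically sorted keys plus binary search), not FoundationDB's actual API:

```python
import bisect

class OrderedKV:
    """Toy ordered key/value store: keys are kept sorted, so a
    range query is two binary searches plus a contiguous slice,
    instead of a filter over the whole keyspace."""

    def __init__(self):
        self._keys = []   # sorted list of keys
        self._vals = {}

    def set(self, key, value):
        if key not in self._vals:
            bisect.insort(self._keys, key)
        self._vals[key] = value

    def get_range(self, start, end):
        """All (key, value) pairs with start <= key < end, in key order."""
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, end)
        return [(k, self._vals[k]) for k in self._keys[lo:hi]]

db = OrderedKV()
db.set("user/alice/photo/0002", "sunset.jpg")
db.set("user/bob/photo/0001", "dog.jpg")
db.set("user/alice/photo/0001", "beach.jpg")

# One cheap range query pulls exactly alice's photos, already sorted:
print(db.get_range("user/alice/", "user/alice0"))
```

Structuring keys hierarchically (user, then media type, then ID) is what turns "find everything for this user" into a single contiguous scan.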
In other (no longer accessible) material, FoundationDB presented examples of:
separate clusters for indexes and for data
sparse indexes
sparse data -- all the high-use data is in the index
Also, they presented examples where the clusters were easily distributed by usage patterns -- traffic, time zone, geographically, etc. In these cases, server traffic was handled by the location(s) best qualified to efficiently handle it.
This use case suggests many small datacenters as opposed to a few large datacenters. Likely, this would be a decision to trade off processing power for bandwidth.
For something like the Apple Online Store or iTunes, it would make sense to distribute the database this way.
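A minimal sketch of that kind of distribution by usage: route each request to the cluster best placed to serve it. The region table and cluster names here are invented for illustration:

```python
# Hypothetical mapping from a user's region to the nearest cluster.
CLUSTERS = {
    "us-west": "cluster-oregon",
    "us-east": "cluster-carolina",
    "eu":      "cluster-ireland",
    "cn":      "cluster-guizhou",
}
DEFAULT = "cluster-carolina"

def route(region):
    """Pick the cluster for a request; unknown regions fall back."""
    return CLUSTERS.get(region, DEFAULT)

print(route("eu"))        # nearest EU cluster
print(route("antarctic")) # falls back to the default cluster
```

The same lookup could just as easily key on time zone or traffic load; the point is that the database's topology, not the application, absorbs the distribution decision.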
All in all, I was impressed that FoundationDB was flexible enough that you could configure (and reconfigure) the system based on the needs of the moment, as opposed to configuring it around a specific database structure.
Purchasing land for their own Apple servers there makes sense. Moving Chinese user data to China-controlled servers last year had nothing to do with being closer and everything to do with China saying it's our way or the highway. Not saying that the Chinese government doesn't have a perfect right to dictate the playground rules either. It's their citizens buying the products.
Apple hasn't bought a single piece of land in Hong Kong or China. There were rumors that the land Google bought in HK but later abandoned would be sold to Apple, but that never happened. Apple is still looking for the right location to build its DC in China, with renewable-energy operation and very decent networking. Network exchange in China is tough, unlike the US, where you simply pay and build out your way.
The China DC currently underway is being built in cooperation with a Chinese telecom company (whose name I can't remember), and it is not an Apple-only DC. Judging from the growth of Chinese iPhone users, Apple will likely build a similar if not bigger DC in China ASAP.
Remember, any DC planned now will only come into operation 18 months later at the earliest. Since Apple has some extra concerns and needs, this will likely stretch to 2-3 years.
I have posted on AI a few times over the past years about how Apple has been slow to react in the DC and cloud business. But with the move to Google Cloud it has become clear that Apple is playing its leverage here. Apple is one of AWS's key customers, and I guess that's one reason AWS's figures were never released to the public until AWS had grown to rely on Apple much less. Then Apple switched to Microsoft Azure and got all the pricing benefits. Remember, the cloud business is ALL about scale; it is much easier to scale out with Apple's contract. Now that Azure is in use, they are going with Google Cloud. In the end Apple's contract is probably heavily discounted, and it will last at least two years, buying Apple additional time to figure out its DC.
My bet is that Apple will move to IBM SoftLayer two years later. "Move" isn't really the right word, as Apple will continue to use AWS, Azure, and Google Cloud. Then Apple has enough leverage to wiggle on pricing.
I think Apple will also need a DC in Japan, which, funnily enough, has yet to be rumored.
Maybe Apple can revive the Xserve project too -- but this time with A9 SoCs instead of G4 chipsets, running ARM-native OS X.
Totally! ARM64 is more than adequate. Actually, what they should really do for power users (of whom there will be more and more) is morph the AppleTV and Mac mini into an ARM64 home server where all HomeKit and user data live, i.e. a self-administered iCloud server at home, with Apple's iCloud acting as a rendezvous server (akin to Back to My Mac), an encrypted internet cache, and encrypted backup. Once this tech exists, OS X Server can also host the full iCloud functionality for organizations with sensitive data they don't want to entrust to any third party.
Apple would solve the scalability issue and the privacy issue in one go, while giving AppleTV and HomeKit a boost at the same time...
Migration is very difficult between DCs anyway, and between providers. You have to transfer the data from Amazon's cloud to Apple's cloud, and there's no standard way to do that. Furthermore, the user is probably still uploading to the old account, so you can't cut over until he stops, or you lose data. Then you have to handle the cutover with no downtime and without forcing him to log in again.
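The cutover problem described here is usually handled with a dual-write phase. A minimal sketch with dict-backed stores -- the class and method names are invented, and no real provider API is involved:

```python
class DualWriteMigrator:
    """No-downtime migration sketch:
    1) mirror every new write to both old and new stores,
    2) backfill history from old to new,
    3) flip reads to the new store only once it is complete."""

    def __init__(self, old, new):
        self.old, self.new = old, new
        self.reads_from_new = False

    def write(self, key, value):
        self.old[key] = value   # user is still uploading to the old account
        self.new[key] = value   # mirrored, so nothing is lost at cutover

    def backfill(self):
        # Copy history old -> new without clobbering newer mirrored writes.
        for k, v in self.old.items():
            self.new.setdefault(k, v)

    def cut_over(self):
        self.backfill()
        self.reads_from_new = True  # no re-login, no downtime: just a flip

    def read(self, key):
        store = self.new if self.reads_from_new else self.old
        return store[key]
```

The user never sees the switch: writes land in both stores during the transition, and the read path flips only after the backfill has caught up.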
I believe that the new cloud structure is for new accounts.
It will be fascinating to see how this all plays out. Apple's development teams have a history of paying others for services they feel are more cost-effectively outsourced at the time, all the while developing their own in-house solutions and waiting for technology to catch up to their vision. For Apple to move to an all-Apple cloud/server solution powered by all-Apple technology in the near future would be no surprise to me whatsoever.
Talking of waiting for technology to catch up to the Apple vision, I've no doubt that somewhere in a lab Apple has a quantum-based iPhone and Mac, not to mention a Tricorder in the planning stage. Seriously, though, as someone who sat in an Apple dealer meeting and watched the Knowledge Navigator movie back in the day, I am still waiting for a Siri we can opt to actually see talking to us.