I'm thinking that eventually bandwidth allocation won't be an issue. Think of a single optical channel operating at 1Tbit/s, split into 5 ports, each a tiny dot around the size of an earphone jack. A magnetic ring around them could hold the plug in place and provide power, like MagSafe. The plug ends would be circular, so there'd be no wrong way to connect them. Bandwidth would be shared across all ports, and 1Tbit/s for all of them is a lot. The circular design would make the magnetic connection stronger to avoid accidental disconnects, and the wires could be pretty thin, especially ones that don't carry power. This could even replace headphone jacks, so you'd get optical audio quality and no more broken jacks, with a simple in-line converter for 3.5mm.
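Just to put rough numbers on the shared-bandwidth idea: here's a quick back-of-envelope sketch, assuming the 1Tbit/s channel is split evenly across the 5 ports (a real hub would presumably allocate dynamically) and comparing against a few contemporary link rates.

```python
# Back-of-envelope math for the shared-bandwidth idea above.
# Assumption: the 1 Tbit/s channel is split evenly across 5 ports;
# a real hub would likely allocate bandwidth dynamically instead.

TOTAL_GBPS = 1000  # 1 Tbit/s expressed in Gbit/s
PORTS = 5

per_port = TOTAL_GBPS / PORTS
print(f"Even share per port: {per_port:.0f} Gbit/s")

# For comparison, some common link rates (Gbit/s):
links = {"USB 3.0": 5, "Thunderbolt channel": 10, "SATA III": 6}
for name, rate in links.items():
    print(f"{name}: {per_port / rate:.1f}x headroom per port")
```

Even the naive even split leaves each port with 200 Gbit/s, far beyond what any single peripheral of the time could use.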
I don't think a unified solution is possible with electrical connections, but optical is just a light signal, so a multi-protocol optical connection would automatically be universal: it just needs a basic connector. For now, the closest we'll get to unification is Thunderbolt + USB 3, which I think is fine.
I don't think Thunderbolt will displace Fibre Channel HBAs and mini-SAS at the moment. There's a lot of existing hardware using mini-SAS with greater bandwidth, and it's a very small connector. I agree a universal connector would be great; I just don't see one arriving soon, especially as the current one has to provide extra bandwidth to make up for inefficiencies in delivery. It'll be interesting to see what a true optical version looks like, but cost might be an issue for now. The cost of cabling also matters if you want to reach a really broad market. Intel seemed concerned about this: they mentioned it would come down to how much bandwidth people were willing to pay for. I still see it as Thunderbolt kool-aid for now. Intel likes to propagate far-out numbers, and I don't really trust them.
They don't have to use the higher-powered chips, though. Apple already decides not to offer the fastest CPUs. They could even stick with a 6-core; the co-processor would make up the performance lost from having fewer full Xeon cores.
HP managed to put a Xeon in their AIO Z1 so it's possible. It might be a pretty expensive machine though.
I think you may not have looked at HP's hardware specifications. HP has a big workstation market, and the Z1 was somewhat aimed at the lighter end of it: it's defined by features and serviceability in an AIO rather than raw power. The most powerful CPU offered is an E3-1280, one of the lighter Sandy Bridge CPUs rather than Sandy Bridge E; Sandy Bridge E may have been too hot. Note that when you opt for one of the Xeons, integrated graphics stops being a configuration option.

I think the entry configuration is pretty bleh for what it costs. The machine appears to be beautifully engineered, yet starting with an i3 and Intel HD 2000 graphics is really sub-par at that price. Configured to the minimum specs I'd personally want, it pushes past $4k. Some of the upgrade steps are oddly structured: a Quadro 1000M is around $400 for a very cheap card, the 3000M is only a couple hundred more, and the 4000M is about $600 on top of that, which is where you really get the performance of a lower-end workstation card. Even then it wouldn't perform at the level of a desktop Quadro, which retails for around $700-800. Anything below the 3000M is pretty weak for the market HP seems to want (I'm assuming CAD design).

That Xeon isn't like the ones in the Mac Pro; it's based on desktop parts, so you get 20 PCIe lanes total, and the Ivy Bridge version also tops out at 4 cores. Haswell is unlikely to add cores: it's an architectural change rather than a die shrink, and processors of that class tend to chase GHz more than core count. Even at the high end of Sandy Bridge E, the 8-core units are still clocked lower; the architecture change just allowed them to be clocked at a rate that wouldn't look laughably low. Anyway, I was just trying to offer some perspective. I don't think a 6-core will be possible next year in that socket type for the Z1 or the iMac.
If you think of a single supercomputer installation, they'd order a few thousand PCI cards, and the number of those kinds of installations worldwide is very small. NVidia's Tesla launched in 2007, and this article says only 150,000 units total have been shipped:
Even the world's 2nd-fastest supercomputer only has 7,168 Tesla GPUs. Compare total Tesla units over 3 years to Apple shipping 150,000 machines with a co-processor in 3 months, and they could easily get the prices down. In an iMac it's even better, because that's Apple's best-selling desktop.
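The volume argument is easier to see as a rate comparison. A quick sketch using the figures above (the 36-month window for Tesla follows the "3 years" in the comparison; the Apple figure is the hypothetical from this thread, not a real shipment number):

```python
# Rough shipment-rate comparison using the numbers in the post above:
# ~150,000 Tesla units over 3 years vs. a hypothetical 150,000
# Apple machines with a co-processor over 3 months.

tesla_units, tesla_months = 150_000, 3 * 12
apple_units, apple_months = 150_000, 3

tesla_rate = tesla_units / tesla_months   # units per month
apple_rate = apple_units / apple_months

print(f"Tesla: ~{tesla_rate:,.0f} units/month")
print(f"Apple (hypothetical): ~{apple_rate:,.0f} units/month")
print(f"That's roughly {apple_rate / tesla_rate:.0f}x the monthly volume")
```

An order of magnitude more monthly volume is the kind of scale that usually moves component pricing.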
NVidia may not ship that many of them, but the margins are most likely quite high. It was interesting R&D on their part: they leaned on cost and power consumption. That's not to say Tesla cards are cheap; it's just that for NVidia's market, the comparison was what that kind of power would cost if delivered by x86 cores from Intel. That was a cool article overall. I've always liked NVidia's way of thinking; I like AMD too. Both are smaller companies relative to their competition, and they come out with some very cool ideas. While I could see such a thing bringing prices down, assuming enough capacity exists to turn out that many units, I don't think Apple would be the one to make such a bold move in the workstation market. I'm still wondering where you saw a late-2013 quote; the only ones I can turn up are quite ambiguous and merely point to 2013 without even truly confirming a Mac Pro.