Then you will agree that Apple's variations on existing connectors are equally "idiotic" and "confusing"?
Mini VGA
Mini DVI
Micro DVI
Mini DisplayPort
1) How do those single miniaturization options for an external display equal the same situation as found in USB?
2) The only one I'd take issue with is mini-DVI, which was just slightly too thick for the original MBA, so they used micro-DVI. This is much like those god-awful micro and mini USB standards. Micro-HDMI is not a problem since it has a clear path of usage on certain portables.
3) Note that micro-DVI was only used in that one Mac, for that one generation. In 2008, with the 2nd-gen MBA, Apple had designed, built, licensed free of charge, and gotten Mini DisplayPort adopted into VESA's DisplayPort standard. We've had that port on Macs as standard for 5 years. You call that idiotic and confusing?
Who has enough money to buy a 4K display or camcorder? These days the standard HD resolution is 1920x1080 and goes all the way up to the MacBook Pro Retina display dimensions. Unless you're a filthy rich movie director, not many people will need this due to the fact that content is not being created at those dimensions. Heck, websites still follow the 1024x768 rule, and if they don't, they're dynamic for bigger displays, but not at 4K resolution.
Previous reports said it was doubling to 20Gb/s in each direction. And it does, but unlike TB1, its aggregate isn't double the unidirectional capacity any longer.
Yep - it was 40Gbps aggregate throughput all along.
Regarding his "created" comment, in another thread we did cover that content is and has been created using 4K cameras for some time now. What's not being done is releasing 4K on some Blu-ray Xtreme format.
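For scale, here's a quick back-of-the-envelope on why 4K matters for these link speeds. This is a simplified sketch assuming uncompressed 8-bit RGB (24 bits per pixel) at 60Hz with no blanking overhead, so the real DisplayPort figure runs somewhat higher:

```python
# Back-of-the-envelope: raw 4K display bandwidth vs Thunderbolt link rates.
# Simplification: 24 bits per pixel, no blanking or protocol overhead.

def display_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw pixel bandwidth in Gbps (decimal giga)."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

uhd_4k = display_gbps(3840, 2160, 60)   # ~11.9 Gbps
hd_1080 = display_gbps(1920, 1080, 60)  # ~3.0 Gbps

print(f"4K60: {uhd_4k:.1f} Gbps, 1080p60: {hd_1080:.1f} Gbps")
# 4K60 alone already exceeds a single 10 Gbps TB1 channel,
# but fits comfortably in TB2's bonded 20 Gbps pipe.
print(f"Fits TB1's 10 Gbps channel? {uhd_4k <= 10}")
print(f"Fits TB2's 20 Gbps pipe? {uhd_4k <= 20}")
```

Even under these generous assumptions, a single 4K60 stream blows past a 10Gbps channel, which is why channel bonding is the headline feature here.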
No… Because Thunderbolt is 10Gbps each way right now. And this is 20Gbps each way… :no:
Add in the two channels of the current TB chips and you get 40Gbps in the aggregate. The Falcon Ridge chipset has not changed that, except that it can combine those two channels into one pipe.
No… Because Thunderbolt is 10Gbps each way right now. And this is 20Gbps each way
The current one is 2 x 10Gbps each way; now it's 1 x 20Gbps each way. They just aren't distinguishing display and data channels, so if you don't have a display attached, say just a Pegasus, 10Gbps won't be left unused. If you have a display and a Pegasus attached to the same port, it will probably still be a bit faster for data, as the display might not use a full 10Gbps, but closer to the current version in speed.
Essentially, it treats the video like another data stream rather than reserving 10Gbps for it, which is really what it should have done in the first place. Maybe they wanted to ensure that display bandwidth was prioritised over data so that a drive couldn't make the display lag or something. It likely wouldn't happen in practice anyway.
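The allocation difference described above can be sketched as a toy model. This is a deliberate simplification (real Thunderbolt scheduling is more involved): TB1 dedicates one 10Gbps channel to DisplayPort and one to PCIe data, while TB2 bonds both into a single 20Gbps pipe shared by all streams:

```python
# Toy model of TB1 vs TB2 bandwidth allocation on one port.
# TB1: display traffic confined to its own 10 Gbps channel, data gets the other.
# TB2: one bonded 20 Gbps pipe; data gets whatever the display leaves over.

def data_bandwidth_tb1(display_gbps):
    if display_gbps > 10:
        raise ValueError("display exceeds the dedicated 10 Gbps channel")
    return 10.0  # data channel is fixed regardless of display load

def data_bandwidth_tb2(display_gbps):
    if display_gbps > 20:
        raise ValueError("display exceeds the bonded 20 Gbps pipe")
    return 20.0 - display_gbps

# No display attached: TB1 strands the display channel, TB2 does not.
print(data_bandwidth_tb1(0), data_bandwidth_tb2(0))  # 10.0 vs 20.0
# A ~6 Gbps display alongside a Pegasus array on the same port:
print(data_bandwidth_tb1(6), data_bandwidth_tb2(6))  # 10.0 vs 14.0
```

This also shows the one trade-off: with a display drawing the full 10Gbps, TB2 data throughput lands back at roughly the TB1 figure, just as the post above says.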
Possibly. Falcon Ridge wasn't due until next year, but it depends which way they go, and the more important factor is the Xeon chips. I suspect the Intel spec will maintain the requirement for DisplayPort, which makes it difficult in machines with dedicated GPUs. Desktop/laptop CPUs have integrated GPUs, so they have been using software (Lucid on PCs) to copy the framebuffers from the dedicated GPUs into the IGP and output them over Thunderbolt. Workstation chips typically don't have these IGPs.
Intel could perhaps put an IGP into some models of the Ivy Bridge E5 Xeon chips like they did for the Ivy Bridge E3 chips. This would allow them to put PCIe slots in along with Thunderbolt, as well as save power. But this means that the GPUs can have direct outputs, so the OS has to figure out how to route the video, and that seems like it would be messy.
My personal preference for the Pro would be for them to go small form factor and use MXM GPUs (which have Tesla, FirePro and Quadro options, and they can clock them higher in the Pro), as those don't have direct outputs. Then they'd have an IGP in a single CPU, which connects to 4 external Thunderbolt ports, each 20Gbps. People could still upgrade the GPUs, but all IO expansion would be Thunderbolt-based. For special use cases, there would be the option of a chassis for cards:
[VIDEO]
Small form factor also means it's more suitable for server use too.
In one direction. The old chipset was, and the new chipset is, full duplex. 40Gbps in the aggregate.
Where are you seeing that? I'm seeing that it was 10Gb/s unidirectional and now it's 20Gb/s bidirectional.
edit: I see what you're talking about now. From Marvin's post above: "The current one is 2 x 10Gbps each way; now it's 1 x 20Gbps each way. They just aren't distinguishing display and data channels, so if you don't have a display attached, say just a Pegasus, 10Gbps won't be left unused. If you have a display and a Pegasus attached to the same port, it will probably still be a bit faster for data, as the display might not use a full 10Gbps, but closer to the current version in speed."
Not exactly. If the boxes that are just shipping now can only claim to be compatible with, but can't run at, the new version's speed, they look like doubly bad deals already with this announcement. Late 2013 for Thunderbolt 2? That makes these expensive version 1 hubs we've been waiting for look even sillier at the price.
Maybe the best thing that comes out of this announcement of V2 being on the horizon is that the V1 hubs might have to shed some price. Or is Belkin just going to announce a $550 V2 hub?
Not exactly. If the boxes that are just shipping now can only claim to be compatible with, but can't run at, the new version's speed, they look like doubly bad deals already with this announcement. Late 2013 for Thunderbolt 2? That makes these expensive version 1 hubs we've been waiting for look even sillier at the price.
Maybe the best thing that comes out of this announcement of V2 being on the horizon is that the V1 hubs might have to shed some price. Or is Belkin just going to announce a $550 V2 hub?
That sucks. I assumed that V2 devices would still be able to work with V1 peripherals, as this is how such tech usually gets some backwards compatibility.
Where are you seeing that? I'm seeing that it was 10Gb/s unidirectional and now it's 20Gb/s bidirectional.
edit: I see what you're talking about now. From Marvin's post above: "The current one is 2 x 10Gbps each way; now it's 1 x 20Gbps each way. They just aren't distinguishing display and data channels, so if you don't have a display attached, say just a Pegasus, 10Gbps won't be left unused. If you have a display and a Pegasus attached to the same port, it will probably still be a bit faster for data, as the display might not use a full 10Gbps, but closer to the current version in speed."
You can read about the old (current) version via the link below, top of page 2, Protocol Architecture. The doc is from Feb 2011.
http://www.intel.com/content/dam/doc/technology-brief/thunderbolt-technology-brief.pdf
That sucks. I assumed that V2 devices would still be able to work with V1 peripherals, as this is how such tech usually gets some backwards compatibility.
I wasn't speaking from knowledge, just speculating, because it sounds to me like "backward compatibility" means a V1 box will work, but I'm not seeing it stated that it will run at V2 spec. Seems like the way the stream is laid out in V2 would require a correspondingly reconfigured path in the device to take advantage of the improvement. No? : )
Might have missed something in all the noise, though. : )
That sucks. I assumed that V2 devices would still be able to work with V1 peripherals, as this is how such tech usually gets some backwards compatibility.
They will work, but they might be stuck at 10Gbps due to the controllers in the peripherals. Forwards compatibility is important too, so hopefully V1 devices will support V2 peripherals; those will definitely be limited to 10Gbps though.
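The compatibility behavior being discussed boils down to a link running at the highest rate both ends support, so any mixed V1/V2 pairing falls back to 10Gbps. A minimal sketch of that idea (the names here are illustrative, not from the Thunderbolt spec):

```python
# Illustrative sketch: a mixed V1/V2 link settles on the lower of the two
# ends' maximum rates, so only a V2-to-V2 connection reaches 20 Gbps.

TB1_GBPS = 10
TB2_GBPS = 20

def negotiated_rate_gbps(host_max, device_max):
    """Link rate is capped by the slower end's controller."""
    return min(host_max, device_max)

print(negotiated_rate_gbps(TB2_GBPS, TB2_GBPS))  # 20: both ends V2
print(negotiated_rate_gbps(TB2_GBPS, TB1_GBPS))  # 10: V2 host, V1 peripheral
print(negotiated_rate_gbps(TB1_GBPS, TB2_GBPS))  # 10: V1 host, V2 peripheral
```

That symmetry is why both directions of compatibility matter: a V1 controller anywhere in the chain caps that link at 10Gbps.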
I am waiting for comments here. Pretty sure people will have interesting stuff to add. (And yeah, I'm commenting here mainly so that the AppleInsider reminder email reminds me that the article is there when comments get added.)
I don't get the emails from AppleInsider anymore for some reason.
Don't expect Thunderbolt 2 in the new Mac Pro. I suspect the Mac Pro will be released before Thunderbolt 2 is out.
I figure the Mac Pro will feature the first version (maybe the second). Also, given that the Mac Pro's price does not vary much, it would be great if it featured nothing but flash storage (at least in some models).
Comments
Originally Posted by Haggar
Then you will agree that Apple's variations on existing connectors are equally "idiotic" and "confusing"?
Nope, because that's a completely different argument. Please try again.
Quote:
Originally Posted by Tallest Skil
Nope, because that's a completely different argument. Please try again.
I see. So it's ok for Apple to create variations of existing connectors. But when anybody else does it, it's "idiotic" and "confusing". Thanks.
1) How do those single miniaturization options for an external display equal the same situation as found in USB?
2) The only one I'd take issue with is mini-DVI, which was just slightly too thick for the original MBA, so they used micro-DVI. This is much like those god-awful micro and mini USB standards. Micro-HDMI is not a problem since it has a clear path of usage on certain portables.
3) Note that micro-DVI was only used in that one Mac, for that one generation. In 2008, with the 2nd-gen MBA, Apple had designed, built, licensed free of charge, and gotten Mini DisplayPort adopted into VESA's DisplayPort standard. We've had that port on Macs as standard for 5 years. You call that idiotic and confusing?
Originally Posted by Haggar
I see. So it's ok for Apple to create variations of existing connectors. But when anybody else does it, it's "idiotic" and "confusing". Thanks.
Enjoy pretending you're talking about anything that I was talking about. I'm not dealing with you anymore.
Who has enough money to buy a 4K display or camcorder? These days the standard HD resolution is 1920x1080 and goes all the way up to the MacBook Pro Retina display dimensions. Unless you're a filthy rich movie director, not many people will need this due to the fact that content is not being created at those dimensions. Heck, websites still follow the 1024x768 rule, and if they don't, they're dynamic for bigger displays, but not at 4K resolution.
Originally Posted by darkdefender
…due to the fact that content is not being created at those dimensions.
We've already covered that argument.
Yep - it was 40Gbps aggregate throughput all along.
Regarding his "created" comment, in another thread we did cover that content is and has been created using 4K cameras for some time now. What's not being done is releasing 4K on some Blu-ray Xtreme format.
The most recent reports say the total is 20Gb/s.
Add in the two channels of the current TB chips and you get 40Gbps in the aggregate. The Falcon Ridge chipset has not changed that, except that it can combine those two channels into one pipe.
In one direction. The old chipset was, and the new chipset is, full duplex. 40Gbps in the aggregate.
Some devices are expensive, such as the $1000 Pegasus drives. I expect future revisions will still be backwards compatible using an adaptor.
The current one is 2 x 10Gbps each way; now it's 1 x 20Gbps each way. They just aren't distinguishing display and data channels, so if you don't have a display attached, say just a Pegasus, 10Gbps won't be left unused. If you have a display and a Pegasus attached to the same port, it will probably still be a bit faster for data, as the display might not use a full 10Gbps, but closer to the current version in speed.
Essentially, it treats the video like another data stream rather than reserving 10Gbps for it, which is really what it should have done in the first place. Maybe they wanted to ensure that display bandwidth was prioritised over data so that a drive couldn't make the display lag or something. It likely wouldn't happen in practice anyway.
Possibly. Falcon Ridge wasn't due until next year, but it depends which way they go, and the more important factor is the Xeon chips. I suspect the Intel spec will maintain the requirement for DisplayPort, which makes it difficult in machines with dedicated GPUs. Desktop/laptop CPUs have integrated GPUs, so they have been using software (Lucid on PCs) to copy the framebuffers from the dedicated GPUs into the IGP and output them over Thunderbolt. Workstation chips typically don't have these IGPs.
Intel could perhaps put an IGP into some models of the Ivy Bridge E5 Xeon chips like they did for the Ivy Bridge E3 chips. This would allow them to put PCIe slots in along with Thunderbolt, as well as save power. But this means that the GPUs can have direct outputs, so the OS has to figure out how to route the video, and that seems like it would be messy.
My personal preference for the Pro would be for them to go small form factor and use MXM GPUs (which have Tesla, FirePro and Quadro options, and they can clock them higher in the Pro), as those don't have direct outputs. Then they'd have an IGP in a single CPU, which connects to 4 external Thunderbolt ports, each 20Gbps. People could still upgrade the GPUs, but all IO expansion would be Thunderbolt-based. For special use cases, there would be the option of a chassis for cards:
[VIDEO]
Small form factor also means it's more suitable for server use too.
Where are you seeing that? I'm seeing that it was 10Gb/s unidirectional and now it's 20Gb/s bidirectional.
edit: I see what you're talking about now. From Marvin's post above: "The current one is 2 x 10Gbps each way; now it's 1 x 20Gbps each way. They just aren't distinguishing display and data channels, so if you don't have a display attached, say just a Pegasus, 10Gbps won't be left unused. If you have a display and a Pegasus attached to the same port, it will probably still be a bit faster for data, as the display might not use a full 10Gbps, but closer to the current version in speed."
Quote:
Originally Posted by Suddenly Newton
Welcome to the cutting edge of innovation!
Population: the early adopters
Expect whining from mainstream users
Not exactly. If the boxes that are just shipping now can only claim to be compatible with, but can't run at, the new version's speed, they look like doubly bad deals already with this announcement. Late 2013 for Thunderbolt 2? That makes these expensive version 1 hubs we've been waiting for look even sillier at the price.
Maybe the best thing that comes out of this announcement of V2 being on the horizon is that the V1 hubs might have to shed some price. Or is Belkin just going to announce a $550 V2 hub?
That sucks. I assumed that V2 devices would still be able to work with V1 peripherals, as this is how such tech usually gets some backwards compatibility.
You can read about the old (current) version via the link below, top of page 2, Protocol Architecture. The doc is from Feb 2011.
http://www.intel.com/content/dam/doc/technology-brief/thunderbolt-technology-brief.pdf
Quote:
Originally Posted by SolipsismX
That sucks. I assumed that V2 devices would still be able to work with V1 peripherals, as this is how such tech usually gets some backwards compatibility.
I wasn't speaking from knowledge, just speculating, because it sounds to me like "backward compatibility" means a V1 box will work, but I'm not seeing it stated that it will run at V2 spec. Seems like the way the stream is laid out in V2 would require a correspondingly reconfigured path in the device to take advantage of the improvement. No? : )
Might have missed something in all the noise, though. : )
They will work, but they might be stuck at 10Gbps due to the controllers in the peripherals. Forwards compatibility is important too, so hopefully V1 devices will support V2 peripherals; those will definitely be limited to 10Gbps though.
I figure the Mac Pro will feature the first version (maybe the second). Also, given that the Mac Pro's price does not vary much, it would be great if it featured nothing but flash storage (at least in some models).
Quote:
Originally Posted by SolipsismX
Shh… The DOJ might claim they are conspiring.
Shh... The DOJ has no sense of humor.