
Posts by strobe

The display limit depends on the number of pixels you're pushing, not the GPU. You can have one large display which saturates the connection. A GPU removed from the CPU would be utterly pointless unless you wanted to add more OpenGL processors to a laptop or something (which sounds VERY impractical for various obvious reasons). There is a reason the GPU is connected to the CPU and not the monitor.
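To put a number on "one large display which saturates the connection," here is a rough back-of-the-envelope sketch. The resolution, refresh rate, and 24-bit color depth are illustrative assumptions, and real link rates run higher because of blanking intervals and encoding overhead.

```python
def display_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw uncompressed pixel bandwidth in gigabits per second,
    ignoring blanking intervals and line-code overhead."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

# A single 2560x1600 panel at 60 Hz already needs ~5.9 Gbps of raw
# pixel data, a big chunk of a 10 Gbps Thunderbolt channel.
print(round(display_bandwidth_gbps(2560, 1600, 60), 1))  # 5.9
```

So the ceiling really is pixels per second, not the number of connectors.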
To drive the point home, the ultimate irony hit me as I have a DisplayPort on my MacBook Pro which can't output audio. Yea, that's OK, but don't you dare ship a TB port which can't output video!
The more you think about it the more stupid it gets... The other repercussion of requiring display-out for every TB port is that it killed potential use for servers. Since every port requires the ability to drive a display, you need a display driver for each port. So you can forget about massive SAN arrays using TB. Sure, most servers don't need more than six drives (or I suppose five if you want a display?), but some do! How many SATA connectors are on the average...
Sigh, yes, but still better than eSATA, right? Kinda makes me wish FW 3200 had been given the same "holy shit this is great! How soon can we deliver? NOT SOON ENOUGH!" treatment. Hell, that standard was announced in 2007 and was fully backward compatible with FW 800 (unlike USB3). I mean, did they really not see the potential for networking? FW networks are still around despite gigabit ethernet. They're cheaper and easier to set up. Sure, it would have required longer...
It's the Gamer Soda
BTW Intel's strategy is cunning when you think of it. Chip makers like NVIDIA, AMD, VIA and SiS didn't want to be left behind on the Next Big Thing™, so they rushed ahead and pushed USB3, thinking that Intel had its own chipset in the works. They even accused Intel of hiding the USB3 spec so it would retain this advantage, but Intel rightfully claimed they never owned USB3, and now we see they have no interest in its success. Meanwhile NERDS rushed full bore buying USB3...
TB supports complex network topology like firewire. Meaning, yes, you can have hubs. Your rationale makes no sense to me whatsoever. A USB3 hub, just like USB2, is capped at the speed of the port it's plugged into. In fact, due to the USB protocol (which sucks, always has sucked, always will suck), the protocol overhead increases EXPONENTIALLY, so you'll never see USB3 hubs used in this manner. USB 1.1 was designed for low bandwidth peripherals like mice, keyboards,...
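The hub bottleneck above can be sketched with simple arithmetic. The 5 Gbps figure is the nominal USB3 SuperSpeed signaling rate; the even split is a best-case assumption, since real protocol overhead makes the usable share lower still.

```python
UPSTREAM_GBPS = 5.0  # nominal USB3 SuperSpeed signaling rate

def per_device_share(n_devices, upstream=UPSTREAM_GBPS):
    """Best-case even split of the single upstream link that every
    device behind the hub shares; ignores protocol overhead."""
    return upstream / n_devices

# Four devices on one hub: each gets at most 1.25 Gbps, no matter
# how fast each device could run on a dedicated port.
print(per_device_share(4))  # 1.25
```

The point is that adding hubs never adds bandwidth; it only divides what the one upstream port can carry.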
Six devices? What is this, SCSI? Bad enough firewire only supported 63, which meant it had limited use for LANs. People still used it anyway, but I would have thought 255 to be the minimum. Someone explain to me why six is acceptable. Hell, back in the day I remember hitting the SCSI limit of 7 with drives, and that was just drives! What about monitors, AV equipment, networks, adapters... SIX?!
The optical drive is the first thing I removed from my MacBook Pro 17". I replaced it with another hard drive, but could have easily replaced it with nothing and I would have still considered it an upgrade. Seriously Apple, get rid of the built-in optical drives. I haven't used one of those easily scratched things in YEARS! You may as well put a floppy drive in there.
Yea, I'm sure Daily Show viewers are all up to speed on our undeclared war in Pakistan where we're killing thousands of civilians a year. People who get their "news" from TV are beyond hope. TV news is for entertainment purposes only, and I say this especially in regards to CNN, MSNBC and FOX—not Comedy Central. The Daily Show was funny when the president had an 'R' next to his name. Now it's just lame. Maybe when O'blahblah gets the boot it'll be funny again.