
Posts by Elijahg

The chips in the HDDs and monitors are client chips, relatively dumb compared to the host chips in the Mac. If the iPhone were a client device, it'd have to turn the PCIe comms from TB into something the ARM CPU could understand, which'd waste a lot of CPU cycles. Right now the iPhone couldn't be a TB host; the TB controller chip alone is about half the width of the phone itself. But in any case, the ARM CPU doesn't have PCIe by any stretch of the imagination, so it just wouldn't...
Looks very 1990s to me. And as others have said, being "square" isn't exactly a wonderful image to paint of yourself...
It's partly to do with SSD controllers utilising RAID 0 across the many NAND chips in the SSDs, so reads/writes occur on the (individually slow) NAND chips simultaneously. iOS devices have a maximum of two NAND chips (16GB devices have one chip), so the RAID 0 only doubles the speed. I think it's possible the hardware encryption maxes out at ~20 MB/sec too. So in other words, the iPhone 6/5/4SL wouldn't gain any advantage from USB 3. Apple are unlikely to use USB 3 just for...
Haha, sounds about right. Or maybe OS XI: Moggy.
Oh, so all those people who talk about Windows ballooning in size between XP and Vista aren't talking about bloat? Sorry, I didn't realise. Well if that's not a perfect definition of bloat, I don't know what is. Wait, yes I do: Software bloat is a process whereby successive versions of a computer program include an increasing proportion of unnecessary features that are not used by end users, or generally use more system resources than necessary, while offering little or no...
I'm not sure what you see as my arbitrary rules; Apple's the one creating arbitrary rules on which computers can and can't run ML, not me. They're creating unnecessary restrictions on which OSs can run Xcode too, so yes, people with Lion will magically stop being able to write apps for the App Stores at some point when Apple decides to release a Lion-incompatible Xcode update. According to "du -h /mach_kernel", the Lion kernel is 15 MB, with 32 bit support. Not exactly...
I think you'll find many people with a $5000 Mac Pro 1,1 are on Lion. Even the oldest Mac Pros are fast, faster than some of Apple's latest machines. My Mac Pro 1,1 runs Rage at 40fps no problem, but it needs Lion. What happens when apps are updated to require ML, then autoupdate and break, like Apple's botched iPhoto update a few weeks back? As I said before and you ignored, why could they support machines for 7 years in the classic era, but not now on a modern OS?
You obviously care enough about it to "whine" in the other direction. What about devs who want to write iOS apps? If Apple's previous actions are anything to go by, the version of Xcode released after ML won't support Lion, so devs're going to need a new Mac to submit apps to Apple. Another artificial limitation to try to force people to fork out more for new hardware.
I've seen quite a few people saying how well it runs on their old machines, actually. People've been running it on P4s and it's pretty speedy. Of course Apple's so strapped for cash they can't hire a couple of devs to fix/compile/test the 64 bit kernel on 32-bit EFI machines. The kernel doesn't really change much between releases anyway, so there wouldn't be that much to test. It's mainly the frameworks above the kernel that are modified. Why was Apple advertising the Mac...
Absolutely nothing to do with that at all. Upgraded Mac Pros don't work even when they have GPUs that support OpenGL 4. It's an entirely artificial limitation, as per usual with Apple, trying to force people to upgrade. Only thing is, there's so little in ML that people really need, they're not gonna plonk down $2000+ to upgrade a perfectly usable (and fast) 2006/7 Mac Pro to some "new" Mac Pro that's not been updated in years, which may well end up being unsupported three years...