
Posts by randian

Sandy Bridge has hardware video encoding, but it remains to be seen how good the quality is. As anybody who has seen a bad DVD or Blu-ray can attest, having the right encoder can make all the difference between a reference-quality presentation and something with the horrible black crush and the life smeared out of it. I wonder when Apple will release a version of FileVault that uses AES-NI instructions. Hardware en/decryption, if it really is fast, would go a long way...
Those Twitter accounts seem active but naturally have no recent mentions of the apps, and unfortunately there's no way I know of to search for keywords in a Twitter feed or to regress the feed back to a certain date. Oh well.
I use ForkLift myself. Why hasn't anybody else mentioned it?
Both have been stuck at their current versions for at least a year.

iWallet (last version 4.2, Dream Apps): the product web page is still there, but iWallet no longer appears on their main page (pointed to by the Products link), and the developers don't respond (to me, anyway) to questions about it.

Multiplex (last version 1.1, Indy Hall Labs): the product page is still there, but the support forum has been 100% spam for months now, and they don't respond to queries either.
No goalposts have been moved. Parallel computation is but a subset of multithreading. Efficiently doing large parallel computations on disjoint data sets is easy; saving your results efficiently, without locking contention killing you, is hard. That's why I'm talking about I/O and IPC: they're the bottlenecks in OSX, and Grand Central doesn't address them.
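The distinction can be sketched in Python (purely illustrative; the post is about kernel-level locking, not application code, and the chunked workload here is made up):

```python
import threading

# The per-chunk computation is embarrassingly parallel: each worker
# touches only its own disjoint slice of the data, so there is no
# contention. But every worker funnels its result through one shared
# lock, so the "save the results" step serializes.
results = []
results_lock = threading.Lock()

def worker(chunk):
    partial = sum(x * x for x in chunk)  # disjoint computation: no shared state
    with results_lock:                   # single shared lock: the bottleneck
        results.append(partial)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, 1000, 250)]
threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))  # prints 332833500, the sum of x^2 for x in 0..999
```

With four workers and one tiny append the lock barely matters, but scale the result-writing up (disk, IPC) and that serialized section dominates, which is the point of the post.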
Grand Central is a parallel computation framework. Please, do explain how it fixes substandard (from a parallelism point of view) locking schemes in OSX's I/O, IPC, Process, and other subsystems. It doesn't do you much good to efficiently spawn a hundred GPU cores running vector computations if you can't get all that data back to disk. Apple never did have a big iron mentality, if they did they wouldn't have used Mach, they'd have started with any of the several variants...
One of the reasons Apple doesn't go with huge numbers of cores is that OSX itself is (or at least was a couple of years ago) a dog at multithreading. It had serious bottlenecks above 4 cores or so that made running larger core counts pointless. I don't know how, or whether, Apple has addressed parallelism in OSX, but it has always been a problem with Mach.
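A hypothetical way to probe that kind of scaling claim (a sketch, not a rigorous benchmark; the function name, worker counts, and chunk sizes are all made up for illustration):

```python
import os
import tempfile
import threading
import time

# Split a fixed amount of synced disk I/O across more and more workers.
# If the kernel's I/O path scales, wall-clock time should drop as workers
# are added; if it serializes internally, adding workers won't help.
def write_chunks(path, n_chunks, chunk):
    with open(path, "wb") as f:
        for _ in range(n_chunks):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force each write through the kernel

def run(workers, total_chunks=64, chunk=b"x" * 4096):
    tmpdir = tempfile.mkdtemp()
    per_worker = total_chunks // workers
    threads = [threading.Thread(target=write_chunks,
                                args=(os.path.join(tmpdir, f"{i}.bin"),
                                      per_worker, chunk))
               for i in range(workers)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - t0

for w in (1, 2, 4):
    print(f"{w} workers: {run(w):.3f}s")
```

Threads are used rather than processes because the work is I/O-bound; the interesting number is how (or whether) the time shrinks from 1 to 4 workers on a given OS.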
I have a 21.5" iMac that I never let sleep, because of the long-standing belief that desktop hard drives are most reliable when spinning 24/7, and that the spin-up/spin-down cycles caused by sleeping or shutdowns stress them and cause them to fail. Is that still true for modern drives? Are the so-called "green" drives, seemingly designed for frequent stop/start cycles what with their automatic sleep modes, hardened against this stress? By contrast, laptop drives are expected to be...
Architecture was always as important as clock speed. The RISC boys (MIPS, SPARC, etc.) were regularly kicking Intel's butt until Intel gained the critical mass to afford really pushing its silicon process. Then it was game over for RISC: their architectural advantage was rendered irrelevant because their clock speeds lagged so severely. Spending a billion dollars on every CPU revision makes no sense when you're only pushing 50-100k units a year, compared to the tens of millions being sold...
Even if they did, does FileVault take advantage of hardware acceleration?