AppleInsider › Forums › Software › Mac OS X › Compressed Memory in OS X 10.9 Mavericks aims to free RAM, extend battery life

Compressed Memory in OS X 10.9 Mavericks aims to free RAM, extend battery life

post #1 of 32
Thread Starter 
Apple has publicly touted a significant new feature in OS X 10.9 Mavericks designed to maximize RAM, storage and CPU use while also boosting power efficiency: Compressed Memory.



More resources, fewer drawbacks



The new Compressed Memory feature sounds absolutely utopian: the operating system applies data compression to the least important content held in memory to free up more available RAM. And it does so with such efficiency that the CPU and storage devices work less, saving battery as well.

The new feature fits particularly well into Apple's design direction for its mobile products like the MacBook Air, which aims to deliver a long battery life via SSD storage (as opposed to a mechanical hard drive), but which also does not offer any aftermarket RAM expansion potential.

Apple's current crop of MacBook Air models now provides a minimum of 4GB of RAM, with an at-purchase $100 option to install 8GB. However, earlier models capable of running OS X Mavericks were sold into 2011 with only a paltry 2GB.

To get the most use out of such limited RAM resources, Apple will be using dynamic Memory Compression to automatically shrink the footprint of content that has been loaded into RAM but is not immediately needed.

OS X has always used Virtual Memory to serve a similar purpose; with Virtual Memory, the OS "pages" less important content to disk (the hard drive or SSD), then loads it back into active memory when needed. However, this requires significant CPU and disk overhead. The closer users come to running out of available memory, the more paging to disk swap storage the system has to do.

It turns out that the operating system can compress memory on the fly even more efficiently, reducing the need for active paging under Virtual Memory. That lets the CPU and drive power down more often, reducing battery consumption.
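As a toy model of that trade-off (illustrative only: zlib stands in for Apple's compressor, and Python dicts stand in for kernel VM structures), the least-recently-used pages can be squeezed into an in-RAM pool instead of being written to swap, then inflated again on their next access:

```python
# Toy compressed-memory pool: the LRU page is compressed in RAM
# instead of being paged out to disk. Illustrative only; zlib stands
# in for Apple's compressor and dicts stand in for kernel VM objects.
import zlib
from collections import OrderedDict

class CompressedMemory:
    def __init__(self, max_resident):
        self.max_resident = max_resident
        self.resident = OrderedDict()  # page id -> raw bytes, LRU-ordered
        self.pool = {}                 # page id -> compressed bytes

    def store(self, pid, data):
        self.resident[pid] = data
        self.resident.move_to_end(pid)            # mark most recently used
        while len(self.resident) > self.max_resident:
            victim, raw = self.resident.popitem(last=False)  # evict LRU
            self.pool[victim] = zlib.compress(raw, 1)        # fast level

    def load(self, pid):
        if pid in self.pool:           # a "page-in" is just a decompression
            self.store(pid, zlib.decompress(self.pool.pop(pid)))
        self.resident.move_to_end(pid)
        return self.resident[pid]

if __name__ == "__main__":
    mem = CompressedMemory(max_resident=2)
    for i in range(4):                 # pages 0 and 1 get evicted...
        mem.store(i, bytes([i]) * 4096)
    print(mem.load(0) == bytes([0]) * 4096)  # ...but come back intact
```

The point of the model is that the "page-in" path is a cheap in-memory decompression rather than a disk read.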

Super speedy, super efficient compression



"The more memory your Mac has at its disposal, the faster it works," Apple notes on its OS X Mavericks advanced technology page.

"But when you have multiple apps running, your Mac uses more memory. With OS X Mavericks, Compressed Memory allows your Mac to free up memory space when you need it most. As your Mac approaches maximum memory capacity, OS X automatically compresses data from inactive apps, making more memory available."

OS X 10.9 Memory Compression


Apple also notes in a Technology Overview brief that "Compressed Memory automatically compresses the least recently used items in memory, compacting them to about half their original size. When these items are needed again, they can be instantly uncompressed."

As a result, there's more free memory available to the system, which "improves total system bandwidth and responsiveness, allowing your Mac to handle large amounts of data more efficiently."

Apple is using WKdm compression, which is so efficient at packing data that the compression and decompression cycle is "faster than reading and writing to disk."
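For a feel of why WK-family compressors are so fast, here is a toy Python sketch of the word-based dictionary idea (a simplification, not Apple's actual WKdm code): each 32-bit word is classified as a zero, an exact hit in a tiny dictionary, a partial hit on its upper 22 bits, or a miss, so short tags plus small payloads replace full words for everything but misses.

```python
# Toy sketch of the word-based dictionary idea behind WK-family
# compressors (not Apple's actual WKdm implementation). Classification
# only; real WKdm also bit-packs the tags and payloads.
import struct

DICT_SIZE = 16          # small direct-mapped dictionary, as in WKdm
HIGH_MASK = 0xFFFFFC00  # a "partial match" compares the upper 22 bits

def classify_page(page):
    """Return per-class word counts for one 4 KB page."""
    dictionary = [0] * DICT_SIZE
    counts = {"zero": 0, "exact": 0, "partial": 0, "miss": 0}
    for (word,) in struct.iter_unpack("<I", page):
        slot = (word >> 10) % DICT_SIZE   # hash on the upper bits
        if word == 0:
            counts["zero"] += 1
        elif word == dictionary[slot]:
            counts["exact"] += 1
        elif (word & HIGH_MASK) == (dictionary[slot] & HIGH_MASK):
            counts["partial"] += 1
        else:
            counts["miss"] += 1
        dictionary[slot] = word           # update dictionary either way
    return counts

def estimated_bits(counts):
    """Rough output size: a 2-bit tag per word, plus payload per class."""
    return (2 * sum(counts.values())          # tags
            + counts["exact"] * 4             # dictionary index
            + counts["partial"] * (4 + 10)    # index + low 10 bits
            + counts["miss"] * 32)            # full word kept verbatim

if __name__ == "__main__":
    counts = classify_page(bytes(4096))       # an all-zero page
    ratio = estimated_bits(counts) / (4096 * 8)
    print(counts["zero"], ratio)  # zero-heavy pages shrink to a fraction
```

Because every operation is a word compare or a table lookup, both directions run at memory speed, which is the property Apple is citing.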

The new old technology of data compression



Memory and storage compression in general isn't new at all. In the 1980s, tools such as DiskDoubler allowed users to compress files on disk on the fly, as opposed to packing files into archives (which dates back to the beginning of computing). RAM Doubler did the same thing for memory, a technique that was essentially replaced by Virtual Memory in the late 90s.

Over time, the benefits of compressing files were largely outweighed by the overhead involved, particularly as storage grew cheaper and more plentiful and new techniques were built into the operating system. But the recent move toward mobile computing and the use of relatively expensive solid state storage (and often idle, but very fast CPU cores) have made compression popular again.

Beginning in OS X 10.6 Snow Leopard, Apple quietly added HFS+ Compression as a feature for saving disk space in system files. The benefits of this were limited by the fact that previous versions of OS X couldn't recognize these compressed files, and so the compression was not applied to files outside of the system.

Windows also uses file compression in NTFS, as does Linux's Btrfs, but in general these incur a performance penalty, making the primary benefit increased disk space at the cost of performance.

Compressing the contents of volatile system memory, rather than disk storage, has remained even more experimental. It is active by default in many virtualization products, such as VMware's ESX, and it has also been studied for use in general computing.

Under Linux, the Compcache project similarly compresses memory via the LZO algorithm, targeting pages that would otherwise be expensively paged to disk. Here too, however, the benefits were not always worth the overhead involved, or the additional complications introduced in operations such as waking from hibernation.

Modern solutions to address new issues



Today, however, the combination of speedy (and often idle) CPU cores, large data sets, more expensive SSD storage, and the efficiency requirements of mobile computing has made memory compression a fitting solution to a variety of issues, provided the compression technologies used fit the kinds of tasks being performed.

Research by Matthew Simpson of Clemson University and by Rajeev Barua and Surupa Biswas of the University of Maryland examined the use of various types of memory compression a decade ago, particularly in regard to embedded systems, where memory is likely to be more scarce.

At issue then were the best compression techniques to use. The research noted that "dictionary-based algorithms tend to have slow compression speeds and fast decompression speeds while statistical algorithms tend to be equally fast during compression and decompression."

OS X 10.9 Memory Compression


The WKdm compression Apple is now using (which uses a hybrid of both dictionary and statistical compression techniques) was found in that research to provide effective compression with among the fastest (and therefore most efficient) compression and decompression speeds.

Supercharged Virtual Memory



Apple's new Compressed Memory feature also works on top of Virtual Memory, making it even more efficient.

"If your Mac needs to swap files on disk [via Virtual Memory]," Apple explains, "compressed objects are stored in full-size segments, which improves read/write efficiency and reduces wear and tear on SSD and flash drives."

Apple states that its compression technique "reduces the size of items in memory that haven't been used recently by more than 50 percent," and that "Compressed Memory is incredibly fast, compressing or decompressing a page of memory in just a few millionths of a second."

Compressed Memory can also take advantage of parallel execution on multiple cores "unlike traditional virtual memory," therefore "achieving lightning-fast performance for both reclaiming unused memory and accessing seldom-used objects in memory."
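A hedged sketch of that parallelism, with zlib standing in for WKdm and the pool sizing purely illustrative (zlib releases the GIL, so page compressions genuinely overlap across cores):

```python
# Sketch of per-page compression fanned out across cores. zlib stands
# in for WKdm; pool sizing and page contents are illustrative only.
import zlib
from concurrent.futures import ThreadPoolExecutor

PAGE_SIZE = 4096

def compress_page(page):
    return zlib.compress(page, 1)      # level 1: favor speed over ratio

def compress_pages(pages, workers=4):
    # zlib releases the GIL, so threads genuinely overlap on real cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_page, pages))

if __name__ == "__main__":
    # Idle app memory is often zero-heavy, so such pages shrink sharply.
    pages = [bytes(PAGE_SIZE) for _ in range(64)]
    packed = compress_pages(pages)
    assert all(zlib.decompress(c) == p for c, p in zip(packed, pages))
    print(sum(len(c) for c in packed) / (64 * PAGE_SIZE))  # well under 1.0
```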

OS X 10.9 Memory Compression


By improving the way Virtual Memory works, the system is less often forced to "waste time continually transferring data back and forth between memory and storage." As a result, Apple says Memory Compression improves both general responsiveness under load and wake-from-standby times, as depicted above.

The (4) footnote Apple references in the graphic notes that "testing conducted by Apple in May 2013 and June 2013 using production 1.8GHz Intel Core i5-based 13-inch MacBook Air systems with 256GB of flash storage, 4GB of RAM, and prerelease versions of OS X v10.9 and OS X v10.8.4. Performance will vary based on system configuration, application workload, and other factors."
post #2 of 32
That's a pretty neat feature indeed. I'm still hitting a wall on my MBP 17" late 2010 with 8GB RAM (maxed out). But it helps to have an SSD. Too bad the purge command no longer works with Mavericks.
post #3 of 32
This feature just reminded me of Quarterdeck's QEMM; it had a feature called MagnaRAM that did the exact same thing: compressed memory to avoid physical paging. This idea is not new, but with the amount of waste all those XML structures have while in memory, the gains will be worth the CPU effort.

Patents for these might be 20 or 30 years old.
post #4 of 32
Quote:
Originally Posted by superjunaid View Post

That's a pretty neat feature indeed, I'm still hitting a wall on my MBP 17" late 2010 with 8GB Ram (maxed out). But it helps to have SSD, too bad Purge command no longer works with Mavericks.

 

Read and see if you qualify with 10.7.5+ to upgrade to 16GB RAM with the updated firmware: http://www.everymac.com/systems/apple/macbook_pro/macbook-pro-unibody-faq/macbook-pro-13-15-17-mid-2009-how-to-upgrade-ram-memory.html

 

The details are half-way down the article.

post #5 of 32

Looks like this is just going to compress suspended applications; note that other OS changes now make apps suspend by default when not in the foreground. When an app has been suspended for a while, rather than paging, the system will compress the memory that is effectively idle, the idea being that compressed memory is faster to decompress than to page out and page back in from disk, even an SSD.

 

Given the randomness of memory contents, the 50% compression figure must have some basis in testing a broad range of apps, but some apps' memory will likely be nearly incompressible, and others' more compressible. That's the nature of compression.

 

Nice concept.  Will be interesting to see real life impact.

post #6 of 32

I can't pretend to understand the details of this so I have a question for those more informed than me.

 

If this reduces the number of page in/out to the disk, is it likely to impact the life of the hard disk or SSD?

 

If this is an idiotic question I apologise in advance for my ignorance.

post #7 of 32
Quote:
Originally Posted by festerfeet View Post

If this reduces the number of page in/out to the disk, is it likely to impact the life of the hard disk or SSD?

That's not an idiotic question, and I would guess that the answer is yes. But I doubt that it's significant.

post #8 of 32
Quote:
Originally Posted by festerfeet View Post

I can't pretend to understand the details of this so I have a question for those more informed than me.

 

If this reduces the number of page in/out to the disk, is it likely to impact the life of the hard disk or SSD?

 

If this is an idiotic question I apologise in advance for my ignorance.

 

Yes, in a couple ways.  On a HD, the less you're reading/writing, the less likely it is that a jolt could cause damage.  On an SSD, there is a finite number of writes before the cells no longer hold a charge, and this could significantly help with that if you paged a lot previously.

 

Some pages of memory are going to contain almost incompressible image or sound data, but others are going to be a lot of zeroes with a few small values in them. I suspect 50% is probably a pretty reasonable average.

post #9 of 32

Anyone find any documentation anywhere suggesting or stating if this memory compression is, will be, or already has been applied to iOS?

post #10 of 32
Quote:
Originally Posted by superjunaid View Post

too bad Purge command no longer works with Mavericks.

I wonder why they'd do that. Compressing inactive memory is fine, but sometimes it just needs to be flushed. SSDs are so fast now, I don't think having lots of things cached in inactive memory really makes a big difference. At the very least, it should have a timeout after which it gets flushed, but some apps do odd things. Quicklook previews sometimes generate huge amounts of inactive memory very quickly. The Finder can act up at times too; my favourite Finder issue these days is when I have the trash open with a file selected showing the preview, hit empty trash, and it says 'sorry, file is in use'; it's the Finder using it. You'd think it would know to stop using it.

I like the idea of Safari having separate processes per tab; once the tab gets closed down, the memory assigned to that process will get flushed so that should mean better memory usage over a long period of time and the days of one tab hanging the whole browser will be at an end. They did fix that auto-refresh problem with a warning dialog, which was nice but single process tabs should eliminate that altogether. It's good to see they've been working on these memory issues and time will tell if it's done right. I love the idea of slowing background processes. That means background web pages with ads won't be making everything else stutter.
post #11 of 32
Quote:
Originally Posted by chabig View Post

That's not an idiotic question, and I would guess that the answer is yes. But I doubt that it's significant.

Thank you for the info

post #12 of 32
Quote:
Originally Posted by Booga View Post

 

Yes, in a couple ways.  On a HD, the less you're reading/writing, the less likely it is that a jolt could cause damage.  On an SSD, there is a finite number of writes before the cells no longer hold a charge, and this could significantly help with that if you paged a lot previously.

 

Some pages of memory are going to contain almost incompressible image or sound data, but others are going to be a lot of zeroes with a few small values in them. I suspect 50% is probably a pretty reasonable average.

and another thank you.

post #13 of 32
Of all the things announced in the keynote, this is the only feature I don't like 100%, unless you can disable/enable it when you really need it. I agree it can be useful when I try to do stuff that needs much more RAM than I have, but I'm the kind of user who sizes the stuff I do according to the resources I have. I usually move close to the limits of my RAM, but without needing disk swapping; I only hit disk swapping on very rare occasions. So I don't like the OS deciding for me here; I don't want the CPU working on background compression unless I explicitly enable it.
post #14 of 32
Quote:
Originally Posted by ecs View Post

Of all the things announced in the keynote, this is the only feature I don't like 100%, unless you can disable/enable it when you really need it. I agree it can be useful when I try to do stuff which would need much more RAM than I have, but I'm the kind of user who sizes the stuff I do accordingly to the resources I have. I usually move close to the limits of my RAM, but without needing disk swapping, I only hit disk swapping on very rare occasions. So, I don't like the OS deciding for me here, I don't want to have the CPU working on background compression unless I explicitly enable it.

 

That doesn't make sense at all.

 

I think you're confusing virtual memory with inactive memory.  Virtual memory uses disk/SSD for RAM when there isn't enough real memory.  Inactive memory is memory that was recently used and could be used again, but is not currently being used.  When it's needed again, it switches from inactive to active, as opposed to being read in from disk or regenerated.  If you run out of memory and need more for active memory, the least recently used inactive memory is instantly freed up for active memory.  This is why they say, "free RAM is wasted RAM".

 

The whole point of memory compression is to expand the amount of inactive ram by compressing the ram that is inactive, thus allowing more of it.  If you're "managing the ram you're using" such that you never have inactive ram, then memory compression isn't going to be a factor for you.  Then again, if you don't have inactive ram, you're likely wasting ram.

 

In other words, if you never disk swap, memory compression will speed things up because you're increasing the amount of inactive memory that is available.  On the other hand, if you are disk swapping you're still speeding things up because the memory compression is still applied.

 

The only conceivable situation where memory compression wouldn't speed things up, even a little bit, would be if you somehow managed to never re-use inactive memory at all.  However, even that would be dependent on Apple poorly implementing memory compression by not isolating it to proper CPU times, which is part of what Timer Coalescing will handle.

post #15 of 32
Quote:
Originally Posted by ecs View Post

Of all the things announced in the keynote, this is the only feature I don't like 100%, unless you can disable/enable it when you really need it. I agree it can be useful when I try to do stuff which would need much more RAM than I have, but I'm the kind of user who sizes the stuff I do accordingly to the resources I have. I usually move close to the limits of my RAM, but without needing disk swapping, I only hit disk swapping on very rare occasions. So, I don't like the OS deciding for me here, I don't want to have the CPU working on background compression unless I explicitly enable it.

What makes you think it will do that if it doesn't need to? In situations where Mavericks performs compression, Mountain Lion would have swapped to disk. Disabling it would cause you to spend more power and time performing disk writes, and again later if the memory becomes active again.

"Apple should pull the plug on the iPhone."

John C. Dvorak, 2007
post #16 of 32

I hope these technologies are patented by Apple, if they invented them.

post #17 of 32

The pre-Mavericks system will page out inactive memory regardless of the load on the system; it's based on the level of inactivity of that page of memory. Otherwise the system would quickly reach the point where it has to page in and page out simultaneously, which is where most beachballing and slowdowns happen.

 

You can see this in top in Terminal (at the top of the output): just launch Terminal.app and type top. Except immediately after login (and not always then), the pageouts figure is non-zero. The main figure is the count since the last reboot; the figure in parentheses is the delta.

 

The idea is not to wait until there needs to be a page-in and a page-out at the same time (when the system is out of RAM), but to mark pages as inactive and page them to disk long before that, so as not to get into that situation. The user can't really control this. It also explains the slow wakeup, and the fact that they say it is better now: if memory were only paged under pressure, it wouldn't get paged out during a sleep; if it is paged out because of inactivity, it would.

 

So the new algorithms will probably compress inactive memory to begin with, and then page it out. Both would happen after periods of inactivity; it would just be a longer wait until the system decides to page out the (previously compressed) memory. If that page is made active, it is either uncompressed, or read from disk and then uncompressed if it was paged out. Only the latter is slower, and writing the smaller compressed page to disk is faster, too.

 

Also, get away from the idea of it being an app's memory. Most of what any application touches in memory comes from the system: AppKit, Foundation, CoreFoundation, etc. Some of that is always "hot," but a lot of inactive memory is shared memory, and it is paged out when inactivity reaches a certain level.


Edited by asdasd - 6/13/13 at 2:46am
I wanted dsadsa bit it was taken.
post #18 of 32
Quote:
I like the idea of Safari having separate processes per tab; once the tab gets closed down, the memory assigned to that process will get flushed so that should mean better memory usage over a long period of time and the days of one tab hanging the whole browser will be at an end. They did fix that auto-refresh problem with a warning dialog, which was nice but single process tabs should eliminate that altogether. It's good to see they've been working on these memory issues and time will tell if it's done right. I love the idea of slowing background processes. That means background web pages with ads won't be making everything else stutter.

I suppose from a techy pov that's great. Now perhaps Apple could address how I can get Safari in my Mountain Lion installs to run as quick as my Snow Leopard installs. 

post #19 of 32
Quote:
Originally Posted by mdriftmeyer View Post

Read and see if you qualify with 10.7.5+ to upgrade to 16GB RAM with the updated firmware: http://www.everymac.com/systems/apple/macbook_pro/macbook-pro-unibody-faq/macbook-pro-13-15-17-mid-2009-how-to-upgrade-ram-memory.html

The details are half-way down the article.

That's a great resource to have thanks for posting. I already went through all of the excitement and disappointment with my 6,2 mid 2010 15" MBP i7 2.66. I tried and failed. It doesn't support 16 GIGs of RAM, in fact few do out of the possible list of suspects. Mine boots fine in safe mode but that's the extent of it. It seems to be related to the graphics set up. I even tried manually controlling the graphics switching to no avail. I keep an eye on the blogs referencing this topic to see if one day there may be a solution for me, but none yet.
From Apple ][ - to new Mac Pro I've used them all.
Long on AAPL so biased
Google Motto "You're not the customer. You're the product."
post #20 of 32
Quote:
Originally Posted by Eye Forget View Post

I suppose from a techy pov that's great. Now perhaps Apple could address how I can get Safari in my Mountain Lion installs to run as quick as my Snow Leopard installs. 

They have fixed that, in Mavericks. It's faster (the whole OS seems like lightning, in fact). That's the way it works. They needed big changes to fix it, and they did.

I wanted dsadsa bit it was taken.
post #21 of 32
Quote:
Originally Posted by macslut View Post

That doesn't make sense at all.

I think you're confusing virtual memory with inactive memory.  Virtual memory uses disk/SSD for RAM when there isn't enough real memory.  Inactive memory is memory that was recently used and could be used again, but is not currently being used.  When it's needed again, it switches from inactive to active, as opposed to being read in from disk or regenerated.  If you run out of memory and need more for active memory, the least recently used inactive memory is instantly freed up for active memory.  This is why they say, "free RAM is wasted RAM".

The whole point of memory compression is to expand the amount of inactive ram by compressing the ram that is inactive, thus allowing more of it.  If you're "managing the ram you're using" such that you never have inactive ram, then memory compression isn't going to be a factor for you.  Then again, if you don't have inactive ram, you're likely wasting ram.

In other words, if you never disk swap, memory compression will speed things up because you're increasing the amount of inactive memory that is available.  On the other hand, if you are disk swapping you're still speeding things up because the memory compression is still applied.

The only conceivable situation where memory compression wouldn't speed things up, even a little bit, would be if you somehow managed to never re-use inactive memory at all.  However, even that would be dependent on Apple poorly implementing memory compression by not isolating it to proper CPU times, which is part of what Timer Coalescing will handle.

Excellent post. It seems a lot of people confuse inactive with virtual memory and misunderstand this topic totally. Witness those silly utilities for purging memory sold on the Mac App Store. Reading the comments there, it seems tons of users manually clear out all the inactive memory thinking they are 'speeding up their Macs'.
From Apple ][ - to new Mac Pro I've used them all.
Long on AAPL so biased
Google Motto "You're not the customer. You're the product."
post #22 of 32
this is nonsense.

I took my RAM out, compressed it with a large hammer, and can't see any increase in performance.

(nearly Friday!)
post #23 of 32
"I like the idea of Safari having separate processes per tab; once the tab gets closed down, the memory assigned to that process will get flushed so that should mean better memory usage over a long period of time and the days of one tab hanging the whole browser will be at an end. They did fix that auto-refresh problem with a warning dialog, which was nice but single process tabs should eliminate that altogether. It's good to see they've been working on these memory issues and time will tell if it's done right. I love the idea of slowing background processes. That means background web pages with ads won't be making everything else stutter."

Sounds like a good idea. Once I open my Saf' tabs on my top end iMac the window re-size is veryyyyy choooooopppeeeee.

And sometimes Saf' performance degrades.

Perhaps this is part of the solution for retuning and improving the responsiveness and durability of the Saf' viewing experience.

These changes should turbo boost the finder experience.

Nice to see some 'hard core' features being added to Mac Os 'Mavericks.'

An impressive release. Can't wait to upgrade.

Amen.

Lemon Bon Bon.

You know, for a company that specializes in the video-graphics market, you'd think that they would offer top-of-the-line GPUs...

 

WITH THE NEW MAC PRO THEY FINALLY DID!  (But you bend over for it.)

post #24 of 32
Quote:
Originally Posted by seanie248 View Post

this is nonsense.

I took my RAM out, compressed it with a large hammer, and can't see any increase in performance.

(nearly friday !)

 

 Nice one!

 

Your = the possessive of you, as in, "Your name is Tom, right?" or "What is your name?"

 

You're = a contraction of YOU + ARE as in, "You are right" --> "You're right."

 

 

post #25 of 32
Quote:
Originally Posted by Booga View Post

Yes, in a couple ways.  On a HD, the less you're reading/writing, the less likely it is that a jolt could cause damage.  On an SSD, there is a finite number of writes before the cells no longer hold a charge, and this could significantly help with that if you paged a lot previously.

Both statements are true - but I doubt if it's meaningful for most users.

For most people, hard disk failures of the type you mention are quite rare. Either they fail due to factory defects or the shock is so severe that they're going to fail regardless of whether they're reading or writing. Of course, for the one person in a million whose data is saved, it's important.

For SSDs, modern SSDs have lives long enough that few people are going to exceed the lifetime, so again, it's more of a theoretical advantage than a real one.

The energy savings benefits are probably much more important.
"I'm way over my head when it comes to technical issues like this"
Gatorguy 5/31/13
post #26 of 32
Quote:
Originally Posted by Booga View Post

Some pages of memory are going to contain almost incompressible image or sound data, but others are going to be a lot of zeroes with a few small values in them. I suspect 50% is probably a pretty reasonable average.

Most resources stored compressed on disk are actually decompressed when read by the application that uses them, and then kept in memory that way. For instance, graphics like JPEGs become Bitmaps. This allows a JPEG to be rendered quickly each time it needs to be drawn without having to be decompressed more than once. There's a big performance advantage, but at the cost of some RAM. This kind of thing happens all the time with graphics resources applications use to not only render working media but their own user interfaces. That application data is exactly what gets marked as inactive memory (things that aren't executable code) which is why you will generally see large compression ratios.
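That effect is easy to demonstrate; in this sketch zlib stands in for WKdm, and the page contents are synthetic stand-ins for a cleared bitmap region and for already-compressed media:

```python
# Illustrative comparison: a zero-heavy, decoded-bitmap-style page
# compresses dramatically, while already-compressed (JPEG-like,
# effectively random) data does not. zlib stands in for WKdm here.
import os
import zlib

PAGE_SIZE = 4096

bitmap_like = bytes(PAGE_SIZE)     # cleared bitmap region: all zeros
jpeg_like = os.urandom(PAGE_SIZE)  # compressed media looks random

for name, page in [("bitmap-like", bitmap_like), ("jpeg-like", jpeg_like)]:
    ratio = len(zlib.compress(page)) / PAGE_SIZE
    print(f"{name}: {ratio:.3f}")  # zeros: tiny ratio; random: above 1.0
```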
post #27 of 32
Compressed memory sounds great on paper, but I am skeptical of the feature working well in the real world. Automatic reference counting introduced in Lion actually increased memory consumption (since cycles are often common and they are not deallocated by ARC).

In current beta, compressed memory barely works and Mavericks will run out of memory (paging out) much more quickly than Mountain Lion. (Yes, I realize Mavericks is still in beta and things should only improve as it nears GM.)

Apple will hopefully prove me wrong and Mavericks will supplant Snow Leopard as the fastest and most efficient OS X to date.
post #28 of 32
Quote:
Originally Posted by filburt View Post

Compressed memory sounds great on paper, but I am skeptical of the feature working well in the real world. Automatic reference counting introduced in Lion actually increased memory consumption (since cycles are often common and they are not deallocated by ARC).

In current beta, compressed memory barely works and Mavericks will run out of memory (paging out) much more quickly than Mountain Lion. (Yes, I realize Mavericks is still in beta and things should only improve as it nears GM.)

Apple will hopefully prove me wrong and Mavericks will supplant Snow Leopard as the fastest and most efficient OS X to date.

The new compiler has better warnings on cycles apparently.

And this feature doesn't depend on developers.
I wanted dsadsa bit it was taken.
post #29 of 32

There has been at least 10 years of research on the impacts of compressing memory at different levels of the CPU architecture.  I remember working (10 years ago) on techniques to compress data in the cache.  The results at the time were only slightly better than net-zero, but memory is comparatively more "expensive" now than it was then.

 

Compressing data in RAM is somewhat easier to do, in that it can be controlled via software.  There is a survey of compression techniques in the writeup, but if Apple is really smart, they are using a different approach that can be computed on the GPU.  Yes, there are ways to do that :).

Cat: the other white meat
post #30 of 32
Quote:
Originally Posted by festerfeet View Post

I can't pretend to understand the details of this so I have a question for those more informed than me.

 

If this reduces the number of page in/out to the disk, is it likely to impact the life of the hard disk or SSD?

 

If this is an idiotic question I apologise in advance for my ignorance.

 

The short answer to all these SSD wear and tear questions is "don't worry about it". The long answer is "don't worry about it".

 

If your SSD was bought within the last half decade or so, the durability of its memory cells (in part due to better manufacturing processes, in part due to advancements in controller logic) is rated to outlive you, even if you write to it at full speed, 24/7. Of course, other things influence the longevity of a product, so it most probably won't outlive you, but it won't be because you wore out the memory cells.

 

Stories that once were true of a technology have a tendency to stick around for frustratingly long after they've stopped being true. The brittleness of the SLC and MLC cells in SSDs is one such story. It is no longer true. You can no longer wear out the cells on your SSD by writing to it. Period.

 

Don't worry — be happy :)

post #31 of 32
Quote:
Originally Posted by superjunaid View Post

That's a pretty neat feature indeed, I'm still hitting a wall on my MBP 17" late 2010 with 8GB Ram (maxed out). But it helps to have SSD, too bad Purge command no longer works with Mavericks.

The purge command still works; it's just no longer an Admin-permitted privilege, only root.

Just write:

 sudo purge
Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius and a lot of courage to move in the opposite direction.
post #32 of 32

I own a mid 2011 27" iMac i7 3.24ghz, with 8gigs of RAM.

 

Currently I have only Safari opened (and activity monitor) and I am using 7.99GB with 2.5MB swap used. And my memory pressure is maxed.

 

Everything is slow and buggy, and Launchpad is feeling this RAM use the most.  Launchpad is unbelievably buggy.  It lags when you enter; it lags when you exit.  Swiping between pages in Launchpad is met with pauses and more stutter.

 

I never had these issues in mountain lion.  If Mavericks is supposed to utilize RAM better, how could I possibly be experiencing this level of lag?  My specs are nothing to sneeze at, and my RAM configuration is more than adequate.  So why the lag?

 

I initially upgraded to Mavericks from Mountain Lion.  After experiencing this lag I decided to wipe my HDD and do a clean install.  This did not resolve my lag issue.

 

No one can convince me this is normal and that I should simply upgrade my RAM.  If I were experiencing this lag only after I had a number of apps going, I might concede.  But not with just Safari running.

 

After searching the web, I have come to understand that many people are experiencing issues similar to my own.

 

Where and how is this RAM compression making things better?  
