Compressed Memory in OS X 10.9 Mavericks aims to free RAM, extend battery life

Posted in macOS, edited January 2014
Apple has publicly touted a significant new feature in OS X 10.9 Mavericks designed to make the best use of RAM, storage and CPU while also boosting power efficiency: Compressed Memory.

Mavericks

More resources, fewer drawbacks

The new Compressed Memory feature sounds absolutely utopian: the operating system applies data compression to the least important content held in memory to free up more available RAM. And it does so with such efficiency that the CPU and storage devices work less, saving battery as well.

The new feature fits particularly well into Apple's design direction for its mobile products like the MacBook Air, which aims to deliver a long battery life via SSD storage (as opposed to a mechanical hard drive), but which also does not offer any aftermarket RAM expansion potential.

Apple's current crop of MacBook Air models now provide a minimum of 4GB RAM, with an at-purchase, $100 option to install 8GB. However, earlier models capable of running OS X Mavericks were sold into 2011 with only a paltry 2GB.

To get the most use out of such limited RAM resources, Apple will be using dynamic Memory Compression to automatically shrink the footprint of content that has been loaded into RAM but is not immediately needed.

OS X has always used Virtual Memory to serve a similar purpose; with Virtual Memory, the OS "pages" less important content to disk (the hard drive or SSD), then loads it back into active memory when needed. However, this requires significant CPU and disk overhead. The closer users come to running out of available memory, the more paging to disk swap storage the system has to do.

It turns out that the operating system can compress memory on the fly even more efficiently, reducing the need for active paging under Virtual Memory. That lets the CPU and drive power down more often, saving battery consumption.
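The tradeoff described above can be sketched with a toy measurement. Here zlib stands in for the kernel's compressor and an ordinary temp file stands in for swap; none of this is Apple's actual code, and absolute numbers vary by machine, but the in-memory cycle avoids storage I/O entirely, which is the point.

```python
# Toy comparison: cost of compressing a 4KB page in RAM vs. paging it
# out to disk. zlib is a stand-in for the kernel's compressor; a temp
# file is a stand-in for the swap file.
import os
import tempfile
import time
import zlib

PAGE_SIZE = 4096
page = (b"typical heap data with lots of repetition " * 200)[:PAGE_SIZE]

# "Compressed Memory" path: compress now, decompress on next access.
t0 = time.perf_counter()
packed = zlib.compress(page, 1)          # level 1 favors speed over ratio
restored = zlib.decompress(packed)
compress_cost = time.perf_counter() - t0

# "Virtual Memory" path: write the page out, forcing it to reach the disk.
with tempfile.NamedTemporaryFile(delete=False) as swap:
    t0 = time.perf_counter()
    swap.write(page)
    swap.flush()
    os.fsync(swap.fileno())              # make sure it actually hits storage
    pageout_cost = time.perf_counter() - t0
os.unlink(swap.name)

assert restored == page
print(f"compress+decompress: {compress_cost * 1e6:.0f} us; "
      f"fsync'd page-out: {pageout_cost * 1e6:.0f} us; "
      f"page packed to {len(packed)} bytes")
```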

Super speedy, super efficient compression

"The more memory your Mac has at its disposal, the faster it works," Apple notes on its OS X Mavericks advanced technology page.

"But when you have multiple apps running, your Mac uses more memory. With OS X Mavericks, Compressed Memory allows your Mac to free up memory space when you need it most. As your Mac approaches maximum memory capacity, OS X automatically compresses data from inactive apps, making more memory available."

OS X 10.9 Memory Compression


Apple also notes in a Technology Overview brief that "Compressed Memory automatically compresses the least recently used items in memory, compacting them to about half their original size. When these items are needed again, they can be instantly uncompressed."

As a result, there's more free memory available to the system, which "improves total system bandwidth and responsiveness, allowing your Mac to handle large amounts of data more efficiently."

Apple is using WKdm compression, which is so efficient at packing data that the compression and decompression cycle is "faster than reading and writing to disk."
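Apple has not published its Mavericks WKdm source, but the WK family of algorithms is documented in the research literature: each 32-bit word of a page is tagged as a zero word, a hit in a tiny recency dictionary, or a miss whose full value must be stored. A heavily simplified sketch of that idea (not the real bit-packed format, which also handles partial matches) looks like this:

```python
# Simplified WK-style word compression (illustrative only, NOT Apple's
# WKdm code). Real WKdm packs tags, indices and partial matches into
# bitfields; this toy keeps them as Python lists for clarity.
import struct

DICT_SIZE = 16

def wk_compress(page: bytes):
    """Tag each 32-bit word as zero, dictionary hit, or miss."""
    words = struct.unpack(f"<{len(page) // 4}I", page)
    dictionary = [0] * DICT_SIZE          # direct-mapped recent words
    tags, misses = [], []
    for w in words:
        slot = (w >> 10) % DICT_SIZE      # crude hash on the high bits
        if w == 0:
            tags.append(("Z", 0))         # zero word: tag only
        elif dictionary[slot] == w:
            tags.append(("H", slot))      # hit: tag + small slot index
        else:
            tags.append(("M", len(misses)))  # miss: full word stored
            misses.append(w)
            dictionary[slot] = w
    return tags, misses

def wk_decompress(tags, misses):
    """Replay the tag stream, rebuilding the same dictionary state."""
    dictionary = [0] * DICT_SIZE
    words = []
    for kind, val in tags:
        if kind == "Z":
            words.append(0)
        elif kind == "H":
            words.append(dictionary[val])
        else:
            w = misses[val]
            dictionary[(w >> 10) % DICT_SIZE] = w
            words.append(w)
    return struct.pack(f"<{len(words)}I", *words)

# Heap-like page: mostly zeros plus a few repeated pointer-ish values.
page = struct.pack("<4I", 0, 0x1000_2000, 0, 0x1000_2000) * 256
tags, misses = wk_compress(page)
assert wk_decompress(tags, misses) == page

# Estimated packed size: 2 tag bits per word, 4 index bits per hit,
# 32 bits per missed word.
bits = 2 * len(tags) + 4 * sum(k == "H" for k, _ in tags) + 32 * len(misses)
print(f"{len(page)} bytes -> ~{bits // 8} bytes")
```

Zero-heavy, pointer-heavy pages like this compress far below Apple's quoted 50 percent; pages of already-compressed media would fare much worse.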

The new old technology of data compression

Memory and storage compression in general isn't new at all. In the 1980s, tools such as DiskDoubler allowed users to compress files on disk on the fly, as opposed to packing files into archives (which dates back to the beginning of computing). RAM Doubler did the same thing for memory, a technique that was essentially replaced by Virtual Memory in the late 90s.

Over time, the benefits of compressing files were largely outweighed by the overhead involved, particularly as storage grew cheaper and more plentiful and new techniques were built into the operating system. But the recent move toward mobile computing and the use of relatively expensive solid state storage (and often idle, but very fast CPU cores) have made compression popular again.

Beginning in OS X 10.6 Snow Leopard, Apple quietly added HFS+ Compression as a feature for saving disk space in system files. The benefits of this were limited by the fact that previous versions of OS X couldn't recognize these compressed files, and so the compression was not applied to files outside of the system.

Windows also uses file compression in NTFS, as does Linux's Btrfs, but in general these incur a performance penalty: the primary benefit is increased disk space at the cost of speed.

Compressing the contents of volatile system memory, rather than disk storage, has remained even more experimental. It is active by default in many virtualization products, such as VMware's ESX, and it has also been studied for use in general computing.

Under Linux, the Compcache project similarly uses the LZO algorithm to compress memory that would otherwise be expensively paged to disk. Here too, however, the benefits were not always worth the overhead involved, or the added complications in areas such as waking from hibernation.

Modern solutions to address new issues

Today, however, the combination of fast, often idle CPU cores; large data sets; more expensive SSD storage; and the efficiency requirements of mobile computing has made memory compression a compelling solution to a variety of problems, provided the compression technology fits the kinds of tasks being performed.

Research by Matthew Simpson of Clemson University and Rajeev Barua and Surupa Biswas of the University of Maryland examined various types of memory compression a decade ago, particularly for embedded systems, where memory is likely to be more scarce.

At issue then were the best compression techniques to use. The research noted that "dictionary-based algorithms tend to have slow compression speeds and fast decompression speeds while statistical algorithms tend to be equally fast during compression and decompression."
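That asymmetry is easy to observe with any LZ-family (dictionary-based) codec. Here zlib is used purely as a stand-in for the codecs the study actually compared; on typical repetitive data its decompression runs several times faster than its compression:

```python
# Dictionary-based codecs: compression does the expensive match-finding,
# decompression just replays references, so it is much faster.
import time
import zlib

# Repetitive-but-varied text stands in for typical in-memory data.
data = b"window title bar widget state cache entry " * 3000

t0 = time.perf_counter()
packed = zlib.compress(data, 6)
t_compress = time.perf_counter() - t0

t0 = time.perf_counter()
unpacked = zlib.decompress(packed)
t_decompress = time.perf_counter() - t0

assert unpacked == data
print(f"compress:   {t_compress * 1e6:.0f} us")
print(f"decompress: {t_decompress * 1e6:.0f} us")
```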

OS X 10.9 Memory Compression


The WKdm compression Apple is now using (a hybrid of dictionary and statistical techniques) was found in that research to deliver effective compression at the fastest, and therefore most power-efficient, compression and decompression speeds.

Supercharged Virtual Memory

Apple's new Compressed Memory feature also works on top of Virtual Memory, making it even more efficient.

"If your Mac needs to swap files on disk [via Virtual Memory]," Apple explains, "compressed objects are stored in full-size segments, which improves read/write efficiency and reduces wear and tear on SSD and flash drives."
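Apple doesn't document the swap format, but the stated behavior suggests batching: instead of writing each small compressed page individually, compressed pages are packed into fixed, full-size segments so every swap write is one large aligned block, which flash storage handles far more gracefully than many scattered small writes. A sketch under that assumption (the segment size here is invented for illustration):

```python
# Hypothetical sketch of "full-size segments" for compressed swap.
# Not actual kernel code; based only on Apple's one-line description.
import zlib

PAGE = 4096
SEGMENT = 16 * PAGE               # invented segment size for illustration

class SwapSegmentWriter:
    """Batch compressed pages into fixed-size segments before 'writing'."""
    def __init__(self):
        self.buffer = bytearray()
        self.writes = []          # stands in for actual swap-file writes

    def add_page(self, page: bytes):
        self.buffer += zlib.compress(page, 1)
        while len(self.buffer) >= SEGMENT:
            self.writes.append(bytes(self.buffer[:SEGMENT]))
            del self.buffer[:SEGMENT]

    def flush(self):
        if self.buffer:           # pad the tail so the write stays aligned
            self.writes.append(bytes(self.buffer).ljust(SEGMENT, b"\0"))
            self.buffer.clear()

writer = SwapSegmentWriter()
for i in range(100):              # 100 highly compressible 4KB pages
    writer.add_page(bytes([i % 7]) * PAGE)
writer.flush()

# Every write that reached "disk" is exactly one full segment.
assert all(len(w) == SEGMENT for w in writer.writes)
print(f"{len(writer.writes)} full-size segment write(s) instead of 100 page writes")
```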

Apple states that its compression "reduces the size of items in memory that haven't been used recently by more than 50 percent," and that "Compressed Memory is incredibly fast, compressing or decompressing a page of memory in just a few millionths of a second."

Compressed Memory can also take advantage of parallel execution on multiple cores "unlike traditional virtual memory," therefore "achieving lightning-fast performance for both reclaiming unused memory and accessing seldom-used objects in memory."
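What makes this parallelizable is that each page compresses independently of every other page. That per-page independence can be shown with a thread pool standing in for extra cores (zlib releases Python's GIL while compressing, so the threads genuinely overlap; this is an illustration, not kernel code):

```python
# Independent pages can be compressed on separate cores at once; a
# ThreadPoolExecutor plays the role of the extra cores here.
import zlib
from concurrent.futures import ThreadPoolExecutor

PAGE = 4096
pages = [bytes([i % 5]) * PAGE for i in range(64)]   # 64 fake idle pages

# Serial baseline.
serial = [zlib.compress(p, 1) for p in pages]

# Same work spread across four workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(lambda p: zlib.compress(p, 1), pages))

# Parallel compression produces identical output, just sooner on a
# multi-core machine.
assert parallel == serial
print(f"compressed {len(pages)} pages, {sum(map(len, parallel))} bytes total")
```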

OS X 10.9 Memory Compression


By improving the way Virtual Memory works, the system wastes less time "continually transferring data back and forth between memory and storage." As a result, Apple says Compressed Memory improves both general responsiveness under load and wake-from-standby times, as depicted above.

The footnote (4) Apple references in the graphic notes: "Testing conducted by Apple in May 2013 and June 2013 using production 1.8GHz Intel Core i5-based 13-inch MacBook Air systems with 256GB of flash storage, 4GB of RAM, and prerelease versions of OS X v10.9 and OS X v10.8.4. Performance will vary based on system configuration, application workload, and other factors."

Comments

  • Reply 1 of 31
    That's a pretty neat feature indeed, I'm still hitting a wall on my MBP 17" late 2010 with 8GB RAM (maxed out). But it helps to have an SSD; too bad the Purge command no longer works with Mavericks.
  • Reply 2 of 31
    This feature just made me remember Quarterdeck's QEMM; it had a feature called MagnaRAM that did the exact same thing: compressed memory to avoid physical paging. This idea is not new, but with the amount of waste all those XML structures have while in memory, the gains will be worth the CPU effort.

    Patents for these might be 20 or 30 years old.
  • Reply 3 of 31
    mdriftmeyer Posts: 7,503 member

    Quote:

    Originally Posted by superjunaid View Post



    That's a pretty neat feature indeed, I'm still hitting a wall on my MBP 17" late 2010 with 8GB RAM (maxed out). But it helps to have an SSD; too bad the Purge command no longer works with Mavericks.


     


    Read and see if you qualify with 10.7.5+ to upgrade to 16GB RAM with the updated firmware: http://www.everymac.com/systems/apple/macbook_pro/macbook-pro-unibody-faq/macbook-pro-13-15-17-mid-2009-how-to-upgrade-ram-memory.html


     


    The details are half-way down the article.

  • Reply 4 of 31
    realwarder Posts: 136 member


    Looks like this is just going to compress suspended applications - note that other OS changes make apps suspend by default now when not in the foreground.  When suspended for a while, rather than paging, it's going to compress the memory that is effectively idle, the idea being that compressed memory is faster to decompress than to page out and page back in from disk, even an SSD.


     


    Given the randomness of memory, the 50% compression stats must have some basis from testing a broad range of apps, but some apps will likely be incompressible, and others more compressible.. that's the nature of compression.


     


    Nice concept.  Will be interesting to see real life impact.

  • Reply 5 of 31
    festerfeet Posts: 108 member


    I can't pretend to understand the details of this so I have a question for those more informed than me.


     


    If this reduces the number of page in/out to the disk, is it likely to impact the life of the hard disk or SSD?


     


    If this is an idiotic question I apologise in advance for my ignorance.

  • Reply 6 of 31
    chabig Posts: 641 member

    Quote:

    Originally Posted by festerfeet View Post


    If this reduces the number of page in/out to the disk, is it likely to impact the life of the hard disk or SSD?



    That's not an idiotic question, and I would guess that the answer is yes. But I doubt that it's significant.

  • Reply 7 of 31
    booga Posts: 1,082 member

    Quote:

    Originally Posted by festerfeet View Post


    I can't pretend to understand the details of this so I have a question for those more informed than me.


     


    If this reduces the number of page in/out to the disk, is it likely to impact the life of the hard disk or SSD?


     


    If this is an idiotic question I apologise in advance for my ignorance.



     


    Yes, in a couple ways.  On a HD, the less you're reading/writing, the less likely it is that a jolt could cause damage.  On an SSD, there is a finite number of writes before the cells no longer hold a charge, and this could significantly help with that if you paged a lot previously.


     


    Some pages of memory are going to contain almost incompressible image or sound data, but others are going to be a lot of zeroes with a few small values in them. I suspect 50% is probably a pretty reasonable average.
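Booga's point about compressibility is easy to check with a generic codec (zlib here, purely as a stand-in for whatever the kernel uses):

```python
# Two extremes of page content: a sparse page (a few small values,
# mostly zeros) versus random bytes, which behave like already-
# compressed image or sound data.
import os
import zlib

PAGE = 4096
sparse_page = b"\x01\x02\x03\x04" * 32 + bytes(PAGE - 128)
random_page = os.urandom(PAGE)

sparse_ratio = len(zlib.compress(sparse_page)) / PAGE
random_ratio = len(zlib.compress(random_page)) / PAGE
print(f"sparse page compresses to {sparse_ratio:.0%} of its size")
print(f"random page compresses to {random_ratio:.0%} of its size")
```

The sparse page shrinks to a few percent of its size, while the random page barely shrinks at all (and can even grow slightly); a 50% average across real workloads sits between those extremes.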

  • Reply 8 of 31
    rrabu Posts: 264 member


    Anyone find any documentation anywhere suggesting or stating if this memory compression is, will be, or already has been applied to iOS?

  • Reply 9 of 31
    Marvin Posts: 15,323 moderator
    too bad Purge command no longer works with Mavericks.

    I wonder why they'd do that. Compressing inactive memory is fine, but sometimes it just needs flushing. SSDs are so fast now, I don't think having lots of things cached in inactive memory really makes a big difference. At the very least, it should have a timeout after which it gets flushed, but some apps do odd things. Quicklook previews sometimes generate huge amounts of inactive memory very quickly. The Finder can act up at times too - my favourite Finder issue these days is when I have the trash open and a file selected showing the preview, and I hit empty trash and it says 'sorry, file is in use' - and it's the Finder using it. You'd think it would know to stop using it.

    I like the idea of Safari having separate processes per tab; once the tab gets closed down, the memory assigned to that process will get flushed so that should mean better memory usage over a long period of time and the days of one tab hanging the whole browser will be at an end. They did fix that auto-refresh problem with a warning dialog, which was nice but single process tabs should eliminate that altogether. It's good to see they've been working on these memory issues and time will tell if it's done right. I love the idea of slowing background processes. That means background web pages with ads won't be making everything else stutter.
  • Reply 10 of 31

    Quote:

    Originally Posted by chabig View Post


    That's not an idiotic question, and I would guess that the answer is yes. But I doubt that it's significant.



    Thank you for the info

  • Reply 11 of 31

    Quote:

    Originally Posted by Booga View Post


     


    Yes, in a couple ways.  On a HD, the less you're reading/writing, the less likely it is that a jolt could cause damage.  On an SSD, there is a finite number of writes before the cells no longer hold a charge, and this could significantly help with that if you paged a lot previously.


     


    Some pages of memory are going to contain almost incompressible image or sound data, but others are going to be a lot of zeroes with a few small values in them. I suspect 50% is probably a pretty reasonable average.



    and another thank you.

  • Reply 12 of 31
    ecs Posts: 307 member
    Of all the things announced in the keynote, this is the only feature I don't like 100%, unless you can disable/enable it when you really need it. I agree it can be useful when I try to do stuff which would need much more RAM than I have, but I'm the kind of user who sizes the stuff I do according to the resources I have. I usually move close to the limits of my RAM, but without needing disk swapping; I only hit disk swapping on very rare occasions. So I don't like the OS deciding for me here; I don't want the CPU working on background compression unless I explicitly enable it.
  • Reply 13 of 31
    macslut Posts: 514 member

    Quote:

    Originally Posted by ecs View Post



    Of all the things announced in the keynote, this is the only feature I don't like 100%, unless you can disable/enable it when you really need it. I agree it can be useful when I try to do stuff which would need much more RAM than I have, but I'm the kind of user who sizes the stuff I do according to the resources I have. I usually move close to the limits of my RAM, but without needing disk swapping; I only hit disk swapping on very rare occasions. So I don't like the OS deciding for me here; I don't want the CPU working on background compression unless I explicitly enable it.


     


    That doesn't make sense at all.


     


    I think you're confusing virtual memory with inactive memory.  Virtual memory uses disk/SSD for RAM when there isn't enough real memory.  Inactive memory is memory that was recently used, could be used again, but is not currently being used.  When it's needed again, it switches from inactive to active, as opposed to being read in from disk or regenerated.  If you run out of memory and need more for active memory, the least recently used inactive memory is instantly freed up for active memory.  This is why they say, "free RAM is wasted RAM".


     


    The whole point of memory compression is to expand the amount of inactive ram by compressing the ram that is inactive, thus allowing more of it.  If you're "managing the ram you're using" such that you never have inactive ram, then memory compression isn't going to be a factor for you.  Then again, if you don't have inactive ram, you're likely wasting ram.


     


    In other words, if you never disk swap, memory compression will speed things up because you're increasing the amount of inactive memory that is available.  On the other hand, if you are disk swapping you're still speeding things up because the memory compression is still applied.


     


    The only conceivable situation where memory compression wouldn't speed things up, even a little bit, would be if you somehow managed to never re-use inactive memory at all.  However, even that would be dependent on Apple poorly implementing memory compression by not isolating it to proper CPU times, which is part of what Timer Coalescing will handle.
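The active/inactive bookkeeping macslut describes can be modeled with a toy LRU list (conceptual only; the real kernel tracks pages rather than whole apps, and has more states than this):

```python
# Toy model of "least recently used inactive memory is freed first".
from collections import OrderedDict

class ToyMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()        # iteration order = LRU order

    def touch(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # becomes most recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # reclaim the LRU page
            self.pages[page_id] = "data"

mem = ToyMemory(capacity=3)
for p in ["app1", "app2", "app3"]:
    mem.touch(p)
mem.touch("app1")        # app1 is active again; app2 is now least recent
mem.touch("app4")        # memory pressure: app2 (the LRU entry) is reclaimed
assert "app2" not in mem.pages
assert list(mem.pages) == ["app3", "app1", "app4"]
```

Compression slots into this picture by shrinking the entries near the cold end of the list instead of (or before) evicting them.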

  • Reply 14 of 31
    ecs wrote: »
    Of all the things announced in the keynote, this is the only feature I don't like 100%, unless you can disable/enable it when you really need it. I agree it can be useful when I try to do stuff which would need much more RAM than I have, but I'm the kind of user who sizes the stuff I do according to the resources I have. I usually move close to the limits of my RAM, but without needing disk swapping; I only hit disk swapping on very rare occasions. So I don't like the OS deciding for me here; I don't want the CPU working on background compression unless I explicitly enable it.

    What makes you think it will do that if it doesn't need to? In situations where Mavericks performs compression, Mountain Lion would have swapped to disk. Disabling it would cause you to spend more power and time performing disk writes, and again later if the memory becomes active.
  • Reply 15 of 31
    chandra69 Posts: 638 member


    I hope these technologies are patented by Apple, if they had invented.

  • Reply 16 of 31
    asdasd Posts: 5,686 member


    The pre-Mavericks system will page out inactive memory regardless of the load on the system; it's based on the level of inactivity of that page of memory. Otherwise the system will quickly come to the point where it has to page in and page out simultaneously, which is where most beach-balling and slowdowns happen.


     


    You can see this in top in Terminal (at the top of the result) - just launch Terminal.app and type top. Except for immediately after login (and not always then), the page out figure is non-zero. The main figure is since the last reboot; the figure in parentheses is the delta.


     


    The idea is not to wait until there needs to be a page in and page out at the same time (when the system is out of RAM), but to mark pages as inactive, and page them to disk, long before that - to never get into that situation. The user can't really control this. It also explains the slow wakeup, and the fact that they say it is better now. If memory were only paged under pressure it wouldn't get paged out during a sleep; if it is paged out because of inactivity, it would.


     


    So the new algorithms will probably compress inactive memory to begin with, and then page it out. Both would happen after periods of inactivity; it would just be a longer wait until the system decides to page out the (previously compressed) memory. If that page is made active it is either uncompressed, or read from disk and uncompressed if it was paged out. Only the latter is slower; writing to disk is faster because the data is already compressed.


     


    Also, get away from the idea of it being an app's memory. Most of what any application touches in memory is from the system - AppKit, Foundation, CoreFoundation, etc. Some of that is always "hot", but a lot of inactive memory is shared memory and is paged out when inactivity reaches a certain level.

  • Reply 17 of 31
    eye forget Posts: 154 member

    Quote:

    I like the idea of Safari having separate processes per tab; once the tab gets closed down, the memory assigned to that process will get flushed so that should mean better memory usage over a long period of time and the days of one tab hanging the whole browser will be at an end. They did fix that auto-refresh problem with a warning dialog, which was nice but single process tabs should eliminate that altogether. It's good to see they've been working on these memory issues and time will tell if it's done right. I love the idea of slowing background processes. That means background web pages with ads won't be making everything else stutter.


    I suppose from a techy pov that's great. Now perhaps Apple could address how I can get Safari in my Mountain Lion installs to run as quick as my Snow Leopard installs. 

  • Reply 18 of 31
    MacPro Posts: 19,727 member
    Read and see if you qualify with 10.7.5+ to upgrade to 16GB RAM with the updated firmware: http://www.everymac.com/systems/apple/macbook_pro/macbook-pro-unibody-faq/macbook-pro-13-15-17-mid-2009-how-to-upgrade-ram-memory.html

    The details are half-way down the article.

    That's a great resource to have thanks for posting. I already went through all of the excitement and disappointment with my 6,2 mid 2010 15" MBP i7 2.66. I tried and failed. It doesn't support 16 GIGs of RAM, in fact few do out of the possible list of suspects. Mine boots fine in safe mode but that's the extent of it. It seems to be related to the graphics set up. I even tried manually controlling the graphics switching to no avail. I keep an eye on the blogs referencing this topic to see if one day there may be a solution for me, but none yet.
  • Reply 19 of 31
    asdasd Posts: 5,686 member

    Quote:

    Originally Posted by Eye Forget View Post


    I suppose from a techy pov that's great. Now perhaps Apple could address how I can get Safari in my Mountain Lion installs to run as quick as my Snow Leopard installs. 



    They have fixed that. In Mavericks. It's faster (the whole OS seems like lightning, in fact). That's the way it works. They needed big changes to fix it, and they did.

  • Reply 20 of 31
    MacPro Posts: 19,727 member
    macslut wrote: »
    That doesn't make sense at all.

    I think you're confusing virtual memory with inactive memory.  Virtual memory uses disk/SSD for RAM when there isn't enough real memory.  Inactive memory is memory that was recently used, could be used again, but is not currently being used.  When it's needed again, it switches from inactive to active, as opposed to being read in from disk or regenerated.  If you run out of memory and need more for active memory, the least recently used inactive memory is instantly freed up for active memory.  This is why they say, "free RAM is wasted RAM".

    The whole point of memory compression is to expand the amount of inactive ram by compressing the ram that is inactive, thus allowing more of it.  If you're "managing the ram you're using" such that you never have inactive ram, then memory compression isn't going to be a factor for you.  Then again, if you don't have inactive ram, you're likely wasting ram.

    In other words, if you never disk swap, memory compression will speed things up because you're increasing the amount of inactive memory that is available.  On the other hand, if you are disk swapping you're still speeding things up because the memory compression is still applied.

    The only conceivable situation where memory compression wouldn't speed things up, even a little bit, would be if you somehow managed to never re-use inactive memory at all.  However, even that would be dependent on Apple poorly implementing memory compression by not isolating it to proper CPU times, which is part of what Timer Coalescing will handle.

    Excellent post. It seems a lot of people confuse inactive with virtual memory and misunderstand this topic totally. Witness those silly utilities for purging memory sold on the Mac App Store. Reading the comments there, it seems tons of users manually clear out all the inactive memory thinking they are 'speeding up their Macs'.