Apple previews Mac OS X Snow Leopard Server

Posted in macOS, edited January 2014
Apple at its annual developers conference Monday revealed that Snow Leopard Server, the next generation of Mac OS X Server, will deliver new core software technologies and services designed to better connect businesses, unleash the power of modern hardware, and lay the foundation for a new wave of innovations over the next several years.



Multicore, 64-Bit, and OpenCL



Like its Mac OS X Snow Leopard client cousin, the new version of Server will deliver support for multicore processors with “Grand Central,” a new set of built-in technologies that makes all of Mac OS X Server multicore aware and optimized for allocating tasks across Macs that ship with multiple cores and processors. Similarly, the software will also use 64-bit kernel technology to support up to a theoretical 16 terabytes of RAM -- or 500 times what is possible today -- and leverage OpenCL to allow any application to tap into the vast gigaflops of GPU computing power previously available only to graphics applications.
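Apple had not published Grand Central's API at the time of this preview, so as a rough sketch of the idea it describes (work expressed as small independent tasks that the system spreads across all available cores), here is the same pattern using Python's standard library as a stand-in; the `checksum` workload is invented for illustration:

```python
# Sketch of the task-queue model behind "Grand Central": independent
# units of work are handed to a pool that sizes itself to the machine's
# core count, so the same code scales from a laptop to an 8-core Xserve.
# Python's concurrent.futures stands in for Apple's then-unreleased API.
from concurrent.futures import ProcessPoolExecutor
import os

def checksum(chunk):
    # A stand-in for any CPU-bound unit of work.
    return sum(chunk) % 65521

def parallel_checksums(chunks):
    # The runtime, not the application, decides how to spread the
    # tasks across however many cores are present.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(checksum, chunks))

if __name__ == "__main__":
    data = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
    print(parallel_checksums(data))
```

The point of the model is that `parallel_checksums` contains no per-machine tuning; the pool adapts to the hardware it runs on.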



iCal Server 2



Building on the initial release of iCal Server, Snow Leopard Server will include a new version of the open standards-based calendaring and scheduling service that will include group and shared calendars, push notifications, the ability to send email invitations to non-iCal Server users, and a browser-based application that lets users access their calendars on the web when they’re away from their Mac.
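The "email invitations to non-iCal Server users" feature works because the service is built on the open iCalendar format, which any mail client can receive. A minimal sketch of such an invitation, with all names, addresses, and times invented for illustration:

```python
# Illustrative only: the kind of iCalendar (RFC 2445) invitation an
# open-standards calendar server can mail to users on other systems.
from datetime import datetime

def make_invite(uid, summary, start, end, organizer, attendee):
    # iCalendar lines are CRLF-separated; times here are UTC.
    fmt = "%Y%m%dT%H%M%SZ"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example//Calendar Sketch//EN",
        "METHOD:REQUEST",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        f"ORGANIZER:mailto:{organizer}",
        f"ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:{attendee}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

invite = make_invite("42@example.com", "Planning meeting",
                     datetime(2009, 6, 8, 17, 0),
                     datetime(2009, 6, 8, 18, 0),
                     "alice@example.com", "bob@example.com")
```

Because the payload is plain text in a published format, the recipient needs only a mail client that understands `METHOD:REQUEST`, not an iCal Server account.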



Podcast Producer 2



Likewise, the first major overhaul to the system's Podcast Producer will feature a new workflow editor that leads users through all the key steps involved in creating a successful podcast. This includes everything from selecting videos, transitions, titles, and effects, to adding watermarks and overlays, to specifying encoding formats and target destinations — wiki, blog, iTunes U, Podcast Library — for the finished podcast.







Additionally, support for dual-video source capture will let users record both a presenter and a presentation screen, allowing a picture-in-picture style ideal for podcasting lectures. The 2.0 release will also include a new Podcast Library, which lets users host locally stored podcasts and make them available for subscription by category via automatically generated Atom web feeds.
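The per-category subscription feeds the Podcast Library generates follow the standard Atom format, so any feed reader or iTunes can subscribe. A minimal sketch of such a feed (structure per RFC 4287; the category name and episode URLs are invented):

```python
# Minimal sketch of an auto-generated Atom feed for one podcast
# category; each episode becomes an entry with an enclosure link.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def category_feed(title, episodes):
    # Register Atom as the default namespace so output is unprefixed.
    ET.register_namespace("", ATOM)
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = title
    for ep_title, url in episodes:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}title").text = ep_title
        ET.SubElement(entry, f"{{{ATOM}}}link",
                      {"rel": "enclosure", "href": url})
    return ET.tostring(feed, encoding="unicode")

feed_xml = category_feed("Lectures",
                         [("Week 1", "http://example.com/w1.m4v")])
```

A subscriber polls the feed URL for the category; new entries appear as new episodes are published to the library.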



Collaboration & Remote Access



For business, Snow Leopard Server will offer the power of online group collaboration through wikis, blogs, mailing lists, and RSS feeds. More specifically, Apple said it will further that collaboration with wiki and blog templates optimized for viewing on iPhone; content searching across multiple wikis; and attachment viewing in Quick Look. It will also introduce My Page, which gives users one convenient place to access their web applications, receive notifications, and view activity streams.







Also targeted at business will be improvements to Remote Access, such as push notifications to mobile users outside a firewall, and a proxy service that offers them secure remote access to email, address book contacts, calendars, and select internal websites.



New Address Book Server



Meanwhile, one completely new feature of the server OS will be Apple's first open standards-based Address Book Server, aimed at making it easier to share contacts across multiple computers. Based on the emerging CardDAV specification, which uses WebDAV to exchange vCards, Address Book Server will let users share personal and group contacts across multiple computers and remotely access contact information without the schema limitations and security issues associated with LDAP.
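What a CardDAV server actually carries over WebDAV is plain vCards. A sketch of building one (vCard 3.0 format; the name, address, and company are invented for illustration):

```python
# Illustrative builder for the vCard 3.0 payloads a CardDAV server
# stores and serves; CardDAV itself is just WebDAV verbs (GET, PUT,
# REPORT) applied to these text records.
def make_vcard(full_name, email, org=None):
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{full_name}",
        f"EMAIL;TYPE=INTERNET:{email}",
    ]
    if org:
        lines.append(f"ORG:{org}")
    lines.append("END:VCARD")
    # vCard lines are CRLF-separated.
    return "\r\n".join(lines)

card = make_vcard("Jane Doe", "jane@example.com", org="Example Corp")
```

Because every field is a named text property rather than an entry in a rigid directory schema, clients can round-trip contacts without the LDAP schema limitations the article mentions.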



Improved Mail Server and ZFS support



Among the other features planned for Snow Leopard Server are an overhauled Mail Server engine designed to handle thousands of simultaneous connections, and read and write support for the high-performance, 128-bit ZFS file system.



Mail services will also be enhanced to include server-side email rules and vacation messages, Apple said.

Comments

  • Reply 1 of 29
    mstonemstone Posts: 11,510member
    Quote:
    Originally Posted by AppleInsider View Post


    Among the other features planned for Snow Leopard Server are an overhauled Mail Server engine designed to handle thousands of simultaneous connections, and read and write support for the high-performance, 128-bit ZFS file system.



So it has support for ZFS, but does that mean that the default file system is still HFS+?
  • Reply 2 of 29
    minderbinderminderbinder Posts: 1,703member
    "a theoretical 16 terabytes of RAM -- or 500 times what is possible today"



    Hmm, that's odd. According to the leopard page:



    "64-bit addressing of up to 16 exabytes of virtual memory and 4 terabytes of physical memory"



    http://www.apple.com/macosx/technology/64bit.html



    So assuming that's true, then Servicepack Leopard will offer FOUR times what is possible today.



    Looks like either the Leopard team or the SPLeopard team is full of crap. Either way, with the bogus claims, can we really trust that Apple will deliver what they promise?
  • Reply 3 of 29
    kreshkresh Posts: 379member
    Quote:
    Originally Posted by AppleInsider View Post




    and leverage OpenCL to allow any application to tap into the vast gigaflops of GPU computing power previously available only to graphics applications




    Wahoo, Xserves with 8800GTS cards pre-installed
  • Reply 4 of 29
    foo2foo2 Posts: 1,077member
    Shame we'll have to wait for Snow to get a multi-core aware kernel. If you've got MenuMeters installed, it's quite appalling watching a CPU hog application being hopped all over coretown.
  • Reply 5 of 29
    ZFS is gonna be a big but underrated addition. I can't wait.
  • Reply 6 of 29
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by minderbinder View Post


    "a theoretical 16 terabytes of RAM -- or 500 times what is possible today"



    Hmm, that's odd. According to the leopard page:



    "64-bit addressing of up to 16 exabytes of virtual memory and 4 terabytes of physical memory"



    http://www.apple.com/macosx/technology/64bit.html



    So assuming that's true, then Servicepack Leopard will offer FOUR times what is possible today.



    Looks like either the Leopard team or the SPLeopard team is full of crap. Either way, with the bogus claims, can we really trust that Apple will deliver what they promise?



Not a lie, just marketing spin. While Leopard does have the potential to address 4TB of RAM, that is still not physically possible. The most is still 32GB.



Apple marketing used the real-world metric of 32GB with the new theoretical limit of 16TB to obtain that figure. There would be an issue if they didn't qualify it with "what is possible today."
  • Reply 7 of 29
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by mstone View Post


    So it has support for ZFS but does that mean that the default file system is still HFS+



    Perhaps, I'm thinking that it will be the default if Apple can swing it. I hope that ZFS is at least an option for the non-server version.



    As Melgross pointed out to me last night, there may be some issues with getting ZFS on the next version of OS X.
  • Reply 8 of 29
    aegisdesignaegisdesign Posts: 2,914member
    So this is essentially 'Exchange for XServe' being added here but based on open standards. This is an important update and IMHO by far the most important one of WWDC.



    I would guess that essentially this is the technology behind MobileMe but made into a product for OS X Server users. Fantastic news. A Mac Mini running Snow Leopard Server stuck in a data center or even on an ADSL connection would solve many a problem for me.
  • Reply 9 of 29
    webfrassewebfrasse Posts: 147member
    Quote:
    Originally Posted by Foo2 View Post


    Shame we'll have to wait for Snow to get a multi-core aware kernel. If you've got MenuMeters installed, it's quite appalling watching a CPU hog application being hopped all over coretown.



Where do you see any mention of a multicore-aware kernel in the section below? That's right, nowhere. It talks about making all of Mac OS X Server multicore aware. Mac OS X Server is more than the kernel...right?



"Like its Mac OS X Snow Leopard client cousin, the new version of Server will deliver support for multicore processors with “Grand Central,” a new set of built-in technologies that makes all of Mac OS X Server multicore aware and optimized for allocating tasks across Macs that ship with multiple cores and processors. Similarly, the software will also use 64-bit kernel technology to support up to a theoretical 16 terabytes of RAM -- or 500 times what is possible today -- and leverage OpenCL to allow any application to tap into the vast gigaflops of GPU computing power previously available only to graphics applications"



    /Mikael
  • Reply 10 of 29
    minderbinderminderbinder Posts: 1,703member
    Quote:
    Originally Posted by solipsism View Post


Not a lie, just marketing spin. While Leopard does have the potential to address 4TB of RAM, that is still not physically possible. The most is still 32GB.



Apple marketing used the real-world metric of 32GB with the new theoretical limit of 16TB to obtain that figure. There would be an issue if they didn't qualify it with "what is possible today."



That's still a bogus comparison. They need to compare theoretical to theoretical or real world to real world. Their copy makes it sound like the current OS is the reason we're limited to 32 gigs, while 4 terabytes is theoretically possible already but we won't see anything close to that in real-world use for years. It's a minor improvement and a useless one until Apple ships machines that can handle that much physical memory, and yet Apple is hyping it like this.
  • Reply 11 of 29
    solipsismsolipsism Posts: 25,726member
    Quote:
    Originally Posted by minderbinder View Post


That's still a bogus comparison. They need to compare theoretical to theoretical or real world to real world. Their copy makes it sound like the current OS is the reason we're limited to 32 gigs, while 4 terabytes is theoretically possible already but we won't see anything close to that in real-world use for years. It's a minor improvement and a useless one until Apple ships machines that can handle that much physical memory, and yet Apple is hyping it like this.



    I agree, but it's still not a lie with that qualifier. It's just a shoddy comparison. The whole idea of even touting 16TB RAM is really pointless when you can't even get close to what Leopard can potentially address now, but that is marketing for you. At least we can look past the lipstick.
  • Reply 12 of 29
    foo2foo2 Posts: 1,077member
    Quote:
    Originally Posted by webfrasse View Post


    Where do you see the mentioning of a multicore aware kernel in the section below? That's right, nowhere. It talks about making all of Mac OS X Server multicore aware. Mac OS X Server is more than the kernel...right?



"Like its Mac OS X Snow Leopard client cousin, the new version of Server will deliver support for multicore processors with “Grand Central,” a new set of built-in technologies that makes all of Mac OS X Server multicore aware and optimized for allocating tasks across Macs that ship with multiple cores and processors. Similarly, the software will also use 64-bit kernel technology to support up to a theoretical 16 terabytes of RAM -- or 500 times what is possible today -- and leverage OpenCL to allow any application to tap into the vast gigaflops of GPU computing power previously available only to graphics applications"



    /Mikael



I do believe the kernel is covered by "all of Mac OS X Server," don't you? The kernel is where the action is. The rest of the new system will undoubtedly be an API, which could range widely in its flexibility, probably providing more complete support for POSIX threads management, and then the use of that API in places around the OS in general where it matters. But why do we need a major new release just to get a modicum of kernel intelligence about the scheduling of CPU hogs?
  • Reply 13 of 29
    Quote:
    Originally Posted by minderbinder View Post


    "a theoretical 16 terabytes of RAM -- or 500 times what is possible today"



    Hmm, that's odd. According to the leopard page:



    "64-bit addressing of up to 16 exabytes of virtual memory and 4 terabytes of physical memory"



    http://www.apple.com/macosx/technology/64bit.html



    So assuming that's true, then Servicepack Leopard will offer FOUR times what is possible today.



    Looks like either the Leopard team or the SPLeopard team is full of crap. Either way, with the bogus claims, can we really trust that Apple will deliver what they promise?







Can you tell me what machine is available from Apple right now that you can install with 4 TB of RAM? Look at the statement: the 16 TB of RAM IS 500 times more than the 16GB maximum you can put into a Mac Pro at the moment.
  • Reply 14 of 29
    Quote:
    Originally Posted by roehlstation View Post


    ...than the 16GB Maximum you can put into a MacPro at the moment.



    You mean 32 GB right?
  • Reply 15 of 29
    minderbinderminderbinder Posts: 1,703member
    Quote:
    Originally Posted by roehlstation View Post


    Can you tell me what machine is available from Apple Right now that you can install with 4 TB of RAM? Look at the statement, the 16 TB of RAM IS 500 times more than the 16GB Maximum you can put into a MacPro at the moment.



    Actually 32 gigs right now.



    You miss my point - if they're going to brag about raising the limitations of the SOFTWARE, they should compare it to current SOFTWARE. If they shipped SL tomorrow, we'd all still be limited to 32 gigs of ram. It's a bogus comparison, apples and oranges.



The point is, they're hyping something that, for the most part, we already have in 10.5. Until the hardware catches up, this isn't an improvement at all - they've improved something that was already orders of magnitude beyond what we can use; it's fixing something that isn't broken.



    Do you honestly think going from 4TB to 16TB is something to get excited about?
  • Reply 16 of 29
    robin huberrobin huber Posts: 3,173member
    You talk about an update to the system's Podcast Producer? I did a search on my system for the old version and can't find it. Does it exist only in Snow Leopard?
  • Reply 17 of 29
    jeffdmjeffdm Posts: 12,946member
    Quote:
    Originally Posted by kresh View Post


    Wahoo, Xserves with 8800GTS cards pre-installed



    I remember way back when Virginia Tech assembled their G5 cluster was that they were intending to tap the GPU to do some of the work. I don't know if they actually managed to do it, but an Apple-supported framework would make it much easier to develop such an app.
  • Reply 18 of 29
    jeffdmjeffdm Posts: 12,946member
    Quote:
    Originally Posted by Robin Huber View Post


    You talk about an update to the system's Podcast Producer? I did a search on my system for the old version and can't find it. Does it exist only in Snow Leopard?



I don't understand that either; what is it about the program that makes it server-only? Does it tie into the server so it automatically posts what you record?



    I can't piece together any kind of a case for the software as it is, it looks out of place.
  • Reply 19 of 29
    macserverxmacserverx Posts: 217member
    Podcast Producer is in Leopard. The Podcast Capture application (Utilities folder) requires a Podcast Producer Server to even run.



    I can't think of any hard reason why Podcast Producer is Server only. Like Xgrid, it has different binaries for Client-Server functions, and the Server binary is just not included in non-server Leopard.



    Pcast Producer does rely on some of the authentication facilities in server and on xgrid for processing, but I'm fairly certain those could be run on another machine.



    For this update, it seems to be simply adding the polish that is needed to make it really useful.



    I've heard of people submitting geological data (instead of video) to Pcast Producer, because it allows them to do all sorts of workflows with the data (in Ruby or any scripting language) across an Xgrid, greatly simplifying Xgrid usage.
  • Reply 20 of 29
    ipeonipeon Posts: 1,122member
    Quote:
    Originally Posted by minderbinder View Post


    You miss my point - if they're going to brag about raising the limitations of the SOFTWARE, they should compare it to current SOFTWARE. If they shipped SL tomorrow, we'd all still be limited to 32 gigs of ram. It's a bogus comparison, apples and oranges.



    I think you are missing the point. Sorry to be blunt. I'm not a programmer I will confess, but I do get it.



It has nothing to do with current limitations or software, so why compare it to current limitations? It has NOTHING to do with it. It has to do with setting up OS X to be ready for future "potential" code and technologies. Leopard has a maximum theoretical limit as well as roadblocks preventing faster processing. Snow Leopard will have a much higher limit and be much faster. That's all they are saying. Will that make any difference now or in the near future? No. It's a "let's pause, clean up this code and get this thing ready for the next big thing. We are looking at 10, 20, 30, 40, 50 years down the road." Get it?



    So yea, Leopard has a theoretical limit that hasn't yet been achieved. So what? Why would raising that limit even higher be a bad thing? It's called "Let's plan for the future today so we don't get into trouble tomorrow."



    Since you used the term "Servicepack Leopard" I will assume that you are a Windows guy. That would explain why you are so puzzled.