There are so many new things coming in Tiger that it is difficult to guess about speed. Personally, I believe a combination of the new technologies will improve responsiveness.
There will be speed optimizations, but not necessarily many that favor the G5 over other processors. If you are wondering whether there will be speed optimizations just because the G5 is 64-bit: very, very few, if any, and they won't be noticeable in general.
On a G5 you probably won't see any speed improvements except for scrolling; the G5 is so fast that things can't get much faster. But on older machines you will see an improvement for sure. I've been working with Tiger on a 600 MHz G3 with 512 MB of RAM, and it's a great deal faster than Panther.
Comments
And possibly future Tiger-based apps.
Eric
Originally posted by MCQ
Not much speedup, if any. Its primary benefit should be for the types of apps that may need a 64-bit address space, which mostly falls into HPC/scientific computing.
Or, you know, the pro video apps, which are used by tons of people.
Eric
Originally posted by aplnub
Without sounding too stupid: what is so important about 64-bit to the average user (considering Tiger and the G5s)?
Eric
In ten years, we will be happy to have 64-bit chips. Only developers starting from scratch and using Tiger will be able to take advantage of the 64-bit thing.
Huge corporations that have invested millions of lines of code in their software, like Photoshop or Office, will not make a 64-bit version: way too expensive.
64-bit computing will be completely meaningless to most users. It offers no performance boost and, if anything, slows things down (for most tasks). However, 64-bit computing gives the scientific community the ability to run large simulations or models more quickly. Video production will also benefit from the ability to address insane amounts of memory. Other 64-bit optimizations will eventually come trickling down the line, but none will affect normal users.
More important, though, are other optimizations in Tiger. In particular, much of the code responsible for putting things on the screen is vastly improved. Previously dormant GPU power is being tapped for more and more of the GUI. Also, much of QuickTime has been completely rewritten; not only are the underpinnings cleaner, they'll also be GPU-accelerated.
Besides the promised GPU acceleration, I expect many other optimizations. This is a pleasant change of pace for those familiar with upgrading other OSes. With OS 9, we always anticipated more features but also more sluggish response.
OSX has been the opposite; each release seems to improve performance. There are two ways to look at this: OSX could be poorly written and thus simply have more room for improvement, or OSX could be well written and easily extensible without sacrificing performance. In my opinion, both are true. OSX was raw and unoptimized in its first few releases, yet it also deserves credit for having a robust and extensible architecture.
I think there will again come a time when each MacOS release will mean slower performance. Developers and users, when satisfied by speed, tend to start focusing on stability and compatibility. This is coupled with the diminishing returns from further optimizations of a mature code base.
How far out is this? I will go so far as to say that 10.4 and 10.5 should deliver sizeable performance boosts. Past that, you'd have to be pretty well versed in the current state of OSX APIs and project financing/staffing at Apple. I'll leave more precise predictions to some of the professional Mac programmers who frequent these boards. Anyone want to hazard a guess as to when we'll stop seeing so much flux and optimization with each new OSX release?
Originally posted by dfiler
64-bit computing will be completely meaningless to most users.
It offers no performance boost and, if anything, slows things down (for most tasks). However, 64-bit computing gives the scientific community the ability to run large simulations or models more quickly. Video production will also benefit from the ability to address insane amounts of memory. Other 64-bit optimizations will eventually come trickling down the line, but none will affect normal users.
Actually, any program that needs or can use more than 2 GB of addressable RAM will benefit. 32 bits gives you 4 GB of address space, but the kernel's address space takes up half of that (I think). So as a 32-bit program, you would need all sorts of kludgery, with OS support, to use more than 2 GB of RAM (e.g., look at Windows server OSes and SQL Server). With multimedia apps becoming more mainstream, being able to run a 64-bit app will help tremendously. But you are right: at the pure OS level, it's probably not enough to make a noticeable difference. A system call to do something like spawn a thread or open a file can only be sped up so much.
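The ~2 GB ceiling described above is simple arithmetic. A quick sketch, assuming the even user/kernel split the post mentions (real OSes vary; some Windows configurations use a 3 GB/1 GB split):

```python
# Back-of-the-envelope address-space budget for a 32-bit process,
# assuming a 2 GB / 2 GB user/kernel split (varies by OS).
total_addresses = 2 ** 32               # a 32-bit pointer can name 4 GiB of bytes
kernel_reserved = total_addresses // 2  # half the *names* reserved for the kernel
user_space = total_addresses - kernel_reserved

print(user_space // 2 ** 30)            # -> 2  (GiB left for the program itself)
```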
Eric
Originally posted by aplnub
Can someone explain to me why 32-bit can only address 4 gigs of RAM while 64-bit can address 8 gigs? Is this a math thing?
Yes, it is a math thing. It is all about how you refer to/name/address a piece of RAM in the system... after all, if you are going to call for it, you have to have a name for it. In most OSes today, the pointers (names) to memory locations are stored as 32-bit numbers. Since each bit can be a 0 or a 1, that gives us 2 to the 32nd power, or 4,294,967,296, locations... and the common practice is that each location is one byte (usually 8 bits), so:
4,294,967,296 bytes * (1 kilobyte / 1,024 bytes) * (1 megabyte / 1,024 kilobytes) * (1 gigabyte / 1,024 megabytes) = 4 gigabytes. Once again, for architectural reasons most operating systems reserve half the memory space for themselves (not half the RAM, just half the names), so a process on a true 32-bit OS such as Mac OS 10.2 can ask for at most 2 gigabytes.
With a 64-bit address space that becomes 2 to the 64th power, or
4,294,967,296 * 4 gigabytes... a really big number. Most practical limits disappear into the realm of the imagination.
For most applications there is no advantage in going to 64 bits... 32 bits is a big enough number. But when you do need numbers that big, it is incredibly useful to have them native (emulating them is really slow).
edit: added commas in the numbers for clarity.
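The aside about emulation being slow can be illustrated: on a 32-bit CPU, each 64-bit operation has to be synthesized out of 32-bit halves. A rough Python sketch of a 64-bit add done with 32-bit pieces (illustrative only, not actual compiler output):

```python
MASK32 = 0xFFFFFFFF  # keeps only the low 32 bits of a value

def add64_via_32bit_ops(a, b):
    """Add two 64-bit values using only 32-bit-sized pieces, the way a
    32-bit CPU must: low halves first, then high halves plus the carry."""
    a_lo, a_hi = a & MASK32, (a >> 32) & MASK32
    b_lo, b_hi = b & MASK32, (b >> 32) & MASK32
    lo = a_lo + b_lo
    carry = lo >> 32                      # low-half overflow becomes a carry bit
    hi = (a_hi + b_hi + carry) & MASK32   # wraps around like real hardware
    return (hi << 32) | (lo & MASK32)

print(add64_via_32bit_ops(2 ** 40 + 5, 2 ** 40 + 7) == 2 ** 41 + 12)  # True
```

A native 64-bit CPU does all of that in a single instruction, which is the whole point of having wide registers when your numbers actually need them.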
Originally posted by Karl Kuehn
With a 64-bit address space that becomes 2 to the 64th power, or
4,294,967,296 * 4 gigabytes... a really big number. Most practical limits disappear into the realm of the imagination.
It's 18 exabytes. The G5, though, only has a 42-bit address space for memory, for a measly 4 terabytes:
http://www.apple.com/g5processor/architecture.html
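Both figures check out arithmetically (a quick sketch; "exabytes" and "terabytes" here are decimal units):

```python
# 64-bit pointers vs. the G5's 42 physical address bits
full_64_bit = 2 ** 64   # bytes nameable with a 64-bit pointer
g5_physical = 2 ** 42   # bytes reachable with 42 physical address bits

print(full_64_bit / 10 ** 18)  # ~18.4 -> "18 exabytes"
print(g5_physical / 10 ** 12)  # ~4.4  -> "a measly 4 terabytes"
```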
Eric
Originally posted by ipodandimac
Or, you know, the pro video apps, which are used by tons of people.
what makes you think that?
Originally posted by applenut
what makes you think that?
I'm not really sure what you're addressing, but I'll try...
The pro video apps are already 64-bit, so all the people who use them already benefit greatly from having a G5, since it's 64-bit.
If you're talking about my reference to the number of people using the pro video apps, all I can say is that my local production house, which teaches classes and is extremely hardcore Avid- and PC-based, is now considering adding Final Cut Pro because it is gaining a decent market share.
edit: we do some pretty big things (not just local stuff)
Will exporting a QuickTime movie into an AVI file speed up?
That would be cool!!!
Eric
Originally posted by aplnub
Will encoding an iMovie into iDVD for a burn speed up any for those who have a G5?
Will exporting a QuickTime movie into an AVI file speed up?
If exporting QuickTime movies gets improved along with the QT rewrite, then encoding to any format will generally be faster. The 64-bitness of the G5 won't improve the encoding of small movies. iMovie and iDVD are not "Pro Apps" and most likely aren't 64-bit-aware.