The more I think about this, as I position a few investment dollars, the more I think that the 64-bit system is the machine for all those film telecine guys out there. 4GB will hardly be too much for such a use. There are cameras on the market already, and coming to market in the next 2-3 years, that will deliver between 300-400MB per second of info into a hard disk array. At those rates 4GB will hold roughly 10-13 seconds, depending on the equipment in use! There's a big FILM, not VIDEO, market waiting for this right now.
The high end film production market could come down to Avid and Mac. Until now the Mac hasn't really had the hardware to make this happen, but with 64 bits and (hopefully) more than a 4GB memory ceiling, the pieces are falling into place for Apple to go after the real high end in a very big way!
If Apple built a KILLER 4-way 970 with 16GB+ max RAM capacity, a fast onboard SATA-based RAID, and links to control an Xserve RAID, they could basically charge whatever they want for it and people would pay. 10 grand? No problem. This would NOT, however, be a PowerMac, but it would be good because the profits would allow PowerMacs and consumer Macs to get cheaper and more competitive, which is what will lead to more market share, more developers, more respect from business and service providers, and a generally healthier Mac life for all of us.
Originally posted by costique
Programmer, here's my straight-in-the-face question. If the 970 does not have a clock multiplier, do you think the 2.3 or 2.25GHz figures are technically possible with regard to the bus frequency? If so, do you believe they're possible at Apple? I am just asking for your opinion. No sarcasm meant.
Why wouldn't they be? Pretty much any clock rate is possible up to the limit imposed by the processor, and the bus will run at half that rate. You guys are making this way too complicated. Forget all that nonsense about what speeds are "possible"... that was based on the fact that MPX was very limited in what clock speeds it could run at, and on the limited number of clock multipliers that the chip had. The information on the 970 is that it does not have different multipliers; it always runs at 4x the bus speed (but the bus is double pumped, so 2x is often quoted). The bus, however, is designed to be able to run at very high speeds, so they don't need the multiplier flexibility. So choose a processor speed, divide by 4 (or 2 if you like DDR speeds), and there is your bus speed. The companion chip will probably run at the bus speed and have an asynchronous buffered connection to the memory.
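To put numbers on that, here's a trivial sketch (Python; the 4x ratio is the figure from the reports above, not a published spec) of what a fixed multiplier implies for the clock speeds being debated:

```python
# Sketch of the reported fixed 970 ratio: core clock = 4x bus clock.
# The bus is double-pumped, so the commonly quoted "DDR" figure is core / 2.
# The 4x ratio is an assumption taken from the discussion above, not a datasheet.

def bus_for_core(core_ghz):
    return core_ghz / 4, core_ghz / 2   # (raw bus clock, double-pumped rate)

for core in (1.4, 1.8, 2.25, 2.3):
    bus, ddr = bus_for_core(core)
    print(f"{core} GHz core -> {bus*1000:.0f} MHz bus ({ddr*1000:.0f} MHz DDR)")

# A 2.25 GHz core implies a ~562 MHz bus (~1125 MHz DDR): nothing about the
# fixed ratio itself rules such a part out; it's purely a question of how
# fast the bus can be clocked.
```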
the impression i get is that the initial offerings of the 970's, if this rumor from macbidouille (sp?) is to be believed, will be a new breed of powermac. the never-before-seen higher high end. the 64-bit will mean something to this crowd. as matsu alluded to, for the video pros who need scary kind of bandwidth and processing power and boatloads of RAM. remember, apple is pushing the video market like never before, with acquisitions, FCP4 and DSP2 expected at NAB. shake, chalice, etc. well, you get the idea...
the 970's will be monster machines, BUT also be priced accordingly. along the lines of amorph's description of how apple brought the cinema display to market. they could charge US$5K (which was insanely pricey for a monitor), and people STILL bought it, because the closest thing up to that point was twice that price (or more). so i wouldn't be surprised to see 970-based machines start at a depressingly high price level.
that will allow apple to still offer G4-based towers for those folks who need fast layout machines, but don't need to be pushing gigs of real time dv footage and effects. the G4 single would become the realm of consumer-level machines and laptops, fast G4-duals the space for high end graphics machines, and then these bad boys for 3D rendering, etc.
i know i'm not bringing much to the table with this post, but knowing what i know about apple, mixed with what folks have said here, and the rumors sites and apple's strange schedule rearrangements and announcements... well, this is how i see it shaking out between now and september.
the thing that really puts a kink in my theory is steve's declaration of 2003 as the year of the laptop, which screamed to me that they would not have anything worth bragging about desktop-wise until next year. but steve has been known to change his mind... so who knows?
Originally posted by Leonis
I am not Programmer
1) Video Software
2) Encoder / Decoder
3) 3D apps
4) Scientific software (Astronomy, Geology, Biotechnology, etc)
5) Military software
6) Enterprise solutions

1) only really high end compositing software will see much benefit
2) not really, most of the time is spent on the encoding so the overhead of streaming at the same time is minuscule
3) perhaps for extremely large models
4) probably, although I can't point to particular examples which benefit because I don't have enough experience with them -- to benefit they either require lots of RAM or do lots of 64-bit integer calculations. Most sci apps probably use 64-bit double precision, which has been supported in PowerPC hardware since 1994. The 970 will improve those apps, but they need not be converted to 64-bit apps.
5) Why?
6) Primarily for large database / server apps which have insane amounts of memory in order to service many clients at once with the fastest possible queries. Google, for example, probably needs 64-bit machines to run at a decent speed.
Just because an app is computationally expensive does not mean it will benefit from 64-bitness. Either it has to use an enormous amount of data in a random fashion (thus requiring >4 GB of physical RAM), or it can benefit greatly from huge sparse data structures (the searching of which has to be computationally heavy), or it has to do a whole lot of 64-bit integer math (which is very unusual, as most apps go to floating point or simply aren't compute bound on 64-bit numbers).
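To make that first criterion concrete, a quick back-of-the-envelope check (Python; the workload sizes are illustrative assumptions, not measurements):

```python
# Does a workload's working set actually exceed what a 32-bit app can address?
# All of the byte counts below are made-up illustrations.

LIMIT_32BIT = 2**32   # 4 GiB: the hard ceiling on a 32-bit address space

workloads = {
    "word processor":                      50 * 2**20,        # ~50 MiB
    "DV editing, a few streams in RAM":     2 * 2**30,        # ~2 GiB
    "10 s of uncompressed film frames":   400 * 2**20 * 10,   # at ~400 MB/s
    "large database hot set":              12 * 2**30,        # ~12 GiB
}

for name, size in workloads.items():
    verdict = "fits in 32-bit" if size <= LIMIT_32BIT else "wants 64-bit addressing"
    print(f"{name}: {size / 2**30:.1f} GiB -> {verdict}")
```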
Originally posted by rok
the thing that really puts a kink in my theory is steve's declaration of 2003 as the year of the laptop, which screamed to me that they would not have anything worth bragging about desktop-wise until next year. but steve has been known to change his mind... so who knows?
Well, of course, all speculation here has focused on PowerMacs. But what Steve meant is that the laptop has reached the point where it's a primary computer — even a primary workstation — in its own right, not just an adjunct to a desktop. In real-world terms, most of the compromises involved in choosing a laptop have been mooted. Sure, they're less powerful than desktops in absolute terms, but they're powerful enough to do almost anything, and that's enough.
Oh, and there's a 1.2v version of the 970 with the PowerBook's name on it...
Originally posted by Programmer
6) Primarily for large database / server apps which have insane amounts of memory in order to service many clients at once with the fastest possible queries. Google, for example, probably needs 64-bit machines to run at a decent speed.
Most apps will not require 64-bitness.
While I agree with your post, I just want to clarify some things:
Google actually uses commodity X86 [32bit] boxen, as does HotJobs/Yahoo. This is mostly due to the ubiquity of hardware and cost.
I can tell you that they *do* run into the upper bounds of the 32bit computation/memory limit [more the latter than the former]. To get around this, they segment the database, then aggregate the results. Think massively parallel machines....
One of the cooler aspects that makes this work is the ability to do remote machine shared memory. Data that changes in the shared memory pool on one machine gets propagated to the "hive" of other machines [or pointers to them, anyway]. It's a giant hack, but it's a good hack. The Yahoo design uses custom *everything*... kernel, apache, other daemons, etc.
But yes, they do *need* 64bit machines, they just won't migrate for a *very* long time
Sorry for being off-topic, but I wanted to shed some light on a subject I know a bit about
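For the curious, the segment-then-aggregate pattern described above can be sketched in a few lines (Python; the shard names, scores, and in-process "RPC" are all stand-ins for real networked machines):

```python
# Minimal sketch of sharded search: each shard answers over its slice of the
# index, and a front end merges the partial results. Everything here is a
# stand-in -- real systems do this over the network across many machines.

from concurrent.futures import ThreadPoolExecutor

FAKE_SHARDS = {                      # hypothetical per-machine index slices
    "idx-0": [("doc-a", 0.9)],
    "idx-1": [("doc-b", 0.7)],
    "idx-2": [],
    "idx-3": [("doc-c", 0.8)],
}

def query_shard(shard, term):
    # Stand-in for an RPC to one shard; returns (doc_id, score) pairs.
    return FAKE_SHARDS[shard]

def search(term):
    with ThreadPoolExecutor(len(FAKE_SHARDS)) as pool:
        partials = pool.map(lambda s: query_shard(s, term), FAKE_SHARDS)
    merged = [hit for part in partials for hit in part]          # aggregate
    return sorted(merged, key=lambda hit: hit[1], reverse=True)  # re-rank

print(search("powerpc"))   # [('doc-a', 0.9), ('doc-c', 0.8), ('doc-b', 0.7)]
```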
Correction: Most current apps will not require 64-bitness. Most current apps don't even really need 32-bitness. Those that require 32-bitness really require it, though - the software would not be possible without 32-bit processors.
A probably apocryphal story:
Shortly after acceding to the throne (ca. 1840), Queen Victoria toured Michael Faraday's laboratories. After Faraday had expounded at great length on the fascinating properties of the obscure natural force he was researching called "electricity", the Queen looked at him quizzically and asked, "But Mr. Faraday, of what use is it?" Faraday, completely unabashed, calmly replied, "But Your Highness, of what use is a baby?"
I see 64-bit computing in much the same way. Current software has been conceived and designed within the assumptions of 32-bitness. That framework fundamentally constrains engineers' concepts of what is even possible in computing. There are some current apps that are bumping against the 32-bit ceiling which will immediately benefit from 64 bits. That misses the point, though, IMHO. Thousands of way-cool software ideas have died in brain-storming sessions simply because they were beyond the capabilities of current or near-future hardware.
64-bitness in and of itself isn't all that important. What is important is that it forces the rethinking of assumptions about what is possible in the world of software. I'm anticipating a whole new generation of applications that take advantage of the new possibilities that 64-bit processors bring. I suspect in a few years there will be programmers lamenting about the constraints of the 64-bit world, longing for the day when 128- or 256-bit processors become available!
A corollary to Moore's Law is that software expands in power and size to fill available hardware. More powerful hardware means more powerful software becomes possible, driving hardware to even higher levels to keep up with what the coders want to do. I don't see any end to the spiral for the foreseeable future.
Originally posted by TJM
Correction: Most current apps will not require 64-bitness. Most current apps don't even really need 32-bitness. Those that require 32-bitness really require it, though - the software would not be possible without 32-bit processors.
I agree. 64-bitness is an enabling technology and hopefully we will see lots of cool things that need it... but I'm somewhat doubtful that there will be too much of this. It hasn't shown up on the 64-bit machines that have been around for 10 years now, so what you are really talking about is the set of software solutions which require 64-bit and are unique to the Macintosh. My guess is that this is a very small set. Doesn't mean it can't be important though.
Originally posted by Programmer
I agree. 64-bitness is an enabling technology and hopefully we will see lots of cool things that need it... but I'm somewhat doubtful that there will be too much of this. It hasn't shown up on the 64-bit machines that have been around for 10 years now, so what you are really talking about is the set of software solutions which require 64-bit and are unique to the Macintosh. My guess is that this is a very small set. Doesn't mean it can't be important though.
What's exciting to me is that the new 64-bit processors also have lots of supporting technology to go along with them - like graphics subsystems and high speed memory and high-bandwidth interconnects and SIMD engines, and the like - at prices that make mass production for the consumer market feasible. In some ways, software evolution parallels biological evolution in that it tends to show "punctuated equilibrium." Things stay roughly constant for a while as incremental advances in a large number of technologies gradually pile up until some critical mass is achieved. Then there is a major leap in software as the threshold is crossed to enable a whole new category of software. Then things stabilize around the new paradigm for a while, with further incremental advances in supporting technology until another threshold is crossed.
We've been in a stagnant period for quite a while now. One of the reasons (besides a tanked economy) that the hardware sector is in a slump is that current hardware really is good enough to do most of what people want to do, so there is no strongly compelling reason for upgrading. I have a gut feeling that this round of 64-bit processors (preferably 970s, though I won't snub Opterons and Itaniums/Yamhills if they turn out to do the job better) may overcome the last barrier to a breakthrough in software.
Something akin to the Holodecks on Star Trek NG may not be too far away (i.e. a decade or two, not next year) from practical reality, for example. Gives the "Recreation Room" a whole new meaning
[rolls over so Cindy Crawford can continue giving her sim-massage]
One important thing that Apple can do with 64-bit is to show a clear path for its hardware and software, so that when and if 64-bit is needed, they will have it.
Intel-AMD-Microsoft does look less straightforward, to say the least. For Apple to become viable in the enterprise market, they need to build credibility, and a clear path to 64 bit is a step in the right direction. So the main point of the migration is not to use 64 bits right away but to eliminate an uncertainty - a very good point.
The logical aim is to have a low end PM that is on par with midrange Pentiums, currently 2 GHz or so; an SP 970 at 1.2-1.5 GHz would fit the bill. The top of the line will aim to be a true Pentium crusher, so a dual 970 as far above 1.5 GHz as possible is the thing to aim for. The midrange is very hard to predict. Apple has used SP-DP-DP and DP-DP-DP as well as SP-SP-DP, and sometimes even a midrange SP at a higher clock frequency than the top of the line dual. So what it will be is anyone's guess outside Apple. Cost of parts really does not help that much in figuring it out: the price difference between DP and SP motherboards for AMD processors is less than 100 dollars, and the price difference between 256 MB of DDR RAM at 233 and at 400 MHz is about 10 dollars (and still only half the price of RAMBUS memory).
One major difference compared to previous G4 towers: past upgrades have been a one-step bump, where the previous top of the line becomes midrange, the midrange becomes low end, and the new top of the line is 30% or so faster than the previous one. The current DP 1.42 did not make it very difficult to sell the old generation DP 1.25 at a decent price, or even the DP 1 GHz before that. Not so with the 970 - a dual will kill DP 1.42 sales totally. So on one hand, once the cat is out of the bag, Apple wants the 970 out ASAP; on the other hand, they want to push the limit to really beat the Pentiums (for real this time). My guess is that they will go for short, very short, delays once the 970 is out.
Originally posted by AirSluf
As for #3, soon there won't be much of any other kind. What we see now is a pathetic pittance of what will be possible in the mid term. Just watch the 3D satellite photo extrapolations CNN has access to for their news coverage. Now we aren't that far off from having all those individual buildings as high res models of their own. Put that all in a single database and shake it up with a real kick-a$$ scene-graph manager and you go well into the need for exceptionally large (by current standards) real time accessible databases (which is all 3D models are, really).
The stuff on CNN is such crap I can't believe it.
In current generation games, which are a much better guide to what is possible, typical model densities seen in a simple triangle database are ~20-50 bytes per triangle, so you could store ~100M triangles in a 32-bit address space without even resorting to more efficient means of storage (model reuse, height maps, curved surfaces). Given a framerate of 60, that is about 50 times what current top-of-the-line GPU hardware can draw per frame, ignoring various other limiting factors like bus bandwidth and CPU speed. With these numbers, streaming data from disk is perfectly acceptable, and will be for some time given the high frame-to-frame coherency of a 3D scene. So a 32-bit address space for 3D uses has a long way to go before it is truly tapped out. And streaming data is necessary even with a 64-bit address space, because nobody wants to wait a minute or more for the data to load before you see anything at all, not to mention what happens when a remote database changes and needs updating.
Note that non-RT work doesn't really change anything... fewer frames exist in memory at once but there is more time to stream in the replacement data. Also, higher end graphics systems exist but not in the desktop systems we're talking about because the supporting memory and buses are too expensive. When they come down in price and performance climbs significantly then I'll agree that 32-bit address spaces are limiting 3D graphics -- but that time is a ways off.
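Programmer's triangle arithmetic is easy to check; spelled out with rough numbers (the GPU throughput is an assumed order of magnitude for the era, not a benchmark):

```python
# The post's arithmetic, made explicit. Constants are the rough figures
# quoted above; GPU throughput is an assumed order of magnitude.

BYTES_PER_TRIANGLE = 35          # midpoint of the ~20-50 byte range above
ADDRESS_SPACE = 2**32            # upper bound on a 32-bit app's memory
GPU_TRIS_PER_SECOND = 120e6      # assumed early-2003 top-end throughput
FPS = 60

tris_resident  = ADDRESS_SPACE // BYTES_PER_TRIANGLE
tris_per_frame = GPU_TRIS_PER_SECOND / FPS

print(f"~{tris_resident / 1e6:.0f}M triangles fit in a 32-bit address space")
print(f"~{tris_per_frame / 1e6:.0f}M triangles drawable per frame at {FPS} fps")
print(f"resident set is ~{tris_resident / tris_per_frame:.0f}x per-frame capacity")
# -> roughly 100M+ triangles resident vs ~2M drawable per frame: the ~50x
#    headroom the post describes, before any streaming from disk.
```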
Originally posted by AirSluf
As for #5, see above, thinking in desired resolutions on the order of millimeters and including all the higher order physics necessary to handle high fidelity directed energy applications and hypersonic projectiles. Yeah, this is more than a couple years off, but the progression of HW is taking this ever closer to clustered desktops, vice the latest in the ASCI series. In the reasonable term, #3 alone is an exceptional tool for military applications.
Besides, for immediate tomorrow use, why should the military have to continue to buy $15k+ Sun boxes to run a run-of-the-mill 64-bit X11 application bundle that is currently fielded DoD wide? And often run it painfully slowly at that. There's a HUGE corporate market that craves power at any price and would gleefully buy 3 boxes for the price of one.
Heh, cool stuff. Okay, this stuff falls into the "scientific/academic" app category that I agree may have uses for 64-bit machines. It is this area where Apple shipping a 64-bit machine & OS would increase their market penetration further -- Unix got the ball rolling, the 970 will give it a solid push. It's hardly something your typical Mac user will see, but it does increase the number of atypical users Apple will attract. And that's a good thing.
The 3-D pictures shown on CNN are generated using satellite/air photos (50 MB to >100 MB depending on resolution and area coverage) that are overlaid on Digital Elevation Model (DEM) data. The DEM data used in the height-field generations are typically 10m or 30m linear ground-spacing distances (what the USGS uses); higher resolution data (1m) are available to the DoD. I have some high-resolution files covering 100 sq-mi that are over 1 GB and take 3 hours to render in PovRay on a DP 1GHz G4. (PovRay sucks, but it's free, and so do G4's at rendering.)
The DEM data was fed into cruise missiles to generate ground field elevation for flight path analysis in the past. Newer methods may be used now. 8)
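The file sizes quoted above are consistent with simple grid arithmetic; a quick sketch (the 4-byte elevation sample size is an assumption, not a format spec):

```python
# Back-of-the-envelope for a 100 sq-mi DEM at 1 m post spacing.

SQ_M_PER_SQ_MI = 2.59e6

area_sq_mi   = 100
spacing_m    = 1.0      # DoD-grade spacing; USGS grids are 10 m or 30 m
sample_bytes = 4        # assumed 32-bit elevation values

posts   = area_sq_mi * SQ_M_PER_SQ_MI / spacing_m**2
size_gb = posts * sample_bytes / 1e9
print(f"{posts / 1e6:.0f}M posts -> ~{size_gb:.1f} GB")   # ~259M posts, ~1.0 GB
# ...which lines up with the "over 1 GB" files mentioned above. At 30 m
# spacing the same area drops to roughly 1 MB -- hence the USGS defaults.
```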
Originally posted by Programmer
The stuff on CNN is such crap I can't believe it.
Heh, cool stuff.
CNN? Did you mean the "Cable News Network" or the "Cabbie News Network"? I've found you'll tend to get plenty of interesting news tidbits from any Arabic speaking form of the latter ...
Anyway ... the other great thing about getting OSX into the hands of the "scientific/academic" Geek-set is the "Next Killer App" potential therein ... witness the Web (yes, THAT example again - NeXT, Berners-Lee et al.) ...
The nice thing about this end of the market is that it tends to get paid to think about cool ways of using technology all day long (well, some of them anyway) ... and if your stuff just happens to be around when they need to quickly implement the next "Eureka", well
bully for you.
Wonder what might happen if The-Next-Killer-App turns up on a 64bit Mac sitting on some geek's desk, and really really needs all 64 of 'em.
Might be a nice place for Apple to be.
3D Conceptual Map of the Entire Web Anybody?
Way back in the Dark Ages, people were speculating whether Apple would keep WebObjects and, if so, what they would do with it.
One of the more interesting threads concerned using WebObjects as a Trojan horse into the enterprise. Some of NeXT's technology allowed for very flexible and mobile use of objects across a LAN. If the clients were also OpenStep/Cocoa, then one could provide a large and complex range of services to and between clients using a design model that was surprisingly simple (at least conceptually).
I've never used WebObjects and my last line of real code was more than a decade ago, but I've remained intrigued by the possibilities. Especially in light of all the .NET blather, I can't help but cast my imagination to Apple's "Stealth Jewel."
The point here is that if you really want to create large "object ecologies" then you need a very large address space: hence 64 bit.
Apple could have some *very* interesting directions in the next few years. WebObjects could finally be about to get its day in the sun.
Currently the way I make "fly-overs" of areas (remembering that the areas are all georeferenced) is to render each scene individually and make a QuickTime movie out of them. Again, the individual scene (image) renders take up to a couple of hours. I did the same thing in the '70's with a couple of CDC 7600's, where it could be done on the fly with each scene rendered as needed (the resolutions for height fields are much better now). It would be nice to render these scenes on the fly, and the 970 may make this possible if the processor/FSB truly match expectations. It would be nice for what I do. (Hell, I remember 2 years ago writing a 1GB TIFF file to disk was a major chore on a G4 733.)
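For illustration, the offline fly-over workflow described above boils down to a loop like this (Python sketch; render_scene is a placeholder for driving PovRay or any other renderer, and the camera path is made up):

```python
# Offline fly-over: render one frame per camera position along a
# georeferenced path, then stitch the numbered frames into a movie.
# render_scene() is a placeholder -- in practice it writes a scene file
# for this camera position and shells out to the renderer (hours per
# frame on a G4, per the post above).

def render_scene(dem_file, camera, out_file):
    pass   # placeholder: invoke PovRay (or similar) for this viewpoint

def render_flyover(dem_file, camera_path):
    for i, camera in enumerate(camera_path):
        render_scene(dem_file, camera, f"frame_{i:05d}.png")
    # The numbered frames are then assembled into a QuickTime movie with
    # whatever encoder is handy. Real-time rendering (what a 970 might
    # enable) would collapse this whole loop into the playback itself.

# e.g. a straight eastward pass at 1000 m altitude, 300 frames:
render_flyover("area.dem", [(x * 10.0, 1000.0) for x in range(300)])
```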