Project Dark Star

Comments

  • Reply 61 of 88
    overtoasty Posts: 439 member
    Quote:

    Originally posted by Ensign Pulver

    What, you mean a clean, inviting atmosphere, meticulously maintained demo machines and a knowledgeable sales staff? What's so hard to figure out about that?



    CompUSA and Fry's couldn't find their ass with both hands and a map. Don't even get me started on Best Buy.




    I served five years at a local tech/Mac dealer, and as nasty as the job was, we did come to be known for a while, pretty much across North America, as perhaps the best in our field.



    So, what was the big secret?



    #1 - Get geeks to hire geeks who are interested in the same geeky things the store tries to be an expert in.



    There are just too many large department stores that hire people through a massive HR apparatus, based on their ability to smile and look good on paper. The people who do the hiring are the kind who did really well in high school because they excelled at regurgitating vast amounts of personally useless information quickly and accurately on paper, but whose VCRs at home still flash 12:00 three years later (or who've taped something over the display). In short, they tend to hire people like themselves, and they tend to look down on geeks (so even when they hire them, they don't treat them very well) ... the end result?



    1 - Nobody learns anything unless they're sent on very expensive courses, where their natural skills - regurgitation - are put back into action.



    2 - The few hires who can learn on their own aren't listened to and are isolated.



    The end result?



    A bunch of machines with lots of pretty, slightly out-of-date posters, sitting off to the side, that after a few months don't work anymore - a few frustrated geeks who soon leave - and upper management who do what they're best at - looking at numbers - and soon say "hmmmm, Macs don't sell, and after we spent all that money on training too".



    No, they don't get it, they'll never get it, and Apple really shouldn't waste its time waiting until they do - this is why I think Apple did exactly the right thing by opening its own stores, so it could show the regurgitators how it's done ... because waiting for them to figure it out on their own is like waiting for Tammy Faye Bakker to get a real job.
  • Reply 62 of 88
    yevgeny Posts: 1,148 member
    Quote:

    Originally posted by OverToasty

    I served five years at a local tech/Mac dealer, and as nasty as the job was, we did come to be known for a while, pretty much across North America, as perhaps the best in our field.



    So, what was the big secret?



    #1 - Get geeks to hire geeks who are interested in the same geeky things the store tries to be an expert in.



    There are just too many large department stores that hire people through a massive HR apparatus, based on their ability to smile and look good on paper. The people who do the hiring are the kind who did really well in high school because they excelled at regurgitating vast amounts of personally useless information quickly and accurately on paper, but whose VCRs at home still flash 12:00 three years later (or who've taped something over the display). In short, they tend to hire people like themselves, and they tend to look down on geeks (so even when they hire them, they don't treat them very well) ... the end result?



    1 - Nobody learns anything unless they're sent on very expensive courses, where their natural skills - regurgitation - are put back into action.



    2 - The few hires who can learn on their own aren't listened to and are isolated.



    The end result?



    A bunch of machines with lots of pretty, slightly out-of-date posters, sitting off to the side, that after a few months don't work anymore - a few frustrated geeks who soon leave - and upper management who do what they're best at - looking at numbers - and soon say "hmmmm, Macs don't sell, and after we spent all that money on training too".



    No, they don't get it, they'll never get it, and Apple really shouldn't waste its time waiting until they do - this is why I think Apple did exactly the right thing by opening its own stores, so it could show the regurgitators how it's done ... because waiting for them to figure it out on their own is like waiting for Tammy Faye Bakker to get a real job.




    Nothing sells a customer on a product like knowing that it solves their problem. Geeks with decent communication skills (because communication skills ARE important) can sell like few other salesmen in the world, because they can understand the problems a customer faces, come up with a solution, and explain it so the customer understands why they need to buy something. (I am speaking from personal experience as someone who was both a geek and a computer salesman.)
  • Reply 63 of 88
    rolo Posts: 686 member
    OverToasty and Yevgeny, as a Mac-only consultant with 16 years' experience, I like and appreciate your take on the geek thing. I don't sell anything, just advise. As an independent consultant (ACN), clients know that I just give the best advice I can because I have no sales agenda.



    OK, back to Dark Star. Take a look at the competition from SGI and HP for an idea as to why Apple might embark upon something like this: Videography: The New Workstations



    Of course, Dark Star could be just a figment of someone's imagination, but there's no denying Apple wants to be seen at the forefront of powerful desktop computing - and you have to wonder if Apple would like to go beyond that. With help from IBM, it's at least possible.
  • Reply 64 of 88
    macronin Posts: 1,174 member
    Quote:

    Originally posted by Rolo

    Take a look at the competition from SGI and HP for an idea as to why Apple might embark upon something like this: Videography: The New Workstations



    Of course, Dark Star could be just a figment of someone's imagination, but there's no denying Apple wants to be seen at the forefront of powerful desktop computing - and you have to wonder if Apple would like to go beyond that. With help from IBM, it's at least possible.




    I like the part where the SGI guy tries to justify the US$20,000.00 (!!!) entrance price for the single CPU Tezro...



    Tezro is available in two configurations, as a tower or rackmount unit, and offers 7 PCI-X slots. The pricing starts at $20,500 for a tower, single processor. However, with regard to pricing, Danielson stresses, "if you're looking at other workstations coming onto the market, they tend to really focus on a low entrance price. But, by the time you put memory in these systems and storage to start working, and video I/O cards, etc., the real street or list price is often triple or quadruple."



    Now, for quadruple the cost of a G5 (the entry dual-CPU model is US$3,000.00), I could max out the RAM & HDDs (at Apple markup pricing) and still have US$8,000.00 left over for a video I/O card (which the SGI Tezro DOESN'T include in its US$20,000.00 entry price!)...



    As for the HP offering, well; you get what you pay for...



    US$800.00 seems cheap, but it is just a fast single CPU box. IM(not so)HO multi-CPU support is a requirement for DCC work...



    I am ready for my new quad-CPU DCC workstation, Apple; just don't forget the workstation-class OpenGL card to go with it this time!



    ;^p
  • Reply 65 of 88
    Mark- Card Carrying FanaticRealist
    First of all, thanks to MacRonin and Rolo for giving me a reason to go and check out sgi's website!



    Whilst the Tezro is obviously a potential target for a future 4-way PMac G5, it's the other July 14 announcement - the Onyx 4 - that is actually relevant to the alleged "DarkStar".



    Apple seem to have spent a little time of late turning necessity into the mother of invention. The lack of native DDR support in the G4 forced Apple's hardware elves to develop an architecture where the performance of the Xserve, MDD PMacs and most recent PowerBooks was as good as it could be, given that they were making a silk purse out of what had become a sow's ear.



    However, that creativity appears to have been redeployed in the PMac G5, so that some components - chiefly the 970s - each have direct paths to the major subsystems.



    Now what if there were another evolution in Dark Star, so that a system to rival the Onyx 4 could be built with equivalent or better performance, but "cheaper" because of the use of mainstream components?



    Could it be that IBM is assisting Apple - perhaps alongside nVidia - to design a graphics-optimised superserver aimed at putting sgi out of its misery, and that some of the crossover technology will find its way into an IBM database-optimised box designed to go after Sun's Starfire?



    How does a 2.5 GHz 970 perform against a 700 MHz MIPS R16000? Anyone care to contribute a valid opinion?



    The name Dark Star now has two connotations: I seem to remember that in the film, the Dark Star used to wander around blowing up unstable planets that were blocking the main shipping lanes - does Dark Star signify that IBM and Apple see sgi and Sun as nuisance obstructions that need to be removed? Or is the name an allusion to a black hole, the system being designed to pull customers into its machinery through massive gravity?



    I do love letting my imagination run wild at this time of night; it's such a great way to relax before bed.



    And as a final thought before I go to bed: all of the rumours say that the 980 will still be single-core, but what if the 990 were dual-core with hyperthreading et al. and ended up in a future Dark Star sometime in 2007? MIPS will have difficulty breathing by that time, because sgi's split personality with regard to the Itanium family will be as self-destructive and confusing as Intel's Xeon/Itanium dilemma.



    I can't help but feel that the future may well be brighter than any of us would have expected five years ago.
  • Reply 66 of 88
    programmer Posts: 3,458 member
    Quote:

    Originally posted by Mark- Card Carrying FanaticRealist

    How does a 2.5 GHz 970 perform against a 700 MHz MIPS R16000? Anyone care to contribute a valid opinion?





    In a one-on-one comparison of pure processor performance the new Apple G5s should crush the R16K in the new SGI machines. Better than double the memory bandwidth, nearly three times the clock rate, and arguably a superior architecture -- especially if the SIMD is considered. The MIPS processor has more floating point units, but there are diminishing returns as the number of units is increased.



    In a two-on-two comparison the Apple machines should still crush the SGI machines for processor performance. Apple's memory implementation is strong and each 970 gets its own FSB.



    The MIPS R16K is designed for many-way SMP machines, however, and that's where it (and SGI's NUMA) shines. The 970 is good at SMP as well, however, so if Apple or IBM did the work to build a competitive NUMA system, I'd say SGI is in trouble.



    Graphics are still SGI's strength, however.
  • Reply 67 of 88
    Mark- Card Carrying FanaticRealist
    Quote:

    Originally posted by Programmer

    In a one-on-one comparison of pure processor performance the new Apple G5s should crush the R16K in the new SGI machines. Better than double the memory bandwidth, nearly three times the clock rate, and arguably a superior architecture -- especially if the SIMD is considered. The MIPS processor has more floating point units, but there are diminishing returns as the number of units is increased.



    In a two-on-two comparison the Apple machines should still crush the SGI machines for processor performance. Apple's memory implementation is strong and each 970 gets its own FSB.



    The MIPS R16K is designed for many-way SMP machines, however, and that's where it (and SGI's NUMA) shines. The 970 is good at SMP as well, however, so if Apple or IBM did the work to build a competitive NUMA system, I'd say SGI is in trouble.



    Graphics are still SGI's strength, however.




    Programmer,



    I believe that history also indicates that the MIPS family is relatively slow to scale so I can only imagine that the gap you describe will become more pronounced.



    I agree with your comments re: SGI's graphics strengths, but I also remember in my maiden thread of around 12-18 months ago (the marathon "Apple should buy sgi and pick the meat off the bones" thread), some were arguing that much of sgi's stellar talent and technology had long since defected or been acquired by nVidia.



    If that last statement is true, is it feasible that Apple, IBM and nVidia (who I think are also supposed to be using East Fishface as a foundry) are working on some skunkworks project to dethrone sgi as king of the visualisation hill?



    The other part of your post raises an interesting question: how easy/feasible is it to make OS X NUMA-capable? Presumably, the major part of the work is related to memory management - is that a kernel issue or in the wider OS? Also, you strike me as the kind of chap who would remember the reason SMP delivers diminishing returns past 32 processors - I remember reading up on it whilst researching the products of Kendall Square Research over a decade ago, but have long since consigned all of their literature to the bin.
  • Reply 68 of 88
    macronin Posts: 1,174 member
    Quote:

    Originally posted by Mark- Card Carrying FanaticRealist

    Programmer,



    I believe that history also indicates that the MIPS family is relatively slow to scale so I can only imagine that the gap you describe will become more pronounced.



    I agree with your comments re: SGI's graphics strengths, but I also remember in my maiden thread of around 12-18 months ago (the marathon "Apple should buy sgi and pick the meat off the bones" thread), some were arguing that much of sgi's stellar talent and technology had long since defected or been acquired by nVidia.



    If that last statement is true, is it feasible that Apple, IBM and nVidia (who I think are also supposed to be using East Fishface as a foundry) are working on some skunkworks project to dethrone sgi as king of the visualisation hill?



    The other part of your post raises an interesting question: how easy/feasible is it to make OS X NUMA-capable? Presumably, the major part of the work is related to memory management - is that a kernel issue or in the wider OS? Also, you strike me as the kind of chap who would remember the reason SMP delivers diminishing returns past 32 processors - I remember reading up on it whilst researching the products of Kendall Square Research over a decade ago, but have long since consigned all of their literature to the bin.




    In light of the above, and in regard to visualization...



    nVidia QuadroFX 3000G



    Very interesting, but how to get multiple units into a single chassis?!? Multiple AGP slots? Or do these need to be in separate machines, which are 'doubly clustered'?!? By that I mean machines connected/communicating via Gigabit Ethernet & via the 3000G's interconnects... Or, add in a SAN solution, and a third connection comes about with regard to shared disk space...



    Just REALLY hoping the QuadroFX line comes to the Mac platform soon!



    ;^p
  • Reply 69 of 88
    overtoasty Posts: 439 member
    Quote:

    Originally posted by MacRonin



    Just REALLY hoping the QuadroFX line comes to the Mac platform soon!



    ;^p




    What's keeping it?
  • Reply 70 of 88
    programmer Posts: 3,458 member
    Quote:

    Originally posted by Mark- Card Carrying FanaticRealist

    I believe that history also indicates that the MIPS family is relatively slow to scale so I can only imagine that the gap you describe will become more pronounced.



    Primarily because MIPS doesn't have the huge monetary resources behind it that PowerPC and x86 do.



    Quote:



    I agree with your comments re: SGI's graphics strengths, but I also remember in my maiden thread of around 12-18 months ago (the marathon "Apple should buy sgi and pick the meat off the bones" thread), some were arguing that much of sgi's stellar talent and technology had long since defected or been acquired by nVidia.





    These two lines of reasoning aren't mutually exclusive. There is nothing from SGI that is worth acquiring, at least not by Apple. nVidia might want to cherry-pick, for example. SGI's graphics expertise is built on their proprietary software and hardware. Apple uses commodity graphics and is better off for it, and it can probably reproduce SGI's high-end graphics software advantage with better results than trying to acquire it (and the dead weight that comes with it).



    Quote:



    If that last statement is true, is it feasible that Apple, IBM and nVidia (who I think are also supposed to be using East Fishface as a foundry) are working on some skunkworks project to dethrone sgi as king of the visualisation hill?




    It is certainly possible. Mostly it's a matter of trying to do so. So far neither Apple nor IBM has really tried, and nVidia has kept itself in the low end of the market.



    Quote:



    The other part of your post raises an interesting question: how easy/feasible is it to make OS X NUMA-capable? Presumably, the major part of the work is related to memory management - is that a kernel issue or in the wider OS? Also, you strike me as the kind of chap who would remember the reason SMP delivers diminishing returns past 32 processors - I remember reading up on it whilst researching the products of Kendall Square Research over a decade ago, but have long since consigned all of their literature to the bin.




    NUMA, on a smallish scale anyhow, can be done with little or no OS support -- it can be just a hardware thing. Adding support to the OS to understand the memory layout and the affinity of memory to particular processes shouldn't be too much work for any OS with a well-architected virtual memory system... and from what I know of Mac OS X, it seems like they've done a good job of that architecture. Adding such support to Apple's lineup would not be a huge leap, and I think we'll see it relatively soon -- as soon as IBM moves the memory controller onto the processor. NUMA on a large scale is harder to do efficiently, but once you've got the small scale stuff in place it should be a fairly natural migration. The main question is whether Apple sees any benefit in doing it.
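    To make "memory affinity" concrete: Mac OS X has no public NUMA API, so the minimal sketch below is purely illustrative and uses Linux's libnuma to show what OS-level support looks like once it exists - pin the task to a node, then allocate from that node's local memory (build with gcc numa_demo.c -lnuma):

    /* Illustration only: Linux libnuma standing in for the kind of
     * NUMA support discussed above; not anything Apple ships. */
    #include <numa.h>
    #include <stdio.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this machine\n");
            return 1;
        }
        printf("%d NUMA node(s)\n", numa_max_node() + 1);

        /* Run on node 0 and allocate from node 0's local memory:
         * the whole point of affinity is that this buffer is now
         * cheap for this processor and more expensive for far ones. */
        numa_run_on_node(0);
        size_t bytes = 1 << 20;
        double *buf = numa_alloc_onnode(bytes, 0);
        if (!buf) {
            perror("numa_alloc_onnode");
            return 1;
        }
        for (size_t i = 0; i < bytes / sizeof(double); i++)
            buf[i] = (double)i;    /* touch pages so they get placed */

        numa_free(buf, bytes);
        return 0;
    }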



    There are a couple of reasons SMP has diminishing returns...



    The first is that any time you have a situation involving communication or resource sharing, you have to spend some of your time communicating and sharing. The more you have to communicate or share, the more time you devote to it, and that time is time you're not spending doing "real" work - so adding a processor costs you time on each processor, and the return for adding a processor diminishes.
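    (This is just Amdahl's law in words: if a fraction p of the work parallelises across n processors and the rest is serial coordination, the best-case speedup is 1 / ((1 - p) + p/n). Even at p = 0.95 you can never do better than 20x, no matter how many processors you add.)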



    The second is that there are typically a limited number of tasks to be done in any given problem, and once you have a processor (or hyperthread) working on each task, adding an additional capability to run another task becomes redundant. Tasks like graphics can often be split effectively many, many ways, and if you know you're going to be running on a highly-SMP machine you can often restructure your problem in a way that takes advantage of more processors...



    A dumb example of this: imagine that you have a problem that you can evaluate a couple of different ways, and each way goes at a different speed depending on the data it is fed as input. Sometimes one algorithm is faster, sometimes a different one is. On a single processor machine you just choose one algorithm, and try your best to make it as fast as possible. On a multi-processor machine you could just implement all the algorithms and send one to each processor, stopping them all when whichever finishes first reports back with the results. If you have more processors than algorithms, however, then you've got nothing to use them for.
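    A minimal sketch of that race in C with pthreads - algo_a and algo_b are hypothetical stand-ins that compute the same answer two different ways; the first to finish posts the result, and the loser notices the flag and gives up:

    #include <pthread.h>
    #include <stdio.h>

    static volatile int done = 0;   /* set once, by whichever algorithm wins */
    static long long answer;
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

    static void post_result(long long r) {
        pthread_mutex_lock(&mtx);
        if (!done) { done = 1; answer = r; pthread_cond_signal(&cv); }
        pthread_mutex_unlock(&mtx);
    }

    static void *algo_a(void *arg) {        /* brute force: sum 0..N-1 */
        long long acc = 0;
        for (long long i = 0; i < 100000000LL && !done; i++)
            acc += i;
        post_result(acc);                   /* ignored if the race is over */
        return NULL;
    }

    static void *algo_b(void *arg) {        /* closed form: wins on this input */
        post_result(100000000LL * (100000000LL - 1) / 2);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, algo_a, NULL);
        pthread_create(&b, NULL, algo_b, NULL);

        pthread_mutex_lock(&mtx);
        while (!done)
            pthread_cond_wait(&cv, &mtx);   /* wait for the first finisher */
        pthread_mutex_unlock(&mtx);
        printf("first answer in: %lld\n", answer);

        pthread_join(a, NULL);              /* loser exits via the done flag */
        pthread_join(b, NULL);
        return 0;
    }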





    By the way: the law of diminishing returns can be applied to pretty much anything, even if you don't really know the reason for it in a particular case. The value added by one more "something" is generally less than the value you got from adding the previous one. There is such a thing as too much of a good thing.
  • Reply 71 of 88
    amorph Posts: 7,112 member
    Quote:

    Originally posted by MacRonin

    In light of the above, and in regard to visualization...



    nVidia QuadroFX 3000G



    Very interesting, but how to get multiple units into a single chassis?!?




    If memory serves, AGP 8x (Pro?) allows for two AGP slots on one board for the first time.
  • Reply 72 of 88
    macronin Posts: 1,174 member
    Quote:

    Originally posted by Amorph

    If memory serves, AGP 8x (Pro?) allows for two AGP slots on one board for the first time.



    So the best they could do with AGP slots would be a dual pipeline workstation...



    Drop quad PPC970s in there, and maybe a hardware RAID implementation option... Serial ATA RAID, or Ultra320 SCSI for the really well-heeled...? Can I get that striped & mirrored, and throw in some parity while you're at it, thanks!



    Or will Apple (speculating totally here, of course) go for the deskside/refrigerator motif and stuff multiple PCI Express slots in there, for a cluster of QuadroFX 3000Gs?!?



    Didn't we hear things about ATi working on multiple pipelines (multiple graphics cards interconnected to work as one massive card) with their future products...?!?



    Or is thinking outside of the box (rough quote from Mr. NSX, our 'local' ATi rep here on AI...) a reference to an external chassis for PCI Express graphics cards? But it would be forced to connect via PCI-X then, and that would probably bottleneck rather quickly...



    Just some random after lunch thoughts, process as you will...



    I will go for the quad CPU/dual 3000G model myself, feeding a 30" Apple Cinema Display...



    Mmmm...



    ;^p
  • Reply 73 of 88
    amorph Posts: 7,112 member
    Quote:

    Originally posted by MacRonin

    So the best they could do with AGP slots would be a dual pipeline workstation...



    Or will Apple (speculating totally here, of course) go for the deskside/refrigerator motif and stuff multiple PCI Express slots in there, for a cluster of QuadroFX 3000Gs?!?




    Don't forget HyperTransport, a motherboard interconnect that can also run over cables.



    With HT, you could abandon cards altogether if you wanted to, and just plug in whatever you wanted.
  • Reply 74 of 88
    grecy Posts: 15 member
    Quote:

    Originally posted by MacRonin

    I will go for the quad CPU/dual 3000G model myself, feeding a 30" Apple Cinema Display...

    ;^p




    Forget about that puny 30" Cinema...



    Here at work we have an SGI Onyx 3 driving six projectors at 1280x1024 across a screen four meters wide, giving passive stereo projection...



    Talk is we're sick of paying the service contracts and want to go with a Linux cluster option... what I wouldn't give to see it replaced by an Apple/nVidia option.



    -Dan
  • Reply 75 of 88
    airsluf Posts: 1,861 member
    Kickaha and Amorph couldn't moderate themselves out of a paper bag. Abdicate responsibility and succumb to idiocy. Two years of letting a member make personal attacks against others, then stepping aside when someone won't put up with it. Not only that, but go ahead and shut down my posting privileges but not the one making the attacks. Not even the common decency to abide by their warning (after three days of absorbing personal attacks with no mods in sight), just shut my posting down and then say it might happen later if a certain line is crossed. Bullshit flag is flying; I won't abide by lying and coddling of liars who go off-site, create accounts differing in a single letter from my handle with the express purpose to deceive, and then claim here that I did it. Everyone be warned, kim kap sol is a lying, deceitful poster.



    Now I guess they should have banned me rather than just shut off posting priviledges, because kickaha and Amorph definitely aren't going to like being called to task when they thought they had it all ignored *cough* *cough* I mean under control. Just a couple o' tools.



    Don't worry, as soon as my work resetting my posts is done I'll disappear forever.
  • Reply 76 of 88
    chris Posts: 2 member
    Hi,

    This message was on the Apple SciTech lists, from

    "Dean Dauger" <[email protected]> - he might be worth contacting.



    =================================================

    I've been asked about creating a "3D cave" solution using a Mac cluster before. We sketched out a design with one Mac holding the main 3D model and distributing data to the other cluster nodes, each node displaying its view through its own graphics card. The nodes could easily be Xserves or Power Macs.



    I think the most difficult part was making sure that the models inside the nodes stay in sync with the main model (assuming the model changes) at the necessary frame rate. With some "back of the envelope" numbers, we figured that Gigabit could be enough if you sent messages only about the portions of the model that changed (e.g., if it was a purely polygon-based model, only the changed polygons), or did even better by limiting the updates to just the portion of the model each node's camera sees. The principle is much like how QuickTime and MPEG interframe compression work, but applied to 3D models. I've read about this kind of thing on Linux and have been approached regarding similar projects more than once.
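    That "back of the envelope" arithmetic is easy to reproduce. A minimal sketch in C, assuming 36-byte polygons (3 vertices x 3 floats x 4 bytes) and 30 frames per second - both numbers are illustrative, not from Dauger's design:

    #include <stdio.h>

    /* Assumed sizes, purely illustrative */
    #define POLY_BYTES 36          /* 3 vertices x 3 floats x 4 bytes */
    #define FPS        30.0
    #define GIGABIT    1e9         /* bits per second */

    int main(void) {
        for (long n = 10000; n <= 1000000; n *= 10) {
            double bps = n * POLY_BYTES * 8.0 * FPS;  /* bits/s for n deltas */
            printf("%8ld changed polys/frame -> %7.2f%% of GigE\n",
                   n, 100.0 * bps / GIGABIT);
        }
        return 0;
    }

    On those assumptions the link saturates somewhere above 100,000 changed polygons per frame, which is why limiting updates to what each node's camera actually sees buys so much headroom.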
  • Reply 78 of 88
    rhumgod Posts: 1,289 member
    Title: Senior Signal Integrity Engineer

    Req. ID: 1918208

    Location: Santa Clara Valley, California

    Country: United States



    This person will develop high-bandwidth interconnect systems for future microprocessor + chipset platforms. Co-define integrated circuit IO cell and package requirements with chip design and semiconductor partners. Co-define PCB and connector channel architectures. Model and verify integrated interconnect signal integrity in simulation as part of the chipset development process. Author system channel routing guidelines for use in a variety of systems. Validate models and system performance by taking hardware measurements. Investigate and resolve any interconnect problems that arise in systems designs. Interfaces include DDR memory, microprocessor interfaces, SATA, FireWire, etc.



    BSEE + >10 years relevant industry experience.
  • Reply 79 of 88
    harald Posts: 2,152 member
    64-way POWER chips from IBM.



    That's according to The Register.



    IBM has already struck fear in the hearts of its competitors with the dual-core Power4 chip and looks set to apply more pressure on rivals with the future processors.



    The Power5 processor will first appear in 2004 at the heart of the Squadron family of servers. These systems will scale from 1 to 64 processors. IBM's current large SMP - the p690 - only makes it up to 32 Power4 chips.




    I wonder ...
  • Reply 80 of 88
    The POWER chips are designed for high-end servers and workstations. Why would IBM make a derivative for desktops and small blade servers, only to then use it in a high-end server or workstation?



    As other posts have mentioned, the 970's VMX unit would be a waste of silicon in a 64-chip machine. The dual-core Power4+ or POWER5 is the chip that a computer like this would use.



    I would be happy to see IBM servers and workstations all coming with the option of Mac OS X.



    But I don't want the money I give Apple through purchases to be spent on R&D in a market that I won't benefit from. I would much rather they spent it on making an awesome lineup of consumer computers and lifestyle devices like the iPod, encouraging IBM to get a G4-class chip to market (using the latest G3s with VMX added), and making some great software, such as an Office alternative.